
AI art lawsuit targets the makers of Stable Diffusion, plus warnings from OpenAI and DeepMind

As I reported back in October, experts I consulted believed that legal disputes over AI artwork and copyright infringement might stretch on for years, possibly even reaching the Supreme Court.

This past Friday marked the beginning of those battles, when the first copyright infringement case centred on AI art was filed against two generative AI art companies, Stability AI (the creator of Stable Diffusion) and Midjourney, as well as the online art community DeviantArt.

A group of artists accuses the companies’ AI models of creating “derivative works.”

Represented by the same legal team that previously sued Microsoft, GitHub, and OpenAI over the generative AI coding tool GitHub Copilot, the three artists filed suit against the companies. They argue that Stable Diffusion and Midjourney illegally copied billions of works from the internet, including their own, and used those copies to create “derivative works.”

Matthew Butterick, one of the attorneys who brought the case, said on his blog that if Stable Diffusion is allowed to spread, “irreparable harm will be caused to artists, now and in the future.”

Stability AI CEO Emad Mostaque, who promised last month to honour artists’ requests to opt out of future Stable Diffusion training, told VentureBeat that the company has “not received anything to date” regarding the lawsuit, and that “once we do we may consider it.”

Sam Altman of OpenAI and Demis Hassabis of DeepMind both sound warnings.

I’ll be writing more about this lawsuit later, but I found it interesting that it broke just as OpenAI (which recently released DALL-E 2 and ChatGPT to much fanfare) and DeepMind (which has so far avoided releasing its generative AI models to the public) issued warnings about the potential dangers of generative AI.

In a recent interview with Time magazine, DeepMind CEO Demis Hassabis urged caution around artificial intelligence (AI). Not everyone is thinking about these issues, he said, likening the field to chemists and other experimentalists who don’t realise they are working with potentially dangerous materials. He also cautioned his rivals against acting hastily, advising them not to move fast and break things.

Meanwhile, OpenAI CEO Sam Altman struck a very different note a year ago, tweeting that people should move faster, since a slow pace anywhere becomes an excuse for a slow pace everywhere. Last week, however, he was singing a different tune. Reuters reporter Krystal Hu tweeted that Altman said OpenAI’s GPT-4 would debut only when the company can release it safely and responsibly: “In general, we will roll out new technologies much more slowly than the public would want. We’re just going to wait a long time to act on this…”

Generative AI can be an ally or an enemy.

It’s clear that discussions around generative AI have only just begun. But in an essay published yesterday in conjunction with its annual meeting, currently under way in Davos, Switzerland, the World Economic Forum argued that now is the time to have them.

According to the article, “much as many have campaigned for the necessity of diverse data and engineers in the AI field,” we also need to include “expertise from psychology, government, cybersecurity, and business” in the AI discussion. “A strategy for considered regulation of generative AI will require open conversation and shared viewpoints amongst cybersecurity professionals, AI developers, practitioners, corporate leaders, government authorities, and people. Everyone’s opinions need to be taken into account. If we band together, we can eliminate this danger to society and the world’s infrastructure once and for all. Together, we can make generative AI our ally rather than our enemy.”