
Adobe Reverses Decision on Using Users’ Images for AI Training

A peculiar thing about massive, wealthy corporations is that they sometimes come across as tone-deaf when communicating with their customers, the very people who generate their wealth. It's not necessarily that they don't care; more often it comes down to a few issues: new policies may not be vetted with focus groups, the company may lean on legal language so dense with jargon that consumers (and sometimes the company itself) can't parse it, or nobody maps out the possible outcomes carefully enough to spot a policy likely to provoke backlash.

We are currently at a pivotal moment in the history of image creation, with many content creators (photographers and videographers) feeling threatened by AI advancements. Therefore, it’s no surprise that when Adobe miscommunicated about the potential use of customer images for AI training, it led to significant backlash. After all, who would want to see parts of their work incorporated into AI-generated images without receiving any compensation?

Canva, for instance, explains on its website how it generates AI images:

“To create AI-generated images, the machine learning model scans millions of images across the internet along with the text associated with them. The algorithms spot trends in the images and text and eventually begin to guess which image and text fit together. Once the model can predict what an image should look like from a given text, they can create entirely new images from scratch based on a new set of descriptive text users enter on the app.”
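To make the process the quote describes more concrete, here is a minimal, purely illustrative sketch of text-to-image generation using the open-source diffusers library and a publicly available Stable Diffusion checkpoint. This is not Canva's or Adobe's actual pipeline; the model name and prompt are assumptions chosen for the example.

    # Illustrative only: text-to-image generation with an open-source model.
    # This is NOT Canva's or Adobe's system; the checkpoint and prompt are assumed.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a model that was trained on large collections of captioned images,
    # the kind of image-text pairing the quote describes.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

    # Generate an entirely new image from descriptive text, as described above.
    image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
    image.save("generated_lighthouse.png")

The point of the sketch is simply that the model's ability to produce new images comes from patterns it learned during training, which is exactly why creators care about whose images end up in that training data.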

While Canva focuses primarily on graphic design, Adobe's situation is different: the company has been seen as a champion of the photography industry for decades, which is why the backlash was so intense.

Last week, Adobe announced changes to its Terms of Use regarding how it uses data to train generative AI. The announcement caused an uproar, prompting Adobe to clarify how its generative AI is actually trained. Following a blog post late last week, Adobe published a second post announcing plans to communicate the changes more clearly.

“We recently rolled out a re-acceptance of our Terms of Use, which has led to concerns about what these terms mean for our customers. This has caused us to reflect on the language we use in our Terms and the opportunity we have to be clearer and address the concerns raised by the community,” Adobe stated in a new blog post written by Scott Belsky, Adobe’s Chief Strategy Officer, and Dana Rao, Executive Vice President, General Counsel, and Chief Trust Officer.

Much of the concern centered on the notion that Adobe intended to use customer content to train its AI models. However, Belsky and Rao were unequivocal in stating that customer content will never be used to train any generative AI tool.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements,” Belsky and Rao affirmed.

“We will make it clear in the license grant section that any license granted to Adobe to operate its services will not supersede your ownership rights.”

Adobe’s Firefly AI tool, according to the company, is trained only on licensed content, such as Adobe Stock, and on public domain content whose copyright has expired.

“At Adobe, there is no ambiguity in our stance, our commitment to our customers, and innovating responsibly in this space,” Belsky and Rao emphasized.

However, Adobe acknowledges that customers should expect “significant clarification” regarding content ownership, the training of generative AI models, usage licenses, and content moderation in an upcoming update to its Terms of Use, which the company describes as “the right thing to do.”

Revisions to the Terms of Use will focus on numerous areas, including how Adobe trains generative AI models, treats user content, and moderates content.

Adobe also noted that while user data is used to help improve some machine learning features, users always have the option to opt out.

Additionally, in response to the backlash, Adobe is considering more transparent communication strategies and regular updates to keep users informed about how their data is being used, including potential webinars, detailed FAQs, and direct customer support channels to address concerns promptly.

Furthermore, Adobe plans to establish a dedicated feedback loop with its user community, ensuring that any future changes to policies or terms are thoroughly vetted and user-centric. This proactive approach aims to rebuild trust and demonstrate Adobe’s commitment to its customers’ rights and creative integrity.

“In a world where customers are anxious about how their data is used, and how generative AI models are trained, it is the responsibility of companies that host customer data and content to declare their policies not just publicly, but in their legally binding Terms of Use,” wrote Belsky and Rao.
