
Why an organisational AI ethics committee is necessary for deploying AI properly

It’s understandable if the concept of artificial intelligence (AI) still seems far-fetched, but the ordinary consumer would be amazed by how many places it’s already at work. It’s no longer something you’ll only see in science-fiction films or in the secretive computer science laboratories of the Googles and Metas of the world. To name just a few examples, AI is now actively creating music, winning art contests, and beating humans at games that have existed for thousands of years. It is also behind many of the recommendations we see when we shop online or use social media, as well as many customer service inquiries and loan approvals.

In light of the widening knowledge gap about AI’s potential applications, the establishment of an AI ethics committee should be a priority for every group or company that plans to use or deliver the technology. The committee’s primary responsibilities would centre on two areas: outreach and training.

As AI is developed and deployed, an ethics committee would ensure that the technology is used responsibly. It would also collaborate closely with regulators to establish reasonable limits and create policies that prevent people from being unfairly harmed. In addition, it would inform consumers so they can evaluate AI with an open mind. Users should be aware that AI has the potential to dramatically alter our daily lives and working environments, but also to perpetuate the kinds of prejudice and discrimination that have plagued humanity for ages.

I think we need an AI ethics committee because…

Top AI research labs are among the most cognizant of the field’s potential for both good and ill. Although some companies have more expertise in the field than others, organisations of all sizes, led by executives with a wide range of backgrounds, can benefit from internal auditing and review. Education and internal ethical criteria must be prioritised as well; consider the Google engineer who became convinced that a natural language processing (NLP) model was genuinely sentient (it wasn’t). For the sake of AI’s (and humanity’s) future, a solid foundation is essential.

Microsoft, for instance, is at the forefront of AI innovation, and the company consistently prioritises ethical issues in its work. The software titan recently announced that artificial intelligence can be used to summarise Teams meetings, which may mean less note-taking and more smart, in-the-moment decision-making. But even with such wins, the company’s AI advancements are not without flaws: over the summer, Microsoft retired its AI facial-analysis capabilities because of the possibility of bias.

That episode highlights the need for ethical benchmarks when assessing risk. In the case of Microsoft’s AI facial analysis, those benchmarks judged that the danger outweighed the benefit, safeguarding us all from potentially disastrous outcomes, such as a person being unjustly denied an urgently needed monthly assistance cheque.

Take an active stance on AI rather than a passive one

AI ethics committees inside organisations can act as a check on the rapid adoption of cutting-edge innovation. As an added bonus, they allow a company to provide thorough information to authorities and develop unified positions on how to safeguard the public from potentially dangerous AI. Although the White House’s proposed AI Bill of Rights suggests that forthcoming legislation will be proactive, industry experts will still need to provide informed views on what’s best for individuals and organisations in terms of safe AI.

In contrast to taking a passive stance, organisations that commit to establishing an AI ethics committee should take the following three measures:

Construct with forethought

The first step is to convene the committee and settle on a final objective. Ask hard questions about what the committee should accomplish and why. It is important to get feedback from a wide variety of people inside the company, including technical leaders, communicators, and anyone else who may have an opinion on the committee’s direction. Without clear goals and objectives in place at the outset, the AI ethics committee risks drifting from its intended purpose. Decide what to do, make a plan, then stick to it.

Don’t try to boil the ocean

Like the world’s huge, blue oceans, the field of artificial intelligence is broad and deep, with numerous uncharted depths. When forming your committee, avoid taking on too much at once. Keep your AI preparations focused and purposeful, and determine which specific problems you want to solve, or gains you want to achieve, with the technology.

It’s important to consider other points of view

While familiarity with deep technology is beneficial, a truly balanced committee includes members with expertise outside the field. That variety yields insightful perspectives on the dangers of unethical AI. The legal staff, the creative department, the media team, and the engineering staff should all be involved, ensuring that the business and its customers are well represented if ethical concerns emerge. To increase participation, consider issuing a company-wide “call to action” or conducting a survey to establish priorities.

Education and buy-in are the heroes

AI ethics committees help organisations succeed in two ways crucial to their use of AI: education and employee buy-in. By educating everyone inside the business, from engineers to Todd and Mary in accounting, on the risks associated with AI, organisations will be better able to inform regulators, customers, and others in the sector, and to help create an active, informed society around artificial intelligence.