Ethics in AI Research
By
Ed Watal, Founder & Principal, Intellibus

AI research, which seeks to explore and develop the capabilities of artificial intelligence, has entered a new era. Industry has become the dominant player in a space traditionally ruled by academia, investing enormous sums in AI research. Reports indicate that Apple, just one of many global tech firms investing in AI research, plans to spend $1 billion per year on generative AI development alone.

This shift has led to growing concern over the proper application of ethics in AI research. As a recent MIT report explained, research driven by private industry rather than academic institutions could prioritize profit over the public good. The March 2023 open letter calling for a moratorium on AI experimentation, signed by Elon Musk, Steve Wozniak, and thousands of other business leaders and academic researchers, urged that AI research focus on developing systems that are transparent, interpretable, and safe.

As AI development continues to evolve, the following are key ethical considerations that should guide AI research, regardless of who conducts it.

Transparent Data Usage

Machine learning is central to recent advances in AI: it is what enables AI-driven platforms to act autonomously, learning from data and adapting based on experience. For this reason, transparency surrounding the data used to train AI is a top ethical concern. It requires ongoing access to reliable information on how data is collected, stored, and used. When sensitive data is used for machine learning, ethics demands that consent be obtained and privacy maintained through proper safeguards.
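One lightweight way to put this into practice is to attach a machine-readable provenance record to every training dataset and refuse to train when consent is missing. The Python sketch below is illustrative only; the DatasetRecord fields and check_consent helper are hypothetical names, not a standard API, and the example values are fabricated.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Hypothetical provenance record attached to a training dataset."""
    name: str
    collected_on: date
    collection_method: str          # e.g., "user opt-in survey", "public scrape"
    storage_location: str           # where the raw data lives
    contains_sensitive_data: bool
    consent_obtained: bool
    retention_policy: str           # e.g., "delete raw records after 24 months"
    known_limitations: list[str] = field(default_factory=list)

def check_consent(record: DatasetRecord) -> None:
    """Refuse to train on sensitive data without documented consent."""
    if record.contains_sensitive_data and not record.consent_obtained:
        raise ValueError(f"Dataset '{record.name}' lacks documented consent.")

record = DatasetRecord(
    name="support-chat-logs-2024",
    collected_on=date(2024, 3, 1),
    collection_method="user opt-in, in-product notice",
    storage_location="s3://example-bucket/raw/chat-logs",  # hypothetical path
    contains_sensitive_data=True,
    consent_obtained=True,
    retention_policy="delete raw records after 24 months",
    known_limitations=["English-only", "skews toward paying customers"],
)
check_consent(record)  # raises if consent is missing
```

A record like this travels with the dataset, so anyone auditing the resulting model can see how its training data was collected, stored, and consented to.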

AI’s “black box problem” is another issue fueling ethical concerns about transparent data usage. The term refers to developers’ inability to trace how a model arrives at its outputs; the reasoning effectively happens inside a black box. If biased data enters the training process and is concealed by this lack of transparency, the resulting systems can produce outcomes that harm the public good.
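Interpretability tools offer a partial window into the box. The sketch below, assuming scikit-learn is available, uses permutation importance: shuffle one feature’s values and measure how much accuracy drops, so large drops flag the features a model leans on. The synthetic dataset and random-forest model here are stand-ins, purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this is the real training set.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the average accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Probes like this do not open the black box completely, but they give researchers concrete evidence about which inputs drive a model’s behavior, and where bias might be hiding.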

Fair Experimentation Practices

Unfair experimentation in AI research can take many forms. Turning a blind eye to biased data is one example that can drive harmful outcomes; a simple screening check, like the one sketched below, can surface such bias before training begins. Failing to invest in sufficient safeguards for the data gathered for AI research is another.
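One common screening statistic is the disparate impact ratio, which compares positive-outcome rates between groups; values below roughly 0.8 (the “four-fifths rule”) warrant review. A minimal sketch follows, assuming tabular outcomes with a group label; all of the data is synthetic, fabricated for illustration.

```python
import numpy as np

# Synthetic outcomes: which applicants from groups A and B were approved.
groups   = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
approved = np.array([ 1,   1,   0,   1,   0,   0,   0,   1 ])

# Positive-outcome rate per group.
rate_a = approved[groups == "A"].mean()
rate_b = approved[groups == "B"].mean()

# Disparate impact ratio: lower rate divided by higher rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible disparate impact; investigate before training.")
```

A check this simple cannot prove a dataset is fair, but running it routinely makes it much harder to turn a blind eye to obvious imbalances.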

Building proper accountability into the AI research process is central to ensuring fair experimentation and to detecting and addressing potential harm. The March 2023 open letter on AI experimentation called upon AI developers to work with policymakers to establish “robust AI governance systems,” including regulatory authorities focused on monitoring AI development.

Responsible Publication

As AI research shifts from academia to industry, the responsible publication of research findings and methodologies becomes a key concern. The expectation is that research findings will be shared with the broader development community via reports that accurately represent the work being done and its accomplishments, detailing the potential for problems as well as gains.

Private companies can benefit from responsible publication of their research findings, as doing so can help improve their reputation, attract top talent, and foster public trust. However, sharing the findings of AI research also has the potential to threaten the commercial gains that can result from the findings. Ethical AI research requires finding a balance between public and private interests.

Continuous Ethical Review

While AI research has made historic progress in recent years, the field is still in its infancy. Ensuring ethical concerns are addressed moving forward requires establishing protocols for continuous review, as the ethical practices considered essential will undoubtedly evolve alongside AI’s expanding capabilities. Developers must be prepared to embrace and adapt to new practices or risk undermining public trust in the future of AI research.

Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser, an ethical AI platform and data commons for the world. He is also the founder of Intellibus, an Inc. 5000 “Top 100 Fastest Growing Software Firm” in the USA, and the lead faculty of AI Masterclass, a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Ed on a seminal book on our AI future. Board members and C-level executives at the world’s largest financial institutions rely on him for strategic transformation advice. Ed has been featured on Fox News, QR Calgary Radio, and Medical Device News.