Google, Microsoft and OpenAI promise responsible AI development

The development of artificial intelligence worries many people. With this in mind, Google, Microsoft, OpenAI and Anthropic have created the Frontier Model Forum, which aims to ensure the safe and responsible development of AI models. The initiative, however, is viewed with skepticism: experts suspect it may be just a way to avoid regulation of the sector.

Among the group’s objectives are:

advance AI safety research to promote responsible model development;
identify best practices for development and deployment;
collaborate with policymakers, researchers, civil society and companies to share knowledge about safety risks;
support initiatives to develop applications for major societal challenges, such as climate change, cyber threats and cancer prevention.

The four companies are among the top names in generative artificial intelligence.

OpenAI is responsible for ChatGPT.
Microsoft has built these technologies into products like Bing, Edge, and Microsoft 365.
Google introduced its own chatbot, called Bard.
Anthropic, founded by former OpenAI members, is working on its own model, called Claude.

The group will be able to accept new members, but the criteria are quite restrictive. Participation is limited to companies that build “large-scale machine learning models that exceed the capabilities currently present in the most advanced models.”

As the Financial Times notes, this suggests the group is not focused on the problems artificial intelligence already poses, such as copyright infringement, privacy risks and job losses caused by the replacement of human labor.

More advanced models may bring new problems, and those are what the Frontier Model Forum is concerned with.

AI is in the crosshairs of new regulations
Artificial intelligence has drawn the attention of authorities. In the U.S., President Joe Biden has promised executive action to promote “responsible innovation.” The Federal Trade Commission (FTC) is investigating OpenAI over possible unfair or deceptive practices involving data protection.

Recently, the four Frontier Model Forum companies, along with Meta, Inflection, and Amazon, agreed to follow the White House’s guidelines for AI.

The European Union has moved forward with negotiations to approve a set of rules for AI, aimed at promoting rights and values such as human oversight, transparency, non-discrimination and social welfare.

In Brazil, the topic has been under discussion in the National Congress since 2020, through the Artificial Intelligence Legal Framework. In 2023, a new text was proposed.

In an interview with the Financial Times, Emily Bender, a researcher at the University of Washington, said the Frontier Model Forum initiative is a way to “avoid regulation” and try to “claim the possibility of self-regulation.”

In talks and debates in recent months, Sam Altman, CEO and co-founder of OpenAI, has argued that now is not the time to regulate AI, as doing so would hurt innovation.

For him, regulation should come only when a model as intelligent as human civilization emerges. Even then, Altman considers that the most appropriate approach would be the creation of a global agency linked to the United Nations (UN), along the lines of the International Atomic Energy Agency.