Google removes language on weapons from public AI principles
3 mins read


Alphabet Inc.'s Google has removed a passage from its artificial intelligence principles that pledged to avoid using the technology in potentially harmful applications, such as weapons.

The company's AI principles previously included a section titled "AI applications we will not pursue," which listed "technologies that cause or are likely to cause overall harm," including weapons, according to screenshots viewed by Bloomberg. That language is no longer visible on the page.

In response to a request for comment, a Google spokesperson pointed to a blog post published Tuesday.

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," James Manyika, a senior vice president at Google, and Demis Hassabis, who leads the company's AI lab Google DeepMind, wrote in the post. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Removing the "harm" clause could have consequences for the kind of work Google pursues, said Margaret Mitchell, who co-led Google's ethical AI team and is now chief ethics scientist at AI startup Hugging Face.

"Having that removed is erasing the work that so many people in the ethical AI space, and the activist space as well, had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people," she said.

The change is part of a broader policy shift among large technology companies. In January, Meta Platforms Inc. dissolved many of its diversity and inclusion efforts and told employees they would no longer be required to interview candidates from underrepresented backgrounds for open roles. The same month, Amazon.com Inc. halted some of its diversity and inclusion programs, with a senior human resources executive calling them "outdated." The administration of President Donald Trump has expressed opposition to diversity initiatives.

Tracy Pizzo Frey, who oversaw what is known as responsible AI at Google Cloud from 2017 to 2022, said the AI principles guided her team's daily work. "They made us deeply interrogate the work we were doing against each of them," she said in a message. "And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success."

Google employees have long debated how to balance ethical concerns with competitive dynamics in AI, particularly since the launch of OpenAI's ChatGPT turned up the heat on the search giant. In 2023, some Googlers expressed concern to Bloomberg that the company's drive to regain ground in AI had led to ethical lapses.