
No Use for Weapons: Google Introduces AI Guidelines

A new company policy states that Google’s AI technology should not be used in weapons. In other areas, however, the company still wants to cooperate with the military.

When it became known in March 2018 that Google’s AI technology was being used in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage, it caused internal disagreement. In an open letter, thousands of Google employees called for an end to any collaboration with the U.S. military. According to US media reports, some employees even quit in protest. Google responded in early June 2018, stating that the contract with the Pentagon would not be renewed. In addition, CEO Sundar Pichai has now published seven rules that all of the company’s AI projects are to follow in the future.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a statement on the new AI guidelines. These are not theoretical concepts, he added, but “concrete standards that will actively govern our research and product development and will impact our business decisions.”

While Google’s new guidelines rule out the use of its AI technologies in weapons systems, CEO Pichai does not want to give up potentially lucrative military contracts. In areas such as cybersecurity, soldier training, and recruitment, Google’s AI solutions are to remain available to the military.


Weapons and Surveillance: Google Wants to Keep Its Hands Off These Areas

Fundamentally, Pichai promises that Google’s AI technology will not be used in connection with things that harm people. That includes use in weapons systems.

The technology should also not be used for surveillance that undermines “internationally accepted standards”. Moreover, the AI should not be used in ways that contravene international law and human rights.

Critics, however, are not convinced by the guidelines. Miles Brundage, an AI researcher at Oxford University, criticizes the rules as too vague. Peter Asaro, a university professor in New York who, together with fellow researchers, had publicly spoken out against Google’s drone cooperation with the Pentagon, likewise considers them too imprecise. According to a Bloomberg report, some of the Google employees who had previously complained about their employer’s military cooperation are also not entirely convinced by the rules.

Google: Sundar Pichai’s 7 Rules for AI Applications at a Glance

They should bring benefits to society

In the first rule, Google CEO Pichai acknowledges that artificial intelligence will bring tremendous change across a range of industries. He promises that his company will consider the potential impact of these changes on society and the economy. In addition, Google aims to provide the most accurate information possible through AI while respecting the cultural, social, and legal norms of each country in which it operates.

They must not reinforce prejudices

In the second point, Pichai notes that algorithms and datasets can reflect or even amplify existing prejudices. He promises that Google will strive to avoid unjust impacts on people’s lives related to their ethnicity, gender, sexual orientation, abilities, beliefs, or political views.

They have to be safe

When researching and developing AI applications, Google wants to pay close attention to safety and avoid unintended harm.

They are accountable to people

According to the Google chief, AI systems must always be designed so that people retain an appropriate degree of control.

They need to be mindful of privacy

All Google-developed AI applications should be designed to respect the privacy of users and provide them with “appropriate” levels of control and transparency.

They must meet high scientific standards

According to Pichai, Google is striving to meet high scientific standards in the AI field. The company intends to promote thoughtful leadership in the area through multidisciplinary approaches grounded in scientific findings. In addition, Google is to responsibly share its findings from AI research with the scientific community.

They will be evaluated against these 4 questions

Because technologies can be used for a variety of purposes, Google has defined four questions against which every AI application is to be evaluated in advance.

  • What is the primary purpose of a technology, and how closely is it related to potential abuse?
  • How unique is the technology, and how widely available is it?
  • Will the technology have significant impact?
  • What is Google’s role: is it providing general-purpose tools, integrating tools for customers, or doing custom work for a single customer?

Recently, Microsoft also called for clear ethical rules for artificial intelligence.



