Google’s Seven Principles for the Development and Use of Artificial Intelligence
In a significant move, Google has released seven principles to guide its future work in artificial intelligence (AI). These principles are designed to ensure that AI is developed and used in ways that benefit society, while minimizing the risk of harm. They are a response to the controversy surrounding Google’s involvement in Project Maven, a Pentagon effort that used AI to analyze drone footage and identify objects such as vehicles.
The Principles
- Socially Beneficial: AI should be developed and used to benefit society as a whole. Google aims to use AI to produce transformational impact in areas such as healthcare, security, energy, transportation, manufacturing, and entertainment.
- Avoid Creating or Reinforcing Unfair Bias: Google recognizes that AI algorithms and data sets can reflect, reinforce, or reduce unfair bias. The company will strive to avoid unjust impacts on people, particularly those related to race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. A minimal sketch of one such bias check appears after this list.
- Be Built and Tested for Safety: Google will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. The company will design its AI systems to be appropriately cautious and seek to develop them in accordance with best practices in AI safety research.
- Be Accountable to People: Google will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. The company’s AI technologies will be subject to appropriate human direction and control.
- Uphold High Standards of Scientific Excellence: Google will pursue high standards of scientific excellence in the development and use of AI. The company will work with stakeholders to promote thoughtful leadership in this area and share knowledge by publishing educational materials, best practices, and research.
- Evaluate Likely Uses: Google will evaluate the likely uses of its AI technology according to factors such as its primary purpose and use, the nature and uniqueness of the technology, and the scale of its impact.
- Transparency and Accountability: Google will be transparent about its AI development and use, and will be accountable for its actions.
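To make the bias principle above more concrete, here is a minimal, hypothetical sketch of the kind of check a team might run on a training dataset: it measures whether a positive outcome appears at very different rates across demographic groups (a simple demographic-parity style disparity check). The dataset, column names, and the idea of flagging low ratios for review are illustrative assumptions, not anything Google has published.

```python
from collections import defaultdict

def group_positive_rates(rows, group_key, label_key):
    """Compute the share of positive labels per demographic group.

    rows: iterable of dicts, e.g. [{"gender": "F", "approved": 1}, ...]
    group_key: column holding the demographic attribute (assumed name)
    label_key: column holding the binary outcome (1 = positive)
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest positive rate (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

if __name__ == "__main__":
    # Toy loan-approval data; values are made up purely for illustration.
    data = [
        {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
        {"gender": "F", "approved": 0}, {"gender": "M", "approved": 1},
        {"gender": "M", "approved": 1}, {"gender": "M", "approved": 0},
    ]
    rates = group_positive_rates(data, "gender", "approved")
    print(rates)                   # e.g. {'F': 0.33, 'M': 0.67}
    print(disparity_ratio(rates))  # a ratio well below 1.0 warrants review
```

A check like this does not prove a dataset is fair, but a large disparity is the kind of signal that would prompt the closer review the principle calls for.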
AI Applications We Will Not Touch
In addition to these principles, Google will not design or deploy AI in the following applications:
- Technologies That Cause or Are Likely to Cause Harm: Google will not develop or deploy AI that presents a material risk of harm, unless it believes the benefits substantially outweigh the risks and appropriate safety constraints are in place.
- Weapons or Other Technologies Designed to Injure People: Google will not develop or deploy AI in weapons or in other technologies whose principal purpose is to cause or directly facilitate injury to people.
- Surveillance Violating International Norms: Google will not develop or use AI that gathers or uses information to monitor individuals in ways that violate internationally accepted norms.
- Violation of Human Rights: Google will not develop or use AI whose purpose contravenes widely accepted principles of international law and human rights.
Long-Term Development of AI
Google’s seven principles are designed to ensure that the company’s work in AI is guided by a commitment to social benefit, safety, and transparency. Announced in the wake of the Maven controversy, they are meant to demonstrate a lasting commitment to responsible AI development and use, not a one-time reaction.
In the long term, Google believes that AI will have a significant impact on society, and the company is committed to using its technology to benefit society as a whole. The seven principles are a key part of this commitment, and will guide Google’s work in AI for years to come.
Google’s Commitment to AI
Google’s commitment to AI is not limited to these seven principles. The company is investing heavily in AI research and development and is making many of its tools and libraries available as open source. Google’s stated goal is to make AI technology available to everyone and to use it to benefit society as a whole.
In conclusion, the seven principles are a significant step in Google’s approach to responsible AI development and use. They set out clear commitments, and equally clear limits, that will shape the company’s work in AI for years to come.