Google Allows Generative AI in High-Risk Areas with Oversight

Dec 18, 2024

Google has updated its terms of use for generative AI, allowing clients to use its generative AI tools for automated decision-making in high-risk sectors such as healthcare and employment, provided there is human oversight. The change appears in the company's latest generative AI usage policy.
 
Under the revised policy, clients may use Google's generative AI to make automated decisions that could have a significant adverse impact on individual rights, so long as a human supervises those decisions. The high-risk areas identified include employment, housing, insurance, and social welfare. The previous terms appeared to impose a blanket ban on high-risk automated decision-making, but Google says such uses have been permitted with human oversight from the outset.
 
A Google spokesperson told the media: "The requirement for human oversight has always been part of our policy, covering all high-risk areas. We have simply reclassified some terms and provided clearer examples for user understanding."
 
This approach diverges from the policies of major competitors such as OpenAI and Anthropic, which impose stricter rules on high-risk automated decision-making. OpenAI prohibits the use of its services for automated decisions related to credit, employment, housing, education, social scoring, and insurance. Anthropic, by contrast, allows its AI to make automated decisions in high-risk fields such as law, insurance, and healthcare, but only under the supervision of qualified professionals, and it requires clients to clearly disclose their use of AI for such decisions.
 
Regulatory bodies have voiced concerns over AI systems employed for automated decision-making, citing the potential for biased outcomes. Research has indicated that AI can perpetuate historical discrimination in loan and mortgage application approvals, raising alarm among advocates for equitable practices.
 
Organizations like Human Rights Watch have specifically called for a ban on social scoring systems, arguing that these mechanisms threaten individuals' access to social security and may violate privacy rights, leading to biased profiling.
 
In the European Union, the AI Act imposes stringent regulations on high-risk AI systems, particularly those involving personal credit and employment decisions. Providers of these systems must register in a database, implement quality and risk management protocols, employ human supervisors, and report incidents to relevant authorities.
 
In the United States, Colorado has enacted a law requiring AI developers to disclose information about high-risk AI systems and publish summaries detailing their capabilities and limitations. Meanwhile, New York City has prohibited employers from utilizing automated tools to screen candidates unless the tool has undergone a bias audit within the past year.
 
Key Points:
  1. Google allows the use of generative AI in high-risk areas, but requires human oversight.
  2. Other AI companies like OpenAI and Anthropic have stricter limitations on high-risk decisions.
  3. Regulatory bodies in various countries are reviewing AI systems for automated decision-making to prevent biased outcomes.
