OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects

Raimondo’s announcement comes on the same day that Google touted the release of new data highlighting the prowess of its latest artificial intelligence model, Gemini, showing it surpassing OpenAI’s GPT-4, which powers ChatGPT, on some industry benchmarks. The US Commerce Department may get early warning of Gemini’s successor, if the project uses enough of Google’s ample cloud computing resources.

Rapid progress in the field of AI last year prompted some AI experts and executives to call for a temporary pause on the development of anything more powerful than GPT-4, the model currently used for ChatGPT.

Samuel Hammond, senior economist at the Foundation for American Innovation, a think tank, says a key challenge for the US government is that a model doesn’t necessarily need to surpass a compute threshold in training to be potentially dangerous.

Dan Hendrycks, director of the Center for AI Safety, a nonprofit, says the requirement is proportionate given recent developments in AI and concerns about its power. “Companies are spending many billions on AI training, and their CEOs are warning that AI could be superintelligent in the next couple of years,” he says. “It seems reasonable for the government to be aware of what AI companies are up to.”

Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit dedicated to ensuring transformative technologies benefit humanity, agrees. “As of now, giant experiments are running with effectively zero outside oversight or regulation,” he says. “Reporting those AI training runs and related safety measures is an important step. But much more is needed. There is strong bipartisan agreement on the need for AI regulation and hopefully Congress can act on this soon.”

Raimondo said at the Hoover Institution event Friday that the National Institute of Standards and Technology, NIST, is currently working to define standards for testing the safety of AI models, as part of the creation of a new US government AI Safety Institute. Determining how risky an AI model is typically involves probing the model to try to evoke problematic behavior or output, a process known as “red teaming.”

Raimondo said that her department was working on guidelines that will help companies better understand the risks that may lurk in the models they are developing. These guidelines could include ways of ensuring AI cannot be used to commit human rights abuses, she suggested.

The October executive order on AI gives NIST until July 26 to have those standards in place, but some working with the agency say that it lacks the funds or expertise required to get this done adequately.
