Generative AI Learned Nothing From Web 2.0

If 2022 was the year the generative AI boom began, 2023 was the year of the generative AI panic. Just over 12 months after OpenAI launched ChatGPT and set a record for the fastest-growing consumer product, it appears to have also helped set a record for the fastest government intervention in a new technology. The US Federal Elections Commission is looking into deceptive campaign ads, Congress is calling for oversight into how AI companies develop and label training data for their algorithms, and the European Union passed its new AI Act with last-minute tweaks to respond to generative AI.

But for all the novelty and speed, generative AI's problems are also painfully familiar. OpenAI and its rivals racing to launch new AI models are facing problems that have dogged social platforms, that earlier era-shaping new technology, for nearly two decades. Companies like Meta never did get the upper hand over mis- and disinformation, sketchy labor practices, and nonconsensual pornography, to name just a few of their unintended consequences. Now those issues are gaining a challenging new life, with an AI twist.

“These are completely predictable problems,” says Hany Farid, a professor at the UC Berkeley School of Information, of the headaches faced by OpenAI and others. “I think they were preventable.”

Well-Trodden Path

In some cases, generative AI companies are built directly on problematic infrastructure put in place by social media companies. Facebook and others came to rely on low-paid, outsourced content moderation workers, often in the Global South, to keep content like hate speech or imagery with nudity or violence at bay.

That same workforce is now being tapped to help train generative AI models, often with similarly low pay and difficult working conditions. Because outsourcing puts crucial functions of a social platform or AI company administratively at arm's length from its headquarters, and often on another continent, researchers and regulators can struggle to get the full picture of how an AI system or social network is being built and governed.

Outsourcing can also obscure where the true intelligence inside a product really lies. When a piece of content disappears, was it taken down by an algorithm or by one of the many thousands of human moderators? When a customer service chatbot helps out a customer, how much credit is due to the AI and how much to the worker in an overheated outsourcing hub?

There are also similarities in how AI companies and social platforms respond to criticism of their ill effects or unintended consequences. AI companies talk about putting “safeguards” and “acceptable use” policies in place on certain generative AI models, just as platforms have their terms of service governing what content is and isn't allowed. As with the rules of social networks, AI policies and protections have proven relatively easy to circumvent.
