He Helped Invent Generative AI. Now He Wants to Save It

In 2016, Google engineer Illia Polosukhin had lunch with a colleague, Jakob Uszkoreit. Polosukhin had been frustrated by the lack of progress in his project, which used AI to provide useful answers to questions posed by users, and Uszkoreit suggested he try a technique he had been brainstorming that he called self-attention. Thus began an eight-person collaboration that ultimately resulted in a 2017 paper called “Attention Is All You Need,” which introduced the concept of transformers as a way to supercharge artificial intelligence. It changed the world.

Eight years later, though, Polosukhin is not completely happy with the way things are shaking out. A big believer in open source, he’s concerned about the secretive nature of transformer-based large language models, even those from companies founded on the basis of transparency. (Gee, who can that be?) We don’t know what they’re trained on or what the weights are, and outsiders certainly can’t tinker with them. One giant tech company, Meta, does tout its systems as open source, but Polosukhin doesn’t consider Meta’s models truly open: “The parameters are open, but we don’t know what data went into the model, and data defines what bias might be there and what kinds of decisions are made,” he says.
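Polosukhin’s distinction between open weights and open data is easy to see in practice. Below is a minimal Python sketch using the standard Hugging Face transformers API: the parameters of an “open” model like Meta’s Llama can be downloaded and inspected, but the downloaded artifact carries no record of the training corpus. The model ID is illustrative, and gated models require accepting the publisher’s license.

```python
# Minimal sketch of "open weights, closed data," assuming the
# standard Hugging Face `transformers` API; the model ID is
# illustrative and license-gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Every parameter is open to inspection and fine-tuning...
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params:,} parameters")  # roughly 8 billion

# ...but nothing here discloses the training data, which is where
# Polosukhin says the bias lives.
```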

As LLM technology improves, he worries it will get more dangerous, and that the need for profit will shape its evolution. “Companies say they need more money so they can train better models. Those models will actually be better at manipulating people, and you can tune them better for generating revenue,” he says.

Polosukhin has zero confidence that regulation will help. For one thing, dictating limits on the models is so hard that regulators will have to rely on the companies themselves to get the job done. “I don’t think there’s that many people who are able to effectively answer questions like, ‘Here’s the model parameters, right? Is this a good margin of safety?’” he says. “I’m pretty sure that nobody in Washington, DC, will be able to do it.”

This makes the industry a prime candidate for regulatory capture. “Bigger companies know how to play the game,” he says. “They’ll put their own people on the committee to make sure the watchers are the watchees.”

The alternative, argues Polosukhin, is an open source model where accountability is baked into the technology itself. Even before the transformers paper was published in 2017, Polosukhin had left Google to start a blockchain/Web3 nonprofit called the Near Foundation. Now his company is semi-pivoting to apply some of those principles of openness and accountability to what he calls “user-owned AI.” Modeled on blockchain-based crypto protocols, this approach would make AI a decentralized structure running on a neutral platform.

“Everybody would own the system,” he says. “At some point you would say, ‘We don’t have to grow anymore.’ It’s like with bitcoin—the price can go up or down, but there’s no one deciding, ‘Hey, we need to post $2 billion more revenue this year.’ You can use that mechanism to align incentives and build a neutral platform.”

According to Polosukhin, developers are already using Near’s platform to build applications that could work on this open source model. Near has established an incubation program to help startups in the effort. One promising application is a mechanism for distributing micropayments to creators whose content feeds AI models.
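The article doesn’t detail how those micropayments would work, but a pro-rata split of a revenue pool is the obvious starting point. The Python sketch below is purely hypothetical: the attribution counts and payout logic are assumptions for illustration, not Near’s actual protocol.

```python
from decimal import Decimal

def split_micropayments(
    pool: Decimal, usage_counts: dict[str, int]
) -> dict[str, Decimal]:
    """Split a revenue pool pro rata by how often each creator's
    content was used. A hypothetical sketch, not Near's protocol."""
    total = sum(usage_counts.values())
    if total == 0:
        return {creator: Decimal("0") for creator in usage_counts}
    return {
        creator: (pool * count / total).quantize(Decimal("0.000001"))
        for creator, count in usage_counts.items()
    }

# Example: a $100 pool split among three creators by usage.
payouts = split_micropayments(
    Decimal("100.00"),
    {"alice": 700, "bob": 250, "carol": 50},
)
print(payouts)  # alice: 70, bob: 25, carol: 5
# A real system would also settle the rounding remainder on-chain.
```

Decimal arithmetic is used here because accumulating many tiny payments in floating point drifts; how attribution counts would actually be measured is the hard, unsolved part.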