The extent to which academics are likely to follow the media attention, money, and Nobel Prize committee plaudits is a question that vexes Julian Togelius, an associate professor of computer science at New York University’s Tandon School of Engineering who works on AI. “Scientists in general follow some combination of path of least resistance and most bang for their buck,” he says. And given the competitive nature of academia, where funding is increasingly scarce and directly linked to researchers’ job prospects, a trendy topic that—as of this week—can earn high achievers a Nobel Prize may prove too tempting to resist.
The risk is that this could stymie innovative new thinking. “Getting more fundamental data out of nature, and coming up with new theories that humans can understand, are hard things to do,” says Togelius. That requires deep thought. It’s far easier for researchers to instead carry out AI-enabled simulations that support existing theories and draw on existing data, producing small hops forward in understanding rather than giant leaps. Togelius foresees a new generation of scientists doing exactly that, because it’s the path of least resistance.
There’s also the risk that overconfident computer scientists, who have helped advance the field of AI, will see AI work winning Nobel Prizes in unrelated scientific fields—in this instance, physics and chemistry—and decide to follow suit, encroaching on other people’s turf. “Computer scientists have a well-deserved reputation for sticking their noses into fields they know nothing about, injecting some algorithms, and calling it an advance, for better and/or worse,” says Togelius, who admits he was once tempted to add deep learning to another field of science and “advance” it, before thinking better of it because he doesn’t know much about physics, biology, or geology.
Hassabis is an example of someone using AI well to advance science. A neuroscientist by training, with a PhD in the subject earned in 2009, he has credited that background with helping advance AI at Google DeepMind. But even he acknowledged a change in how the field makes progress. “Today, [AI] has become more engineering-heavy,” he said in his Nobel Prize press conference. “We have a lot of techniques now that we’re improving just algorithmically, without reference to the brain anymore.”
That too could affect what kind of research gets done, who does it, how well they know the field, and why they enter it in the first place. Rather than researchers who have devoted their lives to a specialism, we could see more research by computer scientists detached from the reality of what they’re studying.
But such concerns are likely to take a backseat to the celebrations for Hassabis, Jumper, and the colleagues they both thanked for helping them win the Nobel Prize this week. “We’re very close to cleaning up the [AlphaFold3] code to release it for the academic community to freely use,” Hassabis said earlier today. “Then we’ll keep progressing from there.”