Zeynep Tüfekçi reminds us to look past the benchmarks
Is the tipping point of AI already behind us?
1.76 trillion parameters. Training runs that cost as much as a small country's GDP. Reasoning capabilities that put my math olympiad scores to shame.
Today, Large Language Models (LLMs) are ubiquitous. The last few years have been a race among tech companies to top benchmarks, to show technical supremacy, to prove that their model is faster, stronger, and better at composing poetry. But the factor that will truly drive their adoption might be something else.
“Cars beat horses not because they were faster, but because they were so much more easily available,” said Zeynep Tüfekçi. Tüfekçi is a professor of sociology and public affairs at Princeton University and a writer for the New York Times. She was awarded the 2025 Morison Prize from MIT’s Program in Science, Technology, and Society, which honors “an outstanding individual who embodies the Morison family ideal of combining humanistic values with effectiveness in the world of practical affairs, and in particular, in science and technology.”
I listened to her Morison Lecture “What if the Real Threat is Artificial Good-Enough Intelligence?” on Feb. 24 in the Nexus of Hayden Library. She gave the impression of a soft but stern grandmother; she spoke incisively, gestured frequently, and used the phrase “fabric of society” some 26 times.
Tüfekçi, originally from Turkey, began her career as a programmer for IBM. Working to build Turkey’s internet infrastructure, she saw the power of technological change ― and the transitions it would bring.
In the early 2010s, resentment in the Middle East against corruption and economic hardship sparked a chain reaction of anti-government protests: the Arab Spring. In times of political instability, social media enabled activists to organize and share information with unprecedented speed. During the protests, the number of Facebook users in Arab countries doubled, Tüfekçi said, and “there was an incredible celebration about how Facebook was great for democracy.” This wasn’t limited to the Middle East. In 2012, an article in the MIT Technology Review described how Facebook — and its data-driven algorithms — drove young American voters to the polls.
At the time, Silicon Valley was hailed as a force for democratic change and social progress, with governments turning to technology to amplify their platforms and track their constituents. “There was a bromance between the Obama Administration and Silicon Valley,” Tüfekçi remarked. But she was already uneasy. She wrote an op-ed in the New York Times — her first article, and, in her words, a rather small one — warning about the risks of unchecked algorithmic power in targeted campaigns.
“People thought I was exaggerating,” she said. But within a few years, the same algorithms that convinced young people to vote were being used to drive division, misinformation, and mass political manipulation. The technology had scaled, but few had considered how that growth would change its impact.
“The classic thing is the underestimation of changes from scale,” Tüfekçi said. In the case of LLMs, she argued that their scale, and how they integrate into daily life, should be the primary considerations, rather than the questions “Is it better? Is it smarter? Does it play chess?”
It is not the raw intelligence of LLMs that matters. “If they do better on the SAT, that is not the point,” she said. Though Artificial General Intelligence and superintelligence that can outperform humans in every task have not yet been achieved, models that already exist could be considered “Artificial Good-enough Intelligences,” in her words. These are models that may not be as smart as a human, but are cheap, scalable, and widely available. These characteristics, Tüfekçi argued, are what drive change.
LLMs are already capable of making many processes faster and more efficient, in both good ways and bad. When the printing press was invented, the Catholic Church saw it as a more efficient way to print indulgences and generate profits. At the same time, the mass production of texts also fueled the Protestant Reformation, leading to upheaval and conflict that reshaped Europe. Technologies often disrupt existing power structures in unpredictable and, at times, destabilizing ways.
“There is a complicated Jenga in this world. All sorts of things are connected, and if we move one thing, all sorts of things are going to be broken,” Tüfekçi said. There are things in society whose function depends on their difficulty ― like friction holding up a table. She gave the example of job applications, where the friction lies in the time required to complete them. This, she suggested, acts as a natural filter, ensuring a level of sincerity, or at least investment, in the process. Now, it is easy to build an LLM pipeline to mass-produce custom-tailored cover letters. It is also easy to build an LLM pipeline to mass-screen the resulting pile of indistinguishable, polished applications. The friction of the cover letter disappears amid an arms race of automation.
Given these societal implications, there is mounting pressure on governments to do something about generative AI ― to regulate it, to restrict it, to control it. But Tüfekçi contended that this is the wrong approach. The U.S. closely guarded the secret of nuclear weapons, and yet they still proliferated. And while bombs require technical knowledge and enrichment facilities, it is relatively simple to copy an LLM given the code and the model weights. Trying to lock it away is futile. “The idea that this is going to be your tool and your tool alone,” she said, “is the most naïve thought in all of history.” Regulation through government or financial means is unlikely to work in the long run. Instead, she argued, the focus should be on managing the shift rather than resisting it.
“I’m not nostalgic for the past,” she clarified in response to a question about whether she would rather return to the way things were. She argued that even if that were possible, new technologies bring real benefits: they enhance access to information, improve efficiency, and unlock new creative possibilities. At their best, they empower individuals, amplify voices, and drive innovation in ways that were previously unimaginable. But technological transitions, she warned, “can be incredibly turbulent.”
So what happens next? Tüfekçi didn’t claim to know. “I don’t have crystal balls,” she said. “But we can look at this and say, ‘this is not the only thing that is going to happen.’” She noted that the ripple effects will go far beyond what even the most sophisticated forecasting models can predict. The true impact of LLMs will not be measured by benchmarks or competition wins. It will be measured by how invisibly — how fundamentally — they weave themselves into the fabric of society.
And by the time society notices, Tüfekçi argued, they will already be the default.