Three Signals to the Future 009
Welcome to the ninth edition of "Three Signals to the Future", a newsletter where I share resources that I find useful and thought-provoking. Let's dive into the latest discoveries.
We live in a digital sphere where every day there’s a new AI out there, promising us the world. And I don’t know about you, but each and every one of them is at best underwhelming. Without even mentioning the toll that computation and hyperscale data centers inflict on an already burning planet, it often feels that we lack criticality towards these pseudo-messianic technologies. This edition brings faint signals from critical corners of the internet.
On the Malkovich effect
Henry Farrell examines the cultural impact of Large Language Models, arguing that rather than simply devouring information and producing chaotic outputs, LLMs increasingly reinforce cultural sameness. Drawing inspiration from Spike Jonze's Being John Malkovich, Farrell suggests that LLMs foster a recursive conformity in which unique ideas fade away and culture converges on widely held norms, marginalizing unconventional perspectives. This “Malkovich effect” creates a homogeneity that inhibits discovery and innovation, particularly in academic and scientific fields. Farrell argues that current AI design prioritizes centrality over diversity, risking a flattening of human culture into the familiar and predictable.
Such models will probably not subject human culture to the curse of recursion, in which noise feeds upon noise. Instead, they will parse human culture with a lossiness that skews, so that central aspects of that culture are accentuated, and sparser aspects disappear in translation. The thing about large models is that they tend to select for features that are common and against those that are counter, original, spare, strange.
▶︎ After software eats the world, what comes out the other end? - Henry Farrell
On fighting for our web
Molly White reflects on the disillusionment with today’s web, dominated as it is by tech giants, but insists that the web itself remains a powerful, open medium, shaped not by corporations but by its users. Recalling her early experiences with Neopets and GeoCities, where creativity flourished, Molly emphasizes that our frustration with the internet doesn’t come from the web itself, but from the giant monopolistic platforms that stifle innovation and centralize power. One big highlight in this talk transcript is her project Web3 is Going Just Great, which stands as proof that individuals can challenge tech narratives, reclaim online spaces, and build a web that aligns with our shared values. Molly encourages people to mobilize, experiment with new models, and support each other in reimagining a web filled with wonder, freedom, and creativity.
What they don’t seem to realize is that in doing so, by reducing the web only to the types of expression that can happen within their cramped boxes—where you can’t write more than 280 characters, or you can’t publish your cool JavaScript-based art project, or you can’t say the things that you want to say without getting de-boosted by the engagement maximization machine, or you can’t read what your friends are posting without the platform interjecting offensive troll posts or soulless AI-generated meme images—they’re creating a thirst for everything outside of those boxes.
▶︎ Fighting for our web - Molly White
On digital eugenics
Benjamin Riley critiques the techno-optimism of Anthropic CEO Dario Amodei’s vision of “Powerful AI,” which, in Amodei’s view, could ‘elevate’ human life, cure diseases, and sustain endless economic growth. Riley finds the vision’s blind faith in the “marginal returns of intelligence” troublingly reminiscent of eugenics: the pursuit of selectively enhancing “superior” traits to engineer societal outcomes, here in digital form. The concept of AI as a tool of utopia may seem fantastical, but Riley warns against ignoring the historical parallels with social engineering, arguing that this techno-optimism risks trivializing human agency and ethical considerations in favor of AI-led hierarchies. Riley urges people to remember the dark side of these “intelligence” pursuits and cautions against placing uncritical faith in technology to shape the future.
“But Ben, surely we don’t have the same ethical obligations toward AI that we do other humans.” Maybe not. Here’s the thing, though—if we’re going to imagine something called Powerful AI that can write epic novels, cure cancer, and make us live to infinity and beyond, might we also imagine that said AI could develop some sense of itself, some form of consciousness? And, granting that possibility, might we also want to imagine whether creating a mass form of slave labor is something worth celebrating in 15,000-word “techno optimist” manifestos?