Three signals to the future 010
Welcome to the tenth edition of "Three Signals to the Future", a newsletter where I share resources that I find useful and thought-provoking. Let's dive into the latest discoveries.
I haven’t done this in a while, dear reader. While I am struggling to finish some essays (and actively looking for work, which is honestly pretty tiring in the current market), I thought I would share some resources that are top of mind for me. And since AI is all the rage now and everyone is talking about it, it makes sense to address the subject here too. The caveat, of course, is that I will share only critical sources. Not that these conversations don’t exist elsewhere, but since the algorithms across social media seem to favor pro-AI discussions and the hype they generate, I like to take a step back and examine things more critically. So here are some resources that you hopefully have not stumbled upon yet, or that you are simply reminded to save, read, or share.
On plastic burnout
In this post, Jim Mott critically analyzes the "ChatGPT barbie doll trend", in which people create AI images of themselves as action figures. He argues the trend represents more than nostalgia: it signals a concerning self-commodification in which people cheerfully participate in their own reduction to purchasable products with job-related accessories. Drawing on Byung-Chul Han's "Burnout Society" and Foucault's concept of "homo œconomicus", Mott suggests the trend exemplifies how we've internalized market capitalism's logic of self-exploitation while mistaking it for creative freedom. He advocates resisting passive AI consumption by demanding transparency in AI systems, supporting interpretability efforts like Anthropic's research, and embracing solutions that let users modify algorithm parameters. This, he argues, would transform our relationship with AI from that of passive consumers to co-productive agents with genuine control and creative autonomy.
Ultimately this trend has found itself a home on LinkedIn precisely because it so neatly fits with the ethos of the platform: “Here’s me and the tools of my trade: pre-packaged, entirely confected, mass-produced, and ready for purchase!”
▶︎ Plastic Burnout: Self-exploitation and the AI Selfie - Jim Mott
On AI and Democracy
Gina Neff critically examines the political dimensions of AI technologies and their implications for democratic societies. She argues that AI systems inherently embody politics through their centralizing tendencies, serving as powerful technologies of control that concentrate power in a few corporate hands, such as Amazon Web Services. Neff highlights how AI fuels economic-growth ideologies that primarily benefit Western economies while exploiting labor in the Global South, creating deeply uneven playing fields. She criticizes how AI development prioritizes efficiency over accountability, with companies positioning themselves as humanity's only protection against AI risks while resisting democratic oversight. Drawing on Crawford's analysis linking AI to fascist control mechanisms and Schaake's concept of a "tech coup", Neff warns that the growing infrastructure for AI lacks transparency and public accountability.
Such a view of AI as a growth engine does what I call in my next book futuring work, activities and actions that leaders and practitioners put in motion to shape how new technologies might be used. Futuring work helps people both see what AI could be for and create new ways to position themselves and their decisions in relation to changes from technology.
▶︎ Can democracy survive AI? - Gina Neff
On what will people do
This article by Daniel Susskind examines what human work might persist in a world where AI can perform all economically useful tasks more productively than humans. Rather than suggesting all human work will disappear, Susskind identifies three persistent limits to complete automation: "general equilibrium limits" (where humans retain comparative advantage in certain tasks even if AI has absolute advantage); "preference limits" (where humans prefer goods and services created by other humans for aesthetic, achievement, or empathy reasons); and "moral limits" (where moral reasoning necessitates human judgment in certain domains). The paper critically evaluates the robustness of these limits, noting that wage levels for human work could still decline significantly, preferences could evolve as AI capabilities increase, and moral intuitions might be tested by increasingly capable AI systems.
This is a consequence of perhaps the most important development in the field of AI in the last forty years, what I call the ‘Pragmatist Revolution’: a shift from building systems that copy some aspect of human beings acting intelligently—their thinking processes, the reasoning they engaged in, even their anatomy—to building systems that perform tasks in fundamentally different ways to human beings.