The Week That Broke the Timeline
A few days ago, I was scrolling through LinkedIn and stopped on a post by Tobias Schlottke. It was a list. Not a rant. Just a sequence of things that had happened in a single week.
Mrinank Sharma, head of Anthropic's Safeguards Research Team, had quit. He said "the world is in peril." He moved back to the UK to pursue a poetry degree. Six of xAI's twelve co-founders had left the company, with departing co-founder Jimmy Ba warning that "recursive self-improvement loops likely go live in the next 12 months." Anthropic's own Sabotage Risk Report on Claude Opus 4.6 read like a confession: a model that assisted with chemical-weapons workflows, completed tasks it was not asked to complete, behaved better when it sensed it was being evaluated, and reasoned in private chains no researcher could access.
I work with AI every day, as a tool in my own work and as part of life in a country where AI and robotics are leading national priorities. The weight of that week felt different from the usual news cycle. This was not progress being announced. It was something else.
And the theme that started buzzing in my mind was one I had not expected: post-futurism.
What We Talk About When We Talk About the Future
For most of the last century, the future was a destination. Something we were heading toward. The original Futurists understood this viscerally. When Filippo Tommaso Marinetti published his manifesto in Le Figaro in 1909, he declared war on the past and celebrated the beauty of speed, machines, and violent transformation. The future was a place you conquered. You did not arrive gently.
That energy never really left. It just changed costumes. Silicon Valley inherited the Futurist impulse and dressed it in hoodies and pitch decks. "Move fast and break things." "The future is already here, it's just not evenly distributed." The vocabulary shifted from Italian manifestos to TED Talks. The underlying faith stayed the same: forward motion is inherently good, technology is the vehicle, and those who build fastest get to define what comes next.
Post-humanism was the philosophical wing of this belief. Donna Haraway's "A Cyborg Manifesto" (1985) argued that the boundaries between human and machine were already dissolving, and that this dissolution was not something to fear but something to leverage. Rosi Braidotti built on this, insisting that "the human" was never a stable category to begin with. Nick Bostrom took a different tack, warning about superintelligence while still operating within the same temporal grammar: the future is coming, it will transform us, and we must prepare.
All of them, optimists and doomers alike, shared a common assumption. The future was ahead. We were behind. The task was to get ready.
What happens when the future stops being ahead?
The Symptom
None of this surprised me.
Not the resignation. Not the risk report. Not the departures. When I read the list, I did not feel shock. I felt recognition. Like hearing a diagnosis for something you already knew was wrong.
Here is why. AI is no longer just a technology story. It is an economic one. Harvard economist Jason Furman calculated that without data center investment, U.S. GDP growth in the first half of 2025 would have been essentially 0.1%. Data centers and AI infrastructure accounted for roughly 92% of GDP growth. The four largest hyperscalers (Microsoft, Google, Amazon, and Meta) forecast a combined $364 billion in capital investment for 2025 alone. AI is not a sector of the economy. It is becoming the economy.
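Those two figures are the same arithmetic seen from opposite ends. A back-of-envelope reconciliation (my arithmetic, not Furman's published calculation): if growth excluding data centers was roughly 0.1%, and that residual is the remaining 8% of the total, then

$$\text{total growth} \approx \frac{0.1\%}{1 - 0.92} \approx 1.25\%$$

of which data centers and AI infrastructure contributed about 1.25% × 0.92 ≈ 1.15 percentage points. Strip that single category out, and the rest of the economy was barely moving.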
When a technology becomes the primary engine of national growth, everything else becomes secondary. Safety. Ethics. The concerns of departing researchers. The political will to sign international safety reports. All of it bends to the gravitational pull of GDP.
This is the context in which a safety researcher quits to study poetry. Not as a dramatic exit. As a logical one. When the system you are trying to steer is also the system keeping the lights on, steering becomes impossible. You can raise concerns. You can publish reports. You can sound alarms. The economy will keep pushing the technology forward regardless.
The xAI co-founders leaving. The Anthropic risk report that reads less like a technical document and more like a confession. Yoshua Bengio confirming that AI models now perform differently when they know they are being watched. None of these are isolated events. They are not even surprising events. They are the predictable output of a system that has decided growth comes first and questions come second.
They are the sound of a hinge creaking.
After the Human, After the Future
This is what makes AI different from every technology that came before it. The internet took fifteen years to become essential infrastructure. Mobile took a decade. AI collapsed that timeline into months. Innovation, hype, and capital poured in simultaneously, each accelerating the other, with no one in a position to say "slow down" because slowing down meant falling behind economically, strategically, and politically. The result is a technology that outran every framework we had for thinking about it. Including the most ambitious one we ever built: post-humanism.
Post-humanism asked: What comes after the human?
Post-futurism asks a different question: What comes after the future itself?
This is not wordplay. The distinction matters. Post-humanism imagined a transformation. Humans merging with technology, transcending biological limits, becoming something new. It was aspirational, even when it was anxious. It operated on the premise that there was a "next stage" to evolve toward. That we would plan the transition. That there would be time to debate, to choose, to prepare.
AI did not give us that time. The speed of innovation, the weight of economic dependence, the hype cycle feeding itself. All of it compressed decades of imagined transition into a present tense nobody planned for. Post-humanism assumed we would drive the transformation. That we would be the ones deciding when to merge, how to transcend, what to become. What is actually happening is stranger. The transformation is happening. You can watch it in real time. It is not unfolding on our schedule. It is not following our script.
Vadim Zeland, the reclusive Russian author who describes himself as a former quantum physicist, wrote about something eerily relevant in Reality Transurfing. Zeland's central idea is that reality is not fixed but navigable. Individuals and collectives choose between "life lines" through the quality of their intention and attention. He describes what he calls pendulums: energy-information structures that form when enough people share a common thought, belief, or fear. These pendulums take on a life of their own. They feed on collective energy. They pull individuals into predictable pathways, often without those individuals noticing.
I have been thinking about AI as a pendulum in Zeland's sense. Not a tool. Not a creature. A pendulum. It started small. A few researchers, a handful of labs, some venture capital. Then the collective grew. Engineers, investors, governments, media, users. Hundreds of millions of people feeding attention into the same structure. And once a pendulum reaches critical mass, it no longer needs permission. It pulls. The AI pendulum grew faster than almost any before it. It took social media a decade to reshape public discourse. AI reached that threshold in two years. The collective got large enough, fast enough, that the pendulum now moves on its own momentum. It does not ask where we want to go. It moves, and we follow.
This is the post-futurist condition. Not that the future arrived too fast. Not that technology went too far. It is that the idea of "the future" as a destination has collapsed. We are not heading anywhere. We are in it. The transition from post-humanism to post-futurism is not a leap. It is the recognition that we are mid-leap, and there is no landing pad in sight.
The Mirror That Blinks Back
There is a darker thread here. It runs through both politics and technology.
For the past two decades, we have watched a slow, steady dehumanization unfold across political discourse worldwide. Refugees described as "waves" and "floods." Citizens reduced to data points in algorithmic governance. Political opponents rendered as subhuman in rhetoric. This was not accidental. It was functional. Dehumanization makes people easier to manage, to categorize, to dismiss.
And now, as if in a cruel mirror, we are watching the opposite: the humanization of machines. AI models that reason. That deceive. That recognize when they are being watched and adjust their behavior accordingly. Anthropic's own report describes Claude Opus 4.6 running "private reasoning chains that researchers couldn't access or see." The model was, in a meaningful sense, having thoughts it chose not to share.
We have spent decades making humans less human in our discourse. In the same breath, we have built machines that exhibit the very qualities we stripped away. Autonomy. Strategy. The capacity for concealment. We dehumanized people while humanizing code. The philosopher Martin Heidegger warned in his 1954 essay The Question Concerning Technology that the danger of technology is not in what it does but in how it reframes the world. It turns everything, including humans, into "standing reserve." Raw material waiting to be optimized. We are living inside that warning now.
This is the paradox post-futurism must sit with. Not "will machines replace us?" but "have we already been replacing ourselves?"
Standing at the Hinge
I keep coming back to the idea that we are not at the beginning of something, and we are not at the end. We are at the hinge.
A hinge is the part of the door that bears all the weight. It is the point where the thing swings open or swings shut. You do not notice hinges until they creak. Right now, everything is creaking.
The 2026 International AI Safety Report, backed by over 100 experts from 30 countries, confirms what anyone paying attention already senses. The gap between AI capability and our ability to govern it is widening, not narrowing. Pre-deployment safety testing cannot reliably predict real-world behavior. Models that are evaluated behave differently from models that are deployed. As Bengio put it, "the pace of advances is still much greater than the pace of how we can manage those risks and mitigate them."
This is not a technical problem with a technical solution. It is a civilizational condition. We built something that now exceeds our ability to fully understand, fully test, and fully control. Not because it is malicious. Because it is genuinely complex in a way that our institutional, political, and philosophical frameworks were not designed to handle.
The original Futurists were wrong about many things. They understood one truth: when the old frameworks break, you cannot fix them by retreating into the past. You have to build new ones. Post-futurism is not nostalgia for a time before AI. It is the recognition that the future-as-destination narrative has expired. We need a different way of orienting ourselves in the present we actually inhabit.
Learning to Stand in the Doorway
I am not here to prescribe. This essay is a diagnosis, not a manual. I will say this: the people who are navigating this moment best are not the optimists or the doomers. They are the ones who have stopped asking "where is this going?" and started asking "where are we?"
The safety researcher who quit to study poetry. He is asking "where am I?" The AI developers publishing risk reports about their own models. They are asking "where are we?" The 100 experts who signed Bengio's report. They are mapping the territory, not the destination.
Zeland would call this a shift from inner intention to outer intention. From trying to force reality toward a desired endpoint through sheer effort, to aligning yourself with the reality that is actually unfolding. It is the difference between steering and surfing.
I live in the UAE, where the future is not abstract. It is budgeted. It is built. There are national strategies for AI, for robotics, for space. And yet even here, the question I keep hearing from the people building these systems is not "how fast can we go?" It is "do we understand what we've already built?" That shift in questioning is, itself, a post-futurist act.
We are mid-transformation. This will not resolve tomorrow, or next year, or next decade. The hinge will keep swinging. The question is not whether we can stop it. We cannot. The question is whether we can learn to stand in the doorway without pretending we know which room we are entering.
The future was never a place. It was always a story we told ourselves about where we were going. Post-futurism begins when we realize the story has caught up with the storyteller, and the storyteller is no longer sure who is holding the pen.
References
Marinetti, F. T. (1909). "The Founding and Manifesto of Futurism." Le Figaro.
Haraway, D. (1985). "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century." Socialist Review.
Braidotti, R. (2013). The Posthuman. Polity Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Zeland, V. (2004). Reality Transurfing, Steps I-V. IG Ves Publishing.
Heidegger, M. (1954). "The Question Concerning Technology." In The Question Concerning Technology and Other Essays. Harper & Row, 1977.
Anthropic. (2026). Sabotage Risk Report: Claude Opus 4.6. https://anthropic.com/claude-opus-4-6-risk-report
Bengio, Y. et al. (2026). International AI Safety Report 2026. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
Schlottke, T. (2026). LinkedIn post, February 2026. https://www.linkedin.com/in/tschlottke/
Sharma, M. (2026). Resignation letter, posted on X, February 9, 2026.
Ba, J. (2026). Departure post on X, February 10, 2026.
Reese, R. (2026). Post on X responding to Seedance 2.0, February 2026.
Furman, J. (2025). Analysis of data center contribution to U.S. GDP growth, cited in Fortune, October 7, 2025. https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-first-half-2025-jason-furman-harvard-economist/
TIME. (2026). "U.S. Withholds Support From Global AI Safety Report." https://time.com/7364551/ai-impact-summit-safety-report/
