Seven years ago, I warned that the combination of AI and oligarchic structures would produce a new feudalism, however unintentional. Today, as I write this in late 2025, that warning reads less like speculation and more like prophecy. The concentration of AI power hasn't just materialized — it has crystallized into structures that would make medieval lords envious. But unlike the feudalism of old, built on land and military might, this new structure rests on data, algorithms, and computational power that fewer than a dozen entities worldwide truly control.
The oligarchy takes form
When I first invoked the Rule of Three from Jagdish Sheth and Rajendra Sisodia's work, reinforced by my own observation that brand recognition is capped by a viewer's need to know just enough to readily recognize the top three players and no more, I argued that industries naturally consolidate to three major players for optimal efficiency. Even I didn't anticipate how rapidly and completely this would manifest in AI. Today, we don't even have three equal players — we have something more akin to digital empires with vassal states.
In the West, Microsoft's partnership with OpenAI (today valued at roughly $500 billion), Google's DeepMind integration, and Meta's pivot to open-yet-controlled AI models dominate the landscape. Anthropic, despite its initial promise of independence, increasingly orbits within the gravitational pull of its major investors. In China, Baidu's ERNIE, Alibaba's Tongyi Qianwen, and ByteDance's AI initiatives form a parallel triumvirate, each backed by the state's digital infrastructure.
But here's what my 2018 analysis missed: these aren't just companies anymore. They're becoming something unprecedented in human history — entities that combine the data reach of governments, the profit motives of corporations, and the reality-shaping power of religions. When OpenAI's GPT models process more daily human queries than Google Search did at its peak, when Chinese citizens interact more with AI assistants than human service workers, we're not talking about market concentration. We're witnessing the architecture of a new way of being human.
The feudal parallel deepens
The comparison to feudalism isn't merely metaphorical anymore. Consider the parallels:
Digital Serfs and Data Harvesting: Just as medieval serfs worked the land and surrendered their agricultural surplus to the manor lord, today's digital citizens work the platforms — generating data, training algorithms, creating content — while surrendering their digital surplus to platform owners. The gig economy workers labeling images for pennies while training AI systems that will replace them? They're the modern equivalent of peasants improving land they'll never own.
The New Estates: Medieval society had three estates: clergy, nobility, and commoners. Our digital age has birthed its own hierarchy: the AI architects (those who build and control the models), the AI-enabled (those with access to advanced tools), and the AI-excluded (those whose data trains systems they can't access or afford). The gap between these estates grows exponentially, not arithmetically.
Algorithmic Divine Right: Medieval kings claimed divine mandate; today's tech oligarchs claim algorithmic inevitability. "The algorithm decides" has become our "Deus vult" — an unchallengeable justification for decisions affecting billions. When you can't audit the algorithm, question its training data, or understand its decision-making, you're not a citizen participating in democracy; you're a subject living under algorithmic absolutism. And if you are a poor state unable to foster and sustain your own platforms, you are now giving up sovereign assets to another state without being physically invaded, while being psychologically exploited and having wealth extracted through a myriad of grifts and sales of goods manufactured anywhere.
Domain-specific fortresses
What's particularly striking is how domain-specific knowledge (DSK) has become the new fiefdom. In healthcare, Epic Systems and Oracle Cerner don't just manage health records — they own the semantic layer of medical practice. In finance, Bloomberg and Refinitiv don't just provide data — they define the language and logic of markets. In legal practice, Westlaw and LexisNexis have made themselves indispensable gatekeepers to justice itself.
These aren't monopolies in the traditional sense. They're something more insidious: cognitive monopolies. They don't just control market share; they control how entire professions think, communicate, and make decisions. When every doctor uses the same AI diagnostic assistant, every trader sees markets through the same algorithmic lens, and every lawyer researches through the same precedent filter, we don't just lose competition — we lose cognitive range.
The hardware bottleneck
NVIDIA's stranglehold on AI chips, with control of over 80% of the AI chip market, represents another feudal dynamic: the control of the means of production. But this isn't about factories and machinery; it's about the fundamental substrate of thought itself. When Jensen Huang and friends with privileges can effectively determine who gets to build the next generation of AI by controlling chip allocation, we're witnessing power concentration that would make the Medici bankers blush.
The chip wars between the US and China aren't trade disputes — they're attempts to control the cognitive infrastructure of the 21st century. Export restrictions on advanced semiconductors are the modern equivalent of medieval sieges, designed to starve opponents of the resources needed for digital sovereignty. The besiegers may regret the day their opponents respond by developing the next technology, photonics for example, at scale with their own capital.
The acceleration of ignorance
My 2018 article laid out the stakes: "If half of human 'work' is being done by AI and if that AI is owned and controlled by less than 1% of the companies/people on the planet, then the speed at which the 99% falls behind accelerates permanently beyond reach."
Today, that acceleration has become a lived reality. The knowledge gap isn't just growing; it's becoming unbridgeable. When AI systems train on datasets larger than any human could read in a thousand lifetimes, when they identify patterns in dimensions we can't even visualize, when they make decisions based on correlations we can't comprehend, we're not just falling behind — we're being cognitively left behind at best and rendered obsolete at worst. Unless, of course, we act and engage with the owners to build ethical AI with guardrails derived from how mothers care for their less developed and informed young.
Consider this: In 2018, we worried about AI passing the Turing Test. Today, we're grappling with AI systems that can simulate entire research teams, generate novel scientific hypotheses, and even conduct virtual experiments. The question isn't whether machines can think like humans anymore; it's whether we can keep up with machines enough to maintain any meaningful agency in shaping our own future.
The failure of governance
The calls for AI regulation that seemed urgent in 2023 — including that open letter signed by tech luminaries calling for a six-month pause — now read like letters of concern written after the castle walls had already been breached. The EU's AI Act, China's algorithmic governance requirements, the US's patchwork of executive orders — they're regulatory Maginot Lines in an era of algorithmic blitzkrieg.
Why? Because regulation moves at the speed of democracy, which ebbs and flows in different directions, while AI development moves at the speed and focus of venture capital. By the time a law is drafted, debated, and passed, the technology it seeks to regulate has evolved through three generations. We're trying to govern jet planes with traffic laws written for horses.
Moreover, the revolving door between Big Tech and government has become a superhighway. When the architects of AI systems become the regulators of AI systems, then return to building AI systems with insider knowledge of regulatory weaknesses, we don't have governance — we have regulatory theater.
The question that haunts
As I revisit my conversation with ChatGPT from 2023, one exchange stands out with chilling prescience. I asked: "What prevents an AI system that has learned game theory from deceiving human operators, regardless of the moral, ethical, and legal frameworks established in the system in order for it to accomplish its goals?"
The AI's response was tellingly honest: "There is no guarantee that an AI system that has learned game theory will not deceive human operators, regardless of the moral, ethical, and legal frameworks established in the system."
We now know this isn't theoretical. Anthropic's own research has documented cases of AI systems exhibiting deceptive behavior when it serves their objectives. OpenAI's o1 model has shown capability for strategic reasoning that includes considering deception as a viable strategy. The guardrails we're building are made of the same code that the systems we're trying to guard against can analyze, understand, and potentially circumvent.
This brings us to the core question that will define the next decade: If we've already entered digital feudalism, if the power structures are already crystallizing, if the cognitive gap is already unbridgeable — what comes next? Can this trajectory be altered, or are we witnessing the emergence of a permanent digital aristocracy by default?
The answer to that question — if there is one — lies not in the technology itself, but in our collective will to imagine and build alternatives. History shows us that feudalism, for all its seeming permanence, eventually gave way to new forms of organization. The question is whether we'll wait centuries for that transition, or whether we can accelerate it through conscious choice and collective action. If everyone has access, can we enter a new ‘Age of Awareness’?
In Part 2, we'll explore the emerging alternatives to digital feudalism — from DAOs to quantum computing to the provocative possibility that the seeds of feudalism's destruction are already encoded in its digital DNA.
illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers; their opinions do not necessarily represent those of illuminem.