L.S. Lowry - Coming from the Mill (1930)

ESSAY 1 — APRIL 2026

Privacy, Profit and Power

In 2020, I argued that human behaviour was becoming an optimisation problem. Privacy dies twice, at the hands of two different actors with two different logics.

"Power games are embedded in and emboldened by profit games. While profit games justify data collection, power games justify data surveillance."


"Just as the ultimate goal for corporations is optimising individual human behaviour to maximise profits, the nation state optimises aggregate behaviour to achieve good citizenship."

Profit optimisation kills privacy instrumentally: data extraction to target ads and maximise engagement left the information environment degraded. Privacy had to die so the machine could learn your preferences.

“Companies optimise resources based on the rules of the market. If the goal is to maximise profit, high-quality data will be hoarded to optimise advertisement conversion. If the goal is to maximise ad conversion, click-bait will be pushed to maximise user stickiness and engagement. If the goal is to build a long-term competitive edge, data is captured centrally to train AI systems to stay one step ahead of rivals.”

“Embracing the thinking that the only way for the Internet to be open was to be free, businesses had to create value elsewhere. The result is that the Internet is neither free, nor open.” [...] “Even when marketed as free, subsidising true cost with advertising has made the upfront cost of general knowledge high.”

Power optimisation kills privacy structurally: governments needed what platforms had, so they absorbed it. National security does not need preferences. It needs compliance.

“How to guarantee that you will find a needle in a haystack? Collect the whole haystack.”

Identity Reboot was published before generative AI entered public consciousness. Meta was still called Facebook, and ChatGPT and Claude would not arrive for several years. Privacy, I argued, preceded the human ability to reason, and with the advance of AI, would be the canary in the coal mine for shrinking human choice and agency. Identity Reboot was not supposed to be about artificial intelligence. Its sole focus was one question: what role does digital identity play in our societies? Instead of a narrow question, it turns out I had accidentally asked a big one, one weaving all the way from privacy to power, and ultimately illuminating the stakes of the AI race. Not for countries, but for the everyday person. Why we should care about why, how and where AI is developed, and how it is used. This is not confined to ethics. It is a question of political systems and power.



In 2010-2020, the private sector's ad-fuelled drive to extract profit from individuals wrapped the information environment of the social internet in a sticky, extractive layer. Meanwhile, seeking good citizenship, certain governments targeted the aggregate, casting a wide net via backdoors, sovereign infrastructure, alliances, co-opted commercial technology, and the domestic security apparatus. This is capacity for control. Yet power optimisation ran through privately controlled infrastructure, which forced states to negotiate access to their own capacity to act. Rather than breaking up big tech, which was the expectation in the 2010s, the United States government is absorbing it in 2026. Private technology infrastructure has become indispensable to state capacity itself.



The scope of control in 2010-2020 looks different from 2020-2030. In 2010-2020, knowing (social newsfeeds, algorithmic curation and amplification), interacting (messaging, payments), moving (geolocation, facial recognition) and even loving (dating apps) were digital signals of preferences and predictions. This erosion of privacy erodes information environments, human autonomy (the power to decide), and human agency (the power to do). In 2020-2030, AI assistants take the shared self out of social media, create a personalised informational black box, delegate authority to agents, and break through a previously unbreakable wall: connecting everything.


Meanwhile, the failure modes of the social media era are still with us: election interference, persuasion campaigns, identity theft, data breaches, cyber attacks, societal fragmentation, declining trust, and an ever more elusive shared truth. We know now, as we knew then, that this leads to democratic fragility.

“Social platforms do not welcome reputation-eroding disinformation, yet interactions drive profits. Governments do not wish to have citizens exposed to foreign influence campaigns, but the University of Oxford found evidence of organised social media manipulation campaigns in 70 countries in 2018 alone. Citizens speak out against disinformation, but six-in-ten news items shared on social media were not even read by those who reshared them. A healthy information ecosystem is critical for democratic societies to function.”


“In democracy, ‘kratos’ (power) of the ‘demos’ (people) is bolstered by privacy.”

By letting profit games persist, Western democracies allowed democracy to become a puppet show, argued historian Yuval Noah Harari. In 2026, liberal governments with shrinking mandates must navigate the upcoming AI transition period.



The social media era was propelled, and capped, by profit, while governments tagged along for the ride to reap national security benefits. AI, by contrast, is driven by governmental national security stakes: military, economic, and political capabilities all tally towards a changing balance of power.

“When it comes to artificial intelligence, China and the United States are arguably light years ahead of the other 195 countries. By extension, the ethical decisions that will matter most in the next few decades will orbit these two countries. Breaking this hegemony is squarely in the interest of the other 195 making up the world ranks. If this does not come to pass, only big countries and big platforms will have skin in the game. And those not in the game will be forced to play by another’s rules."


“AI will be the technology of domination in the 21st century. Those left behind could be exploited, or even conquered, by those who forge ahead. Nobody wants to stay behind.”

Themes which have come up time and time again are trust and control. Control to force trust versus trust in just control. 

“We cannot ensure the defence of the West if our allies grow dependent on the East,” said US Vice President Mike Pence at the 2019 Munich Security Conference. “The United States has also been very clear with our security partners on the threat posed by Huawei and other Chinese telecom companies, as Chinese law requires them to provide Beijing’s vast security apparatus with access to any data that touches their network or equipment. We must protect our critical telecom infrastructure, and America is calling on all our security partners to be vigilant and to reject any enterprise that would compromise the integrity of our communications technology or our national security systems.”


“As superpower dynamics change, dominant value systems change in parallel — as China’s companies increasingly export their products and services, its value system spreads.”


“The best way for democracies to stop the rise of digital authoritarianism is to prove that there is a better model for managing the Internet.”

In the book I sketched a scenario for 2025:

“Meanwhile, the last few years have seen heated public debate on the role of the social contract between governments and citizens. The differences between the value systems of the United States, the European Union and China have amplified. International political misalignment in data treatment has undermined international trust. Governmental intervention of mergers and acquisitions from a national security perspective has increased. Cyber warfare has become common. In the background, the global AI race is in full swing.”

This reads quite close to reality. What I did not expect at the time was the pendulum swinging the other way. By co-opting the AI labs, governments are opening themselves up to public insistence on responsibility, as well as a broadening scope of demands. Where the price of commercial centralisation is responsibility, the task of the government might be that, plus the provision of public goods.

“Maximising the gains of artificial intelligence has been overwhelmingly commercially focused, with societal implications developing largely unchecked. Many missed the gathering storm. Centralisation of data via centralised platforms masks a deeper trend: centralisation of artificial intelligence.”

The jagged frontier refers to the asymmetrical adoption of AI, also called diffusion. For Middle Powers, diffusion and sovereign AI infrastructure, such as data centres, dominate 2025-2030 national strategies. A clear-eyed observer would conclude that decentralised diffusion of benefits is paired with narrow, centralised capture of profits.



When thinking about AI power concentration, the closest historical parallel is British control of maritime trade in the 17th-19th centuries. Not because ships resemble algorithms, but because real power came from controlling infrastructure, not just having better technology. The key insight: while technological capabilities spread quickly, the systems determining who profits persist much longer. This creates a paradox where everyone's situation improves absolutely, but relative positions diverge sharply.



British dominance worked across three layers with different timescales.

At the surface (ship design, navigation, tactics), Britain's edge lasted 5-10 years. France reverse-engineered warships, the Dutch hired shipwrights, and Americans copied what they saw.

Real power, however, sat one layer deeper. The East India Company did not just build better ships. It controlled coal, owned shipyards, dominated shipping insurance, maintained fortified ports, and developed the financial instruments that made long-distance commerce practical. By 1803, the EIC employed 260,000 soldiers—twice Britain's regular army—operating as a quasi-sovereign state. Building comparable systems took competitors 30-50 years, requiring coordinated capital and political relationships across continents.

Deepest were ecosystem advantages, persisting over a century. London became the financial capital through self-reinforcing dynamics: capital attracted capital, expertise clustered, legal innovations reduced transaction costs. English frameworks for joint-stock companies became international standards. These outlasted Britain's naval supremacy by decades.

Between 1780 and 1860, global trade tripled and many nations grew economically. A merchant in Boston or Bombay could trade globally, but using British insurance and British bills of exchange, through British ports, under British legal precedents. Formal independence coexisted with deep economic dependence. By 1860, Britain controlled 20% of world manufacturing despite representing 2% of the global population—a tenfold concentration.

Britain's advantage eventually eroded, but it took a century, two world wars, and competing industrial powers. The technical edge became irrelevant by 1900; integration advantages persisted until the interwar period; ecosystem advantages until mid-century. Frontier AI follows the same pattern.

William Turner - The Battle of Trafalgar (1822)

Nations access AI on platform providers' terms, like 18th-century merchants accessing trade on British terms. More critically, nations today experience absolute gains while losing relative economic position. Model weights and training techniques on the frontier layer likely diffuse via open source. Cloud market share, integration depth, chip capacity, talent clustering, and regulatory authority on the integration and ecosystem layers likely compound. History suggests that technological frontier advantages fade quickly, integration advantages persist for decades, and ecosystem advantages last generations. 



I would like to say these warnings were heeded and the risk of centralised AI solved. That is not what happened. AI scaled through centralisation (training) and selectively decentralised (inference). Privacy receded into the background to make way for national security. We are heading for a transition period of international inequality and instability.



“If we believe AI will bring huge risks and huge benefits”, I wrote, “we need to understand what we can do now to improve the chances of reaping the benefits and avoiding the risks.” Decentralised collaboration as a response to centralised AI was an area that I was hopeful about, and spent the subsequent five years working on after publishing the book. In particular, open decentralised artificial intelligence models. 

“We need to collectively move away from the rationale that there are problems that only companies such as Facebook or Google can solve. The challenge then becomes to match problems looking for data with data looking for problems, and to do so at scale.” 

I maintain that the opportunity for open models to tackle ownerless, niche, big and shared problems persists, but advanced open AI models are not without danger. China overtook the United States in open models in 2025. It is now my position that competition and collaboration between the United States and China must be understood in parallel, i.e. asking what collaboration the competition permits, and that open source is one area of converging national security interests.


 

As China exports its value systems via open source, and the United States leads on the frontier, let us remember that a single point of power is a single point of failure. Politically, economically, but also morally. A great unknown remains how AI-run societies will treat outliers. 

"The upside or downside of optimisation of human behaviour hinges on the parameters for which it is optimised."

If a system decides on a perfect model citizen, it has the ability to identify those not fitting that description: “anomalies”. The Third Reich is an obvious worst case scenario, with the ethnic profiling of Uyghurs in Xinjiang (新疆) re-education camps a modern warning. We are not on the eve of 1939, you might reason. Perhaps you find safety in the majority. Such a line of thinking carries danger. Anyone anywhere translates to everybody everywhere. What you are really saying is: I am going to give up my rights because I do not think I will need them. Emphatically, privacy is not measured by the current rules of society. Privacy is temporal, measured by societal rules in the past, present and future. This is why it is important to overinvest: once lost, privacy cannot be recovered. The ultimate proxy for freedom is deceptively simple: can you opt out?

“Despite all the challenges that were pointed out, this is an optimistic book. Acknowledging that something is wrong is inherently optimistic. Starting an open dialogue means that there is still hope to create the right solution. Sticking your head in the sand, however, is the opposite. It means we have given up.”

It is 2026 at the time of writing: AI is here and it is probable that AGI is coming. The reason profit games were condoned in 2010-2020, the pursuit of AGI, still holds for 2020-2030. The result is that profit games have expanded in capacity and power games have grown in legitimacy.

"Artificial intelligence evokes a mythical, objective omnipotence, but it is backed by real-world forces of money, power and data."

The data race was always about the AI race, and the AI race is about power.

The next two essays in this three-part series revisit the information environment and human agency in Autonomy, Agency and AGI, and how identity treatment shapes political systems in Humans, Robots, and Identity.
