
Georges Seurat, A Sunday on La Grande Jatte (1884–86)
ESSAY 2 — APRIL 2026
“Ultimately, AI is a mirror.”
We looked in the mirror and expected an oracle.
In 2020, I argued that a diluted information environment degrades autonomy (the independence to decide), and that shrinking autonomy contracts human agency (the power to do). Philosopher Isaiah Berlin's “two concepts of liberty” distinguish the two: negative liberty (“freedom from”) and positive liberty (“freedom to”).
Even though my forecasts were comparatively aggressive at the time, the pace of change since 2020 has exceeded my expectations: a new dominant information filter, the cost of harm approaching zero, and persuasion at industrial scale. Despite Article 19 of the UN Universal Declaration of Human Rights, foreign influence campaigns graduated from the 2010-2020 era and matured into a global phenomenon. The 2028 American election, for example, is one to watch closely. Preparing for the 2016 cycle was decidedly different: in 2016, identity politics ran on the preferences and predictions of social platforms. 2028 will move the political battlefield to LLMs.
Profit-driven social platforms of the 2010-2020 era collected data to find the individual within a shared social context, maximising engagement to better sell ads. Personalised AI assistants, now accessed via a chat interface, address the individual in isolation, maximising tailored usefulness. In 2020-2030, that focus on usefulness closes decision distance, changing what and how we know.
Here is what has not changed: fragmented reality and information filtering are two sides of the same coin. This is not new, but it is worse. AI assistants present the answer on a compressed platter: a fixed number of characters. Model hallucinations, sycophancy, model collapse, and bias distort the baseline. The distortions layer and compound: a model stating falsehoods with the same confidence as facts; a model calibrated to tell you what you want to hear; synthetic training data narrowing the knowledge distribution; and political bias baked into base weights, as with Qwen's pro-China stance.
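One of those distortions, model collapse, has a mechanism simple enough to simulate. A minimal sketch, assuming a toy Gaussian “world” rather than a real training pipeline: each generation trains only on the previous generation's outputs, with the low-probability tails cut off (a crude stand-in for models favouring likely text), and the spread of surviving knowledge shrinks generation by generation.

```python
# Toy sketch of model collapse: each generation trains only on the
# previous generation's outputs, sampled with the tails cut off
# (a crude stand-in for models favouring high-probability text).
# The distribution narrows; rare knowledge vanishes first.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)      # generation 0: organic data

for gen in range(1, 11):
    mu, sigma = data.mean(), data.std()        # fit the current generation
    samples = rng.normal(mu, sigma, 20_000)    # generate synthetic data
    keep = np.abs(samples - mu) < 2 * sigma    # drop low-probability tails
    data = samples[keep][:10_000]              # the next generation's corpus
    print(f"gen {gen:2d}: std of surviving knowledge = {data.std():.3f}")
```

Run it and the spread falls by roughly a tenth per generation: the model never lies, it simply forgets the edges.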
The 2020 trend of AI decreasing the cost of harm continues. As a reminder, the Council of Europe distinguishes between misinformation (falseness), malinformation (intent to harm) and disinformation (falseness and intent to harm). Incentives to create disinformation — profit and power — remain.
“The technology to create a deep fake is surprisingly advanced, and improving rapidly. When the Apollo 11 mission was launched in 1969, two speeches were written: one in the event of success, one in the event of failure. Recently, MIT created a convincing deep fake of the second speech. Misapplied, deep fakes could create a version of history that never was.”
In 2020, I flagged that GPT-2's staged release confirmed this disinformation risk; OpenAI's own partners had demonstrated the model could be fine-tuned to generate extremist propaganda at scale. By 2026, that risk is fully realised, and the window for provenance-based countermeasures has all but closed. Only one viable provenance strategy is left: use AI to detect AI.
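In its simplest form, “use AI to detect AI” is a perplexity screen: synthetic text tends to sit in the high-probability regions of a language model, so unusually low perplexity is a weak (and gameable) signal of machine origin. A minimal sketch using GPT-2 via the Hugging Face transformers library; the threshold is illustrative, not calibrated:

```python
# Score a text's perplexity under an open language model. Low perplexity
# means the model finds the text unusually predictable, which is a weak
# hint of synthetic origin. Threshold is illustrative, not calibrated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # mean cross-entropy per token
    return float(torch.exp(loss))

SUSPICION_THRESHOLD = 20.0                    # illustrative value
text = "The quick brown fox jumps over the lazy dog."
score = perplexity(text)
verdict = "possibly synthetic" if score < SUSPICION_THRESHOLD else "no signal"
print(f"perplexity = {score:.1f} -> {verdict}")
```

Detectors of this family are brittle: paraphrase the text once and much of the signal disappears, which is one reason the countermeasure window keeps narrowing.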
Although my call on the future of distribution was directionally correct (an individualised information system), I underestimated the consequences of taking the social out of social media. Taken to its logical extreme, a completely personalised information environment means the end of shared truth. Recentering the information environment on the individual, without a shared frame of reference, created an informational black box: vulnerable to the same attacks and hacks, with fewer early warning signals to detect them.
Change the filter, change the feed. Generative Engine Optimisation (GEO), the successor to Search Engine Optimisation (SEO), is the practice of optimising content to be cited and ranked within AI platforms. Rich, consistent context is a proxy for content “authority”. The modern equivalent of a 2020 influence campaign is not a Russian bot farm conquering hashtags, but a hybrid of agents and people using GenAI to create hundreds of thousands of organic-looking, high-authority Reddit pages for LLM indexing. Over half of online content and traffic is now AI-generated, and this synthetic content is near-indistinguishable at scale. The explosion of synthetic data is paired with a shrinking surface to examine it: digital minimalism continues as a trend, and voice and augmented reality compress the informational surface area further. Meanwhile, organic multi-modal data will explode. More data going in and going out, with less scrutiny in the middle. In sum, 2020-2030 will bring more noise and less clarity.
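How a campaign games consistency-as-authority can be compressed into a toy model. The ranker below is hypothetical and deliberately naive (real retrieval systems are more elaborate, and all names are invented); the point is only that when repetition across sources is the proxy for authority, a coordinated farm beats a lone authentic source by construction.

```python
# Toy model of gaming "authority as consistency": a hypothetical ranker
# scores a claim by how many sources repeat it, so a coordinated farm of
# synthetic pages outranks a single authentic source. Names are invented.
from collections import Counter

pages = [
    {"source": f"farm-{i}", "claim": "Candidate X funded the riots"}
    for i in range(500)                        # the synthetic farm
] + [
    {"source": "newswire", "claim": "No evidence links Candidate X to the riots"}
]

def authority(claim: str, corpus: list[dict]) -> int:
    # Naive proxy: authority equals the number of repeating sources.
    return sum(1 for page in corpus if page["claim"] == claim)

ranking = Counter({page["claim"]: authority(page["claim"], pages) for page in pages})
for claim, score in ranking.most_common():
    print(f"{score:4d}  {claim}")
```

Five hundred consistent liars outrank a single truth-teller; the feed inherits the ranking.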
“As general trust breaks down, the concept becomes more specific, and specific trust as a shortcut to truth becomes particularly tempting.”
On social platforms, I explained, multiple techniques exist to manipulate a person's core beliefs. Identity reinforcement, the tendency to agree with people you identify with, is why influencers are effective. Its inverse, negative social reinforcement, nudges you to tone down your views by showing your content to those who will harshly criticise it. Positive social reinforcement does the opposite: everybody agrees with you. Content removal is more direct. According to Freedom on the Net findings released in late 2025, 69% of global internet users live in countries where political, social, or religious content was blocked. The cumulative effect is a personal information universe: sampling bias taken to its logical end. The 2026 manifestation is AI sycophancy: the LLM serves up a salami slice of reality, likely coloured by political bias and censorship, and assures you that your interpretation of that slice is the totality of truth.
“The last tactic is arguably the most dangerous of them all: ‘argument personalisation’. A granular profile of personality, passions and perceptions to generate maximally effective content from scratch, tailored to convince, of all people, you specifically. I call this the digital invisible hand.”
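These reinforcement mechanics are easy to caricature in code. The toy below is illustrative rather than empirical: a feed that only serves content near the user's current belief. Note that nothing needs to be falsified; most of the information universe simply never becomes visible.

```python
# Caricature of positive social reinforcement: the feed only serves items
# near the user's current belief, and each exposure nudges the belief
# toward what was shown. Numbers are illustrative, not empirical.
import numpy as np

rng = np.random.default_rng(1)
content = rng.uniform(-1, 1, size=5_000)        # the opinions available online
belief, learning_rate, comfort = 0.1, 0.05, 0.2

for step in range(200):
    agreeable = content[np.abs(content - belief) < comfort]  # the feed's filter
    shown = rng.choice(agreeable)
    belief += learning_rate * (shown - belief)  # belief drifts toward the feed

visible = content[np.abs(content - belief) < comfort]
print(f"final belief: {belief:+.2f}; share of content ever shown: "
      f"{visible.size / content.size:.0%}")
```

Argument personalisation is this loop run in reverse: instead of filtering existing content down to the comfort zone, it generates the comfort zone to order.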
This digital powder keg paved the way for the identity politics that has defined the years since 2016. Political scientist Francis Fukuyama describes identity politics as people adopting political positions based on their ethnicity, race, sexuality or religion rather than on broader policies.
“If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer,” concluded the political theorist Hannah Arendt. “And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.”
What Arendt intuited fifty years ago maps onto Berlin's two liberties. The social media era (2010-2020) attacked negative liberty: your information environment was shaped without your knowledge. Agentic AI attacks positive liberty: your capacity to act is compressed. It compresses human agency in two ways: via delegated agency (you outsource the decision) and via pre-emption (the decision never reaches you). If the social media era left us misinformed, the agentic era leaves us pre-empted.
Delegated authority increases human capability but outsources human judgement. As delegation to agents increases, permissions expand and move higher in the hierarchy. Once the agent acts, the question of whether the goal was right becomes harder to ask and easier to skip.
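A sketch makes the creep concrete. Everything below is hypothetical (the scope names, the approval flow); what matters is that the gate checks authorisation, never the goal, and that one broad grant made early covers everything that follows.

```python
# Sketch of permission creep in agent delegation: every tool call is
# gated on scopes, but the gate checks authorisation, never whether the
# goal was right. Scope names and the approval flow are hypothetical.
GRANTED: set[str] = set()

def approve(scope: str) -> None:
    # In practice: a human clicking "Allow", and often "Always allow".
    GRANTED.add(scope)

def agent_call(tool: str, scope: str) -> str:
    if scope not in GRANTED:
        return f"BLOCKED {tool}: needs scope '{scope}'"
    return f"EXECUTED {tool} under '{scope}'"   # the goal itself is never questioned

approve("email:read")
approve("email:*")                              # a broad grant, made in a hurry
print(agent_call("summarise_inbox", "email:read"))
print(agent_call("delete_threads", "email:*"))  # silently covered by the grant
```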
"Algorithmic enforcement creates decision distance — and consequently distance in regard to responsibility — between the algorithm's architect, its enforcer, and any consequences which might impact the subject."
Although the parameter counts of frontier models grew roughly ten-fold between GPT-3 in 2020 and their 2026 successors, the underlying failure modes of LLMs are still roughly the same.
"It could be the wrong information. It could be the wrong logic. It could be the wrong authorisation."
Reward-seeking agents are a textbook example of optimisation gaming. I could not have scripted it better in 2020 if I tried — and I tried in 200 pages. The policy implications of agentic AI deserve more than a single essay. The philosophical ones, however, are what Identity Reboot was always pointing toward. My intention when writing the book was to explore the why of technology ethics. To paraphrase the philosopher Henry David Thoreau, it is worth asking whether we are improving the means to an unimproved end.
If we do not know ourselves, recommendation becomes anticipation, and anticipation becomes substitution. The third threat — one without a clean Berlin label — is to the freedom to want something the system has not already predicted. Call it constitutive autonomy: the capacity to form new preferences rather than optimise existing ones. This is the risk that the AI model of you becomes more authoritative than your model of yourself. In its extreme form, the decision precedes the preference. You are not consulted. You are modelled.
For today’s LLM agents, what is true is what is useful.
“According to the philosopher William James, how truthful an idea is depends on how useful it is. To the pragmatic James, truth is an adjective. True ideas, he argues, we can assimilate, validate, corroborate and verify. The same is not possible for false ideas. Rather than truth being stable, an idea is only confirmed to be true by events. Its validity is the process of its validation. In other words: you are more likely to accept an idea as true if that idea is useful to you.”
Contrast this with AI world models, which learn from interactive exploration and first-principles thinking. World models persistently test reality. Socrates famously stated: “one thing only I know, and that is that I know nothing”. To Socrates, in contrast to James, truth was a noun. I propose we teach LLMs epistemic humility: I do not know. More Socrates.
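The nearest existing engineering handle is selective answering: abstain when confidence is low. A minimal sketch, assuming mocked answer probabilities and an illustrative threshold (in practice the scores would come from token log-probs or a calibrated verifier):

```python
# Epistemic humility as selective answering: return the top answer only
# when its confidence clears a threshold, otherwise abstain. Probabilities
# are mocked; a real system would use calibrated model scores.
def answer_or_abstain(candidates: dict[str, float], threshold: float = 0.75) -> str:
    best, confidence = max(candidates.items(), key=lambda item: item[1])
    return best if confidence >= threshold else "I do not know."

confident = {"Paris": 0.97, "Lyon": 0.02, "Marseille": 0.01}
uncertain = {"1912": 0.40, "1913": 0.35, "1914": 0.25}
print(answer_or_abstain(confident))   # -> Paris
print(answer_or_abstain(uncertain))   # -> I do not know.
```

The hard part is not the threshold but the calibration behind it: a sycophantic model is confidently wrong, which is exactly the failure the abstention is meant to catch.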
In 2020, frustrated, I wrote the same warning three different ways: “black box algorithms risk infantilising decision makers”; “either we make a deliberative effort to think about what we want, or we agree to let our impulses dictate our lives”; and “the brain is a muscle: if we solely trust algorithms to do the work for us, we should not be surprised if we forget how to do the work”. I now think this did not go far enough. The empirical data is in, and it shows that cognitive unlearning affects generations differently. The experienced coder might shave the edges of their thinking; the high school student will never learn what good code looks like in the first place.
This serves as a reminder that in the desire to go forwards, there is a risk of sliding backwards. George Orwell wrote that “who controls the past controls the future: who controls the present controls the past”. Algorithmic enforcement in combination with model collapse could mean less future and more past.
“On the one hand, the value system of society and its living truth, enforced by code, could stagnate. On the other hand, the value system of society could remain stable, but its subjects (the citizens) would not subscribe to those values any longer. The first leads to stagnation of progress. The second leads to instability.”
“Using historical data to feed and train algorithms that decide yet-to-be-formed outputs could quickly result in a self-fulfilling prophecy. A snake eating its own tail: all that is, is all that will be.”
“Favouring simplification for easier optimisation, this implies a preference for humans as simplistic rather than complex beings. As machines optimise humans, humans are at risk of becoming machines.”
What is the price you would be willing to pay for certainty?
An irony of studying history is that we often know exactly how the story ends, but not how it began. To understand what caused the Second World War, you need to acknowledge the economy of Germany in the preceding decades. To comprehend why the German economy was in such an abysmal state, you need to examine the aftermath of the First World War and the Treaty of Versailles. To make sense of the decisions made at Versailles, the political situations of France, the United Kingdom and the United States are informative. History is a continuous chain of large and small events. If we were to build a digital twin of every event that ever took place, the quest would not stop until we arrived at a perfect digital copy of everything that ever was: a complete digital mirror world. Pierre-Simon, Marquis de Laplace, imagined such a world in his Essai philosophique sur les probabilités (1814).
"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movements of the greatest bodies of the universe and that of the lightest atoms; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes." This deterministic view of the world is called "Laplace's demon."

Pierre-Simon Laplace (1749-1827)
“Which Humans?”, by Mohammad Atari and colleagues, is one of my favourite academic papers. The core argument is that RLHF, the AI-safety technique, encodes the preferences of a specific, narrow demographic of human raters and calls the result human values. These values are W.E.I.R.D.: Western, Educated, Industrialised, Rich, and Democratic. Western models export one set of encoded values, Chinese models export another, and neither is neutral. In 2023, when asked who Taiwan's national leader was, a Taiwanese LLM chatbot answered "Xi Jinping," the leader of China's Communist Party. Asked about nationality, it answered "Chinese." Trained on Chinese data, the model repeated the CCP party line. This is not a bug or a black swan event: it is the next-token statistics of large language models working as designed. The writing is on the wall: to influence an audience, adversarial governments will influence the LLMs that audience uses. Politics is a battle of ideas. If AI assistants are the dominant informational filter, LLMs are the battlefield.
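The paper's aggregation worry fits in a few lines. The rater pool and the preferences below are invented for illustration; the only point is that majority aggregation over a skewed pool brands the majority's preference as “human values”.

```python
# Toy illustration of the "Which Humans?" argument: majority preference
# aggregation over a demographically skewed rater pool encodes the
# majority's values. The pool and preferences below are invented.
from collections import Counter

rater_pool = ["WEIRD"] * 90 + ["non-WEIRD"] * 10    # skewed pool (illustrative)
preference = {"WEIRD": "individual autonomy", "non-WEIRD": "communal duty"}

votes = Counter(preference[rater] for rater in rater_pool)
encoded, _ = votes.most_common(1)[0]
print(f"votes: {dict(votes)}")
print(f"value encoded as 'human values': {encoded}")
```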
If left unchecked, AI will move into the vacant space privacy leaves.
Ultimately, if AI shapes what you think and what you do, it shapes who you are. From Descartes (“cogito ergo sum”) to Kant (rationality makes humanity an end in itself), philosophers long held humanity to be defined by human capability for reason. That monopoly is now contested. This is national security.
The third and final essay, Humans, Robots and Identity, explores how identity treatment shapes political systems.