By Esther Claudette Gittens
Artificial intelligence (AI) has rapidly become a powerful force in modern society, influencing decision-making in business, governance, healthcare, and personal interactions. AI systems can process massive amounts of data and generate information at an unprecedented speed. However, the assumption that AI always provides “truth” is not only flawed but also dangerous. AI does not “think” in the human sense, nor does it possess consciousness or moral judgment. Instead, it is a system that processes inputs based on algorithms designed by humans—who may have their own biases, limitations, and agendas.
Moreover, AI is already subject to manipulation, particularly in countries like China and the United States, where governments and corporations actively shape the way AI systems present information. This article will explore why AI users should be cautious about “trusting” AI, the crucial distinction between information and truth, and how AI is manipulated to serve political and corporate interests.
AI: A Generator of Information, Not Truth
Defining the Difference Between Information and Truth
One of the most fundamental errors in assuming AI is reliable is the failure to distinguish between information and truth. Information consists of data points, facts, and structured responses generated from various sources. Truth, on the other hand, is a philosophical and ethical construct that demands verification, context, and moral reasoning.
AI operates by retrieving and synthesizing data from pre-existing sources, but it does not inherently verify the accuracy of those sources beyond its programmed parameters. It cannot apply ethical reasoning or judgment; it merely produces responses based on statistical patterns and probability.
For example:
- If AI is asked, “What is the capital of France?” it will correctly answer, “Paris,” because this is an uncontroversial, widely agreed-upon fact.
- However, if asked, “Was the 2020 U.S. Presidential Election stolen?” the answer will depend on the sources the AI has been trained on, political influences, and content moderation policies.
Thus, AI-generated information may not always reflect objective truth but rather the prevailing narrative chosen by those who control the data sources and programming.
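To see concretely why AI output reflects its training data rather than verified truth, consider a deliberately tiny sketch of how a statistical language model picks its next word. This is a toy bigram model with a made-up corpus, illustrative only; real LLMs are vastly more complex, but the underlying principle, prediction by frequency and probability rather than verification, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data". It contains conflicting claims about the election;
# the model has no way to know which claim is true, only which is frequent.
corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of france is paris",
    "the election was stolen",
    "the election was secure",
    "the election was secure",
]

# Count which word follows each two-word context.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        counts[(words[i], words[i + 1])][words[i + 2]] += 1

def next_word(w1, w2):
    """Sample the next word in proportion to how often it followed
    (w1, w2) in training: pattern-matching, not fact-checking."""
    options = counts[(w1, w2)]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

print(next_word("france", "is"))     # always "paris": the data agrees
print(next_word("election", "was"))  # "stolen" or "secure", by frequency
```

Whoever curates the corpus effectively chooses the answer: tip the mix of "stolen" and "secure" sentences and the model's output tips with it, which is precisely the leverage described in the sections that follow.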
How AI Can Be Manipulated
Governmental Control Over AI in China
China has taken aggressive steps to ensure AI aligns with state-approved narratives. The Chinese Communist Party (CCP) imposes strict regulations on AI and large language models (LLMs) akin to ChatGPT, which must comply with the government’s censorship policies. AI in China cannot produce results that contradict the official government stance on sensitive issues such as:
- Tiananmen Square Massacre (1989) – Any AI system operating within China will not provide an honest or full account of the incident, as the Chinese government has erased or distorted records.
- Taiwan’s Sovereignty – AI is programmed to assert that Taiwan is part of China, reinforcing the CCP’s political position.
- Xinjiang and Human Rights Violations – Reports of Uyghur repression and forced labor camps are often omitted or downplayed by AI systems operating under Chinese jurisdiction.
By tightly regulating AI training data and algorithms, China ensures that AI serves as a tool of state propaganda rather than an objective information source.
Corporate and Political Manipulation of AI in the USA
While the U.S. operates under a more open system than China, AI is not free from political and corporate influence. American AI systems are often developed by tech giants such as Google, Microsoft, and OpenAI, all of which have their own biases, whether intentional or not.
- AI Moderation and Political Bias
AI models trained in the U.S. are subject to content moderation that often aligns with corporate and governmental interests. For example:
- AI models may suppress controversial political viewpoints, particularly those that go against mainstream media narratives.
- Large tech companies have been accused of modifying search algorithms to prioritize certain political ideologies or candidates.
- The Role of Big Tech in Censorship
Companies like Google, Facebook, and OpenAI have extensive policies on what AI can and cannot say. These policies are shaped by corporate interests, partnerships with governments, and pressure from advocacy groups. As a result:
- AI may avoid discussing controversial topics, such as election integrity or pharmaceutical industry malpractices.
- Certain topics, like COVID-19’s origins or alternative energy research, may be presented in a way that aligns with prevailing corporate interests.
- AI in Media and Journalism
AI is increasingly used in journalism, where algorithms decide what news is promoted or suppressed. This poses a significant risk of AI being used to push specific narratives while burying inconvenient facts.
For example:
- News outlets that rely on AI for content curation may prioritize stories favorable to advertisers or political allies.
- AI-driven “fact-checking” can be used selectively, reinforcing the credibility of some sources while discrediting others without objective review.
The Dangers of Blindly Trusting AI
- AI Can Perpetuate Falsehoods
If an AI model is trained on biased or incomplete data, it will reflect those biases. Unlike human researchers, who can critically assess sources, AI lacks the ability to challenge inconsistencies in its own data.
For example:
- AI has been known to generate inaccurate legal advice due to outdated or incorrect training data.
- AI-generated news summaries can misrepresent nuanced stories by oversimplifying complex issues.
- AI Can Be Weaponized for Propaganda
AI’s ability to generate content rapidly and at scale makes it a powerful tool for propaganda. Governments, political groups, and corporations can use AI to:
- Flood social media with automated, AI-generated responses supporting specific viewpoints.
- Create deepfake content that manipulates public perception.
- Amplify misleading narratives by prioritizing them in search engine results.
- AI Users Become Passive Consumers of Information
One of the most concerning aspects of AI’s role in information dissemination is that it encourages passive consumption. Users may rely on AI-generated answers without critical thinking, assuming that AI is neutral or objective when it may be subtly reinforcing a particular agenda.
For instance:
- AI-powered educational tools might present a one-sided view of historical events.
- AI in social media algorithms can create echo chambers by showing users content that reinforces their existing beliefs.
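The echo-chamber effect in the last bullet follows mechanically from engagement-driven ranking. Below is a minimal sketch with hypothetical posts and made-up engagement scores; real recommender systems are far more elaborate, but the core loop, sort by predicted engagement and show only the top of the list, behaves the same way.

```python
# Hypothetical feed items with made-up engagement predictions.
# Content that confirms a user's views tends to earn more clicks and likes,
# so an engagement-optimizing ranker keeps surfacing it.
posts = [
    {"headline": "Story confirming the user's view",  "predicted_engagement": 0.92},
    {"headline": "Neutral report",                    "predicted_engagement": 0.55},
    {"headline": "Story challenging the user's view", "predicted_engagement": 0.18},
]

def rank_feed(posts, top_k=2):
    """Order posts by predicted engagement and keep only the top few.
    Nothing here measures accuracy or balance, only engagement."""
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    return ranked[:top_k]

for post in rank_feed(posts):
    print(f'{post["predicted_engagement"]:.2f}  {post["headline"]}')
# The challenging story never makes the cut: over time the user sees
# only agreeable content, an echo chamber built by optimization, not malice.
```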
How to Approach AI with Caution
Given these concerns, users should be mindful of the following principles when interacting with AI:
- Verify Information Independently
Always cross-check AI-generated responses with multiple sources. Treat AI as a starting point for research, not an ultimate authority.
- Be Aware of Biases
Recognize that AI is shaped by the interests and policies of its developers. Consider whose data is being used and whether it reflects a balanced perspective.
- Question Censorship and Omission
If AI refuses to answer a question or provides a vague response, it may be due to internal policies or content moderation biases. Look for alternative sources to fill in the gaps.
- Think Critically About AI’s Role in Society
Rather than blindly embracing AI as a neutral tool, consider how it is being used to influence public opinion, governance, and corporate interests.
Conclusion
AI is a powerful tool, but it should never be blindly trusted. The distinction between information and truth is crucial, as AI does not possess the ability to discern truth in the way that humans do. Moreover, AI is already being manipulated by powerful entities in China, the U.S., and elsewhere to shape public perception and control narratives. To navigate the AI-driven world responsibly, users must remain critical, independent thinkers who seek verification and remain aware of AI’s inherent limitations. Trusting AI without scrutiny is not just naïve—it is a dangerous surrender to unseen forces that control the flow of information.