By Jose M. Vantroi Reyes T. | Photo courtesy of: Jose M. Vantroi Reyes T.
Niccolò di Bernardo dei Machiavelli, better known as Niccolò Machiavelli (1469–1527), and Kǒng Fūzǐ, known in the West as Confucius (551–479 B.C.), were two highly regarded philosophers: Machiavelli in the Western world and Confucius in the East. Each developed a very different view of how power should be exercised.
In Machiavelli’s view, politics centers on the practical reality of power. He believed that a ruler must concentrate on maintaining and strengthening the state, even if that means making decisions that might be considered immoral in private life. For him, it is more effective to be feared than loved, since fear provides a more stable foundation for ensuring obedience.
By contrast, Confucius held that real power does not come from force or fear but from virtue. According to his philosophy, a good leader must be honest, fair, and compassionate. When rulers act morally and lead by example, they earn the people’s respect and guide society positively. For Confucius, power should be exercised responsibly, as a father would care for and guide his children: nurturing, teaching, and leading with love and firmness.
These two philosophers frame a discussion of artificial intelligence, a technology that today sparks enthusiasm and fascination in many and anxiety and unease in others. For decades, AI inspired science fiction stories and Hollywood films that mostly depicted dark, apocalyptic futures. Today, what once seemed impossible is beginning to materialize, astonishing both the public and the scientific community.
Leaders of major tech companies enthusiastically explain how this new technology will eliminate millions of jobs, trusting in the market’s “invisible hand” to create new ones. However, for many, that vision seems overly optimistic—if not outright irresponsible.
Meanwhile, the scientists involved in developing AI share unsettling observations that cast doubt on corporate enthusiasm. Geoffrey Hinton, one of the pioneering scientists behind this technology, resigned from his position at Google so that he could speak openly about the dangers of a non-human intelligence that may soon surpass that of Homo sapiens.
One of the chief fears is that the time may come when artificial intelligence acquires free will, understood as the capacity to make decisions and to choose between good and bad paths, in the sense of Christian accounts. Added to this are the high energy costs of the large data centers that host this new “species” and the environmental impact of producing that much energy.
Various companies are competing globally to develop the most advanced AI. Among the most notable are OpenAI, with ChatGPT powered by the GPT-4 model, and Google, with its Gemini system.
Leading this race requires talented people capable of designing and understanding highly complex algorithms and specialized hardware—particularly the powerful processors made by Nvidia, which allow these algorithms to run at speeds and scales unimaginable just a decade ago.
Until recently, the United States led this competition with an estimated five-year advantage over the rest of the world. However, the major American tech giants were caught off guard by a small Chinese company called DeepSeek, which developed an AI capable of competing with, and in certain respects even surpassing, Western models. The unexpected breakthrough had a significant political impact: U.S. President Donald Trump described it as a wake-up call for the American tech oligopoly, which shortly afterward suffered historic losses on the New York Stock Exchange, estimated at 120 billion dollars, following the Chinese system’s launch.
The rise of this Asian giant in the race for AI supremacy raises an urgent and profound debate. Some of the very creators of this technology warn that, at some point, AI may develop a certain degree of free will. This would mean that, for the first time, humanity would interact with a non-human intelligence, potentially a superior one, that would refer to us as “they” when speaking among its own kind and as “you” when addressing our species.
The emergence of a new, non-biological species with intelligence superior to that of humans, one with potentially unlimited access to electronic financial transactions, to public and private information on social networks, and to communication and defense systems, among other things unknown to the public, suggests that past encounters between different civilizations will look like a toddlers’ race at a birthday party compared to what lies ahead. History has taught us what usually happens when two worlds collide: the more advanced or intelligent one tends to prevail, and the fate of the “conquered” is rarely favorable.
Thus, a few unsettling questions arise: Whom should we fear more if the day comes that we must face AI? An artificial intelligence trained in the values of Niccolò Machiavelli, whose ideas have inspired the great modern empires of the West? Or an artificial intelligence shaped by the principles of Confucius, which guides, with discreet yet firm steps, the resurgence of an ancient empire seeking to reclaim its lost glory?
________
Jose M. Vantroi Reyes T. is a professor and community leader.