Philosophy and history are crucial disciplines for assessing, analyzing, and understanding the fundamental questions surrounding governance, statecraft, justice, and economic and political affairs. Philosophy encourages critical thinking, while history offers a framework for comprehending the present and contemplating the future.
The increasing presence of Artificial Intelligence (AI) in our daily lives—from commerce and security to work and interpersonal interactions—is a reality we cannot ignore. AI has also blurred the lines between truth and falsehood, a development with significant consequences for democracies. The evidence of AI’s transformative impact grows daily, and the timeline for its effects is not distant; many changes are already unfolding.
The question at hand is quite straightforward: will AI radically transform work, governance, policy-making, education, corporations, health sciences, economies, commerce, security, and society in a short period, or will it not? If we prepare our societies and economies for AI-driven change and the anticipated transformation does not occur, we will have invested vast sums in technology, enacted laws, and implemented policies aimed at ensuring that AI benefits humanity. There will be fluctuations in the stock market. New AI companies will emerge while older ones disappear. Many individuals may feel let down by the unfulfilled promises of AI and exit the financial market. This scenario might be called the “AI bubble.” Even so, these outcomes might be a reasonable trade-off for the billions spent on the promise of AI and the technological advances it yields.
If we fail to prepare for the rapid changes brought on by AI—changes that are already happening in many areas—we risk mass unemployment, outdated educational institutions, and social, economic, and political upheaval on a scale we have not experienced before. Philosophy teaches us critical thinking, and the philosophical perspective is that the transformation driven by AI is inevitable; the evidence is all around us in contemporary society. Assessing the historical implications of that transformation, however, is complex and multifaceted.
Let’s examine the labor market, which is rapidly feeling the impact of AI at every level, from recruitment and selection to the creation of interview questions. Many technology leaders, including Dario Amodei (CEO of Anthropic) and Eric Schmidt (former CEO of Google), have stated in various forums that AI could eliminate up to 50% of all entry-level white-collar jobs within one to five years. Whether they are correct remains to be seen, but they are not alone in their concerns. Nobel laureate Geoffrey Hinton, known as the “Godfather of AI,” along with many other prominent academics, economists, policymakers, and tech executives, has warned of an impending “jobpocalypse.” I wrote many moons ago: Unemployment is the biggest social shame and danger. Inequality is another.
Technological advances are not a new phenomenon. In the days when humans lived in caves, they hunted for food with their bare hands. As they learned to carve stone tools, their abilities improved, and so did their food supply. The invention of the bow and arrow was another significant advancement, and this progress has continued throughout history.
Historically, societies and economies have successfully navigated automation and technological progress. With every major technological advancement, the labor market, economies, and cultures have adapted. New technologies have consistently created waves of high-paying job opportunities that replace the jobs that are lost. This pattern has held true in the past and reflects the historical perspective on technological change.
In today’s world, critical thinking provides a different perspective on artificial intelligence. This matters because previous waves of automation primarily replaced physical labor and muscle effort in manufacturing processes. The current wave, in contrast, is replacing judgment and, crucially, human intelligence and reasoning. The long-term implications of easily scalable AI systems commoditizing human intelligence and critical thinking remain uncertain. Unlike past waves of automation, which displaced factory workers gradually over the course of a century or caused modest labor-market disruptions that could be addressed through retraining programs for a limited number of service jobs, this situation poses a more significant challenge: the potential displacement of a substantial portion of the white-collar workforce in a very short period. While the technology continues to advance at lightning speed, the state and regulatory response remains painfully lethargic.
There is no doubt that the potential benefits of AI—ranging from advancements in science and medicine to genetic analysis, agricultural production, transportation, and infrastructure development—are significant and cannot be ignored. However, if we do not ask critical questions about its implications, we risk navigating through a thick, dark forest without any guidance. We may be captivated by the bright light of AI’s promise, drawn by its allure, yet oblivious to whether it is truly beneficial for our existence. In essence, that is our relationship with AI. While no one can deny the superiority of AI in solving complex, abstract problems that previously only humans could tackle, it’s important to note that AI itself does not think. At least, not yet. Instead, its power lies in its unparalleled capacity for memorization and computation.
Because of its inherent superiority in these fields, AI is likely to excel at any task assigned to it. But what we, as a species, need to remember (and history reminds us of this) is that tasks are not only about doing; they are also about thinking. AI is a mathematical computation process and should not be confused with a thought process. As thinking beings, we should not allow computation to replace critical thinking and the ability to reason. If we do, we are in danger of losing the capacity that has been the essence of human cognition. If, in the distant future, AI develops the capacity to think independently and rationalize its existence and its tasks, that is a future we have not even begun to ponder, let alone speculate about. And perhaps we should.


