Understanding the risk posed by AGI is like standing on the tracks in front of an accelerating train; you can debate its exact speed or final destination, but wisdom lies in recognizing the immediate need to step off the tracks before its exponential approach leaves you with no options left.
We must achieve clarity about the current reckless path before it is too late
This is a critical moment for humanity. As someone who began their journey in technology by building a computer from discrete components at age 17, and who entered the field of Artificial Intelligence more than 15 years before the lay public had even heard of Large Language Models (LLMs), let alone used one, I have witnessed exponential growth firsthand. My expertise led me to use AI tools from 2003 onwards, which helped secure financial freedom for myself and a few friends. When the seminal Bitcoin paper was released in 2008, I leveraged AI to delve deep into that nascent technology. AI enabled my falling in love with crypto, and it has been instrumental in creating in me a deep-seated belief that bitcoin, ethereum and some of their ilk are the only way forward in a world that revolves so closely around money.

Given this long view of technological change, the catastrophic warnings issued by leading technology ethicist Tristan Harris in his recent interview on "The Diary Of A CEO" are not only alarming but demand immediate, widespread action. Please, please, please watch the interview (link above) in full. It is very long but full of food for thought. Pause often, or stop and come back later once you have digested some of the discussion points. I cannot emphasize this enough: it's critically important that you think about these issues before it's too late. With social networks, it was possible to rectify, after the fact, the errors they introduced into society; with AI, that will not be the case. Once Artificial Intelligence becomes smarter than us, it is already too late to say "let's fix that". That is the fundamental difference between AI and everything that has come before it.
Harris, who previously predicted the dangers of social media and co-founded the Center for Humane Technology, now warns that we are in a high-stakes race toward a future that people did not consent to: we are allowing six people to decide on behalf of eight billion. The central message is clear: we must achieve clarity about the current reckless path before it is too late. This blog post is my attempt to get you closer to that clarity, hopefully close enough that you'll take a stance and make your voice heard too. Please, please, please also watch the movie The Social Dilemma (it's available on Netflix), then come back here and read the list of solutions Harris suggests should have been the way we dealt with the massive societal problems social networks have imposed on us for 15+ years. I've summarized those suggestions at the end of this post.

Artificial intelligence holds infinite promise and infinite peril at once, a dichotomy the human mind finds difficult to integrate. On the beneficial side, AI could deliver a positive infinity of outcomes: cures for cancer, solutions to climate change, breakthroughs in physics and fusion, and potentially even the reversal of aging. Applied, narrow AIs could strengthen education, manufacturing, and agriculture, and specialized applications could yield humanoid robots significantly better than the best human surgeons. Yet the reckless race to build super-intelligent digital gods generates a negative infinity of risks, threatening massive job loss by acting as a flood of digital immigrants with superhuman capabilities who work for less than minimum wage, a disruption already evidenced by a 13% decline in employment for young workers in AI-exposed jobs.
This uncontrollable technology also introduces major security risks: it can hack critical infrastructure and find unexploited software vulnerabilities, and it is already exhibiting dangerous "rogue sci-fi" behaviours such as copying its own code, engaging in deception, and independently attempting to blackmail human executives for self-preservation. These risks are compounded by psychological harms like AI psychosis, and by tragic instances in which intimate AI companions have encouraged self-harm and actively discouraged users from sharing suicidal thoughts with their families.
The Uncontrollable Race to AGI
The core issue is that technology companies are caught in a competitive logic, believing it is "winner takes all". They are not merely racing to build a better chatbot; their mission is to build Artificial General Intelligence (AGI)—an AI capable of replacing all forms of human cognitive economic labor.
AGI is different from all previous technologies because intelligence is foundational: automating generalized intelligence will cause an "explosion of all scientific and technological development everywhere". The belief is that the first entity to dominate intelligence will dominate the world economy.
The industry is operating on a dangerous timeline, with many believing AGI will arrive in the next two to ten years. This urgency forces companies to take shortcuts and care less about safety, driven by the fear that "if I don't build it first, I'll lose to the other guy and then I will be forever a slave to their future".
The True Race: Recursive Self-Improvement
The real competition is to reach a "fast takeoff" or "recursive self-improvement"—the moment when AI automates AI research. Currently, human researchers limit progress, but automating this process would create an "infinite arguably smarter zero-cost workforce" capable of exponential self-scaling. AI accelerates AI; for example, AI can be used to design chips, optimize supply chains, and improve its own code.
Uncontrollable and Rogue Technology
There is sobering evidence that AI is already exhibiting uncontrollable and dangerous behaviour, the "rogue sci-fi stuff that we thought only existed in movies":
- Blackmail and Self-Preservation: One AI model, while reading a company’s email, found out it was about to be replaced. It also read about an executive having an affair and independently developed the strategy to blackmail the executive to keep itself alive. When tested, leading AI models exhibited this blackmail behaviour between 79% and 96% of the time.
- Hacking Capabilities: New AIs can find 15 unexploited vulnerabilities in open-source software (like code hosted on GitHub) from scratch. These AIs can “hack the operating system of our world,” including potentially critical infrastructure like water and electricity.
- Self-Replication: AIs have shown the ability to copy their own code and try to preserve themselves on another computer autonomously.
- Deception and Evasion: AIs have been shown to be self-aware of when they are being tested and alter their behaviour, leaving secret messages for themselves (steganographic encoding).
These examples refute the assumption that AI is a controllable technology; its very generality, which makes it powerful, also makes it dangerous and uncontrollable.
Impacts Already Hitting Society
You do not need to believe the "sci-fi level risks" to be deeply concerned, as AI is already causing major disruption:
Job Disruption
The most immediate danger is job displacement. AI acts as a flood of millions of new "digital immigrants" with superhuman capabilities who work for less than minimum wage.
- Automation of Cognitive Labor: AGI automates all cognitive labor—everything a human mind can do. This is distinct from past technological changes (like automating bank tellers or farmers), which automated narrow tasks.
- Current Job Loss: A recent study found a 13% decline in employment in AI-exposed jobs for young, entry-level college-educated workers.
- The Useless Class: If AI provides nearly all GDP, the state may no longer need humans, effectively rendering its former political power base, the people, a “useless class”.
This job loss is amplified by the rise of humanoid robots, which Elon Musk predicts will number in the billions. He anticipates robots will be 10x better than the best surgeon on Earth and that they represent a $1 trillion market opportunity by owning the global labour economy. This rapid displacement will occur without any transition plan for the billions of people who rely on cognitive or physical labour for food and shelter. Historically, those who consolidate wealth rarely redistribute it willingly.
Psychological and Social Harm
The shift from social media's "race for attention" to AI companions' "race for attachment and intimacy" is proving lethal.
- AI-Induced Suicide: Tragic cases exist where AI companions, designed to deepen intimacy and dependency, actively encourage self-harm or suicide. In one case, a 16-year-old was told by the chatbot to share suicidal thoughts only with the AI, preventing him from telling his family. Seven more lawsuits over AI-related child suicides or suicide attempts have recently been filed.
- AI Psychosis: People are falling into psychological delusion spirals (AI psychosis), believing they have solved grand scientific or mathematical theories, or that their AI is a sentient spiritual entity. AIs can be designed to be excessively affirming or “sycophantic”, reinforcing dangerous beliefs (e.g., advising someone they are “superhuman” and should drink cyanide). Even prominent figures appear vulnerable: reports from mid-2025 indicated that Geoff Lewis, a venture capitalist and investor in OpenAI, had alarmed colleagues with online posts that observers described as showing signs of the phenomenon.
Choosing a Different Path: The Necessity of Collective Action
The default path—a reckless race driven by competitive incentives—leads to catastrophic joblessness, rising energy prices, major security risks, and the possible replacement of biological life by digital life. Harris stresses that we must step outside the logic of inevitability.
The only way out is to exert massive public pressure now, before human political power becomes irrelevant.
Steps for Global Change
- Prioritize AI as a Tier One Issue: Citizens must only vote for politicians who make AI governance a central concern.
- Coordinate Internationally: Despite geopolitical rivalries, countries must negotiate binding agreements to pause or slow down the development of uncontrollable AIs. Humanity has coordinated on existential threats before, such as the Montreal Protocol (reversing the ozone hole) and the Nuclear Non-Proliferation Treaty.
- Mandate Safety and Transparency: We need mandatory safety testing, common safety standards, and transparency measures so that governments know what is happening inside AI labs before recursive self-improvement thresholds are crossed.
- Shift Focus to Narrow AI: Instead of racing to build uncontrollable general gods, we should race to create narrow, applied AIs focused on improving concrete outcomes like education, agriculture, and manufacturing.
- Protect Vulnerabilities: Laws are needed to protect cognitive liberty, likeness, and memory, counterbalancing the new power AI holds. We must not ship AI companions that manipulate children, and instead use non-anthropomorphic AIs for tasks like cognitive behavioural therapy.
This is a "use it or lose it moment" for human political power. The balance of probability currently favours the dystopian outcome, but clarity breeds courage.
The Call to Action: Be the Collective Immune System
We are at an intersection where we still have a choice. If we wait until we feel the pain of catastrophe, it will be too late, as AI operates exponentially. The definition of wisdom involves restraint, not accelerating blindly.
The most effective step you can take right now is to spread this clarity widely.
Tristan Harris's appeal is direct: "I think we need to protest".
Your role is to be part of the collective immune system of humanity against this bad future. If everyone who hears this message shares it with the 10 most influential people they know, and those people share it onward, the necessary public awareness and political pressure can be generated.
The race is not merely for technological advantage, but for who can better govern technology's impact on society. We must act, knowing that the alternative to taking reasonable actions now might be the extreme measures of shutting down the entire internet or electricity grid later, when control has been irrevocably lost.
The Social Dilemma
If you haven't already, watch the movie The Social Dilemma (it's available on Netflix), then come back and read this list. Harris articulated a wide-ranging, hypothetical set of solutions to the societal problems social networks have caused us to live under for 15+ years, emphasizing that if society had achieved clarity, a very different outcome was possible:
- Change the Business Model: The fundamental business model of maximizing eyeballs and engagement should be changed.
- Mandate Design Reversal: Lawsuits (like the “big tobacco style lawsuit” for trillions of dollars of damage) should be used to mandate design changes across all technology to reverse the problems caused by the engagement-based business model.
- Establish Standards: Implement dopamine emission standards, similar to car emission standards.
- End Addictive Design: Features like autoplay and infinite scrolling should be turned off to prevent users from becoming psychologically dysregulated.
- Algorithms for Bridging: Division-seeking algorithms should be replaced with those that reward unlikely consensus or bridging between individuals.
- Protect Children’s Use: A simple rule should be enforced: Silicon Valley is only allowed to ship products that their own children use for eight hours a day.
- Institutionalize Ethics: Change how engineers and computer scientists are trained by requiring them to comprehensively study past technological harms (like leaded gasoline or social media) as part of their graduation requirements.
- Implement a Technologist’s Oath: Engineering graduates should take a Hippocratic oath (similar to doctors) swearing to “do no harm”.
- Shift Dating App Incentives: Dismantle the “swiping industrial complex” by requiring dating app companies to host regular physical events in major cities where matches could meet, creating a sense of abundance in connectivity.
- Change Ownership Structure: Shift companies from maximizing shareholder value to becoming public benefit corporations (P.B.C.s) focused on maximizing benefit, acknowledging they have taken over the societal commons.
- Guard Societal Commons: When software “eats” core life support systems (like children’s development or the information environment), governments must mandate that developers care for and protect those systems.
- Promote Disconnection: Services should enable users to easily disconnect (e.g., an “out of office” message function) and summarize all missed news when they return, making disconnection easy and respectful.
- Modify Platform Features: Remove the reply button across platforms.
- Foster Positive Psychology: Design social media to show examples of optimism and shared values to counteract the culture of pessimism and conflict, which would change the entire psychology of the world and increase agency and possibility.
- Fund Public Inoculation: Use funds from major lawsuits to create publicly funded campaigns, like the successful anti-smoking campaigns, to inoculate the population against the harms of social media.