Posted by: Nguyen Duc Duy

Calls for Government Intervention In The Face Of Extinction-Level Threats

A recent multi-sectoral report commissioned by the US State Department highlights the possible extinction-level threats posed by the rapid rise of artificial intelligence (AI). Gladstone AI, the firm that produced the study, based its findings on conversations with more than 200 people, including AI industry leaders, cybersecurity professionals, and national security officials, giving the report a broad range of perspectives.

Unpacking the Threats

According to the report, the evolution of AI presents two primary dangers. The first is the potential weaponization of advanced AI systems. Deployed as tools of warfare, these systems could cause unprecedented and potentially irreversible damage. The idea of AI as a weapon has moved beyond the realm of science fiction and can no longer be treated as a purely theoretical security issue.

The second major concern stems from within the AI development community itself. Developers are growing uneasy that they could inadvertently lose control over the very systems they are creating. This could lead to unanticipated behavior by AI systems, the consequences of which could be disastrous on a global security scale. The potential for such loss of control underscores the need for robust safety measures in AI development.

Calls for Government Intervention

The report draws a stark comparison between the rise of AI and the advent of nuclear weapons. The authors argue that uncontrolled AI could pose a threat to humanity on the scale of nuclear war. Just as the introduction of nuclear weapons triggered an arms race, the world now faces a similar dynamic, but this time the weapon could be AI.

In response to these apocalyptic scenarios, the authors call for swift and far-reaching political measures. They recommend the creation of a new administrative body whose primary mandate would be AI. This agency would regulate AI development and related activities to keep them aligned with national security goals and societal norms.

They also urge the adoption of emergency regulatory safeguards that could act as a brake if AI systems begin to exceed safe limits. In addition, they suggest imposing constraints on the computational capacity used to train AI models. The aim is to strike an appropriate balance between technological progress and safety.

The report warns that the pursuit of technological superiority currently overshadows safety and security. This imbalance increases the risk that advanced AI systems will be misappropriated and weaponized against the United States. Recognizing and addressing this risk is crucial to preventing the misuse of AI and ensuring that its development proceeds responsibly and safely.

Industry Perspectives and the Threat of AGI


The report adds to mounting concerns over the existential threats posed by AI, concerns echoed by several industry stalwarts. Geoffrey Hinton, often called the "Godfather of AI," has previously warned of a 10% probability of AI wiping out humanity within the next 30 years. Similarly, an alliance of AI industry leaders and academics issued a statement last June declaring that mitigating the risk of extinction from AI should be a global priority.

The report also covers the dangers of Artificial General Intelligence (AGI), a hypothetical form of AI capable of learning at or above human level. It views AGI as the main catalyst for catastrophic risks arising from loss-of-control scenarios. OpenAI, Google DeepMind, Anthropic, and Nvidia have forecast the arrival of AGI by 2028, though some researchers contend it is much further off than anticipated.

The Potential Misuse of AI

The report further highlights the grave threat posed by the misuse of AI systems, especially in cyber warfare. AI systems could plan and launch damaging cyberattacks capable of disrupting or bringing down critical infrastructure, leading to widespread chaos.

Among the many disquieting scenarios presented in the report are AI-fueled disinformation campaigns, which could destabilize societies, erode confidence in institutions, and reshape public opinion. The report also emphasizes the potential military use of robotic technologies, such as drone swarm attacks, which could cause significant harm in wartime.

The report also highlights the risks of advanced AI systems being used for psychological manipulation, as well as the weaponization of the biological and material sciences. A further alarming prospect is that of power-seeking AI systems, which could become uncontrollable and adversarial to humans.

In the face of these risks, the report emphasizes the need for robust control and oversight policies throughout the development and deployment of AI. AI has the potential to bring about substantial benefits, but its pursuit must prioritize safety and security for all. The emphasis is on managing these risks proactively to avoid catastrophic outcomes.

Conclusion

Although AI could transform the economy and solve problems once thought intractable, it also carries critical risks.

As AI development continues, it is crucial to identify and manage these risks to avoid catastrophic outcomes.
