AI: A weapon of Destruction?

During a recent two-part interview with Tucker Carlson, Elon Musk said that AI is “more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production” and that “it has the potential of civilization destruction.” In a widely discussed interview with The New York Times, Geoffrey Hinton said generative AI could spread misinformation and, eventually, threaten humanity. Hence, artificial intelligence (AI) safeguard and control mechanisms are essential to ensure the responsible and secure use of advanced technologies and to strike a balance between innovation and accountability.

“Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think.”
Bill Gates
Microsoft co-founder

Navigating the Weighty AI Concerns:

These words carry significant weight, coming from one of the most influential tech giants of our time. The question we must confront as a civilization is how to safeguard ourselves from the potential repercussions of a tool that has become the world’s favored instrument for automating even the most mundane tasks. The landscape of artificial intelligence poses a dual challenge: fostering groundbreaking innovation while upholding a sense of ethical accountability. As we navigate this landscape, our collective responsibility lies in ensuring that we harness the power of automation without letting it lead to the erosion of our fundamental values and well-being.

Implementing Laws for Public Safety:

Artificial Intelligence (AI) has emerged as one of the most transformative technologies of our time, revolutionizing industries, societies, and economies. From healthcare to finance, entertainment to manufacturing, AI systems are rapidly becoming integral to various aspects of our lives. However, as AI’s influence grows, questions about who controls and safeguards AI, and about the associated ethical, legal, and societal implications, have gained prominence. In his blog post, however, Microsoft co-founder Bill Gates remains a believer in the potential of artificial intelligence, often repeating that models like the one at the heart of ChatGPT are the most important advancement in technology since the personal computer. Gates suggests that the kind of regulation the technology needs is “speed limits and seat belts.”

“Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars — we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road,”
Bill Gates
Microsoft co-founder

The Imminent Potential of AI:

However, Elon Musk’s comment and the skepticism expressed by numerous influential figures in the tech industry have provided a stark reminder of the potential for malevolent consequences. Their concerns offer a sobering perspective on the inherent risks associated with technological advancements. This looming sense of danger highlights a fundamental truth about humanity’s capacity to wield tools intended to serve the betterment of society. It underscores the critical need for ethical considerations, responsible innovation, and vigilant oversight to ensure our creations align with the genuine purpose of aiding humanity.

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”
Geoffrey Hinton
Former Vice President and Engineering Fellow, Google

Humans have inherent motivations, such as finding food and shelter and staying alive, but AI doesn’t. “My big worry is, sooner or later someone will wire into them the ability to create their own subgoals,” Hinton said.

Reflecting on Past Pathways of Devastation:

Speaking of weapons of mass destruction, one can’t forget gunpowder. Gunpowder, also known as black powder, was invented in China by accident: in the search for the elixir of immortality, alchemists stumbled upon what would become a weapon of mass destruction.

Gunpowder: From Accidental Discovery to Weapon

The original purpose behind the invention of gunpowder wasn’t explosives but medicinal and alchemical purposes. Implementing laws and regulations to control gunpowder and weapons has been crucial in ensuring public safety and preventing the misuse of potentially dangerous technologies. Over time, the complexity of weapon technologies and their impact on society led to the development of more comprehensive legal frameworks encompassing background checks, licensing, registration, and restrictions on certain weapon types. These laws reflect the collective effort to harness the advantages of weaponry while minimizing the potential harm it can cause, emphasizing responsible ownership and usage in the interest of public welfare.

Is history on its way to repeating itself? The development of AI is not a single-day event but the culmination of ideas over decades. Yet a tool developed to help humanity may now be on its way to becoming a weapon of destruction.

The answers are not definite; few are fully aware of the potential of AI and how it will change the course of civilization. Yuval Noah Harari, a famous historian, lecturer, and philosopher, has depicted the future of Homo sapiens in his book ‘Homo Deus’, in which he explains how humanity is on its way to becoming super-humans, or “homo deus” (human gods), and how AI will help accelerate that transition.

“AI is not even near its full potential; it’s just in its infancy. We haven’t seen anything yet.”
Yuval Noah Harari
Homo Deus: A Brief History of Tomorrow

Who controls AI?

The question of who controls AI is far from simple, as AI is not a monolithic entity but a diverse ecosystem of technologies, algorithms, and applications. Control is exerted at various levels and by multiple stakeholders, each with their own motivations and responsibilities:

AI for Government Institutions:

Governments worldwide are actively formulating regulations that strike a delicate balance: harnessing the advantages of emerging technologies while mitigating potential risks, such as the propagation of bias and disinformation. The complexity and potential impact of generative AI demand tailored approaches that account for cultural, legal, and ethical variations. That’s why a universal solution is often elusive when it comes to regulating technology, including generative AI.

AI for the Private Sector:

The United Nations Industrial Development Organization (UNIDO) unveiled a groundbreaking initiative, the Global Alliance on AI for Industry and Manufacturing (AIM-Global), during the prestigious World AI Conference of 2023. This significant development marks a major stride in harnessing the transformative potential of artificial intelligence for industrial and manufacturing sectors worldwide. AIM-Global serves as a collaborative platform, uniting governments, industry leaders, academia, and technological innovators to accelerate the responsible adoption of AI technologies.

“It is our shared responsibility to ensure that advancements in the field of AI are made in a manner that is safe, ethical, sustainable and inclusive.”
Mr. Gerd Müller
UNIDO Director General
UNIDO | United Nations Industrial Development Organization

AI for Researchers and Developers:

The individuals and teams creating AI algorithms and models also influence AI’s development. Their decisions impact the capabilities, biases, and limitations of AI systems. The open-source community and academia contribute to democratizing AI research, potentially sharing control more broadly.

The National Artificial Intelligence Research Resource (NAIRR) Task Force has published its final report outlining a roadmap to establish a national research infrastructure. This initiative aims to enhance access to crucial resources for artificial intelligence (AI) research and development.

“OSTP’s work on AI, including the development of an updated National AI R&D Strategic Plan and the Blueprint for an AI Bill of Rights, is intended to maximize its benefits, while ensuring that AI-driven systems do not harm Americans’ rights or freedoms.”
Alexander Macgillivray
Deputy Assistant to the President and Principal Deputy U.S. Chief Technology Officer

Conclusion

The question of who controls AI is complex and multifaceted, involving governments, private sector entities, researchers, civil society, international organizations, and users. Effective AI governance requires a collaborative approach considering ethical implications, power dynamics, regulatory frameworks, and international cooperation. As AI continues to evolve, finding the right balance between innovation, responsibility, and accountability will remain critical for societies worldwide. Balancing the dynamic forces of innovation and ethical accountability in the realm of artificial intelligence demands delicate navigation.

“We are in a unique situation in human history when, for the first time, we have no idea what the job market will look like in 20 or 30 years. That was never before the case in history.”
Yuval Noah Harari
Homo Deus: A Brief History of Tomorrow

Sania Shujaat

A Mechanical Engineer with a keen interest in applying AI to revolutionize Mechanical Engineering.
