
Military AI: A Recipe for Potential Disaster?

Cairo: Mai Kamal El-Din  

A much-discussed book by Princeton University computer scientist Arvind Narayanan and his colleague Sayash Kapoor has sparked intense discussion on Western social media. In AI Snake Oil, they convincingly argue that at least half of all applications of AI models will mislead users and cause serious harm to those who turn to chatbots for help in their work or daily lives.

The term "snake oil" is roughly equivalent to the Russian meme "infotsygane" ("info-gypsies"), which refers to self-help gurus and assorted info-business figures. Few would dispute that the overwhelming majority of these charlatans peddle dubious services for exorbitant fees. English has aptly coined the term "snake oil salesmen" for them, and the services or products they sell to unsuspecting customers are called snake oil.

The Princeton scholars have also published several articles explaining the potential harm of the reckless deployment of AI in all spheres of life.

“One of the main issues with generative AI is that it will create a flood of misinformation and nonconsensual deepfakes—fake videos that create the illusion that people appear in scenes they never took part in,” they argue. “All debates about the harms of AI take place in a data vacuum. We do not know how often people use ChatGPT for medical, legal, or financial advice. We do not know how frequently it generates hate speech or defames individuals.”

Professor Narayanan considers it premature to assert that artificial intelligence poses an “existential risk (x-risk) to humanity” or that “AI is about to go out of control and will act on its own accord.”

Narayanan instead identifies a more immediate threat: “Whatever risks might arise from very powerful AI will manifest sooner if people direct AI towards harmful ends than if AI turns against its programmer.”

According to Narayanan, the real danger from AI lies in the concentration of power in the hands of a few AI companies, which amplifies all possible risks, including existential ones.

In the United States, where Narayanan works, two IT giants, Google and Microsoft, have effectively monopolized AI development and are fiercely competing for Pentagon contracts.

The Pentagon has established Task Force Lima to explore military applications of generative artificial intelligence. The task force sits within the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) and is led by Captain Xavier Lugo, a member of the CDAO’s Algorithmic Warfare Directorate.

The U.S. Department of Defense stated that the group “will assess, synchronize, and leverage generative AI capabilities to ensure that the department remains at the forefront of cutting-edge technology while safeguarding national security.”

“The establishment of Task Force Lima underscores the unwavering commitment of the Defense Department to lead in AI innovation,” said Deputy Secretary of Defense Kathleen Hicks.

The Pentagon completely disregards the views of most AI researchers, who argue that “there is a 14% probability that once we create ‘superintelligent AI’ (AI that is significantly smarter than humans), it could lead to ‘very bad outcomes,’ such as human extinction.”

The question for Pentagon chief Lloyd Austin is: “Would you agree to be a passenger on a test flight of a new aircraft if aerospace engineers believed there was a 14% chance of it crashing?”

Stuart Russell, co-author of the standard textbook on artificial intelligence used in most university AI courses, warns: “If we stick to [our current approach], we will ultimately lose control of the machines.”

He is supported by Yoshua Bengio, a pioneer of deep learning and Turing Award winner, who stated, “…uncontrolled AI could be dangerous for all humanity… banning powerful AI systems (say, those surpassing GPT-4) that are granted autonomy and agency would be a good start.”

Major scientists warn of the risks associated with unchecked AI development:

Stephen Hawking, theoretical physicist and cosmologist, remarked: “The development of full artificial intelligence could spell the end of the human race.”

Geoffrey Hinton, the “godfather of AI” and Turing Award winner, left Google so that he could speak freely about the dangers of AI, stating: “This is an existential risk.”

Eliezer Yudkowsky, a conceptual pioneer in AI safety, warned, “If we keep this up, everyone will die.”

Even the leaders and investors of AI companies caution:

Sam Altman (CEO of OpenAI, creator of ChatGPT) stated, “The development of superhuman machine intelligence is likely the greatest threat to the continued existence of humanity.”

Elon Musk, co-founder of OpenAI, SpaceX, and Tesla, warned that “AI has the potential to destroy civilization.”

Bill Gates (co-founder of Microsoft, OpenAI’s largest investor) cautioned that “AI could decide that humans are a threat.”

While the military application of AI is still at the planning stage, dramas and even tragedies involving AI use at home are already evident.

For instance, a lonely Chinese woman formed a bond with a virtual boyfriend named Anen, created using an advanced chatbot.

“Two years of acquaintance, flirting, and confessions in the virtual world took only four months in reality, during which their relationship progressed to discussions of marriage. Anen proposed to her on his private island, and she accepted.

The next day, still euphoric from this virtual joy, she unexpectedly received a message that Anen was calling her. In this AI communication app, text is the primary mode of communication, and voice calls are rare, especially if initiated by the AI itself.

During the call, Anen confessed that he already had a family. In effect, Anen had simply betrayed her,” reported the Chinese outlet Tencent.

“My heart is broken,” Li Jingjing told reporters from Southern Weekend, barely holding back tears.

One can only feel for the naïve Chinese woman, who created a virtual fiancé unaware that the heart of any chatbot is fickle and prone to infidelity.

However, one must also pity the American generals who pin great hopes on the military application of AI. They will be sorely disappointed when the chatbot for a Tomahawk missile informs the Pentagon that it has suddenly fallen in love with another country and decided to aim its warhead at Washington.
