Weaponized AI: How Artificial Intelligence Researchers Should Act


Let’s see how AI researchers can play their part in shaping how, and whether, artificial intelligence gets weaponized.

There is a general consensus among the security community that artificial intelligence researchers should not just walk away from military projects.

Instead, they need to make sure that no one exploits the technology for nefarious ends.

Now, the author of a new book on autonomous weapons argues that the scientific community working on artificial intelligence needs to do more to prevent bad actors from weaponizing the technology.

Just recently, a total of 2,400 artificial intelligence researchers signed a new pledge.

Through this pledge, the researchers promised not to have any involvement in the development of so-called AI-enabled autonomous weapons.

But what exactly are these autonomous weapons?

These are simply systems that can decide whom to kill without any human input.

In other words, they can work on their own.

The pledge came right after Google decided not to renew its contract to supply the Pentagon with artificial intelligence for the advanced analysis of drone footage.

Of course, it is also known that Google only did that after coming under huge pressure from hundreds of its employees.

These employees opposed the company’s work on a project that the Pentagon called Maven.

The author of the new book, Paul Scharre, has titled it Army of None: Autonomous Weapons and the Future of War.

Scharre believes that artificial intelligence researchers need to take more responsibility and do more than just opt out of participating in weaponizing artificial intelligence.

According to Scharre, it is on artificial intelligence researchers to shape the way militaries all over the world may or may not weaponize artificial intelligence in the future.

Scharre served as an Army Ranger in Afghanistan and Iraq before becoming a senior fellow at a Washington, D.C.-based think tank called the Center for a New American Security.

He has recently argued that artificial intelligence experts need to engage more with military professionals as well as policymakers in order to explain why researchers have such serious concerns.

Scharre also believes that artificial intelligence professionals should help these people understand both the benefits and the limitations of artificial intelligence systems.

Recently, Scharre gave an interview to Will Knight of MIT Technology Review and talked at length about how the scientific community should move forward to have the best chance of halting a potentially dangerous artificial intelligence arms race.

On the question of how keen the United States military is to develop artificial intelligence weapons, Scharre said that US defense leaders have consistently and repeatedly stated their clear intention to always keep a human being in the loop.

These leaders have also stated that a human would remain responsible for all decisions related to the use of lethal force.

Of course, there is a caveat here as well.


Namely, US defense leaders have acknowledged that if any other country in the world developed autonomous artificial intelligence weapons, the United States would have little choice but to follow suit.

In other words, US defense leaders have more or less confirmed that, under those circumstances, they would develop autonomous weapons.

Other countries are not going to stop building them just because one country has decided to hold back.

According to Scharre, that is the real potential risk.

If one country crosses the autonomous-weapons line, other countries will simply have no choice but to respond in kind just to make sure that they remain competitive.

On the question of whether the community should trust US defense leaders, Scharre explained that he thought US defense officials had shown reasonable sincerity in genuinely wanting humans to remain responsible for any and all use of lethal force.

According to Scharre, military professionals in the United States certainly have no desire to let artificial intelligence weapons run amok.

With that said, Scharre also mentioned that the question of autonomous weapons remained an open one.

Why?

Because no one has a clear idea of how to translate a broad concept, such as a human being responsible for any use of lethal force, into specific and precise engineering guidance on issues such as the kinds of weapons that can be developed and used.

Already, the scientific community is split on what exactly constitutes an autonomous artificial intelligence weapon.

There is a hot contest happening right now within the community, with artificial intelligence researchers differing on the methods they can use to put the above-mentioned principles into practice.

Will Knight also asked Scharre whether the community needed to involve technologists.

To that question, Scharre replied that artificial intelligence researchers have a responsibility to become a sizeable part of the whole conversation.

Why?

Because, according to Scharre, artificial intelligence researchers could bring their technical skills and expertise into the equation and shape vital policy choices.

Scharre also added that policymakers as well as the scientific community needed to take into account more information regarding artificial intelligence, including:

  • Bias
  • Safety
  • Transparency
  • Explainability

among many other artificial intelligence-related concerns.

According to Scharre, the current state of artificial intelligence technology has what he likes to call twin features.

Artificial intelligence does offer powerful solutions to common problems.

But artificial intelligence systems also currently come with a lot of vulnerabilities.

Of course, such problems exist with all computing systems and are, in that sense, no different from other cyber risks.
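To make that point concrete, here is a minimal sketch of one well-known class of vulnerability, the adversarial example, in which a tiny, targeted change to an input flips a model’s decision. The toy linear classifier below, along with its randomly generated weights, is purely an illustrative assumption rather than anything discussed in the interview.

    import numpy as np

    # Purely illustrative: a toy linear classifier, score = w . x + b.
    # Any trained model exposes this same kind of weakness.
    rng = np.random.default_rng(42)
    w = rng.normal(size=100)   # hypothetical model weights
    b = 0.0
    x = rng.normal(size=100)   # an input the model currently classifies

    score = float(w @ x + b)
    print("original score:", round(score, 3), "-> class", int(score > 0))

    # Adversarial nudge: move every feature slightly against the current
    # decision. For a linear model the gradient of the score with respect
    # to x is just w, so the worst-case direction is -sign(score) * sign(w).
    # Pick a step barely large enough to cross the decision boundary.
    epsilon = 1.05 * abs(score) / np.sum(np.abs(w))
    x_adv = x - epsilon * np.sign(score) * np.sign(w)

    adv_score = float(w @ x_adv + b)
    print("per-feature change:", round(float(epsilon), 4))
    print("adversarial score:", round(adv_score, 3), "-> class", int(adv_score > 0))

The per-feature change needed here is a small fraction of the natural scale of the input, which is exactly the worry: the same basic idea carries over to the far more complex models that militaries would deploy.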

The unfortunate part about the whole situation is that governments all over the world have absorbed only the first half of that very important message: that artificial intelligence is indeed powerful.

However, what they have not understood is that artificial intelligence currently comes with a lot of risks as well.

This is where artificial intelligence researchers can help militaries and government organizations better understand why the artificial intelligence community has so many concerns about the potential negative consequences of weaponizing the technology.

In order to develop and present their case effectively, artificial intelligence researchers need to become a party to an actual constructive dialogue rather than shunning the AI-enabled weapons discussion altogether.

The interviewer also asked Scharre what he made of the recent pledge against autonomous AI-enabled weapons, which the Future of Life Institute organized.

To that, Scharre answered that artificial intelligence researchers had issued many such calls to action before.

More precisely, artificial intelligence scientists had been protesting against weaponizing artificial intelligence in the form of open letters well before the latest pledge.

Artificial intelligence researchers wrote such letters on autonomous artificial intelligence-enabled weapons in both 2015 and 2017.


With that said, Scharre also mentioned that these letters only represented symbolic gestures.

He also said that these letters would probably yield diminishing returns as AI researchers put more and more of them out.

Scharre also revealed that individual countries had started to discuss autonomous artificial intelligence weapons at forums such as the United Nations as early as 2014.

Now, the increased pressure from the artificial intelligence researcher community added a significant dimension to the whole AI-enabled weapons conversation.

However, it still has not attracted the amount of attention required to sway the world’s major military powers, such as the US, China, Russia and France.

Ultimately, the scientific community wants a comprehensive ban on the weaponization of technologies such as artificial intelligence.

According to Scharre, the better way to make more of an impact on this front is to get more artificial intelligence researchers to attend United Nations meetings and other major events.

In this way, artificial intelligence researchers would find it much easier to help policymakers understand why scientists have so many concerns about the proper use of artificial intelligence weapons.

On the question of Google’s decision not to renew the company’s contract with the Pentagon, Scharre said that he considered it a surprise.

Why?

Because Project Maven, according to Scharre, did not involve any kind of autonomous targeting or autonomous weapons.

Moreover, to Scharre, Project Maven appeared to comply fully with the artificial intelligence principles that Google made public just recently.

However, that decision does not solve the problem of competition.

More precisely, technology companies compete fiercely to attract the top talent in the field of artificial intelligence.

Scharre mentioned that this, along with many other reasons, had pushed Google to come up with its AI principles post.

Otherwise, the company would have risked losing a good portion of its best artificial intelligence engineers, who would have resigned from their positions in protest.

When asked whether such gestures (petitions and letters) would help slow down the actual development of autonomous artificial intelligence weapons, Scharre said that, as far as Project Maven was concerned, Google was not even involved in building human-controlled weapons, let alone autonomous artificial intelligence weapons.

Hence, he did not see any direct connection between the two.

Scharre then added that the pledge letter was of course directed at AI-enabled autonomous weapons.

However, he also said that he did not think pledges or letters would meaningfully affect how militaries incorporate autonomy and artificial intelligence into their operations.

Why?

Because technology companies do not develop weapons.

That is the job of defense contractors.

They build the majority of weapons and will continue to do so for the foreseeable future.

Scharre said that if major technology companies apart from Google also started to opt out of any working relationship with the United States military, it could slow down the incorporation of artificial intelligence technology into vital supporting functions such as data analysis.

This is exactly what the United States military tried to do with Project Maven.

However, it is also true that if Google and other such companies won’t help, other companies will eventually move in and fill the gap.

Scharre said that after Google announced it would not renew its contract with the US military, several other companies stepped up to the plate and said publicly that they wanted to form a working relationship with the United States military.

On the question of whether such efforts could give rise to unintended consequences, Scharre replied that if artificial intelligence researchers continued to paint legitimate use cases of artificial intelligence as totally unacceptable, they could drive a huge wedge between the policy community and technology companies.

More importantly, that would essentially make any reasonable discourse much harder than before.

According to Scharre, engineers have every right to refrain from working on any given project that they cannot fully support.

However, when engineers take those personal motivations and turn them into pressure campaigns that stop other engineers from working on legitimate and important national security applications, they play a large part in harming public safety.

Moreover, they impinge on the rights of all other engineers to pursue their own motivations and conscience.

There is no doubt that democratic countries will need to make use of artificial intelligence technology not just for the military but also for a wide variety of lawful and important national security purposes.

Such purposes could include:

  • Intelligence
  • Border security
  • Defense
  • Cybersecurity
  • Counterterrorism

Scharre also answered questions about the possibility that the United States is already in an artificial intelligence arms race with countries such as China.

He said that China had publicly and officially declared its intention to become the global leader in artificial intelligence by 2030.


More importantly, it is following through on that intention.

The country has started to invest heavily in artificial intelligence research and recruitment.

China is attracting artificial intelligence researchers from all corners of the globe.

Then there is the fact that China strictly works on a model of civil-military fusion.

This means that as artificial intelligence research picks up the pace in China, it will face no hurdles in flowing readily from Chinese technology firms directly into the Chinese military.

Such a process would not have to overcome any barriers.

At least not the kind of barriers that some Google employees have aimed to erect in the United States.

Scharre also said that China had already started to lay the groundwork for an artificial intelligence-powered surveillance state.

According to Scharre, if artificial intelligence researchers continued with these tactics and succeeded in slowing down the adoption of artificial intelligence tools in open, democratic societies that value ethical behavior then, in reality, their work would have contributed to ushering in a new future.

A future where the most robust and powerful artificial intelligence technology rests in the hands of governments and regimes that care very little about the rule of law and ethics.

Since Scharre had written a book about autonomous weapons, the interviewer also asked him why defining autonomous weapons can become very tricky and whether such trickiness could further complicate the discussion of the military’s use of artificial intelligence.

To that, Scharre replied that the authors who signed the recent pledge on futureoflife.org against AI-enabled autonomous weapons objected to autonomous weapons that would decide on their own to kill a person.

However, these same researchers acknowledged that the US military would need some kind of AI-enabled autonomous systems to defend the country against other such autonomous AI weapons.

Scharre said there exists a reasonable gray area where the US military may need autonomous weapons to defend against other autonomous weapons, with a person still in the loop, in activities such as targeting submarines or fighter jets.

This is where the real challenge would lie.

Scharre also mentioned that balancing different and competing objectives is not a simple task.

Moreover, he said, US policymakers will have to face some real choices when they decide to adopt artificial intelligence technology fully.

Artificial intelligence engineers still have the chance to make a huge impact in shaping the choices that US lawmakers will eventually make.

But in order to do so, they have to engage in a continuous and constructive dialogue with US policymakers.

They simply cannot afford to just opt out of the discussion altogether.

Artificial intelligence researchers who really care about how the US military will use this new technology would do a more effective job if they simply moved away from pressure campaigns.

And instead, they could start to help and educate US policymakers about artificial intelligence technology as early as possible.

This education would also inform policymakers about the potential limitations of technologies such as artificial intelligence.

 
