AI code of ethics: Why establishing it will be hard


Are AI ethics hard? Or confusing?

The first problem with establishing a code of ethics (let alone one for the field of artificial intelligence) is that ethics are inherently subjective.

And relying on such subjective concepts to guide the proper use of technologies such as artificial intelligence could prove very problematic.

That, at least, is what several legal scholars argued at a recent conference.

Since 2012, a span of roughly six years, the New York City Police Department has compiled a rather massive database.

The database contains the names and personal details of at least 17,500 individuals whom the department believes to be involved in criminal gangs.

Civil rights activists have already criticized the database as racially discriminatory and inaccurate.

Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund, said at the AI Now Symposium held in New York last Tuesday that she imagined marrying current facial recognition technology to a database that presumes, at least theoretically, that someone is part of a gang.

Researchers, activists and lawyers emphasize the very real need for accountability and ethics in the design and implementation of AI systems.

However, such an approach often ignores a few tricky questions.

Questions such as: who should enforce ethical standards for AI systems?

And who gets to define the AI ethics that everyone keeps talking about?

Various studies have shown that facial recognition technology, in its current state, is imperfect.

In fact, even the industry's leading software is less accurate for dark-skinned individuals, and for dark-skinned women in particular.
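
To make that claim concrete, here is a minimal, hypothetical sketch in Python of how such audits typically measure accuracy per demographic group. None of it comes from the studies the article alludes to: the function name, group labels and sample data are all invented for illustration.

```python
# Hypothetical sketch: measuring accuracy disparities across
# demographic groups, as fairness audits of facial recognition
# systems commonly do. Data and labels below are invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Per-group accuracy: fraction of predictions that matched the truth.
    return {group: correct[group] / total[group] for group in total}

# Invented sample echoing the kind of disparity the studies describe:
sample = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("darker-skinned women", "match", "no match"),  # misidentification
    ("darker-skinned women", "no match", "no match"),
]
print(accuracy_by_group(sample))
# {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```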

There is also the problem that establishing a set of ethical standards does not necessarily change behavior.


For example, back in June, Google agreed to officially discontinue its work on Project Maven, a collaboration with the Pentagon.

The company also established a new set of ethical AI principles to guide its future involvement in AI projects.

Only a few months later, however, many Google employees felt that the principles the company had so recently set out to follow were possibly being placed by the wayside in a bid for a massive Department of Defense (DoD) contract worth around $10 billion.

Apart from that, a recent study from North Carolina State University found that simply asking software engineers to read a code of ethics did nothing to change their behavior.

Philip Alston, an international legal scholar at New York University's School of Law, recently proposed a solution to the ambiguous, unaccountable nature of ethics.

His proposal: reframe AI-driven real-world consequences in terms of human rights.

Alston also noted at the conference that human rights are guaranteed in the Constitution.

Furthermore, he said, they are present in the Bill of Rights and have been interpreted by the country's courts.

So if a given AI system mistakenly takes away people's basic rights, that should not be acceptable to society.

 

Zohair

Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech-related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.
