AI Now, a research institute, has identified several technologies, chief among them facial recognition, as key challenges that policymakers and society as a whole need to think about and solve.
However, the real question is:
Is it too late for that?
No one can deny that artificial intelligence has enabled the technology industry to make major strides in several problem areas, especially over the last few years.
However, those same rapid advances have now raised some serious ethical conundrums.
Chief among them is machine learning and how this artificial intelligence technique can identify the faces of different people in video footage and photos with a high degree of accuracy.
It is true that machine learning might enable users to unlock their smartphones with nothing but a smile.
However, those same machine learning features could also mean that big corporations and governments now have in their hands some very powerful tools with which to usher in a new era of mass surveillance.
The AI Now Institute has recently published a fresh report on the impact that artificial intelligence-enabled tools are having on society.
Readers should know that AI Now is a very influential research institute based in New York.
Its latest report identifies facial recognition technology as a critical challenge, not just for policymakers but for society as a whole.
Facial recognition and its adoption have grown at a rapid pace.
And the reason for that, when everything is said and done, comes down to the simple fact that the industry has rapidly developed a very specific machine learning technique known as deep learning.
Simply put, deep learning is a technique that makes use of large tangles of computations.
Some experts have drawn a very rough analogy between deep learning and the wiring inside a biological human brain.
In any case, the main aim of any deep learning project is to recognize some sort of pattern in the given data.
With the help of such techniques, machines can now carry out pattern-recognition tasks with jaw-dropping accuracy.
The tasks at which deep learning best shows its brilliance include identifying individual faces and objects, even when the image or video is of poor quality.
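Broadly speaking (and as a simplified sketch, not a description of any particular vendor's system), modern face recognition uses a deep network to map each face image to a numeric "embedding" vector; two photos are judged to show the same person when their embeddings lie close together. The final comparison step can be illustrated with hand-written stand-in embeddings:

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embeddings (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=0.6):
    """Declare a match when embedding distance falls below a tuned
    threshold (0.6 is a common default in open-source face tooling)."""
    return euclidean_distance(emb_a, emb_b) < threshold

# Toy 4-dimensional embeddings. Real systems use 128+ dimensions
# produced by a deep network, not hand-written numbers like these.
alice_photo_1 = [0.12, 0.80, 0.33, 0.45]
alice_photo_2 = [0.15, 0.78, 0.30, 0.44]
bob_photo = [0.90, 0.10, 0.62, 0.05]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

The accuracy of a real system comes entirely from how well the deep network is trained to place different photos of the same face near each other; this comparison step itself is trivially simple, which is part of why the technique scales so easily to mass surveillance.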
Perhaps that is the reason why technology companies and others have rushed to adopt such machine learning tools.
On that note, the report calls for the United States government to take general steps to enhance and improve regulation of this rapidly moving technology amid a ton of debate over the technology's privacy implications.
The report clearly states that the implementation of artificial intelligence systems is expanding very rapidly, and that without adequate governance, oversight, and accountability regimes, this may lead to system-wide problems.
For example, the report suggests extending the authority of existing government bodies in the country to regulate artificial intelligence issues, including the use of applications such as facial recognition.
The report notes that domains such as criminal justice, along with many other fields, have their own unique histories, hazards, and regulatory frameworks.
Apart from that, the report also calls for stronger consumer protections against misleading claims about the use of AI and its benefits.
Additionally, the report urges technology companies to show some generosity and waive their trade-secret claims, especially when the accountability of artificial intelligence systems is at stake.
Readers should understand that behind the fancy label of AI, there are still those plain old algorithms that machine learning practitioners use to make some very consequential decisions.
Perhaps that is the reason the report also asks those involved to begin governing themselves much more responsibly in their use of technologies such as artificial intelligence.
Furthermore, the report also suggests that the US public should receive sufficient warning when authorities are using facial-recognition systems to track and monitor them.
Moreover, the report says that people should have the basic right to reject the use of technologies such as facial recognition.
Of course, implementing all or any of the recommendations mentioned in the report could and probably would prove extremely challenging.
With that said, readers should know that the toothpaste is already out of the tube.
In other words, authorities and some companies have already begun adopting and deploying facial recognition technology, and incredibly quickly.
Technology companies, in particular, are making use of it in their latest products.
Everyone knows that the latest iPhones from Apple use facial recognition to unlock the device and enable online payments.
Then there is Facebook, which is busy scanning millions upon millions of photos and videos every single day in order to accurately identify specific individuals.
Moreover, just last week, the popular United States airline Delta Air Lines officially announced that it had installed a new face-scanning check-in system at the airport in Atlanta.
The United States Secret Service has also begun to develop an advanced facial-recognition security system for use at the White House.
That's according to a document that the ACLU recently highlighted.
The report also mentioned that the role of artificial intelligence in widespread and mass-scale surveillance systems had expanded immensely in countries such as China and the United States of America along with many other smaller countries.
The fact is, the artificial intelligence-enabled facial recognition technology has seen more widespread adoption in China than in any other country including the United States of America.
When it comes to China, everything seems to happen at a much grander scale.
Most of the time, the implementation of artificial intelligence facial recognition systems involves some sort of collaboration between private artificial intelligence companies and the country’s government agencies.
Various media reports have mentioned how police forces in various countries have managed to make use of artificial intelligence in order to identify criminals.
Then there are those numerous studies which have suggested that authorities are making use of artificial intelligence to track down dissidents.
Even if one assumed for a second that no one was using artificial intelligence in ethically dubious ways, the fact is that the technology itself comes with built-in issues.
To take an example, experts have shown that some artificial intelligence facial recognition systems contain bias.
Researchers at the ACLU demonstrated that a facial recognition tool Amazon offered to customers through its cloud program was more likely to misidentify minorities as criminals.
Apart from that, the report also warns against the use of emotion tracking in voice-detection and face-scanning systems, because emotion tracking of that kind is still, relatively speaking, unproven.
Even with these problems, authorities are using facial-recognition technologies in potentially unfair and/or discriminatory ways.
To take an example, some schools have started using such technologies to track the attention of all the students present in class.
According to Kate Crawford, one of the lead authors behind the report and a co-founder of AI Now, it is high time the government started to regulate affect recognition and facial recognition technologies.
Crawford said that companies claiming to use AI to 'see' directly into the interior states of other people was neither ethical nor scientific.