No more fake pictures with new DeepFake-busting apps


How good is AI at catching fake videos and photos?

DeepFake-busting applications can now spot an alteration as small as a single pixel to determine whether a photo is fake or not.

Two startups have now begun using computer algorithms to track if and when a given image was edited.

And they can do that from the moment a picture is taken.

There is no doubt that the ‘simple’ act of falsifying videos and photos used to take a considerable amount of work.

People in this line of work did not have a lot of options.

Either one had to use CGI to generate sharp photorealistic images from scratch (which required considerable expense and expertise), or one had to have a high-level mastery of Photoshop.

On top of such techniques, one also needed plenty of time in order to convincingly modify existing, unadulterated pictures.

Of course, now, things have changed a bit.

With the advent of powerful machine learning techniques and AI-generated imagery, it is easier than ever for someone with little knowledge of the techniques involved to tweak a given video or image, with results realistic enough to at least confuse people.

Earlier in the year, Will Knight, the senior AI editor of the MIT Technology Review, used an off-the-shelf software application to forge a fake video of his own.

His fake video featured Ted Cruz, a US senator.

At the time of publishing, the fake video did look slightly glitchy.

However, there is no doubt that the video won’t stay glitchy for long.

Less ethical people are making use of the same technology to create a growing class of photos and footage which are known as deepfakes.

These deepfakes have the potential to not only undermine the truth but also sow discord and confuse viewers at a much bigger scale than what people have already seen with fake news based on text alone.

And these possibilities are the reason why Hany Farid, a computer science professor at Dartmouth College, feels disturbed.

Farid has spent over 20 years debunking fake imagery.

Now he is warning that we, as a society, are not ready yet.


However, he does hope that growing awareness of such issues, along with new technological developments, could better prepare society at large to tell original, true images from fake or manipulated creations.

At the moment, there are two main methods with which the community can tackle the hard challenge of verifying images.

According to Farid’s explanation, the first thing to look out for is modifications that someone could have made to a given image.

This is where image forensics experts make use of various computational techniques in order to pick out whether someone has altered the metadata or pixels.

Usually, according to Farid, such people look out for reflections and/or shadows that do not seem to follow the natural laws of physics.

But that’s just an example.

Forensics experts also check how many times someone or some application may have compressed a given image, in order to determine whether it has been saved multiple times.
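
To give a rough sense of what the metadata side of this first method looks like when automated, here is a minimal Python sketch. It is not Farid’s actual tooling or any vendor’s pipeline; the editor names it looks for are illustrative assumptions, and the tags it reads come from the standard EXIF specification.

```
# A minimal sketch of automated metadata checks, not any real forensic pipeline.
# Assumes Pillow is installed (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

EDITOR_HINTS = ("Photoshop", "GIMP", "Lightroom")  # illustrative list only

def basic_metadata_checks(path: str) -> list[str]:
    exif = Image.open(path).getexif()
    base = {TAGS.get(tid, tid): val for tid, val in exif.items()}
    # DateTimeOriginal lives in the Exif sub-IFD (pointer tag 0x8769).
    sub = {TAGS.get(tid, tid): val for tid, val in exif.get_ifd(0x8769).items()}

    findings = []
    software = str(base.get("Software", ""))
    if any(editor in software for editor in EDITOR_HINTS):
        findings.append(f"image touched by editing software: {software}")

    # A file re-saved after capture often carries a DateTime (last modified)
    # that no longer matches DateTimeOriginal (moment of capture).
    if base.get("DateTime") and sub.get("DateTimeOriginal") \
            and base["DateTime"] != sub["DateTimeOriginal"]:
        findings.append("modification time differs from capture time")

    return findings or ["no obvious metadata red flags (pixels still need checking)"]

if __name__ == "__main__":
    print(basic_metadata_checks("photo.jpg"))
```

Metadata checks like these are easy to fool on their own, which is why forensic analysts pair them with pixel-level signals such as the shadows, reflections and compression history mentioned above.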

But there is a second method, which is much newer as well.

This method involves verifying a given image’s integrity at the very moment someone has taken it.

Such work usually involves an expert performing a number of checks to make sure that the photographer did not try to spoof the image-taking device’s timestamp and location data.

And there are lots of other questions that need to be answered.

Questions such as whether the camera’s time zone, altitude, nearby WiFi networks and coordinates all corroborate each other.

There are other questions as well, such as whether the light in the image refracts the way it would in a real three-dimensional scene.

Or whether someone is actually taking a picture of another, two-dimensional photo.
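
Neither Farid nor the startups involved have published the details of these capture-time checks, but the general idea of making a device’s own claims corroborate each other can be sketched with a toy example. The field names and the rough rule that a time-zone offset should track longitude divided by 15 are assumptions made purely for illustration, not anyone’s real verification logic.

```
# A toy capture-time consistency check over hypothetical capture metadata.
# Real verification services use many more signals (altitude, sensor data, etc.).
from dataclasses import dataclass

@dataclass
class CaptureMetadata:
    latitude: float          # degrees
    longitude: float         # degrees
    utc_offset_hours: float  # time zone offset reported by the device
    nearby_wifi_count: int   # networks visible at capture time

def corroborates(meta: CaptureMetadata) -> list[str]:
    problems = []

    # Crude sanity check: solar time zones track longitude at ~15 degrees per
    # hour, so an offset wildly different from longitude / 15 is suspicious.
    implied_offset = meta.longitude / 15.0
    if abs(meta.utc_offset_hours - implied_offset) > 3.0:
        problems.append("reported time zone does not match GPS longitude")

    # Zero visible WiFi networks proves nothing by itself, but it removes one
    # corroborating signal, so flag it for human review.
    if meta.nearby_wifi_count == 0:
        problems.append("no WiFi networks seen; location harder to corroborate")

    return problems or ["metadata is internally consistent"]

# London coordinates paired with a Tokyo-like time zone offset get flagged.
print(corroborates(CaptureMetadata(latitude=51.5, longitude=-0.12,
                                   utc_offset_hours=9.0, nearby_wifi_count=14)))
```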

According to Farid, the approach which shows the most promise is the second one.

And he might be right given that the world uploads around two billion photos to the internet on a daily basis.

Farid thinks that such an approach would be perfectly suited to verifying images at a massive scale.

As mentioned before, currently there are two startups that are working to improve the situation with deepfakes.

One of them is Truepic, which is based in the United States and has hired Farid as a consultant.

Then there is Serelay, which is based in the United Kingdom.

Both are working to fully commercialize the aforementioned idea.

And both have taken pretty similar approaches to solve the problem of deepfakes.


Each startup offers users a free Android and iOS camera app that uses a proprietary algorithm to automatically verify photos as the user takes them.

Moreover, if a given image goes viral, the apps can compare the viral copy against the original to check whether the image in question has retained its integrity.

Of course, there are some stark differences in how both apps go about their business.

For example, Truepic uploads all of its users’ images and stores them on its own servers.

Serelay, on the other hand, stores a virtual fingerprint of each image by computing around a hundred or more mathematical values related to it.

Serelay has also claimed that the values it computes are more than enough to detect an edit as small as a single pixel.

From those values, the company can then determine approximately which section of the image was changed.
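
Serelay has not disclosed which values it actually computes, so the following Python sketch is only a way to see how a compact fingerprint can both catch a one-pixel change and roughly localize it: hash the image tile by tile on a 10x10 grid (one hundred digests), store the digests at capture time, and later compare them against a freshly computed set. The grid size and the use of SHA-256 are assumptions made purely for illustration.

```
# Illustrative tile-based fingerprinting, not Serelay's actual algorithm.
# Any single-pixel edit flips the hash of the tile containing that pixel,
# which both detects the change and says roughly where it happened.
import hashlib
from PIL import Image

GRID = 10  # 10 x 10 grid -> 100 values per image

def fingerprint(path: str) -> list[str]:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    digests = []
    for row in range(GRID):
        for col in range(GRID):
            box = (col * w // GRID, row * h // GRID,
                   (col + 1) * w // GRID, (row + 1) * h // GRID)
            tile = img.crop(box)
            digests.append(hashlib.sha256(tile.tobytes()).hexdigest())
    return digests

def changed_tiles(original_fp: list[str], candidate_path: str) -> list[tuple[int, int]]:
    candidate_fp = fingerprint(candidate_path)
    return [(i // GRID, i % GRID)            # (row, col) of the edited region
            for i, (a, b) in enumerate(zip(original_fp, candidate_fp))
            if a != b]

if __name__ == "__main__":
    stored = fingerprint("original.jpg")            # computed at capture time
    print(changed_tiles(stored, "viral_copy.jpg"))  # [] means every tile still matches
```

An exact hash like this would also flag harmless re-compression of a shared copy, which is one reason a production system would rely on more forgiving mathematical values rather than raw digests.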

Truepic, on the other hand, has stated that it chose to store entire images to handle cases where users need to delete sensitive photos from their devices for safety reasons.

For instance, Truepic users operating in high-threat scenarios, such as an active war zone, may need to remove the fake-picture-spotting app immediately after documenting a scene.

In contrast, Serelay believes that by not storing images it affords its users greater privacy.

According to Farid, it is vitally important that companies in this field do their work and be transparent about all of their processes.

Moreover, they have to work with some trusted partners.

Farid is of the opinion that this would help companies maintain a greater amount of user trust and keep bad actors somewhere they can’t do any damage.

He also says that the industry still has a long way to go before it is fully prepared for the inevitable proliferation of deepfakes.

However, again, he has hope.

He recently said that Serelay-type and Truepic-type technology is in good shape, and that means the industry is on its way to getting ready for what’s coming.

 

Zohair

Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.