Want to See Around Corners: Algorithms Can Help


Algorithms, it turns out, can help us see around corners.

Researchers in the field of computer vision have recently uncovered a whole world of visual signals hiding in our surroundings.

These signals include subtle motions that betray what is being said, and faint images that reveal what lies around a corner.

Back in 2012, while on vacation on Spain's east coast, computer vision scientist Antonio Torralba noticed stray shadows on the wall of his hotel room.

The strange part was that nothing seemed to be casting them.

Eventually, Torralba realized that the discolored patches on the hotel room's wall were not shadows at all.

They were faint, upside-down images of what lay outside his hotel room window: a patio.

The window, in other words, was acting as an old-school pinhole camera.

The pinhole camera is the simplest form of camera: light rays pass through a tiny opening and form an upside-down, or inverted, image on the surface opposite the opening.
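
To make the geometry concrete, here is a minimal sketch in Python of why the image comes out inverted; the scene coordinates are values I made up purely for illustration. Every ray travels in a straight line through the hole, so a point above the hole lands below it on the far surface.

```python
import numpy as np

def pinhole_project(points, image_dist=1.0):
    """Project 3-D scene points through an ideal pinhole at the origin.

    A ray from (x, y, z) in front of the hole continues straight and
    lands at (-x * d / z, -y * d / z) on an image plane a distance d
    behind the hole, flipped in both axes.
    """
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.stack([-x * image_dist / z, -y * image_dist / z], axis=1)

# Two points, one at the top of the scene (y = +2), one at the bottom:
print(pinhole_project([(0.0, 2.0, 5.0), (0.0, -2.0, 5.0)]))
# => the top point maps to y = -0.4 and the bottom to y = +0.4 (inverted)
```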

The resulting upside-down image was scarcely perceptible on the light-drenched wall of Torralba's room.

But it struck Torralba that the world is suffused with visual information that human eyes routinely fail to notice.

As he put it in a recent interview, these images are essentially hidden from us.

Yet they are all around us, all the time.

The hotel-room patio experience was a wake-up call not only for Torralba but also for his colleague Bill Freeman; both are professors at the Massachusetts Institute of Technology.

The two had an awakening of sorts: they realized that these accidental cameras, as they came to call them, are ubiquitous.

According to the professors, all kinds of everyday objects, including

  • Houseplants
  • Corners
  • Windows

and countless other things, tend to create subtle upside-down images of their immediate surroundings.

These images can be as much as a thousand times dimmer than everything else around them, and they are typically invisible to the naked eye.

Freeman recently explained that he and Torralba figured out ways to pull those dim, upside-down images out and process them until they become visible.

The pair also discovered just how much visual information these shadows contain, and how much of it hides in plain sight.

Both professors went to work and published a paper shortly afterward.

In the paper, Torralba and Freeman showed that the changing light on the wall of a room, filmed with nothing fancier than an iPhone camera, can be processed to reveal the scene outside the room's window.

A few months ago, Torralba and Freeman, along with their collaborators, reported that they could spot someone moving on the far side of a corner simply by filming the ground near that corner.

And this summer, the team demonstrated that they could film a houseplant and, from nothing but the disparate shadows that the plant's leaves cast on the walls, reconstruct a three-dimensional image of the rest of the room.

The team also showed that they could turn the houseplant's leaves into a kind of visual microphone: by magnifying the vibrations of the leaves, they could listen in on what someone near the plant was saying.

In a 2014 experiment, when a man in a room spoke the phrase “Mary had a little lamb,” researchers reconstructed the audio from nothing but the motion of an empty chip bag, which they had filmed through a soundproof window.

Fittingly, “Mary had a little lamb” were also the first words Thomas Edison recorded with his phonograph back in 1877.
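
The real visual-microphone pipeline recovers motion from the phases of a complex steerable pyramid and relies on high-speed video; the sketch below is only a toy illustration of the idea, assuming grayscale frames as NumPy arrays and using a single global, sub-pixel motion estimate per frame (a one-step Lucas-Kanade fit), with the per-frame jitter treated as audio samples. The function names are placeholders of my own.

```python
import numpy as np

def global_shift(ref, frame):
    """One-step Lucas-Kanade: least-squares estimate of the tiny global
    translation (dx, dy) between two grayscale frames. Assumes a
    textured reference frame so the 2x2 system is invertible."""
    gy, gx = np.gradient(ref.astype(np.float64))
    dt = frame.astype(np.float64) - ref.astype(np.float64)
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * dt), np.sum(gy * dt)])
    return np.linalg.solve(A, b)

def frames_to_audio(frames):
    """Treat the per-frame vertical jitter of the filmed object (the
    chip bag) as one sound sample per video frame."""
    ref = frames[0]
    signal = np.array([global_shift(ref, f)[1] for f in frames[1:]])
    return signal - signal.mean()  # remove the DC offset before playback
```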

The broader research into seeing around corners, and into inferring information from images of things that are not directly visible, really started to take off about six years ago.

Researchers call the technique non-line-of-sight imaging.

Back in 2012, Freeman and Torralba sparked a great deal of interest from the scientific community with their accidental-camera research paper.

Around the same time, another watershed paper came out, this one from a separate group at MIT led by Ramesh Raskar.

Fast-forward to 2016: on the strength of those earlier results, DARPA (the Defense Advanced Research Projects Agency) launched the REVEAL program with $27 million in funding.

REVEAL stands for Revolutionary Enhancement of Visibility by Exploiting Active Light-fields.

The program provided much-needed funding to a number of nascent research labs around the United States.

Since that first funding program, a stream of mathematical tricks and new insights has made techniques such as non-line-of-sight imaging significantly more practical and powerful.

No one doubts that such a technique has obvious spying and military applications.

But researchers have rattled off a sizable list of other possible use cases as well.

The areas that non-line-of-sight imaging could affect include:

  • Medical imaging
  • Search and rescue operations and/or missions
  • Space exploration
  • Astronomy
  • Robotic vision of the future
  • Autonomous cars


However, Torralba has said that neither he nor Freeman had any particular application in mind when they began working in this area.

According to Torralba, all he and his team wanted was to dig into the basics of how images form and what exactly constitutes a camera.

That pursuit of basic knowledge naturally led the two toward a fuller scientific investigation of how light behaves and how it interacts with the surfaces and objects around us.

The two scientists began looking at things that no one before them had thought to look at.

Torralba also noted that psychological studies have shown humans to be remarkably bad at accurately interpreting shadows.

One reason, he suggested, may be that many of the faint patterns we see around us are not actually shadows.

Over time, the eye simply gave up trying to make sense of them.

More on Accidental Cameras

Here is the really interesting thing about light rays: they carry images, and those images contain information about the world outside any given person's field of view.

These rays constantly strike walls and other surfaces and reflect from there into people's eyes.

So why is the visual residue these light rays carry so weak?

The answer is simple once you see it: there are far too many light rays, traveling in a near-infinite number of directions, and the visual residue they carry washes out.

To form an image, the rays must be restricted so that only one particular set of them falls on a given surface.

That is exactly what a simple pinhole camera does.

Freeman and Torralba had this initial insight back in 2012: the environments we live in are full of features and objects that naturally restrict light rays.

Because of all these obstructions, faint images form, images just strong and prominent enough for machines to detect.

One other feature of pinhole cameras is that the smaller the aperture, the sharper the image they produce.

Why is that?

Each point on the imaged object emits rays in every direction, but only a ray traveling at exactly the right angle can pass through a tiny hole.

The smaller the hole, the closer the camera comes to capturing a single ray per scene point, and the sharper the resulting image.

This, Torralba noted, is why the window of his hotel room on Spain's east coast was simply too large to produce a sharp image.
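
A quick back-of-the-envelope sketch makes this concrete. It assumes nothing but similar-triangles geometry, and the distances below are values I picked purely for illustration: each scene point smears into a blur spot roughly the size of the opening.

```python
def blur_spot_diameter(aperture, object_dist, image_dist):
    """Geometric blur from a finite pinhole, by similar triangles:
    each scene point smears into a disc about the aperture's size,
    scaled by (object_dist + image_dist) / object_dist."""
    return aperture * (object_dist + image_dist) / object_dist

# A 1 mm pinhole vs. a 1 m window, scene 10 m away, wall 3 m inside:
print(blur_spot_diameter(0.001, 10.0, 3.0))  # ~0.0013 m: sharp
print(blur_spot_diameter(1.0, 10.0, 3.0))    # ~1.3 m: hopelessly smeared
```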

Interestingly, Torralba and Freeman understood that useful accidental cameras of the pinhole variety are, generally speaking, rare.

But both MIT professors also realized that pinspeck, or anti-pinhole, cameras, which consist of nothing more than a small, light-blocking object, form images in many more places.

To understand what is happening here, imagine filming the interior wall of a room through a crack in the window shade.

You would not be able to see much through the crack.

But what if a person's arm suddenly popped into your field of view?


What would happen?

According to the research, comparing the intensity of light on the wall when the arm is absent with the intensity when it is present reveals a great deal about the scene.

The arm blocks a specific set of light rays, rays that struck the wall in the first video frame before the arm briefly cut them off.

Subtract the data in the second image (with the arm) from the first image (without it), and you pull out exactly the set of rays the arm blocked.

Those rays form an image of the part of the room that the arm obscured.
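
Here is a minimal sketch of that subtraction, assuming two aligned grayscale frames from a fixed camera; the synthetic frames below are stand-ins I generated just to exercise the function.

```python
import numpy as np

def occluded_image(frame_without_arm, frame_with_arm):
    """Pinspeck (anti-pinhole) readout: subtracting the occluded frame
    from the unoccluded one leaves only the light rays the arm blocked,
    which together form an image of the scene region behind it."""
    a = frame_without_arm.astype(np.float64)
    b = frame_with_arm.astype(np.float64)
    return a - b

# Stand-in frames: the arm dims a faint band of rays on the wall.
rng = np.random.default_rng(0)
before = rng.uniform(100.0, 110.0, size=(240, 320))
after = before.copy()
after[100:140, 150:170] -= 0.5
residual = occluded_image(before, after)
print(residual.sum())  # all the recovered signal lives in that band
```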

As Freeman put it, once you allow yourself to look at things that block light, as well as things that let light in, the repertoire of places where you can find accidental pinhole-like upside-down images expands enormously.

Apart from the accidental-camera work, which aims to pick up small changes in light intensity, Torralba, Freeman and their colleagues have also devised algorithms that detect and amplify subtle color changes, such as the flushing of a human face as blood pumps in and out.

Similar algorithms can handle really tiny motions, the kind behind tricks involving talking bags of chips.

Researchers can now spot motions as small as one-hundredth of a pixel, motions that would ordinarily be buried in background noise.

Their method mathematically transforms images into corresponding configurations of sine waves.

The critical point is that, in the transformed space, the noise no longer dominates the signal.

Why is that?


It comes down to the nature of sine waves: each wave represents an average over a great many pixels, so the noise is spread thinly across it.

This allows researchers to detect shifts in the positions (the phases) of these sine waves from one video frame to the next, amplify those shifts, and then transform the data back into images.
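
The published technique (phase-based motion magnification, from Freeman's group) performs this per scale and orientation inside a complex steerable pyramid. The sketch below is a deliberately simplified stand-in, assuming grayscale frames as NumPy arrays and using a single global 2-D FFT; it captures the amplify-the-phase-shifts idea but not the spatial localization of the real method.

```python
import numpy as np

def magnify_motion(frames, alpha=50.0):
    """Toy phase-based motion magnification with one global 2-D FFT.

    Each frame is decomposed into sine waves; the phase of each wave is
    compared with the first frame, the (wrapped) phase difference is
    scaled by alpha, and the frame is rebuilt. Sub-pixel motions show
    up as tiny phase drifts, so scaling them exaggerates the motion.
    """
    spectra = [np.fft.fft2(f.astype(np.float64)) for f in frames]
    ref_phase = np.angle(spectra[0])
    magnified = []
    for s in spectra:
        # Wrapped phase difference relative to the reference frame.
        dphi = np.angle(s * np.exp(-1j * ref_phase))
        boosted = np.abs(s) * np.exp(1j * (ref_phase + alpha * dphi))
        magnified.append(np.real(np.fft.ifft2(boosted)))
    return magnified
```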

Researchers have since found ways to combine all of these tricks to access even more of this hidden visual information.

In October 2017, researchers published a report showing that the corners of buildings can act as cameras, creating rough images of whatever is around the corner.

Katie Bouman led the research.

Back then she was one of Freeman's graduate students; she now works at the Harvard-Smithsonian Center for Astrophysics.

Much like pinspecks and pinholes, corners and edges restrict the passage of light rays.

Bouman and company filmed the penumbra region of a building corner with conventional recording equipment which, as mentioned earlier, could be as simple as an iPhone.

A specific subset of the light rays coming from the hidden region around the corner illuminates that shadowy area.

If, for example, a person in a red shirt walks through the hidden region, the shirt projects a small amount of red light into the penumbra, and that light sweeps across the penumbra as the person walks.

Such a person is invisible to the unaided eye, yet becomes clear as day after a bit of processing.
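
A toy version of that readout, under heavy simplifying assumptions (a fixed view of the floor, a known corner position in pixel coordinates, and function names I invented for this sketch): bin the floor pixels by their angle around the corner, average each angular wedge, and keep only what changes over time.

```python
import numpy as np

def corner_camera_traces(frames, corner_xy, n_bins=64):
    """Toy corner-camera readout in the spirit of Bouman et al. (2017).

    Each angular wedge around the corner integrates light from a
    different slice of the hidden scene, so averaging the penumbra
    pixels per wedge and removing the temporal mean yields a 1-D
    "video" (frames x angle) in which a walker sweeps across bins.
    """
    h, w = frames[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.arctan2(ys - corner_xy[1], xs - corner_xy[0])
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1).ravel()
    counts = np.maximum(np.bincount(bins, minlength=n_bins), 1)
    traces = np.array([
        np.bincount(bins, weights=f.astype(np.float64).ravel(),
                    minlength=n_bins) / counts
        for f in frames
    ])
    return traces - traces.mean(axis=0)  # keep only what moves
```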

Conclusion

There is more to this story than fits in a single post, so we will discuss how algorithms can help see around corners in more depth in an upcoming post.

So stay tuned to Security Gladiators for all the latest happenings in the world of security, privacy and anonymity, and for how machine learning and artificial intelligence will change them.

 
