Online Filter Bubbles Are Bad. But What Do They Look Like?


Are filter bubbles polarizing societies?

A series of Twitter maps now shows why online divides are so hard to bridge and how problems such as political polarization manifest themselves.

Public life in America has become ideologically segregated.

And that segregation has deepened year by year as screens have taken the place of newspapers.

However, it is worth keeping in mind that societies managed to experience fragmentation and extremism for centuries without any assistance from Silicon Valley.

Even in the case of the United States, political polarization isn't new.

Many agree that it began long before Facebook and Twitter.

More precisely, it took off almost exactly with the rise of 24/7 cable news.

Keeping that history in mind helps clarify what the internet has, and has not, caused among the divisions we see today.

Is the internet responsible for all the bad that is happening in society?

Perhaps more importantly, are these fissures really as ugly and harmful as they outwardly seem?

A recent Twitter map showed the entire political landscape of the United States through the lens of the social media site.

To build the map, researchers clustered together accounts that followed each other.

Then, the researchers color-coded the accounts according to the types of content they regularly shared on Twitter.
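The clustering step can be sketched in a few lines of code. The following is a minimal illustration, not the researchers' actual method (which would use proper community-detection algorithms): it simply groups accounts into connected components of the mutual-follow graph, and the account names are hypothetical.

```python
from collections import defaultdict

def cluster_accounts(follow_edges):
    """Group accounts into clusters via connected components of the
    mutual-follow graph -- a simplified stand-in for the clustering
    step described above."""
    # Keep only mutual follows: a follows b AND b follows a.
    edges = set(follow_edges)
    mutual = {(a, b) for (a, b) in edges if (b, a) in edges and a < b}

    # Union-find to merge accounts connected by mutual follows.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in mutual:
        union(a, b)

    clusters = defaultdict(set)
    for account in parent:
        clusters[find(account)].add(account)
    return list(clusters.values())
```

With follow edges linking alice, bob and carol mutually, and dave and erin mutually, the sketch returns two clusters; a one-way follow (alice to dave, say) does not merge them.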

At first sight, no prominent echo chambers appeared.

Some even found that to be reassuring.

The map also showed an intertwined network of elected officials, policy professionals, political operatives and journalists.

Apart from that, researchers saw extremes as well as a robust middle that tried to mediate between the two ends.

However, upon closer inspection, researchers found that the map did not convey the actual strength of the middle.

The large middle was, in fact, weaker than it looked.

That, according to researchers, made sensible public discourse much more vulnerable to manipulation, both by foreign state actors such as Russia and by extremists within the United States itself.

Researchers also discovered what they described as partisan and noisy Twitter bots.

As mentioned just now, the center of the Twitter-based political universe did not make nearly as much noise as the polarized wings.

The map researchers generated plotted the average number of daily tweets per account on the social media network.

It clearly showed that the extreme positions on both sides of the debate screamed while the big center only whispered.

The map also made the point that Twitter bots amplified the divisions on both political sides.

According to researchers, it had become clear that a lot of the activity on Twitter was actually automated.

Researchers even found accounts that churned out more than a hundred tweets every day, and some posted considerably more than that.

All of these automated bots tweeted on a common daily schedule.

On the left side of the US political debate, hundreds of Twitter accounts posted identical tweet counts each day.

This gave researchers further evidence that the activity indeed came from Twitter bots.
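The tell described above (many accounts posting an identical, high number of tweets per day) is straightforward to check for. Here is a hedged sketch, assuming tweet data is available as simple (account, date) pairs; the thresholds are illustrative choices, not values the researchers reported.

```python
from collections import Counter, defaultdict

def flag_suspected_bots(tweets, min_cluster=3, min_daily=100):
    """Flag accounts that share an identical high daily tweet count
    with several other accounts -- the bot fingerprint described
    above, in simplified form. `tweets` is an iterable of
    (account, date) pairs, one per tweet."""
    per_account_day = Counter(tweets)  # (account, date) -> tweet count

    # Group accounts by (date, count) so identical daily totals cluster.
    by_signature = defaultdict(set)
    for (account, date), count in per_account_day.items():
        if count >= min_daily:
            by_signature[(date, count)].add(account)

    # Any signature shared by several accounts looks automated.
    suspected = set()
    for accounts in by_signature.values():
        if len(accounts) >= min_cluster:
            suspected |= accounts
    return suspected
```

A human posting five tweets a day never trips the filter; three accounts that each post exactly 120 tweets on the same day all do.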

A divided press


The problems with social media platforms are many.

But blaming them for all the ills in the country's political discourse isn't wise.

The reality is that social media would have had a very limited impact on the country's democracy if it had merely served as a distraction from a normal professional news diet.

However, the recent map researchers have generated shows otherwise.

Scholars from the MIT Media Lab and Harvard generated a map based on actual co-citations (in other words, who links to whom), and it showed a prominent bifurcation of the media world itself.

Many Twitter accounts quote news sources solely in order to influence the United States' political landscape.

The new map showed that the left primarily cited the traditional and mainstream journalistic sources.

Other sources, such as True Pundit, Breitbart and Fox, served the right.

More interestingly, the map also showed that when it came to individual articles (rather than news sources), the pieces that got the most tweets represented the most biased and partisan views on both the right and the left.
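The co-citation idea behind the MIT and Harvard map can be illustrated with a short sketch. This is a deliberate simplification of their actual method: it merely counts how often two sources are cited by the same account, which is the raw signal such maps are built from. The source names are hypothetical placeholders.

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(citations):
    """Count how often each pair of news sources is cited by the
    same account. `citations` maps an account to the set of sources
    it linked to. Heavily co-cited pairs would end up close together
    on a media map like the one described above."""
    pair_counts = Counter()
    for sources in citations.values():
        # Each unordered pair of sources cited by one account counts once.
        for a, b in combinations(sorted(sources), 2):
            pair_counts[(a, b)] += 1
    return pair_counts
```

If two accounts each cite "nyt" and "wapo" while a third cites "breitbart" and "fox", the nyt-wapo pair scores 2 and the breitbart-fox pair scores 1, and a clustering of those scores would split the media map into two camps.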

The division which Russian trolls exploited

As mentioned before, the new maps showed a great amount of political polarization within the United States.

Such extreme polarization conveniently acts as a fertile ground for any and all misinformation and manipulation operations.

According to some, this is exactly what Russia did in order to influence the 2016 United States presidential election.

Forces such as Russian trolls did not try to force their ill-intended messages directly into the mainstream section.

Instead, these foreign state-funded adversaries targeted extremely polarized communities.

Not only that, they also embedded fake social media accounts within those extreme communities.

In simpler terms, Russian hackers created false personas.

They then used these personas to engage with real people living in those extreme online communities in order to build a certain amount of trust and credibility.

Once these hackers managed to gain some influence and had established themselves as trusted voices within the community, they introduced new and more controversial viewpoints.

They also amplified inflammatory and divisive narratives that were already circulating in the national debate.

In terms of impact, what these Russian trolls successfully managed to do was turn extreme sections of the digital arena into the online equivalent of a tight-knit, isolated community.

These communities used their own preferred language quirks.

Apart from that, these communities also catered to their members' obsessions.

Continuing the same example, the hackers not only managed to form these isolated, extreme communities but also managed to run for mayor.

As a final blow, hackers used their position of influence and power within the community to influence politics on the national level.

After generating the map, researchers found that one of the accounts it highlighted belonged to Jenna Abrams, ostensibly an American woman in her thirties.

Apparently, Jenna Abrams managed to gain a large number of followers with her viral tweets on subjects such as,

  • Slavery
  • Kim Kardashian
  • Donald Trump
  • Segregation

Jenna had extreme far-right political views.

These views endeared Jenna to people who identified themselves as conservatives.

Jenna’s shocking, but entertaining, tactics managed to win her a huge amount of attention from a good number of outlets in the mainstream media.

Moreover, her reputation got her to the point of having public spats with some of the most prominent people on social media sites like Twitter.

One of those people included the former United States ambassador to Russia.

Jenna and her followers in the Twittersphere (all right-wing) managed to increase her influence to even higher levels.

She became part of the broader United States political conversation.

After the elections, research revealed that, in reality, Jenna did not exist.

She was just one of many hundreds, even thousands, of fake Twitter personas.

According to media reports, the fake persona was created by the Internet Research Agency, a St. Petersburg online troll farm that had become infamous for meddling in foreign elections.

The effects of the blogosphere in Iran

There is nothing new about the internet’s echo-chamber effect.

An older map from 2008 depicted the condition of the blogosphere in Iran.

The map clustered together all the blogs which linked to other blogs on the same subject.

Moreover, the map also separated blogs by their actual content.

Eventually, though, the government launched a huge crackdown on inflammatory online speech.

Before the crackdown, supporters of the clerical regime had managed to enjoy pretty big followings.

The Twitter Effect in Turkey

Researchers also mapped Twitter usage on Turkey’s political landscape.

The map, which researchers found strikingly similar to the US Twitter map mentioned at the beginning of this piece, showed polarization along multiple dimensions.

Not only that, the Turkish Twitter map also showed a rather dense sphere of influencers surrounding Erdogan's Twitter supporters.

The far right corner of the Twitter map held one group of supporters.

The opposite end of the Twitterverse had two different poles.

Researchers have now come up with the term amplification cores.

What is an amplification core?

According to researchers, these are highly connected Twitter accounts that wield significant and disproportionate influence over a given political conversation.

That gives such accounts a tremendous opportunity to rapidly boost and spread polarizing messages.
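One crude proxy for an amplification core is degree centrality: simply count how many connections each account has in the follow or retweet graph. The sketch below is an illustration under that assumption, not the researchers' actual measure, and the account names are made up.

```python
from collections import Counter

def amplification_core(edges, top_n=3):
    """Return the `top_n` most-connected accounts in a follow/retweet
    graph -- a rough degree-centrality proxy for the 'amplification
    cores' described above. `edges` is an iterable of (a, b) pairs."""
    degree = Counter()
    for a, b in edges:
        # Count both endpoints: either side of a link adds connectivity.
        degree[a] += 1
        degree[b] += 1
    return [account for account, _ in degree.most_common(top_n)]
```

Real analyses would use richer centrality measures (retweet reach, betweenness and so on), but even this toy version surfaces the hub account in a star-shaped graph.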

Is Russia different or the same?

Who knows.

When researchers generated a Twitter map of Russia's current political landscape, it showed political polarization in a context the researchers had not seen before.

As far as the Russian Twitter map goes, the country does NOT seem to have clear anti- or pro-Putin clusters.

However, the clusters that do exist knit together around a wide set of discussion-oriented and news accounts which are, for the most part, pro-government.


Some technologists have busied themselves with efforts to fix the filter bubble problem.

And it is a big problem.

After all, some believe that big technology companies helped society create these huge filter bubbles.

Of course, some research suggests that online polarization and related issues are not as clear-cut as people seem to think.

Deb Roy is one of the United States' foremost authorities on issues related to social media.

Last fall, he attended a number of roundtables in small towns across middle America.

These places included the likes of,

  • Anamosa, Iowa
  • Platteville, Wisconsin

Roy came across things he wasn’t so used to.

After all, he spends his days at the Laboratory for Social Machines at the Massachusetts Institute of Technology Media Lab.

He saw no computer screens in the rooms he visited.

Roy didn’t see any posts or tweets that he could examine.

Instead, Roy simply listened to the local residents and community leaders talking to him.

He also held face-to-face discussions with the neighbors involved.

After listening to a lot of folks, Roy came to a conclusion that really alarmed him.

Roy recalled in a recent interview that an elderly woman came up to him and said she had found out what a lot of people posted on Facebook.

According to the woman, the people on Facebook held views so extreme that she found them unacceptable.

She also said that she no longer saw the point of engaging with such people.

Of course, the elderly woman was not alone.

Roy came across such sentiments multiple times during his trip.

He also mentioned that some of the people these residents complained about were people they saw on a fairly regular basis within their small middle-America towns.

Furthermore, according to Roy, these people seemed to have no problems in agreeing to disagree in the past.

Roy also added that when balkanization and divisiveness showed up prominently even at the hyperlocal level, something was profoundly wrong with the whole situation.

American society had reached a point where, even when people had access to others living close by, the digital realm silenced their speech and cut them off from one another in the physical realm.

Back in 2014, Deb Roy set up his MIT lab to study exactly how social media, among other things, could help society break through the partisan, biased arguing that typically divides people.

Some might call Deb Roy a bit too ambitious.

But Roy is probably uniquely positioned to attempt anything close to it.

Born in Canada, Deb Roy served as Twitter’s chief media scientist from 2013 to 2017.

His team collected and analyzed vast amounts of social media chatter.

As mentioned, Deb Roy opened his MIT Media Lab group back in 2014.

Around the same time, Twitter granted Roy exclusive and full access to the company’s firehose.

What exactly does that mean?

It means that Twitter allowed Roy real-time access to each and every tweet that each and every Twitter user produced.

Moreover, Twitter also granted Roy a total of $10 million to help him make sense of all the Twitter data the company provided about people's,

  • Activities
  • Preferences
  • Interests

The company also wanted Roy to come up with new ways of using all that data for the public good.

However, Roy and several other researchers like him (that is, people who have studied the impact of the internet on society) did not have "public benefits" at the front of their minds.

The most important and concerning problem for them (also highlighted by the 2016 United States presidential election) wasn’t that hackers from Russia made use of Facebook and Twitter in order to spread harmful propaganda.

It wasn't even the fact that Cambridge Analytica (the political consulting firm) gained access to the sensitive, private data of over 50 million Facebook users without permission.

The main problem that Roy and others had in mind had to do with the fact that people had quite voluntarily and effectively retreated into virtual hyperpartisan corners.

Of course, one can thank internet and social media companies for lending a hand in creating such a society.

These entities harmed society by using mass monitoring to determine what online consumers saw.

Internet companies collected a tremendous amount of information about what online consumers had clicked on recently.

They used this information to give users more of what they wanted to see.

However, in the process of doing so, these internet companies effectively sifted out opposing views.

As a result, online consumers had nothing to consume but content which reinforced what they already knew and believed.

This is what researchers now call a filter bubble.
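The feedback loop described above can be caricatured in a few lines. The sketch below is a deliberate oversimplification of how any real recommender works, with made-up item names and topic tags: it ranks feed items purely by overlap with topics the user already clicked on, so opposing views sink to the bottom.

```python
def rank_feed(items, click_history):
    """Rank feed items by overlap with topics the user already
    clicked on -- a crude caricature of the click-driven feedback
    loop behind the filter bubble. `items` maps an item id to its
    set of topic tags; `click_history` is the set of tags the user
    engaged with before."""
    def score(item_id):
        return len(items[item_id] & click_history)

    # Items matching past clicks float up; everything else sinks.
    return sorted(items, key=score, reverse=True)
```

For a user whose click history is all "left" and "sports" tags, an item tagged with both ranks first, a purely "left" item second, and a "right" item last. Run repeatedly, a loop like this would show the user less and less of the other side, which is the bubble Pariser warned about.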

Back in 2011, Eli Pariser popularized the concept with a book of the same name.

Eli Pariser, an internet activist and the founder of the viral-video site Upworthy, recently wrote that man-made institutions such as democracy only work if the citizens of a given country are capable of thinking beyond their own narrow self-interest.

He also wrote that, in order to do so, people need a clear, shared view of the physical world they co-inhabit.

Therefore, the problem with the filter bubble was that it actually pushed against such a worldview.

In other words, it created the impression that a given person’s narrow self-interest was all that existed.

But is it really true?

Recent research suggests that may not be the case.

To put it more simply, research suggests that, in the real world, things are not quite so clear-cut.

Is there a kind of war?

Cass Sunstein, a legal scholar, wrote about this phenomenon as early as 2007.

He wrote that the age of the internet had actually given rise to a new era of niches and enclaves.

Afterward, he also cited an experiment which took place in Colorado in the year 2005.

In that experiment, about 60 Americans from liberal Boulder and conservative Colorado Springs, two cities about 160 kilometers apart, assembled in small groups.

Researchers then asked these people to deliberate on a total of three controversial issues.

These three issues came in the form of,

  • An international treaty on global warming
  • Same-gender marriage
  • Affirmative action

Researchers found that in each and every case, people held more extreme positions on the issues after speaking with people who shared their views.


In The Chronicle of Higher Education, Sunstein wrote that the internet made it exceedingly simple for people to replicate that same Colorado experiment in the digital world.

It doesn’t really matter if a given person is actively and intentionally trying to do exactly that.

Sunstein said there was a general risk that people inclined to flock together, on the internet especially, would end up both wrong and confident.

The reason for that was also simple.

They simply did not have sufficient exposure to counterarguments.

These people might even come to think of their fellow citizens as adversaries or opponents in a kind of war.

However, can society really fault social media here?

Stanford University researchers published a study in early 2018 in the Proceedings of the National Academy of Sciences.

In the study, these researchers examined the phenomenon of political polarization in the United States of America.

These Stanford researchers found that political polarization was actually increasing fastest among the demographic groups with the least exposure to the internet and social media.

The study's lead author, Levi Boxell, recently said that Americans aged 65 and older were polarizing more quickly than younger age groups.

That, according to the study, was the opposite of what one would expect if the internet and social media were the main drivers.

But there is more.

According to Grant Blank, a research fellow at the Oxford Internet Institute who, with collaborators, surveyed adults in Canada and the United States, most people are not nearly as stuck in online echo chambers as some researchers would have people think.

Blank said that they had a total of five different ways of defining an echo chamber.

Moreover, it really did not matter which definition one used.

Why?

Because the results were remarkably consistent across all five.

According to Blank, there was no echo chamber.

Interesting?

Great.

Stay tuned for tomorrow’s post.

 


Zohair

Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.