Bad actors don’t need to do anything special to harm individuals or society at large on social media. The platforms, by design, help them do their dirty work.
ISIS used Twitter to recruit fighters for its evil purposes. Media reports have likewise revealed that landlords use services such as Airbnb to discriminate against prospective renters. But what most people overlook is that none of these bad actors actually exploited the platforms. They didn’t exploit any glitches in these services; they simply used their official features.
A couple of months ago, Robert Mueller, the Justice Department’s special counsel, announced serious criminal charges against Russian operatives, saying he had enough evidence to prove that they had indeed interfered in the 2016 US presidential election.
Right after the announcement, media outlets began describing, in an all-too-familiar manner, how Russian operatives had used modern communication technologies for their own purposes. Journalists at major publications catalogued the ways Russia had managed to manipulate online social-media platforms, and technology executives such as Facebook’s Rob Goldman decried how Russian operatives had abused the company’s systems.
There is nothing surprising about that; it has become standard fare. Whenever bad things like these happen:

- Russian operatives manipulating elections via social media platforms such as Facebook and Twitter
- ISIS, a terrorist organization, recruiting followers via platforms such as Twitter
- A racist landlord denying rentals to people with darker skin, then offering the same space to white people, via platforms such as Airbnb

companies and commentators are quick to describe the activity as abuse or manipulation of the modern world’s nearly ubiquitous apps and websites.
It is hard to resist the impulse to portray these odious behaviors as some unpredictable, peripheral, or strange subversion of these online platforms. But rest assured: they are none of those things. Bad actors are simply using the platforms exactly as their developers designed them.
One need only look at Twitter’s official mission statement to see how these platforms offer their services to any and all kinds of users. It speaks of the company’s vision to demolish barriers and to give everyone the power to create and share ideas and information instantly.
Hence, it should come as no surprise that a terrorist organization such as ISIS drew on Twitter’s ability to share information instantly when it broadcast news about demolishing another kind of barrier.
ISIS startled the world back in 2014 by sweeping very quickly through large swathes of Syria and then pushing into Iraq. A key moment for the organization occurred on none other than Twitter: ISIS tweeted pictures showing a bulldozer destroying the earthen barrier that had long marked the official border between Iraq and Syria.
Twitter later had to announce that it could not permit organizations such as ISIS to use its service, and there is little doubt that Twitter has truly adopted that statement as official policy. But the company has done little about the underlying problem as a matter of the service’s functionality. In other words, ISIS did nothing that went against Twitter’s mission statement.
ISIS used Twitter to break down barriers and to share its inhumane, horrific ideas anonymously and instantly. To put it simply, ISIS did not manipulate how Twitter worked. It used the platform precisely as the people behind Twitter designed it: to share and spread ideas rapidly and on a global scale.
Now consider Airbnb’s official motto: “Belong anywhere.” As it turns out, some Airbnb landlords don’t think that just anyone deserves the chance to belong anywhere. A 2016 study revealed that prospective renters with white-sounding names successfully booked a place through Airbnb 50 percent of the time, while those with black-sounding names succeeded only 42 percent of the time.
What did Airbnb do in response? It commissioned a report, which concluded that if the company wanted to stay true to its mission, it had to fight discrimination at the most fundamental level. But is fighting discrimination really fundamental to the company’s mission?
Actually, what is fundamental to Airbnb’s mission is fighting virtually all forms of regulation; that is what the company banks on to maximize its profits. And that same principle has given Airbnb’s platform an essentially free pass from decades of regulatory and legal infrastructure crafted to fight problems such as housing discrimination.
Racist landlords who take advantage of their unfettered discretion to pick and choose among renters on any criteria whatsoever, even something as superficial as the skin color visible in a would-be tenant’s profile photo, aren’t exploiting any of Airbnb’s features. They are doing exactly what Airbnb was built for, using only the platform’s available features.
For a fair analysis, one should note that Airbnb has since modified some of those features; but generally speaking, the company has chosen to keep them.
Which brings us back to Mueller’s charges against the Russian operatives and the revelations about how Russia used platforms such as Facebook to its own advantage. According to Mueller, Russia tried to interfere with the 2016 US presidential election and to sow a significant amount of discord among Americans.
Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, recently said something very interesting. Speaking to a New York Times reporter, he said that Facebook had built some incredibly effective tools, that Russia used those tools to profile citizens of the United States, and that those tools helped Russia figure out how to manipulate the American people. In other words, Facebook effectively gave Russia everything it needed to sabotage American society.
To take one example, Facebook has admitted that the Russian Internet Research Agency purchased polarizing ads, and that its (still undisclosed) algorithms actually rewarded the agency for provoking so much user engagement. What most people don’t consider is that Facebook aggressively marketed the very micro-targeting that Russian operatives used to pit American citizens against one another, to especially strong effect on divisive political and social issues.
For clarity’s sake: Russia in no way abused Facebook. Russia simply used Facebook. The sooner the people who build Facebook, and the people who use it, recognize these challenges, the better.
There are other challenges as well. Modern communications platforms allow many other problems to emerge and spread at a rapid pace, and all of them stem from the platforms’ inherent features. But we can’t simply indict technology companies for these problems. No one can question the convenience these platforms have brought to millions of lives, and people now rely on them for far more than connecting with friends and family. Rather, these recent episodes show that social media platforms are facing genuinely hard problems.
So what do these problems call for? A reorientation of sorts: technology companies and the people who use their platforms have to think about these problems in a different way. The one thing we, as a society, cannot do is run away from them. We have to face and address these new challenges.
Perhaps the first step toward a solution is for technology companies to finally share how their algorithms work and how they operate them. But that is not all: companies should also help outside parties understand how bad and malicious actors use their services and platforms. They don’t have to reveal everything, but they must share enough information to enhance transparency, which could help yield more crowd-sourced solutions.
As important as social media platforms have become, one can’t leave the remedies to their problems to a small set of engineers, policymakers, and lawyers, especially since the technology companies themselves employ those engineers, policymakers, and lawyers. Help from outside sources may open up rare solutions to general problems.
There is a second step as well, and it will take some doing; not because it is especially hard, but because it requires courage. Technology companies could, and arguably should, experiment with newer and bolder approaches to preventing or restricting bad and malicious actors from accessing their services.
At this point, technology companies’ official policies proscribe only terrorist organizations such as ISIS and a few other specific malicious actors. That is great as far as PR is concerned, but the reality on the ground is different: practically speaking, anyone can still use these platforms. Companies usually sit and wait until one of their users files a complaint, then investigate the reported behavior, and only after validating the complaint do they consider a solution.
Technology companies could flip that default. They could start by changing the rules for a narrow category of genuinely bad actors. In the era of machine learning, companies can readily identify activity that looks malicious, or that mimics malicious activity closely enough: the recent behavior of Russian trolls, for example, or of terrorist organizations such as ISIS. Using machine-learning algorithms, companies could block such activity automatically, which is likely to have an effect, at least at first. After a block, human reviewers could step in expeditiously, determine whether the algorithm placed a “hold” on any account improperly, and promptly reverse any improper suspension.
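As a rough illustration of that flipped default, here is a minimal sketch of an automatic-hold-then-human-review loop. Every name, threshold, and the risk score itself is hypothetical; nothing here reflects any real platform’s systems or APIs.

```python
from dataclasses import dataclass, field

# Hypothetical cutoff: only the clearly-bad tail of the score
# distribution is held automatically.
HOLD_THRESHOLD = 0.9

@dataclass
class Account:
    name: str
    risk_score: float   # assumed output of some ML classifier, 0.0 to 1.0
    on_hold: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

def auto_hold(account: Account, queue: ReviewQueue) -> bool:
    """Flip the default: suspend high-risk accounts first, then queue them."""
    if account.risk_score >= HOLD_THRESHOLD:
        account.on_hold = True
        queue.pending.append(account)
        return True
    return False

def human_review(queue: ReviewQueue, is_actually_malicious) -> None:
    """Human reviewers promptly lift any hold the classifier placed improperly."""
    while queue.pending:
        account = queue.pending.pop()
        if not is_actually_malicious(account):
            account.on_hold = False  # improper hold: reinstate the account
```

The point of the sketch is the ordering: the hold happens before human judgment, and review exists to correct the classifier’s mistakes quickly rather than to authorize action in the first place.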
All of that sounds great, but it would represent a massive shift from how technology companies currently approach the use of their services. That is why companies should deploy such methods only as experiments at first. Even as experiments, though, they would mark the beginning of companies taking seriously the growing demand to spot and halt malicious actors.
Eventually, of course, companies must stop bad actors from posting malicious or radicalizing content altogether, and the same approach should apply to socially divisive messages, which companies must also stop before they go viral.
The standard line these companies take on such issues will have to change as well. Today that line frames these challenges as peripheral exploitation of their platforms, a framing that does little more than raise hopes that the challenges will vanish with a strictly technical or engineering solution. Facebook recently offered exactly such a solution when it announced that it would recalibrate the algorithm that drives its News Feed.
This is also a good moment to recall the author of the report Airbnb commissioned. Writing about discrimination on the platform, he phrased the solution in an interesting way: just as organizations assembled teams of capable lawyers to fight discrimination in the mid-20th century, he hoped that 21st-century engineers would do their part to help technology companies eliminate bias.
Such solutions might suffice if these challenges truly were marginal manipulations of modern technology. But, as we have seen, they are not: the problems discussed here stem from core features of these platforms, features a few bad actors turn to bad ends. Ultimately, these challenges will not yield to technical fixes alone. To address them, technology companies will have to turn off, or make unavailable, core features of their products for users whose activities are abhorrent enough to forfeit access.
And to decide which users fall into that unacceptable category, technology companies will have to make value judgments, precisely the kind of judgment that their libertarian ethos has long left them reluctant to make.