Facebook has not been able to catch a break this year. 2018 has brought allegations of data mismanagement, security breaches and complicity in hate speech, all aimed at the social media giant. As Facebook gains power and influence, its response to this storm of serious allegations has been both lackluster and alarming. Rescuing its image and trustworthiness in the public eye requires an honest reckoning with the company’s shortcomings and responsibilities.
In February, it became public knowledge that thirteen Russians and three companies had been indicted for using Facebook to steal the identities of American citizens and pose as political activists in order to spread pro-Trump advertisements across the site. Their messages zeroed in on the deepest cleavages in American politics today, including religion, immigration and race, hoping to heighten the tension of the current political climate and deepen the division across ideologies.
In addition to posting these ads, the Russians created Facebook groups to ensure that their messages would be seen and perhaps mobilize people who held similar beliefs. Facebook sat at the center of all this because Facebook and Instagram (which Facebook owns) were the primary means of spreading these messages: the two platforms were mentioned a total of 41 times in the 37-page indictment detailing how Russians attempted to manipulate the outcome of the 2016 election. After the story of the indictment broke, speculation began as to why the world’s biggest social media company didn’t catch the Russian activity, or, if it did, why it didn’t attempt to stop it, along with questions about Facebook’s ability to effectively reduce this kind of interference. Rob Goldman, Facebook’s vice president of advertising, issued a series of tweets conceding that Russia did try to incite division in America by exploiting its institutions of free speech, but asserting that Russia’s goal was not to sway the election in any direction.
Just a month after Special Counsel Robert Mueller charged the Russians and firms with election meddling, the infamous Cambridge Analytica story broke. Cambridge Analytica, a political data firm hired by Trump’s 2016 election campaign, gained access to the personal information of more than 50 million Facebook users. Having acquired information including people’s friend networks, their “likes” and their identities, Cambridge Analytica was tasked with analyzing the behavior of the accounts to inform targeted messaging and advertising for both the Trump campaign and external actors. Dr. Aleksandr Kogan, a Russian-American psychology professor at Cambridge University, developed the data-harvesting technology through an app that surveyed users. Through this app, he supplied the information for the 50 million profiles, yet only about 270,000 of those users consented to having their data used, and even they were told the information was being used for academic purposes.
Beyond Cambridge Analytica’s misinforming of users, the firm has also been largely funded by Robert Mercer, a wealthy right-wing donor, and Stephen K. Bannon, the former advisor to President Donald Trump. Amid all this, Facebook insisted that this was not a data breach. According to a statement by Paul Grewal, vice president and deputy general counsel at Facebook, researchers are commonly granted access to user data for academic purposes, and users consent to this when they create an account. However, Facebook prohibits such data from being sold or transferred “to any ad network, data broker, or other advertising or monetization-related service,” and that is exactly what Dr. Kogan did when he relayed the information he collected to Cambridge Analytica.
The Cambridge Analytica scandal ultimately resulted in the firm shutting down completely, and the data it collected has supposedly been destroyed. Facebook responded to the scandal by saying it hired a digital forensics firm “to determine the accuracy of the claims that the Facebook data in question still exists.” However, it has been over half a year since the story broke, and though Facebook has banned the personality quiz that gathered the data without proper consent, it has not provided a clear update on whether the harvested information has truly been disposed of. It may also already be too late for damage control: millions of Facebook users have already had their identities shared without their knowledge.
Having barely recovered from this fallout, Facebook reentered the news cycle just three months ago, announcing that a cyber attack on its site had compromised the personal information of nearly 50 million users, the largest security breach in the company’s 14-year history. Having previously denied that Cambridge Analytica’s information harvesting qualified as a breach, Facebook now faces an unidentified threat to users’ account information, such as their names, sex and hometowns. As of today, company officials still don’t know who the attackers were, nor whether the breach had a specific target.
Facebook is not only facing staggering criticism for its inability to detect and prevent security breaches; it also faces difficulties with how its platform is used. Just last year, Facebook’s live video feature broadcast a man in Thailand murdering his daughter, and the website also served as a marketplace for guns until 2016, when Facebook banned sales after realizing that guns could be bought without background checks or registration. In a recent statement, a spokesman acknowledged Facebook’s delayed efforts to combat “bad actors” within its online community. Whether Facebook will follow through on its stated wish to create a better social environment, however, is doubtful given a continuing trend: the company does not address its own faults until they come to public attention.
With a social media platform as big as Facebook (2.2 billion active users and counting), moderating the content that gets posted will not be a quick or easy effort. However, given the distrust that has grown over the past year since users learned their personal information may have been sold without their consent, reforming the site to block cyber attacks more effectively is the first step Facebook should take in cleaning up the mess.
First, it’s important to note that many places in the world are more susceptible to the propaganda that ends up on Facebook. In places with little to no free press, people rely on whatever network they have available to access their news. This is playing out right now in Myanmar: an ethnic cleansing is being carried out against Rohingya Muslims, and members of the dominant Burmese military are using Facebook as a tool of mobilization against the religious minority group.
After long delays, Facebook has succeeded in taking down the official accounts of senior Burmese military leaders, but a large chunk of the propaganda campaign against Muslims remains untouched.
We’ve witnessed something similar before: an area with limited proper press coverage leaves people more vulnerable to believing whatever information they can get their hands on. Radio was an essential tool for mobilizing support during the Rwandan genocide of the Tutsi population, perpetrated by Hutus out of anxiety that Tutsis would gain governmental power and discriminate against them. Given Facebook’s ubiquity, the platform has shown a troubling capacity to amplify these threats when it is insufficiently moderated.
Facebook is most definitely not the sole cause of this violence. Ethnic conflicts largely defined the country long before Facebook came around, and people have carried out mass killings of targeted groups without the website’s help. What is undeniable is that the platform has served as powerful fuel to a long-burning flame. Moreover, because Facebook generates most of its revenue from advertising, it is essentially profiting off of hate speech and propaganda.
Contrary to the company’s own narrative of providing a platform for free speech, Facebook often exercises its ability to censor content when it deems it necessary. Such efforts to censor hate speech, however, have been met with increasing controversy. Civil rights groups have expressed anger at Facebook for taking down posts detailing the stories of minorities being called racial slurs. At the same time, this attempt at a more moderated public forum has seemed lackluster, as moderators have reportedly struggled to decipher when a slur was being used as a derogatory attack and when it appeared in an anecdote about someone’s experience with racism.
Facebook’s slow reactions in catching and removing hate speech and propaganda don’t pose as grave a threat in countries such as the U.S. and those in Western Europe, because most people there won’t easily be manipulated by an ad on Facebook. That resistance stems from access to a free press and other resources.
Facebook should also become more aware of the influence it holds in countries where people are more prone to falling for the propaganda ads on its site. This raises the question of whether Facebook even checks ads before publishing them.
Facebook does formally review ads to make sure paid content lines up with its guidelines, allotting roughly a 24-hour waiting period while an ad is reviewed. However, individual posts and status updates aren’t as easily regulated unless they are reported or violate primary community guidelines. This insufficient effort shows a definite disconnect between the regulations Facebook supposedly has and the mounting indifference it shows toward the increasing hate speech posted on its platform.
With all the bad publicity it is catching, Facebook has gone on the offensive, deflecting accountability and lobbying to limit the kinds of questions Sheryl Sandberg, its chief operating officer, would have to answer in hearings. Facebook successfully ensured that Sandberg would not have to address pressing topics such as Cambridge Analytica and censorship. This dodge-and-weave approach likely stems from the company’s knowledge that installing stricter regulation would be not only extremely complicated but also expensive, with the risk of still failing to achieve a more moderated public forum.
As for Facebook itself becoming a threat due to the magnitude of personal information it holds, we’ve witnessed firsthand how that information has been abused and turned against us. In light of all these scandals, calling on Facebook to be more transparent with its users is a must. Facebook’s attempt to redirect anger toward rival companies was an unwise move, seeing as the company displays clear monopolistic tendencies and has few real competitors, having bought companies with the potential to rival it, such as WhatsApp and Instagram.
Microsoft faced an antitrust case in 2001 centered on the same kind of monopolistic behavior Facebook has displayed, though that is only a portion of the issues Facebook is currently dealing with. United States v. Microsoft Corporation is worth noting for its settlement, which suggests a possible resolution for Facebook in no longer keeping its users in the dark about how their information is handled: Microsoft was required to “share its application programming services with third-party companies and appoint a panel of three people who would have full access to Microsoft’s systems, records, and source code…” Given the distrust between Facebook and its users, publishing its ad revenue data and similar records isn’t too far-fetched a first step.
Facebook’s dominance in the public domain, particularly in the data it holds and its accessibility to billions, has made it a threat that is “too big to fail.” As time passes and technology continues to evolve, questions about the security of our data and networks risk becoming even more serious. We shouldn’t have to wait for the government to file antitrust lawsuits to realize that Facebook must be held accountable for its string of slip-ups, from information leaks to attempts to downplay the seriousness of several breaches. Facebook finds itself at a crossroads, with the choice between transparency, accountability and security on the one hand, and business as usual on the other. The company needs to reform if it is to regain the trust and respect of its billions of users.