Amazon may have been expecting lots of public attention when it announced where it would establish its new headquarters – but like many technology companies recently, it probably didn’t anticipate how negative the response would be. In Amazon’s chosen territories of New York and Virginia, local politicians balked at taxpayer-funded enticements promised to the company. Journalists across the political spectrum panned the deals – and social media filled up with the voices of New Yorkers and Virginians pledging resistance.
Similarly, revelations that Facebook exploited anti-Semitic conspiracy theories to undermine its critics’ legitimacy indicate that instead of changing, Facebook would rather go on the offensive. Even as Amazon and Apple saw their stock-market values briefly top US$1 trillion, technology executives were dragged before Congress, struggled to take a coherent stance on hate speech, got caught covering up sexual misconduct and saw their own employees protesting business deals.
In some circles this is being seen as a loss of public trust in the technology firms that promised to remake the world – socially, environmentally and politically – or at least as frustration with the way these companies have changed the world. But the technology companies need to do much more than regain the public’s trust; they need to prove that they deserved it in the first place – which, when placed in the context of the history of technology criticism and skepticism, they didn’t.
Looking away from the problems
Big technology companies used to frame their projects in vaguely utopian, positive-sounding lingo that obscured politics and public policy, transcended partisanship and, conveniently, avoided scrutiny. Google used to remind its workers “Don’t be evil.” Facebook worked to “make the world more open and connected.” Who could object to those ideals?
Scholars warned about the dangers of platforms like these long before many of their founders were even born. In 1970, social critic and historian of technology Lewis Mumford predicted that the goal of what he termed “computerdom” would be “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system.” That same year, a seminal essay by feminist thinker Jo Freeman warned about the inherent power imbalances that remained in systems that appeared to make everyone equal.
Similarly, in 1976, the computer scientist Joseph Weizenbaum predicted that in the decades ahead people would find themselves in a state of distress as they became increasingly reliant on opaque technical systems. Countless similar warnings have been issued ever since, including important recent scholarship such as information scholar Safiya Noble’s exploration of how Google searches replicate racial and gender biases and media scholar Siva Vaidhyanathan’s declaration that “the problem with Facebook is Facebook.”
The technology companies are powerful and wealthy, but their days of avoiding scrutiny may be ending. The American public seems to be starting to suspect that the technology giants were unprepared, and perhaps unwilling, to assume responsibility for the tools they unleashed upon the world.
In the aftermath of the 2016 U.S. presidential election, concern remains high that Russian and other foreign governments are using any available social media platform to sow discord and discontent in societies around the globe.
Facebook has still not solved the problems with data privacy and transparency that led to the Cambridge Analytica scandal. Twitter is the preferred megaphone for President Donald Trump and home to disturbing quantities of violent hate speech. The future of Amazon’s corporate offices is shaping up to be a multi-sided brawl among elected officials and the people they supposedly represent.
Is it ignorance or naivete?
Viewing the present situation with the history of critiques of technology in mind, it’s hard not to conclude that the technology companies deserve the crises they are facing. These companies ask people to entrust them with their emails, personal data, online search histories and financial information, to the point that many of these companies proudly tout that they know individuals better than they know themselves. They promote their latest systems, including “smart speakers” and “smart cameras,” seeking to ensure that users’ every waking moment – and sleeping moments too – can be monitored, feeding more data into their money-making algorithms.
Yet, seemingly inevitably, these companies go on to demonstrate how unworthy of trust they actually are, leaking data, sharing personal information and failing to prevent hacking, as they slowly fill the world with a disturbing techno-paranoia worthy of an episode of “Black Mirror.”
Technology firms’ responses to each new revelation fit a standard pattern: After a scandal emerges, the company involved expresses alarm that anything went wrong, promises to investigate, and pledges to do better in the future. Some time – days, weeks or even months – later, the company reveals that the scandal was a direct result of how the system was designed, and trots out a dismayed executive to express outrage at the destructive uses bad people found for their system, without admitting that the problem is the system itself.
Facebook CEO Mark Zuckerberg himself told the U.S. Senate in April 2018 that the Cambridge Analytica scandal had taught him “we have a responsibility to not just give people tools, but to make sure that those tools are used for good.” That’s a pretty fundamental lesson to have missed out on while creating a multi-billion-dollar company.
Rebuilding from what’s left
Using any technology – from a knife to a computer – carries risks, but as technological systems increase in size and complexity the scale of these risks tends to increase as well. A technology is only useful if people can use it safely, in ways where the benefits outweigh the dangers, and if they can feel confident that they understand, and accept, the potential risks. A couple of years ago, Facebook, Twitter and Google may have appeared to most people as benign communication methods that brought more to society than they took away. But with every new scandal, and bungled response, more and more people are seeing that these companies pose serious dangers to society.
As tempting as it may be to point to the “off” button, there’s not an easy solution. Technology giants have made themselves part of the fabric of daily life for hundreds of millions of people. Suggesting that people just quit is simple, but fails to recognize how reliant many people have become on these platforms – and how trapped they may feel in an increasingly intolerable situation.
As a result, people buy books about how bad Amazon is – by ordering them on Amazon. They conduct Google searches for articles about how much information Google knows about each individual user. They tweet about how much they hate Twitter and post on Facebook articles about Facebook’s latest scandal.
The technology companies may find themselves ruling over an increasingly aggravated user base, as their platforms spread the discontent farther and wider than was possible in the past. Or they might choose to change themselves dramatically, breaking themselves up, turning some controls over to the democratic decisions of their users and taking responsibility for the harm their platforms and products have done to the world. So far, though, it seems the industry hasn’t gone beyond offering half-baked apologies while continuing to go about business as usual. Hopefully that will change. But if the past is any guide, it probably won’t.
Zachary Loeb, Ph.D. Student in History and Sociology of Science, University of Pennsylvania
This article is republished from The Conversation under a Creative Commons license. Read the original article.