Andrew Marantz on the social internet & our informational crisis
Andrew Marantz leads a conversation at Company, where he discusses his new book exploring reckless ambition, our unchecked beliefs about the virtue of technological progress, the rabbit hole of online radicalization, and more.
David King · Dec 23, 2019
Andrew Marantz is on the stage of the Company Amphitheater with moderator Lindsay Seigel, Company’s Director of Impact, to discuss Antisocial: his new book about how the internet has changed the way that we communicate and consume information.
Andrew Marantz pinpoints the moment when the nature of the internet changed. “There was a tectonic shift from Web 1.0 to Web 2.0. Web 1.0 was a centralized, top-down institution, whereas Web 2.0 empowered anyone with an internet connection to participate in a flawed, unequal democracy.” Web 2.0, the social internet (which really took off in 2008), was “built with a utopian view that failed to acknowledge the trade-offs of technological progress,” instead espousing that more freedom, more connections, and more access are always better. One of the trade-offs of this freedom has been that the social internet reinforces something we know to be untrue: that quality and popularity are the same. “But, despite our knowing better, the social internet has reinforced a system where the more inflammatory, triggering things you say, the more you are rewarded.”
Andrew, a journalist for The New Yorker, has embedded himself with some of the most influential and inflammatory trolls in an effort to better understand the motivations and practices of the internet’s worst actors. “People were spreading fake news to make a buck.” Not just Russian bots and ideologues. “I’m talking about American citizens making their own propaganda and disseminating it. And here’s the thing: they were acting the way that they were supposed to be acting on Web 2.0. They were creating emotion to go viral. The business model of the internet is based on human psychology. To make something go viral, you tap into human emotions. This sounds obvious, but this was all being discovered less than a decade ago. People used to say, ‘the internet is too frivolous – too many cats, too much bacon.’ However, the frivolity of the internet was never the issue. The real issue is the system of algorithms that the internet was built on.”
Whether online content is effective is determined by how well it activates emotions. How effective is your content at “inspiring the sharpest and most immediate spike of emotion? It’s much easier to inspire negative emotions like fear and outrage than it is to inspire positive emotions like awe and inspiration.”
It’s not that the internet is inherently bad. Andrew does not make the internet a scapegoat. Instead, he reminds us of our social responsibility to challenge our unchecked assumptions about the nature of technological progress. He says that we have a tendency to subscribe to pithy maxims that aren’t really helpful. “Of course people have the potential for both good and bad, but to simply say, ‘I believe that people are good and therefore will use the internet solely for good’ is a cop-out. No. We have to build a better system.”
“Get clicks, track data: that’s the current business model. I don’t think that should be banned; I just think it’s a bad model for how we should organize our attention. But, because of techno-utopianism, there is a prevailing thought that if something is innovative and profitable, then it must be good for society. I think we should know better.” Andrew implores the next generation of technologists to be more mindful. “I think awareness is the first step. People should internalize the thought that whatever they are building may be technologically innovative, but that doesn’t mean that it’s necessarily good for the world. It is possible to build a good, bad company. Don’t just tell yourself a good story about what you are doing. Ask yourself questions like, ‘am I actually helping?’ and ‘how could my product turn into a vehicle for fascism?’” Andrew clarifies that his book does not say that tech is bad. The book is about “why the idea that tech can do no wrong is bad.”