
“Just an Ass-Backward Tech Company”: How Twitter Lost the Internet War

Twitter faces more challenges than most technology companies: ISIS terrorists, trolls, bots, and Donald Trump. But its last line of defense, the company’s head of trust and safety, Del Harvey, isn’t making things easier. “Del overcomplicates things . . . and you can see that in the way some of these things are handled publicly.”
Illustration by Lauren Margit Jones; from Alamy (twitter logo), from Getty Images (bandaid).

Del Harvey, Twitter’s resident troll hunter, has a fitting, if unusual, backstory for somebody in charge of policing one of the Internet’s most ungovernable platforms. As a teenager, she spent a summer as a lifeguard at a state mental institution; at 21, she began volunteering for Perverted Justice, a vigilante group that lures pedophiles into online chat rooms and exposes their identities. When the group partnered with NBC in 2004 to launch To Catch a Predator, Harvey posed as a child to help put pedophiles in jail. In 2008, she joined Twitter, then a small status-updating service whose 140-character quirk was based on the character limit of an SMS text message. She was employee No. 25, and her job was to combat spam accounts.

Harvey’s bildungsroman is legend inside Twitter, where she now serves as vice president of trust and safety, effectively commanding a massive, never-ending war between the company’s censors and a legion of Russian bots, sexual harassers, neo-Nazis, and Turkish hackers who have, at times, seemed to overwhelm the platform. For the past decade, she has been at the forefront of that battle, winning the loyalty of Twitter employees who respect her deep institutional knowledge. But as Twitter has grown from a small messaging platform with no revenue to a $25 billion public company, many company insiders have come to a frustrating conclusion: it’s a war that Harvey is losing.

Combating the flood of abuse is “like trying to put out a fire in a house,” a former trust and safety executive for another tech company told me. “Once you do that, four more fires pop up in its place.” Each day, Harvey and her team investigate countless tweets that have been flagged as spam or abuse and quickly decide whether a user should be suspended or banned. In 2013, for instance, Twitter added a “report abuse” feature after Caroline Criado-Perez, who started a campaign to put Jane Austen on British currency, said that she had received 50 rape threats an hour. Still, the abuse continued: During the height of the 2014 Gamergate controversy, video-game critic Anita Sarkeesian posted 157 examples of death and rape threats, insults, and incitements to suicide that she had received over the course of six days.

Since 2016, however, Twitter controversies have erupted with increasing regularity. As a presidential candidate, Donald Trump seized the platform as a megaphone to amplify his attacks on immigrants, women, and Muslims, adding a virulent new political dimension to the tidal wave of harassment. (When Saturday Night Live’s Leslie Jones was chased off Twitter by a horde of white-nationalist trolls, it was Harvey’s team that banned Milo Yiannopoulos, the conservative provocateur who led the attack.) That same year, Twitter formed a Trust and Safety Council composed of researchers, technologists, and safety groups to help weigh in on free speech issues on the platform. But while Twitter made progress in cracking down on Islamic extremists—suspending hundreds of thousands of accounts associated with ISIS—other hate groups multiplied. It was, and remains, a vexing problem for a company whose financial imperatives have muddied the waters between protecting users’ speech and safety—especially when Trump, the platform’s most vocal fan, re-tweets white supremacists and anti-Muslim snuff films.

Last fall, Twitter updated its approach again, releasing a new set of rules and clarifying policies intended to make Twitter “more aggressive” about preventing harassment on its platform. In a statement, a representative for Twitter emphasized that safety remains a top priority. “As hateful conduct and online abuse continue to evolve, our efforts to combat this behavior must also evolve; our work in this space will never be done. That said, we’re making meaningful progress toward reducing this content on our platform.”

Views of Twitter's work in the space appear to be evolving, too. In Twitter’s early years, Harvey was considered a visionary. “Del became an encyclopedia of the weird things people were doing,” early Twitter employee Jason Goldman told Forbes in 2014. (“In case of emergency, trust Del,” he told staffers when he left the company.) But now, as Twitter soul-searches, many close to the company are wondering whether Harvey’s hallowed status has prevented Twitter from recognizing that it needs an outsider, or a wholly new approach, to fix the problems it never solved. “Del overcomplicates things. Things that should be low-hanging fruit, they get wrapped around the axle about. And you can see that in the way some of these things are handled publicly,” a former Twitter executive told me. “It’s hard to understand how decisions are made, and why certain decisions are made one day versus the next day.”

“It’s a technology company with crappy technologists, a revolving door of product heads and C.E.O.s, and no real core of technological innovation.”

Part of the problem, insiders agree, is that Twitter has never set clear guidelines for what kind of language or behavior will get somebody banned. In the case of the anti-Muslim videos that Trump re-tweeted, Twitter offered a series of shifting explanations. First, the company said, the videos were inherently newsworthy; later, it suggested that they had become newsworthy—and thus in the public interest—by virtue of Trump’s involvement. “There may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability,” a spokesperson said at the time. Nearly three weeks later, Twitter banned the account whose videos Trump had shared, rendering his re-tweets null and void.

People familiar with the inner workings of the company were not impressed. “The way they deal with abuse and talk about it, and the way the goal line moves over time, is not crisp,” another former Twitter executive complained. “It’s like, now this is abuse, and now this is abuse, and now this is abuse—when what seems like abuse to the outside world is much more straightforward. Just state: this is what the goal line is.”

Dorsey photographed at Twitter headquarters.

Photograph by Art Streiber.

It’s easy to find fault with Harvey’s trust and safety team, which has been behind a number of recent public stumbles. The rogue customer-service employee who shut down Trump’s Twitter account for 11 blissful minutes was on Harvey’s team, as were the staffers who decided to suspend actress Rose McGowan’s account when she responded to the Harvey Weinstein scandal by posting a screenshot that included a personal phone number. It was Harvey’s team that ultimately allowed white supremacist Richard Spencer to keep his account, but blocked a Republican congresswoman from publishing a campaign advertisement describing abortion.

At the same time, her defenders say, Harvey has been forced to clean up a mess that Twitter should have fixed years ago. Twitter’s backend was initially built on Ruby on Rails, a rudimentary web-application framework that made it nearly impossible to find a technical solution to the harassment problem. If Twitter’s co-founders had known what it would become, a third former executive told me, “you never would have built it on a Fisher-Price infrastructure.” Instead of building a product that could scale alongside the platform, former employees say, Twitter papered over its problems by hiring more moderators. “Because this is just an ass-backward tech company, let’s throw non-scalable, low-tech solutions on top of this low-tech, non-scalable problem.”

Calls to rethink that approach were ignored by senior executives, according to people familiar with the situation. “There was no real sense of urgency,” the former executive explained, pointing the finger at Harvey’s superiors, including current C.E.O. Jack Dorsey. “It’s a technology company with crappy technologists, a revolving door of product heads and C.E.O.s, and no real core of technological innovation. You had Del saying, ‘Trolls are going to be a problem. We will need a technological solution for this.’” But Twitter never developed a product sophisticated enough to automatically deal with bots, spam, or abuse. “You had this unsophisticated human army with no real scalable platform to plug into. You fast forward, and it was like, ‘Hey, shouldn’t we just have basic rules in place where if the suggestion is to suspend an account of a verified person, there should be a process in place to have a flag for additional review, or something?’ You’d think it would take, like, one line of code to fix that problem. And the classic response is, ‘That’s on our product road map two quarters from now.’”
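The guardrail that former employee describes is, at least conceptually, simple to express. The sketch below is purely illustrative and assumes a hypothetical moderation pipeline—the Account, SuspensionRequest, and apply_suspension names are invented for this example, not Twitter’s actual code: an automated suggestion to suspend a verified account is diverted to a human-review queue rather than taking effect immediately.

from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    handle: str
    verified: bool

@dataclass
class SuspensionRequest:
    account: Account
    reason: str

def suspend(account: Account) -> None:
    # Placeholder for whatever the real enforcement action would be.
    print(f"Suspended @{account.handle}")

def apply_suspension(request: SuspensionRequest, review_queue: List[SuspensionRequest]) -> str:
    """Suspend immediately, unless the target is verified; then escalate to humans."""
    if request.account.verified:
        # The "one line of code": verified accounts always get a second look.
        review_queue.append(request)
        return "escalated_for_review"
    suspend(request.account)
    return "suspended"

if __name__ == "__main__":
    queue: List[SuspensionRequest] = []
    print(apply_suspension(SuspensionRequest(Account("example_verified", True), "policy violation"), queue))
    print(apply_suspension(SuspensionRequest(Account("spam_bot_123", False), "spam"), queue))
    print(f"{len(queue)} request(s) awaiting human review")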

“Jack is not decisive ... You just can’t have a company with no desire or ability to make decisions.”

Dysfunction is nothing new for Twitter, which has been plagued by management troubles since its founding. Over the last ten years, Twitter has cycled through three C.E.O.s—four, if you include Dorsey’s return in 2015—without ever committing to a comprehensive vision for itself. (The fact that Dorsey is simultaneously serving as C.E.O. for Square, another public company, hasn’t helped.) “Jack is not decisive,” a former employee says. “You just can’t have a company with no desire or ability to make decisions. It’s not just abuse, but also product decisions. We used to debate for a thousand hours and the boss man couldn’t make a decision. If you’re unwilling to make anyone upset, it can be paralyzing in many cases. And user abuse is a case like that.”

Harvey’s defenders say she has the institutional knowledge to tackle the abuse issue, but is under-resourced for what is arguably an insurmountable task. “It seems her heart is very much in it,” the third former executive told me. “If you are a company of amateur-hour technology, so that everything becomes some nightmarish Rube Goldberg construction, in some cases, the best people to keep around are the ones who understand how the Rube Goldberg machines work.” Danielle Citron, a Twitter trust and safety partner and a professor at the University of Maryland Francis King Carey School of Law, emphasized that if Harvey’s team is sometimes slow to address abuse, it’s only because they care so much about getting each case right. “They mean it when they say they care about speech that terrorizes and silences,” Citron said. “They really do have their users’ speech issues in mind in a way that’s very holistic.”

But as Twitter grows and matures, the counter-argument goes, it also needs to take its responsibilities more seriously. If the buck stops at the top, the blame lies with Dorsey. Yet the question remains: If Harvey can’t solve the problem on her own, shouldn’t someone else take the wheel? “[Harvey] is over-titled and overpaid,” the former employee told me. “She joined this company early and got to ride the wave. I know why she would never quit. It’s a little bit like asking Ringo Starr why he never left the Beatles: it was the best job he ever had.” There are two main components to Harvey’s job, this person told me: to formulate a clear set of rules for what constitutes abusive speech, and to be consistent in enforcing them. “And I hate to say it, but she clearly was in so far over her head on both of those. It was a disaster. I’m sure she’s a nice person personally, but in this job, she was utterly incompetent.”

Twitter, of course, is hardly the only tech company to have faced a leadership crisis. Silicon Valley is chock full of start-ups that experienced an awkward adolescent phase on their way to profitability (to say nothing of endemic arrested development in the tech industry itself). Facebook matured with the assistance of Google executive Sheryl Sandberg, who took over as Facebook’s chief operating officer in 2008, transforming an anarchic office culture into a corporate money press. Last year, as it tried to smooth over a tense relationship with the media, Facebook hired Campbell Brown, an NBC News and CNN veteran, to lead its news partnerships team.

Twitter, meanwhile, has overseen a vast talent exodus since Dorsey returned, without ever finding its own Sandberg or Brown to take the reins. About 60 percent of Twitter’s executives had left by the end of 2016, and the company is still missing a chief technology officer. Last month, Dorsey lost the closest thing he had to a shadow C.E.O. when Anthony Noto, the former Goldman Sachs banker who helped take Twitter public in 2013, announced that he was stepping down, too. The diaspora has left Twitter simultaneously depleted and overly reliant on its veterans, like Dorsey and Harvey, to keep the ship afloat. The question remains whether the company’s dinosaurs are also the people best placed to lead Twitter into the future.

Back in 2012, my colleague Nick Bilton reported how Dorsey was sidelined at Twitter, under then-C.E.O. Dick Costolo, after employees complained that Dorsey was difficult and indecisive when it came to product management. When he returned to Twitter as C.E.O., some three years later, Dorsey had a new mandate. But it seems he still lacks the strategic vision to either fix Twitter’s problems or find someone who will. And it’s not clear at this point whether the company can still attract the caliber of talent it would need to start over. “That person, I think, will never come to work at Twitter,” the former Twitter executive said. “They know that it’s where technologists go to die. They want to be with other great technologists elsewhere.”