How average Facebook users in Germany inspired a wave of violence against refugees
Recently I ran into a well-known tech CEO and asked him how he was feeling about social networks. (I am extremely fun at parties.) His unequivocal response surprised me: "shut them down," he said. His reasoning was simple: the networks undermine democracies in ways that cannot be fixed with software updates. The only logical response, in his mind, was to end them.
Whether social networks can be fixed is the question looming over Amanda Taub and Max Fisher's deeply unsettling new report in The New York Times. The report, based on academic research and bolstered by extensive on-the-ground reporting, finds a powerful link between Facebook usage and attacks on refugees in Germany:
Karsten Müller and Carlo Schwarz, researchers at the University of Warwick, scrutinized every anti-refugee attack in Germany, 3,335 in all, over a two-year span. In each, they analyzed the local community by any variable that seemed relevant. Wealth. Demographics. Support for far-right politics. Newspaper sales. Number of refugees. History of hate crime. Number of protests.
One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community – big city or small town; affluent or struggling; liberal haven or far-right stronghold – suggesting that the link applies universally.
The most striking data point in the piece: "wherever per-person Facebook use rose to one standard deviation above the national average," the authors write, "attacks on refugees increased by about 50 percent."
From there, the authors explore why this happens. They examine how Facebook promotes more emotional posts over mundane ones, distorting users' sense of reality. Towns that had been relatively welcoming to immigrants eventually encountered an overwhelming tide of anti-refugee sentiment when they opened the Facebook app.
Much of this activity is driven by so-called "superposters," who flood the service with negative sentiment. This asymmetry of passion makes it appear as if refugees have less support than they actually do, which in turn inspires more people to gang up against them.
One of the most notable features of the study, which you can read in its entirety here, is how it determines that Facebook is uniquely responsible for the surge of anti-immigrant violence in Germany. Here are Taub and Fisher again:
German internet infrastructure tends to be localized, making outages isolated but common. Sure enough, whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.
And they dropped by the same rate at which heavy Facebook use is thought to boost violence. The drop did not occur in areas with high internet usage but average Facebook usage, suggesting it is specific to social media.
Also notable: these attacks happened despite strict laws against hate speech in Germany, which require Facebook to take any offending posts down within 24 hours of being reported. As the authors note, the posts driving the violence largely do not qualify as hate speech. The overall effect of standard political speech has been to convince large swathes of the population that Germany is beset by a foreign menace, which triggered a political crisis in the country earlier this year.
In New York, Brian Feldman says Facebook has two choices:
It can do more to limit user speech on posts that are not explicitly hateful but couched in the rhetoric of civil discussion – the types of posts that seem to fuel anti-refugee violence. Or it can tweak its distribution mechanisms to minimize overall user engagement with Facebook, which would also reduce the amount of ad money it collects.
Surprisingly, Facebook declined to comment on the study or its implications. But even as it was still reverberating around the internet, the company was getting ready to answer for another set of concerns: four new influence campaigns linked to Russia and Iran. From my story:
Facebook removed more pages today as a result of four ongoing influence campaigns on the platform, taking down 652 fake accounts and pages that published political content. The campaigns, whose existence was first uncovered by the cybersecurity firm FireEye, have links to Russia and Iran, Facebook said in a blog post. The existence of the fake accounts was first reported by The New York Times.
"These were networks of accounts that were misleading people about who they were and what they were doing," CEO Mark Zuckerberg said in a call with reporters. "We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook."
People indeed ought to be able to trust the connections they make on Facebook. But between the study of Facebook's effects on Germany and news of multiple ongoing state-sponsored attacks on the service, it was hard to say where that trust could come from.
"When you operate a service at the scale of the ones that we do, you're going to see a lot of the good things, and you're going to see people abuse the service in every way possible as well," Zuckerberg told reporters. And yet the thing that troubles me most today wasn't the people abusing the service. It was the Germans using Facebook just as it was intended to be used.
Facebook is rating the trustworthiness of its users on a scale from zero to one
Facebook relies on user reports to determine whether a post is false or misleading. But users themselves can seek to mislead Facebook by falsely reporting credible information. And so Facebook has begun giving users a score to help it weight their reports. It's a bit less dramatic than it sounded when the story first hit – this is not an equivalent to, say, an Uber rating or Reddit karma – but it does seem like a good and useful thing. Elizabeth Dwoskin reports:
A user's trustworthiness score isn't meant to be an absolute indicator of a person's credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
Facebook Pushes Back on Reporting About its User Trust Ranking
Facebook didn't seem to like the Post story:
"The idea that we have a centralized 'reputation' score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading. What we're actually doing: we developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system," a Facebook spokesperson wrote via email. "The reason we do this is to make sure that our fight against misinformation is as effective as possible."
Facebook Is Removing More Than 5,000 Ad Targeting Options To Prevent Discrimination
After a series of reports by ProPublica and others about how Facebook's ad platform can enable discrimination, the company said it would remove thousands of targeting capabilities, Alex Kantrowitz reports:
Facebook's removal of the targeting options comes amid an investigation from the US Department of Housing and Urban Development, which filed a complaint last week alleging Facebook had enabled discriminatory housing practices with its ad targeting options. The complaint began a process that could eventually lead to a federal lawsuit.
On the frontline of India's WhatsApp fake news war
Soutik Biswas examines how India is working to educate young people about viral misinformation on WhatsApp, in the hopes that it will reduce the number of murders inspired by hoaxes on the platform:
To combat this, district officials have now begun 40-minute-long fake news classes in 150 of its 600 government schools.
Using an imaginative combination of words, images, videos, simple classroom lectures and skits on the dangers of remaining silent and forwarding things mindlessly, this initiative is the first of its kind in India. This is a war on disinformation from the trenches, and children are the foot soldiers.
New Russian Hacking Targeted Republican Groups, Microsoft Says
Russia is now targeting conservative think tanks who favor stronger sanctions against the country, according to new research from Microsoft, David E. Sanger and Sheera Frenkel report:
The goal of the Russian hacking attempt was unclear, and Microsoft was able to catch the spoofed websites as they were set up.
But Mr. Smith said that "these attempts are the newest security threats to groups connected with both American political parties" ahead of the 2018 midterm elections.
Jack Dorsey On Deleting Tweets, Banning Trump, And Whether An Unbiased Twitter Can Exist
Your Jack Dorsey interview of the day is with Buzzfeed's Charlie Warzel. He offers lots more big-picture talk about "incentives" and "conversation," and little in the way of concrete plans. But I'm glad Warzel suggested to Dorsey that he is getting played by conservatives crying wolf about shadow bans:
Dorsey: I want to acknowledge my bias and I also want to acknowledge there's a separation between me and our company and how we act. We need to show that in our, we need to be a lot more transparent, we need to show that in our product, we need to show that in our policy and we need to show that in our enforcement and I think in all three we have, but it bears repeating again and again and again. The reason we're talking with more conservatives is just in the past we haven't really done much. At least I haven't.
Twitter Gets Powerful Win in "Must-Carry" Lawsuit – Taylor v. Twitter
Eric Goldman updates us on a case in which white supremacists sued Twitter in an effort to prevent the company from banning them. An appeals court ruled that Twitter is protected from the suit by section 230 of the Communications Decency Act.
Number of Third-Party Cookies on EU News Sites Dropped by 22% Post-GDPR
What if GDPR … is good? Catalin Cimpanu offers a data point:
The number of tracking cookies on EU news sites has gone down by 22%, according to a report by the Reuters Institute at the University of Oxford, which looked at cookie usage across EU news sites in two phases, in April 2018 and July 2018, before and after the introduction of the new EU General Data Protection Regulation (GDPR). […]
"We may be observing a kind of 'housecleaning' effect. Modern websites are highly complex and evolve over time in a path-dependent way, sometimes accumulating out-of-date features and code," researchers said. "The introduction of GDPR may have provided news organizations with a chance to evaluate the utility of various features, including third-party services, and to remove code which is no longer of significant use or which compromises user privacy."
Line is another chat app rife with spam, scams, and bad information. The volunteer-supported Cofacts is fact-checking them in the open
Kirsten Han profiles Cofacts, a collaborative fact-checking service that uses bots to check information that's spreading virally on the popular Asian messaging app Line. The bot has received more than 46,000 messages, of which the chatbot answered 35,180 automatically:
Any interested volunteers can log into the database of submitted messages and start evaluating the messages, using the Cofacts form. Cofacts offers step-by-step instructions for those who can't figure out how to use the platform, as well as a set of clear editorial guidelines that help volunteers weed out uncheckable messages or ones that are "personal opinion," and what types of reliable sources they can use to back up their fact-checking work.
Based on data collected by the Cofacts team on the messages theyâve received so far, the misinformation debunked on the platform can range from fake promotions and medical misinformation to false claims about government policies.
How misinformation spreads on Line, one of the most popular messaging apps in Southeast Asia
Speaking of Line, Daniel Funke looks at how public accounts on the service grow big by promising users free stickers and then pivoting to disinformation once they get a large audience. Many of the influence campaigns appear to advertise health care products of dubious value:
Many of the top misinforming accounts on the app publish accurate tips about things like lowering blood pressure alongside spammy ads for things like detoxifying foot pads – and Anutarasoat said channels regularly profit from it.
"The products that some of these networks want to sell, (they're) not harmful products, but not useful like they advertise – like a fake website that's selling medicine that can reduce blood pressure, and they're targeting it for older people who have high blood pressure problem," he said. "They create a convincing website that has a picture of a doctor and a picture of a witness. In some websites, they actually fake that it is a website from public health ministries."
Say "Aloha": A closer look at Facebook's voice ambitions
Drawing on some new information from researcher Jane Manchun Wong, Josh Constine reminds us that Facebook's home speaker is still in development.
Schools Are Mining Students' Social Media Posts for Signs of Trouble
Tom Simonite examines the state of social media monitoring in schools and finds several companies vying for district dollars with a promise of protecting schools from attack. But their value is unclear, and they could have significant downsides:
There's little doubt that students share information on social media that school administrators might find useful. There is some debate over whether – or how – it can be accurately or ethically extracted by software.
Amanda Lenhart, a New America Foundation researcher who has studied how teens use the internet, says it's understandable schools like the idea of monitoring social media. "Administrators are concerned with order and safety in the school building and things can move freely from social media – which they don't manage – into that space," she says. But Lenhart cautions that research on kids, teens, and social media has shown that it's difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.
Slack raises $427 million, now valued above $7 billion
Sometimes I wonder whether the Time Well Spent movement will ever affect the famously noisy, all-consuming office chat app Slack. The answer so far: no, not at all!
Tinder is rolling out a college-only service, Tinder U
My colleague Ashley Carman reports on the launch of Tinder U, a version of the dating app just for college students. I imagine this will be quite popular, although it may turn out that Tinder itself is good enough.
Tinder's marketing frames the service as ideal for finding a study buddy or someone to hang out with on the quad. Also, if Tinder can build in a new dedicated user base of 18-year-olds, it can also start converting them to paid users sooner. Facebook employed a similar strategy when it first launched. The platform required a .edu email address to build out a loyal college following before opening widely a few years later. The opposite is happening with Tinder: everyone can use it, but college kids now might want a safe haven from creepy older people.
Google is developing an experimental podcast app called Shortwave
My colleague Russell Brandom finds evidence of a new podcast app from Google:
Nothing in the trademark filing specifies the kind of audio being accessed, but a Google representative said the focus of the app was on spoken word content. There is little public information about the app, although Google has played with smart captioning, translation, and other AI-assisted features in previous podcast products.
Advertising is obsolete – here's why it's time to end it
Ramsi Woodcock makes a sweeping case against advertising, arguing that the internet has made its core function of informing consumers obsolete, and that it could even violate antitrust laws. This is a big take, but a well-considered one:
The courts have long held that Section 2 of the Sherman Act prohibits conduct that harms both competition and consumers, which is just what persuasive advertising does when it cajoles a consumer into buying the advertised product, rather than the substitute the consumer would have purchased without advertising.
That substitute is presumably preferred by the consumer, precisely because the consumer would have purchased it without corporate persuasion. It follows that competition is harmed, because the company that made the product that the consumer actually prefers cannot make the sale. And the consumer is harmed by buying a product that the consumer does not really prefer.
Facebook and Twitter aren't liberal or conservative. They're capitalist.
Will Oremus listens to the Radiolab episode I wrote about yesterday and examines it in the context of charges of bias against platforms:
Donald Trump, Ted Cruz, and other Republicans probably won't buy Dorsey's claim that he tries to keep his biases out of the company's decision-making, particularly the next time an Alex Jones gets the boot. Nor will most liberals believe that he isn't bending over backward to appease the hard right, especially the next time an Alex Jones isn't ejected from the platform. When a company that shapes the flow of online political speech is making high-stakes decisions about who can talk and who can't, it's hard to accept that those decisions are the product of a jury-rigged rulebook or algorithm rather than political calculations or a secret agenda.
But it's worth remembering, with these controversies, that social media companies do have an agenda, and it isn't secret. Their agenda is to keep making money, and when it comes to high-stakes decisions about who can say what online, the most lucrative option is often to play dumb.
And finally ...
Donald Trump Jr.'s Instagram Is a Shakespearean Tragedy
The president's eldest son is just like us – which is to say, he reads the comments. Especially on Instagram, reports Eve Peyser:
He'll respond to anyone – he frequently ignores comments from verified accounts, instead replying to messages from random accounts, which suggests that he reads all the comments. Which has got to hurt. But when replying to these so-called "whiny libs," Don Jr. doesn't hold back, chiding them for their low follower counts, and/or accusing them of being robots.
Something tells me Don may find himself receiving more comments than usual today.
Talk to me
Send me tips, questions, comments, academic studies: firstname.lastname@example.org.