On the evening of 17 February 2018, Professor Mary Beard posted on Twitter a photograph of herself crying. The eminent University of Cambridge classicist, who has almost 200,000 Twitter followers, was distraught after receiving a storm of abuse online. This was the reaction to a comment she had made about Haiti. She also tweeted: "I speak from the heart (and of cource I may be wrong). But the crap I get in response just isnt on; really it isnt."
In the days that followed, Beard received support from several high-profile people. Greg Jenner, a fellow celebrity historian, tweeted about his own experience of a Twitterstorm: "I'll always remember how traumatic it was to suddenly be hated by strangers. Regardless of morality – I may have been wrong or right in my opinion – I was amazed (later, when I recovered) at how psychologically destabilising it was to me."
Those tweeting support for Beard – irrespective of whether they agreed with her initial tweet that had triggered the abusive responses – were themselves then targeted. And when one of Beard's critics, fellow Cambridge academic Priyamvada Gopal, a woman of Asian heritage, set out her response to Beard's original tweet in an online article, she received her own torrent of abuse.
There is overwhelming evidence that women and members of ethnic minority groups are disproportionately the target of Twitter abuse. Where these identity markers intersect, the bullying can become particularly intense, as experienced by black female MP Diane Abbott, who alone received nearly half of all the abusive tweets sent to female MPs during the run-up to the 2017 UK general election. Black and Asian female MPs received on average 35 per cent more abusive tweets than their white female colleagues even when Abbott was excluded from the total.
The constant barrage of abuse, including death threats and threats of sexual violence, is silencing people, pushing them off online platforms and further reducing the diversity of online voices and opinion. And it shows no sign of abating. A survey last year found that 40 per cent of American adults had personally experienced online abuse, with almost half of them receiving severe forms of harassment, including physical threats and stalking. Seventy per cent of women described online harassment as a "major problem".
The business models of social media platforms, such as YouTube and Facebook, promote content that is more likely to get a response from other users because more engagement means better opportunities for advertising. But this has a consequence of favouring divisive and strongly emotive or extreme content, which can in turn nurture online "bubbles" of groups who reflect and reinforce each other's opinions, helping propel the spread of more extreme content and providing a niche for "fake news". In recent months, researchers have revealed many ways that various vested interests, including Russian operatives, have sought to manipulate public opinion by infiltrating social media bubbles.
Our human ability to communicate ideas across networks of people enabled us to build the modern world. The internet offers unparalleled promise of cooperation and communication between all of humanity. But instead of embracing a massive extension of our social circles online, we seem to be reverting to tribalism and conflict, and belief in the potential of the internet to bring humanity together in a glorious collaborating network now begins to seem naive. While we generally conduct our real-life interactions with strangers politely and respectfully, online we can be horrible. How can we relearn the collaborative techniques that enabled us to find common ground and thrive as a species?
"Don't overthink it, just press the button!"
I click an amount, impoverishing myself in an instant, and quickly move on to the next question, aware that we're all playing against the clock. My teammates are far away and unknown to me. I have no idea if we're all in it together or whether I'm being played for a fool, but I press on, knowing that the others are depending on me.
I'm playing in a so-called public goods game at Yale University's Human Cooperation Lab. The researchers here use it as a tool to help understand how and why we cooperate, and whether we can enhance our prosocial behaviour.
Over the years, scientists have proposed various theories about why humans cooperate so well that we form strong societies. The evolutionary roots of our general niceness, most researchers now believe, can be found in the individual survival advantage humans experience when we cooperate as a group. I've come to New Haven, Connecticut, in a snowy February, to visit a cluster of labs where researchers are using experiments to explore further our extraordinary impulse to be nice to others even at our own expense.
The game I'm playing, on Amazon's Mechanical Turk online platform, is one of the lab's ongoing experiments. I'm in a team of four people in different locations, and each of us is given the same amount of money to play with. We are asked to choose how much money we will contribute to a group pot, on the understanding that this pot will then be doubled and split equally among us.
This sort of social dilemma, like all cooperation, relies on a certain level of trust that the others in your group will be nice. If everybody in the group contributes all of their money, all the money gets doubled, redistributed four ways, and everyone doubles their money. Win–win!
"But if you think about it from the perspective of an individual," says lab director David Rand, "for each dollar that you contribute, it gets doubled to two dollars and then split four ways – which means each person only gets 50 cents back for the dollar they contributed."
Even though everyone is better off collectively by contributing to a group project that no one could manage alone – in real life, this could be paying towards a hospital building, or digging a community irrigation ditch – there is a cost at the individual level. Financially, you make more money by being more selfish.
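Rand's arithmetic can be sketched in a few lines of Python. This is an illustrative toy, not the lab's actual code, and the $10 endowment is an assumption (the article doesn't say how much each player starts with):

```python
# Toy model of the four-player public goods game described above.
# The endowment of 10 is assumed for illustration; the pot is doubled
# and split equally among the players, exactly as in the article.

def payoffs(contributions, endowment=10, multiplier=2):
    """Each player's final holdings after one round."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation doubles everyone's money:
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# A lone free-rider does better still, at everyone else's expense:
# each contributed dollar returns only 50 cents to its contributor.
print(payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

The second case is the dilemma in miniature: the group is richer when all contribute, but any individual is richer by holding back.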
Rand's team has run this game with thousands of players. Half of them are asked, as I was, to decide their contribution rapidly – within 10 seconds – whereas the other half are asked to take their time and carefully consider their decision. It turns out that when people go with their gut, they are much more generous than when they spend time deliberating.
"There is a lot of evidence that cooperation is a central feature of human evolution," says Rand. Individuals benefit, and are more likely to survive, by cooperating with the group. And being allowed to stay in the group and benefit from it is reliant on our reputation for behaving cooperatively.
"In the small-scale societies that our ancestors were living in, all our interactions were with people that you were going to see again and interact with in the immediate future," Rand says. That kept in check any temptation to act aggressively or take advantage and free-ride off other people's contributions. "It makes sense, in a self-interested way, to be cooperative."
Cooperation breeds more cooperation in a mutually beneficial cycle. Rather than work out every time whether it's in our long-term interests to be nice, it's more efficient and less effort to have the basic rule: be nice to other people. That's why our unthinking response in the experiment is a generous one.
Throughout our lives, we learn from the society around us how cooperative to be. But our learned behaviours can also change quickly.
Those in Rand's experiment who play the quickfire round are mostly generous and receive generous dividends, reinforcing their generous outlook, whereas those who consider their decisions are more selfish, resulting in a meagre group pot and reinforcing the idea that it doesn't pay to rely on the group. So, in a further experiment, Rand gave some money to people who had played a round of the game. They were then asked how much they wanted to give to an anonymous stranger. This time, there was no incentive to give; they would be acting entirely charitably.
It turned out there were big differences. The people who had got used to cooperating in the first stage gave twice as much money in the second stage as the people who had got used to being selfish did. "So we're affecting people's internal lives and behaviour," Rand says. "The way they behave even when no one's watching and when there's no institution in place to punish or reward them."
Rand's team have tested how people in different countries play the game, to see how the strength of social institutions – such as government, family, education and legal systems – influences behaviour. In Kenya, where public sector corruption is high, players initially gave less generously to the stranger than players in the US, which has less corruption. This suggests that people who can rely on relatively fair social institutions behave in a more public-spirited way; those whose institutions are less reliable are more protectionist. However, after playing just one round of the cooperation-promoting version of the public goods game, the Kenyans' generosity equalled the Americans'. And it cut both ways: Americans who were trained to be selfish gave a lot less.
So is there something about online social media culture that makes some people behave meanly? Unlike ancient hunter-gatherer societies, which rely on cooperation and sharing to survive and often have rules for when to offer food to whom across their social network, social media have weak institutions. They offer physical distance, relative anonymity and little reputational or punitive risk for bad behaviour: if you're mean, no one you know is going to see.
I trudge a couple of blocks through driving snow to find Molly Crockett's Psychology Lab, where researchers are investigating moral decision-making in society. One area they focus on is how social emotions are transformed online, in particular moral outrage. Brain-imaging studies show that when people act on their moral outrage, their brain's reward centre is activated – they feel good about it. This reinforces their behaviour, so they are more likely to intervene in a similar way again. So, if they see somebody acting in a way that violates a social norm, by allowing their dog to foul a playground, for instance, and they publicly confront the perpetrator about it, they feel good afterwards. And while challenging a violator of your community's social norms has its risks – you may get attacked – it also boosts your reputation.
In our relatively peaceful lives, we are rarely faced with outrageous behaviour, so we rarely see moral outrage expressed. Open up Twitter or Facebook and you get a very different picture. Recent research shows that messages with both moral and emotional words are more likely to spread on social media – each moral or emotional word in a tweet increases the likelihood of it being retweeted by 20 per cent.
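Read multiplicatively, that 20 per cent boost compounds quickly as words stack up. A back-of-the-envelope sketch (the compounding assumption is mine, not the study's stated model):

```python
# Illustration only: if each moral-emotional word multiplies the odds
# of a retweet by 1.2, the boost compounds across words. Whether the
# effect really compounds this way is an assumption for illustration.
def relative_retweet_odds(n_words, per_word_boost=1.2):
    """Retweet odds relative to a tweet with no moral-emotional words."""
    return per_word_boost ** n_words

print(round(relative_retweet_odds(3), 2))  # 1.73: three such words, ~73% boost
```

Even a handful of charged words, on this reading, can nearly double a message's reach, which is all the incentive an attention-driven platform needs.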
"Content that triggers outrage and that expresses outrage is much more likely to be shared," Crockett says. What we've created online is "an ecosystem that selects for the most outrageous content, paired with a platform where it's easier than ever before to express outrage".
Unlike in the offline world, there is no personal risk in confronting and exposing someone. It only takes a few clicks of a button and you don't have to be physically nearby, so there is a lot more outrage expressed online. And it feeds itself. "If you punish somebody for violating a norm, that makes you seem more trustworthy to others, so you can broadcast your moral character by expressing outrage and punishing social norm violations," Crockett says. "And people believe that they are spreading good by expressing outrage – that it comes from a place of morality and righteousness.
"When you go from offline – where you might boost your reputation for whoever happens to be standing around at the moment – to online, where you broadcast it to your entire social network, then that dramatically amplifies the personal rewards of expressing outrage."
This is compounded by the feedback people get on social media, in the form of likes and retweets and so on. "Our hypothesis is that the design of these platforms could make expressing outrage into a habit, and a habit is something that's done without regard to its consequences – it's insensitive to what happens next, it's just a blind response to a stimulus," Crockett explains.
"I think it's worth having a conversation as a society as to whether we want our morality to be under the control of algorithms whose purpose is to make money for giant tech companies," she adds. "I think we would all like to believe and feel that our moral emotions, thoughts and behaviours are intentional and not knee-jerk reactions to whatever is placed in front of us that our smartphone designer thinks will bring them the most profit."
On the upside, the lower costs of expressing outrage online have allowed marginalised, less-empowered groups to promote causes that have traditionally been harder to advance. Moral outrage on social media played an important role in focusing attention on the sexual abuse of women by high-status men. And in February 2018, Florida teens railing on social media against yet another high-school shooting in their state helped to shift public opinion, as well as shaming a number of big corporations into dropping their discount schemes for National Rifle Association members.
"I think that there must be ways to maintain the benefits of the online world," says Crockett, "while thinking more carefully about redesigning these interactions to do away with some of the more costly bits."
Someone who's thought a great deal about the design of our interactions in social networks is Nicholas Christakis, director of Yale's Human Nature Lab, located just a few more snowy blocks away. His team studies how our position in a social network influences our behaviour, and even how certain influential individuals can dramatically alter the culture of a whole network.
The team is exploring ways to identify these individuals and enlist them in public health programmes that could benefit the community. In Honduras, they are using this approach to influence vaccination enrolment and maternal care, for example. Online, such people have the potential to turn a bullying culture into a supportive one.
Corporations already use a crude system of identifying so-called Instagram influencers to advertise their brands for them. But Christakis is looking not just at how popular an individual is, but also at their position in the network and the shape of that network. In some networks, like a small isolated village, everyone is closely connected and you're likely to know everyone at a party; in a city, by contrast, people live more densely packed together, yet you are less likely to know everyone at a party. How thoroughly interconnected a network is affects how behaviours and information spread around it, he explains.
"If you take carbon atoms and you assemble them one way, they become graphite, which is soft and dark. Take the same carbon atoms and assemble them a different way, and it becomes diamond, which is hard and clear. These properties of hardness and clearness aren't properties of the carbon atoms – they're properties of the collection of carbon atoms and depend on how you connect the carbon atoms to each other," he says. "And it's the same with human groups."
Christakis has designed software to explore this by creating temporary artificial societies online. "We drop people in and then we let them interact with each other and see how they play a public goods game, for example, to assess how kind they are to other people."
Then he manipulates the network. "By engineering their interactions one way, I can make them really sweet to each other, work well together, and they are healthy and happy and they cooperate. Or you take the same people and connect them a different way and they're mean jerks to each other and they don't cooperate and they don't share information and they are not kind to each other."
In one experiment, he randomly assigned strangers to play the public goods game with each other. In the beginning, he says, about two-thirds of people were cooperative. "But some of the people they interact with will take advantage of them and, because their only option is either to be kind and cooperative or to be a defector, they choose to defect because they're stuck with these people taking advantage of them. And by the end of the experiment everyone is a jerk to everyone else."
Christakis turned this around simply by giving each person a little bit of control over who they were connected to after each round. "They had to make two decisions: am I kind to my neighbours or am I not; and do I stick with this neighbour or do I not." The only thing each player knew about their neighbours was whether each had cooperated or defected in the round before. "What we were able to show is that people cut ties to defectors and form ties to cooperators, and the network rewired itself and converted itself into a diamond-like structure instead of a graphite-like structure." In other words, a cooperative prosocial structure instead of an uncooperative structure.
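The rewiring rule Christakis describes can be caricatured in a short simulation. This is a hypothetical sketch, not the lab's software: after each round, every player may cut one tie to a defecting neighbour and form one tie to a cooperator.

```python
import random

# Hypothetical sketch (not the lab's actual code) of the rewiring
# dynamic: players cut ties to defectors and link to cooperators.

def rewire(neighbours, strategies, rng):
    """neighbours: player -> set of players; strategies: player -> 'C' or 'D'."""
    cooperators = [p for p, s in strategies.items() if s == 'C']
    for p in list(neighbours):
        defecting = [q for q in neighbours[p] if strategies[q] == 'D']
        if defecting:                      # cut one tie to a defector
            dropped = rng.choice(defecting)
            neighbours[p].discard(dropped)
            neighbours[dropped].discard(p)
        candidates = [q for q in cooperators if q != p and q not in neighbours[p]]
        if candidates:                     # form one tie to a cooperator
            new = rng.choice(candidates)
            neighbours[p].add(new)
            neighbours[new].add(p)
    return neighbours

rng = random.Random(0)
strategies = {0: 'C', 1: 'C', 2: 'D', 3: 'C'}
neighbours = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
rewire(neighbours, strategies, rng)
# Over repeated rounds, ties to the defector (player 2) are cut and
# the cooperators cluster together: graphite rewiring into diamond.
```

The essential point survives even in this toy: no player needs global knowledge; local choices about whom to keep as a neighbour are enough to reshape the whole network.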
In an attempt to generate more cooperative online communities, Christakis's team have started adding bots to their temporary societies. He takes me over to a laptop and sets me up on a different game. In this game, anonymous players have to work together as a team to solve a dilemma that tilers will be familiar with: each of us has to pick from one of three colours, but the colours of players directly connected to each other must be different. If we solve the puzzle within a time limit, we all get a share of the prize money; if we fail, no one gets anything. I'm playing with at least 30 other people. None of us can see the whole network of connections, only the people we are directly connected to – nevertheless, we have to cooperate to win.
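The winning condition is simply a proper graph colouring: no edge may join two players of the same colour. A minimal sketch (the player names are invented for illustration):

```python
# Minimal sketch of the colouring game's winning condition: the network
# is "solved" when no two directly connected players share a colour.
def is_solved(edges, colour):
    """edges: (player, player) pairs; colour: player -> chosen colour."""
    return all(colour[a] != colour[b] for a, b in edges)

# Hypothetical fragment of a network: me, two neighbours, and a hub.
edges = [('me', 'left'), ('me', 'right'), ('left', 'hub'), ('right', 'hub')]
print(is_solved(edges, {'me': 'red', 'left': 'green',
                        'right': 'blue', 'hub': 'red'}))  # True
```

What makes the game hard is that no player sees `edges` in full; each sees only their own neighbours, so a locally valid choice can clash invisibly elsewhere.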
I'm connected to two neighbours, whose colours are green and blue, so I pick red. My left neighbour then changes to red so I quickly change to blue. The game continues and I become increasingly tense, cursing my slow reaction times. I frequently have to switch my colour, responding to unseen changes elsewhere in the network, which send a cascade of changes along the connections. Time's up before we solve the puzzle, prompting irate responses in the game's comments box from remote players condemning everyone else's stupidity. Personally, I'm relieved it's over and there's no longer anyone depending on my cack-handed gaming skills to earn money.
Christakis tells me that some of the networks are so complex that the puzzle is impossible to solve in the timeframe. My relief is short-lived, however: the one I played was solvable. He rewinds the game, revealing the whole network to me for the first time. I see now that I was on a lower branch off the main hub of the network. Some of the players were connected to just one other person, but most were connected to three or more. Thousands of people from around the world play these games on Amazon Mechanical Turk, drawn by the small fee they earn per round. But as I'm watching the game I just played unfold, Christakis reveals that three of these players are actually planted bots. "We call them 'dumb AI'," he says.
His team is not interested in inventing super-smart AI to replace human cognition. Instead, the plan is to infiltrate a population of smart humans with dumb-bots to help the humans help themselves.
"We wanted to see if we could use the dumb-bots to get the people unstuck so they can cooperate and coordinate a little bit more – so that their native capacity to perform well can be revealed by a little assistance," Christakis says. He found that if the bots played perfectly, that didn't help the humans. But if the bots made some mistakes, they unlocked the potential of the group to find a solution.
"Some of these bots made counter-intuitive choices. Even though their neighbours all had green and they should have picked orange, instead they also picked green." When they did that, it allowed one of the green neighbours to pick orange, "which unlocks the next guy over, he can pick a different colour and, wow, now we solve the problem". Without the bot, those human players would probably all have stuck with green, not realising that was the problem. "Increasing the conflicts temporarily allows their neighbours to make better choices."
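The "dumb AI" behaviour can be sketched as a bot that usually plays a non-conflicting colour but occasionally makes a deliberate mistake. The noise level and the function shape here are my assumptions for illustration, not parameters reported in the article:

```python
import random

# Hypothetical sketch of a "dumb" bot for the colouring game: it
# usually plays a colour that avoids conflict with its neighbours,
# but with probability `noise` it picks at random, sometimes clashing
# on purpose. The 10% default noise level is an assumed parameter.
def bot_choice(neighbour_colours, palette, noise=0.1, rng=random):
    free = [c for c in palette if c not in neighbour_colours]
    if free and rng.random() > noise:
        return rng.choice(free)       # the "sensible" non-conflicting move
    return rng.choice(list(palette))  # the occasional noisy move
```

With `noise=0` the bot plays perfectly, which, as Christakis found, doesn't help the humans; the mistakes are what shake a deadlocked group loose.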
By adding a little noise into the system, the bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allow society as a whole to cooperate more.
Much antisocial behaviour online stems from the anonymity of internet interactions – the reputational costs of being mean are much lower than offline. Here, bots may also offer a solution. One experiment found that the level of racist abuse tweeted at black users could be dramatically slashed by using bot accounts with white profile images to respond to racist tweeters. A typical bot response to a racist tweet would be: "Hey man, just remember that there are real people who are hurt when you harass them with that kind of language." Simply cultivating a little empathy in such tweeters reduced their racist tweets almost to zero for weeks afterwards.
Another way of addressing the low reputational cost for bad behaviour online is to engineer in some form of social punishment. One game company, League of Legends, did that by introducing a "Tribunal" feature, in which negative play is punished by other players. The company reported that 280,000 players were "reformed" in one year, meaning that after being punished by the Tribunal they had changed their behaviour and then achieved a positive standing in the community. Developers could also build in social rewards for good behaviour, encouraging more cooperative elements that help build relationships.
Researchers are already starting to learn how to predict when an exchange is about to turn bad – the moment at which it could benefit from pre-emptive intervention. "You might think that there is a minority of sociopaths online, which we call trolls, who are doing all this harm," says Cristian Danescu-Niculescu-Mizil, at Cornell University's Department of Information Science. "What we actually find in our work is that ordinary people, just like you and me, can engage in such antisocial behaviour. For a specific period of time, you can actually become a troll. And that's surprising."
It's also alarming. I mentally flick back through my own recent tweets, hoping I haven't veered into bullying in some awkward attempt to appear funny or cool to my online followers. After all, it can be very tempting to be abusive to someone far away, who you don't know, if you think it will impress your social group.
Danescu-Niculescu-Mizil has been investigating the comments sections below online articles. He identifies two main triggers for trolling: the context of the exchange – how other users are behaving – and your mood. "If you're having a bad day, or if it happens to be Monday, for example, you're much more likely to troll in the same situation," he says. "You're nicer on a Saturday morning."
After collecting data, including from people who had engaged in trolling behaviour in the past, Danescu-Niculescu-Mizil built an algorithm that predicts with 80 per cent accuracy when someone is about to become abusive online. This provides an opportunity to, for example, introduce a delay in how fast they can post their response. If people have to think twice before they write something, that improves the context of the exchange for everyone: you're less likely to witness people misbehaving, and so less likely to misbehave yourself.
The good news is that, in spite of the horrible behaviour many of us have experienced online, the majority of interactions are nice and cooperative. Justified moral outrage is usefully employed in challenging hateful tweets. A recent British study looking at anti-Semitism on Twitter found that posts challenging anti-Semitic tweets are shared far more widely than the anti-Semitic tweets themselves. Most hateful posts were ignored or only shared within a small echo chamber of similar accounts. Perhaps we're already starting to do the work of the bots ourselves.
As Danescu-Niculescu-Mizil points out, we've had thousands of years to hone our person-to-person interactions, but only 20 years of social media. "Offline, we have all these cues from facial expressions to body language to pitch… whereas online we discuss things only through text. I think we shouldn't be surprised that we're having so much difficulty in finding the right way to discuss and cooperate online."
As our online behaviour develops, we may well introduce subtle signals, digital equivalents of facial cues, to help smooth online discussions. In the meantime, the advice for dealing with online abuse is to stay calm: it's not your fault. Don't retaliate, but block and ignore bullies, or, if you feel up to it, tell them to stop. Talk to family or friends about what's happening and ask them to help you. Take screenshots and report online harassment to the social media service where it's happening, and if it includes physical threats, report it to the police.
If social media as we know it is going to survive, the companies running these platforms are going to have to keep steering their algorithms, perhaps informed by behavioural science, to encourage cooperation rather than division, positive online experiences rather than abuse. As users, we too may well learn to adapt to this new communication environment so that civil and productive interaction remains the norm online as it is offline.
"I'm optimistic," Danescu-Niculescu-Mizil says. "This is just a different game and we have to evolve."
This article was first published on Mosaic.