Yik Yak has (re-)entered the chat. But maybe we have bigger problems now.
When I graduated with my PhD in the summer of 2015, I’d written a 200-page dissertation and also accumulated almost 30,000 meaningless internet points known as yakarma. Anonymously, on the location-based app Yik Yak, I complained about my university’s wifi, shared photos of campus squirrels and cookies I’d baked, argued about feminism and politics, gave relationship advice to twenty-year-olds, consoled students upset about failing grades, asked for recommendations for coffee shops when I traveled, and ruthlessly downvoted racist and homophobic jokes. Of course, at the same time, many Yik Yak users were engaging in cyberbullying, harassment, and even bomb threats — and well, the racist and homophobic jokes were getting plenty of upvotes, too.
In 2015, Yik Yak was one of the most popular social networking platforms on college campuses, and it was valued at close to $400 million. By the end of 2017, it was gone. Many blamed its demise on significant design changes — notably, doing away with anonymity by introducing first optional and then required user handles, which arguably destroyed a differentiator that had drawn many users to the platform. However, it also had persistent problems with toxic content that led some universities to ban its use.
Now, quietly and without much fanfare, Yik Yak is back — and its reputation precedes it. At least one university has already banned the app, and I’ve been thinking about what the problems with the app were and whether we’ll see those same problems today. Or maybe part of the issue is that we’ve got all new ones.
Like many online platforms, Yik Yak wasn’t strictly good or bad. It was complicated. In 2015, in the wake of calls for universities to ban the platform due to harassment against marginalized groups, Yik Yak was also being used for discussions of stigmatized topics, exploring identity, finding support, and giving marginalized students a voice by helping them organize activism. This tension, in which the same features that allow harassment or hate speech to thrive can also allow for critical anonymous social support, is true for pretty much every social platform. I could spend fifteen minutes on Reddit, Tumblr, or TikTok, and find examples of both the best and worst of what humanity has to offer. Anonymous online interaction absolutely leads to the kind of awful people being awful you might imagine, but it also provides ways for people to connect when they might be fearful of being outed, or to open up about stigmatized topics. In other words: anonymity isn’t the problem, people are.
So with stars still in my eyes from spending five years on a dissertation about one of the more positive and supportive parts of the internet, I wrote a piece for Slate about what was wonderful about Yik Yak. I wrote about how many parts of Yik Yak were doing a good job policing themselves, and how the platform design around upvoting and downvoting allowed for the formation of positive social norms. “It isn’t a given that people will be terrible,” I wrote. “Sometimes they might surprise you.”
Six years later, upon hearing that Yik Yak was back, my first impulse was dread. So I asked myself: what changed? One answer might be that six years as a professor researching technology ethics has changed my own perspective. But I also think that many of us feel worse about the internet than we did in 2015. Maybe we’ve seen even more evidence that, given the opportunity, people will be terrible — and that our platforms aren’t as well equipped to handle it as we’d like.
The Wall Street Journal recently released a barrage of investigative reports about the inner workings of Facebook and specific, concrete harms related to the platform: how Instagram can be damaging to mental health, especially for young girls; how algorithms can fuel outrage; persistent challenges for content moderation; and how vaccine misinformation thrives. Misinformation on social media, in particular, is objectively a much more profound problem than it was five or six years ago — not only because of how easily it spreads, but because of the stakes. You thought misinformation’s impact on democracy was a problem? Well, just wait until we’re in a global pandemic!
These kinds of big problems with other platforms, particularly as related to content curation and recommendation algorithms, are indicative of how we currently interact most with social media. We’re entirely accustomed to algorithms deciding what content we should see. And Yik Yak is, in a way, an example of what people say they want instead — a content feed curated by people. Content floats to the top if upvoted, and disappears if downvoted. But given the problems that Yik Yak faced with respect to negative content, I don’t know that there’s reason to think humans are necessarily better than algorithms in this respect.
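For readers who never used the app, the vote-curated feed can be sketched in a few lines. This is purely illustrative — the function names and data shapes are my own, not Yik Yak's actual code — though the removal rule mirrors the app's well-known behavior of deleting posts once their net score dropped low enough (famously at −5 in the original app; the threshold below is just for illustration).

```python
# Illustrative sketch of a vote-curated feed: posts are ranked by net
# score (upvotes minus downvotes), and any post whose score falls to
# the removal threshold disappears entirely. The threshold of -5 echoes
# the original Yik Yak's reported behavior but is an assumption here.

REMOVAL_THRESHOLD = -5

def curate(posts):
    """posts: list of dicts with 'text' and 'score' (upvotes - downvotes).
    Returns the visible feed, highest-scored first."""
    visible = [p for p in posts if p["score"] > REMOVAL_THRESHOLD]
    return sorted(visible, key=lambda p: p["score"], reverse=True)

feed = curate([
    {"text": "campus squirrel photo", "score": 42},
    {"text": "tasteless joke", "score": -7},
    {"text": "wifi complaint", "score": 3},
])
# The joke at -7 is removed; the remaining posts are ordered by score.
```

The point of the sketch is that the "algorithm" here is trivial — all of the curation work is being done by the crowd's votes, which is exactly why the quality of the feed depends so heavily on who is voting.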
The relaunch of Yik Yak came with promises of “community guardrails,” and its guidelines do clearly state that posts with bullying and harassment, bigotry, violence, and threats are not allowed on the platform, as well as posts that “knowingly share fake news, unless it’s obvious satire.” However, moderation is challenging, especially at scale, and my own previous optimism about the benefits of algorithmic content moderation on Yik Yak has definitely faded thanks to my research on content moderation, most recently on TikTok. The problem with algorithmic flagging of content like hate speech is that algorithms will never be perfect, and they are known to be biased. Platforms like TikTok are struggling to find an appropriate calibration between false positives and false negatives — and either way, the errors tend to disproportionately harm people from marginalized groups. So just as Yik Yak previously served to both help and harm marginalized students on college campuses, today those students’ voices could be either uplifted or suppressed, depending on very nuanced aspects of the platform’s moderation practices.
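The calibration problem described above can be made concrete with a toy example. Everything here is invented for illustration — the scores, labels, and thresholds are not from any real moderation system — but it shows why a single threshold on a classifier's output forces a trade-off between the two kinds of error.

```python
# Toy illustration of threshold calibration in automated moderation.
# Each example pairs a hypothetical classifier's toxicity score with
# whether the post is actually toxic. All values are made up.

def flag_rates(scored_posts, threshold):
    """scored_posts: list of (toxicity_score, actually_toxic) pairs.
    Returns (false_positives, false_negatives) at the given threshold."""
    fp = sum(1 for s, toxic in scored_posts if s >= threshold and not toxic)
    fn = sum(1 for s, toxic in scored_posts if s < threshold and toxic)
    return fp, fn

examples = [
    (0.9, True), (0.7, False), (0.6, True),
    (0.4, False), (0.3, True), (0.1, False),
]

# A stricter (lower) threshold removes more benign posts (false
# positives); a lenient (higher) one lets more toxic posts through
# (false negatives). There is no setting with zero of both.
print(flag_rates(examples, 0.5))  # -> (1, 1)
print(flag_rates(examples, 0.8))  # -> (0, 2)
```

Whichever way a platform tunes that threshold, the essay's point stands: the resulting errors are not distributed evenly, and research suggests they fall disproportionately on marginalized users.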
You might be wondering if my dread was well-founded. What is Yik Yak like in this new digital moment? Honestly… so far I’m not impressed, and polling my colleagues at other universities suggests they’re seeing much the same. A lot of tasteless jokes with underpinnings of sexism, racism, and homophobia. Mean-spirited call-outs of individuals and groups. And unfortunately, not as much supportive content to even it out. Of course, it’s still early days, and perhaps early adopters like to push boundaries and see what they can get away with. On the bright side, I personally haven’t seen COVID-related misinformation on the app. And I hope the 20 people who upvoted my supportive message felt slightly better because of it.
Though I also hear that uptake of Yik Yak simply isn’t that high — many of the college students I’ve asked haven’t heard of it at all, and folks my age are surprised to hear it’s back. Others told me they downloaded it briefly and then didn’t go back.
So maybe the ways we expect to consume content and interact on social media really have changed since 2015. This brings with it challenges for platforms to meet current engagement norms (see, e.g., how Facebook and Instagram are constantly adding new features that look a lot like features from other platforms), but also new kinds of ethical issues and new forms of harm. And whatever comes next, it’s almost certainly not going to be strictly good or bad — it’s complicated.