How We Can Protect Users from Online Violence

Hello and welcome to the first of four Reskill Masterclasses we are holding this year, 2025. We are broadcasting live from the studios of HEC Paris today. We welcome Professor Kristine de Valck, professor of marketing and Dean of degree programs here at HEC. Professor de Valck will be sharing insights from the 25 years of research she has conducted in the field of online communities and social media. Professor de Valck, welcome.

Happy to be here.

You recently co-signed a research paper on online community brutalization involving adults of all ages. The study looks at the electronic dance music scene and community in the UK. The work is all the more relevant and important, I feel, because the UK has recently published its first-ever Code of Practice and guidance, so it is very topical. Professor de Valck, the floor is yours.

Thank you, Daniel, for that wonderful introduction. There are indeed two ways of thinking about how we should protect users from online violence. On the one hand, we have the UK, which has adopted the first-ever Online Safety Act, which recently came into force. It is a very comprehensive framework that requires tech platforms to protect users from illegal content and illegal practices, but also from online harassment. That is one way of thinking about it. The other way we see in the United States, where Mark Zuckerberg recently announced that he would drop the third-party fact-checkers because the community can moderate itself. So we have these two approaches: the more regulatory one in the United Kingdom, and the more laissez-faire approach in the United States.

We asked the audience how they think internet users should be protected from online violence, and here you see the poll results. Many of you say that we should have some form of professional moderation. Some of you chose selection at the gate, and the fewest of you believe we should rely on volunteer moderators. I will explain that each of those can play a role, but I agree with you: the research I have done shows that some form of professional moderation is good practice.

Together with my two co-authors, one at the University of Bath and Maria Lua at the Vienna University of Economics and Business, we set out to study how online violence develops in communities. If we know how and why it develops, we also know better how to protect users from it. As Daniel explained, we studied an electronic dance music community in the United Kingdom. Its members would meet in person at clubbing events and interact online during the week. We followed the community from its inception in 2001 until its decline in 2019. We had a very large data set of around 7 million posts from 20,000 people, which gave us a real opportunity to study how the violence this community became known for developed.

In our research, we found that there are five misconceptions about online violence. The first is that if you are a member of a recreational community and you are being harassed by other members, the simplest thing is to just step out and leave. What we found in our study is that this is not what members do, and there are two reasons for it. First, people have invested a lot of time and effort to build ties, social relations, and friendships with other community members.
They also gain recognition for their expertise in these communities, and if they leave, they lose that. The second reason is that these communities are important information sources. The community we studied was the biggest and most important one gathering all the information about electronic dance music: upcoming events, DJs, and playlists. When people leave such a community, they deprive themselves of that information source. The first lesson, then, is that people who are harassed online will not necessarily leave, and that puts an obligation on platform owners to protect users from online violence.

The second misconception is that outbursts of violence in an online community are isolated incidents. What we found in our research is that this is not the case. There are actually three forms of violence that together make violence endemic, meaning that it becomes part of the DNA of the community. Direct violence is what you observe: the outbursts, the verbal slander, the harassment aimed at hurting people's feelings, identities, and self-esteem. The second form is structural violence, where relational structures and power imbalances lead to exploitation and discrimination. The third is cultural violence, involving the norms, values, beliefs, and narratives that legitimize the other two forms. These three types interact and make violence part of the culture.

For example, we talk about sadistic entertainment. People join these communities for fun and excitement, but often the communities are a bit boring: after the weekend, there is nothing going on, and members find ways to entertain themselves through what we call "block games," similar to trolling. A member would find a vulnerable person to bait and then harass them for the entertainment of others. Other members would cheer them on, using popcorn icons and encouraging messages. This is direct violence. Structural violence was present because there was a deliberate absence of protection, including from moderators. When harassment occurred, some members would protest, but most would say, "We're enjoying this, let it play out." Moderators also did not intervene, telling us in interviews that even though it was morally wrong, it created engagement and excitement, so they ignored it. This is what we call hedonic Darwinism: survival of the fittest, exploiting others for pleasure. Cultural violence reinforced it through narratives like "It's just harmless play" or "What happens online isn't real." Victims who complained were told to "grow up" or "stop taking it seriously." These three forms of violence, direct, structural, and cultural, interacted and made violence part of the community's culture.

Another example is clan warfare. When communities start, people know each other well and share norms about how to use the community. But as more people join with different ideas, conflicts emerge. In our study, one group focused on caring and friendliness, while others valued exclusivity and tradition. Fifteen cliques formed, fighting for power and status. Some had control over moderation and used it to silence others. Culturally, they justified it by saying they were protecting the "true" community and denigrating others as unfit.

A third example is popular justice. Toward the end of the community's life cycle, moderators became less active. Without governance, members began to enforce the rules themselves. When someone behaved inappropriately, like posting irrelevant content repeatedly, the community reacted harshly.
Eleven members piled on to humiliate and abuse one member, continuing even after she left; she stopped participating for a year. The attackers justified their behavior as protecting the community. Again, direct, structural, and cultural violence interacted to create a brutalized environment.

To sum up: to protect users from online violence, you need to act when you observe direct violence. Ban, censor, or punish it, for example through AI algorithms that detect flame wars, tools that empower members to block others, or posting limits. But that is not enough. Direct violence will continue if structural and cultural violence are not addressed. Our research shows that people have three main needs when they join communities: entertainment, status, and justice. Platforms should address these needs constructively. For entertainment, offer non-violent forms of engagement; in one community, moderators replaced violent games with creative wordplay, turning trolling into fun without harm. For status, reward positive contributions with badges that reflect diverse achievements, and to avoid clan warfare, emphasize similarities over differences, as in a cooking community that encouraged members to share their late-night cravings instead of arguing about recipes. Finally, for justice, moderation must be fair and inclusive. You can have volunteer or distributed moderators, but they must follow clear guidelines and represent the diversity of the community. If you manage direct, structural, and cultural violence together, and meet people's needs for entertainment, status, and justice, you can protect users from online violence.

Well, hello and welcome back to this question and answer session with Professor Kristine de Valck. Thank you so much, Professor de Valck, for a very exciting and entertaining twenty-minute masterclass. It provoked a lot of questions from all over the world. Victoria from Sydney, Australia, notes that not all online communities are identical and asks which communities are more likely to develop the toxicity you described.

That's a very interesting and important question. Communities that are more hedonic and have a higher need for entertainment are more prone to sadistic entertainment and brutalization: think of platforms like Discord, Reddit, or Twitch, where people seek thrills. It happens less in support-oriented communities. Communities that emphasize similarity can also react defensively toward nonconformists. Finally, communities with fewer moderation resources, or consumer-owned ones, are more prone to popular justice because of the lack of oversight.

Paul from Manchester asks how your study generalizes to other social media platforms.

That's a good question. The community we studied was on a discussion forum between 2001 and 2019, so its format was different. Still, we believe our findings generalize. Social media are places where people seek excitement and escape, where performance blurs with reality, making people less careful toward others. Social media also operate in a media environment that normalizes emotional exploitation, which fuels sadistic entertainment. Moreover, because people from all walks of life mix there, they often disagree on norms, which leads to clan warfare. Algorithms and influencers can amplify polarizing narratives, making social media prone to conflict.

Alejandro from Santiago, Chile, asks why you do not name the platforms or the real names of the people you quote in your paper.

That's the norm in academic research.
We are not reporters or judges; we aim to understand. By keeping identities anonymous, we protect our informants and ensure continued access to real behavior. Many people would not speak openly if they feared exposure, especially about unethical actions.

Beatrice from Paris asks how the platform you studied has evolved since 2019.

The platform still exists, miraculously, with a few older members still active. However, by 2015 most users had migrated to newer platforms, so while it remains online, the main hub for electronic dance music in the UK has moved elsewhere.

Liliane Ang, a lawyer from Kinshasa, asks how you detect violence as it begins in communities.

That is something we also worked on. Because we spent so long in the community, we learned to recognize when things were about to escalate. Moderators play a key role in this, but AI can help too. We developed a vocabulary to detect episodes of violent behavior, and AI is becoming increasingly capable of identifying not only direct violence but also structural and cultural signs, picking up early signals of hostility. Personally, I lean more toward the UK approach: more protection is better than less.

Finally, a student from HEC Paris, Nara Ben, asks if you have any tips for identifying good moderators versus bad ones.

Moderators should understand the multiple groups within the community. They must follow clear, consistent guidelines, be impartial, and not moderate only for their friends. They should have a thick skin, explain their actions transparently, and represent the diversity of society.

Professor Kristine de Valck, thank you so much. Unfortunately, time has caught up with us, so we'll wrap up here. We invite our viewers to the next Reskill Masterclass, featuring Professor of Law Pablo Baquero, who will discuss the AI Act passed by the European Union in 2024. The masterclass is scheduled for April 10th at the same time, so please be sure to tune in. Goodbye.
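
As an illustration of the vocabulary-based detection of violent episodes that Professor de Valck mentions, the Python sketch below shows one minimal way such flagging could work. It is not the research team's actual tooling: the hostile-language lexicon, the Post structure, and the escalation threshold are illustrative assumptions.

    from collections import defaultdict
    from dataclasses import dataclass

    # Hypothetical lexicon of hostile terms; a real vocabulary would be
    # derived from the community's own posts, not hard-coded like this.
    HOSTILE_LEXICON = {"idiot", "pathetic", "worthless", "get out", "nobody wants you"}

    @dataclass
    class Post:
        thread_id: str
        author: str
        text: str

    def is_hostile(post: Post) -> bool:
        # Flag a post if it contains any lexicon term (simple substring match).
        text = post.text.lower()
        return any(term in text for term in HOSTILE_LEXICON)

    def detect_escalation(posts, threshold=3):
        # Return threads where hostile posts reach the threshold: a crude
        # proxy for the escalating episodes moderators should review.
        flagged = defaultdict(list)
        for post in posts:
            if is_hostile(post):
                flagged[post.thread_id].append(post)
        return {tid: hits for tid, hits in flagged.items() if len(hits) >= threshold}

    if __name__ == "__main__":
        sample = [
            Post("t1", "userA", "You are pathetic, nobody wants you here."),
            Post("t1", "userB", "Get out of this forum, idiot."),
            Post("t1", "userC", "Honestly, this thread is getting unpleasant."),
            Post("t1", "userD", "Worthless post, as usual."),
            Post("t2", "userE", "Anyone going to the event on Saturday?"),
        ]
        for thread_id, hits in detect_escalation(sample).items():
            print(f"Thread {thread_id}: {len(hits)} hostile posts flagged for review")

In this toy run, only the first thread crosses the threshold and is surfaced for human review; a real system would combine such lexicon signals with the contextual cues of structural and cultural violence described in the masterclass.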