Saturday 19 October: 4:00pm - 5:00pm

Faculty of Law, LG19, Sidgwick Site, 10 West Road, CB3 9DZ

This talk considers the spread of Hate Speech (HS) via social media and probes the consequences of moderating offensive messages with automated HS-detection systems. Currently, companies such as Facebook, Twitter, and Google generally respond to HS reactively. This practice raises concerns about tech companies enforcing self-imposed censorship regimes and delimiting their users' freedom of expression. Given the constant development and improvement of automated systems, we anticipate that these technologies will eventually be able to classify utterances reliably as either constituting HS or not. Such systems could remove human agency, contextual nuance, and ethical sensibility from the HS decision-making process entirely. In this talk we propose that HS should be handled in a way analogous to the quarantining of malicious computer software: if a post were classified as harmful, it would be temporarily quarantined, and its direct recipients would control how they accessed the detained material. In this way, users regain control over the content they wish to see, while companies are relieved of making decisions that might be deemed centralised censorship. A talk by Dr Stephanie Ullmann and Dr Marcus Tomalin from the 'Giving Voice to Digital Democracies' project at CRASSH.

Warning: Given the nature of the topic, this talk is likely to include examples of speech that some members of the audience may find disturbing.

Booking Information

Telephone number:
01223 766766

Booking required

Accessibility: Full access, Accessible toilet

Additional Information

Age: 16+, Talk, Arrive on time, Free