About Me

I'm an Assistant Professor of Sociology at Rutgers University—New Brunswick. I specialize in using computational methods and data from social media to study far-right activism, populism, and hate speech. I have recently been working on several papers exploring the methodological applications of generative artificial intelligence.

My website includes a short overview of my main areas of teaching and research and links to my CV, social media, and Github profile. If you have any questions, please get in touch using the contact details at the bottom of the page.

Research

Social media, populism, and far-right activism

Why have populist and far-right actors been so successful at building large online audiences? To what extent is online activism driven by offline events like protests, elections, and terrorist attacks versus processes endogenous to social media? Do extremists benefit from the affordances of platforms, particularly ranking and recommendation algorithms? To address these questions, I bring together theories from political sociology, social movements, and public opinion scholarship and novel computational methods to examine data collected from social media, newspapers, and other sources.

This work includes case studies focused on the United Kingdom and comparative studies of political parties across Europe. In a recent article in Mobilization, I argue that algorithmic feedback loops enable social movement actors to generate and sustain attention from online audiences. In an article in Political Communication, I show that populist parties in Europe have attracted more engagement on Facebook than other parties and that their online advantages appear to be growing. I am currently working on projects using simulations to understand the impacts of algorithmic ranking and recommendation on online activism.

Hate speech detection and content moderation

A second area of my research focuses on identifying and understanding hate speech on social media. My early work on the topic focused on the distinction between hate speech and other forms of offensive language, showing how the conflation of the two often resulted in false positives in machine learning classifiers. This research was covered in Wired Magazine, Tech Republic, and New Scientist. I have subsequently developed theoretical work on the dimensions of hateful and abusive language (paper) and examined racial bias in hate speech and abusive language detection systems, demonstrating how classifiers designed to detect hate speech are more likely to predict that tweets written in African-American English are hateful than similar tweets written in Standard American English (paper). This work was covered in Vox. A chapter on the sociology of hate speech detection, published in the Oxford Handbook on the Sociology of Machine Learning, provides an overview of this research and identifies several directions for further inquiry.

I am conducting experimental research to understand how social context influences judgments about whether certain content is hateful or abusive, supported by a Foundational Integrity Research award from Meta. Relatedly, I contributed to a recent paper combining large language models and experiments to understand how social contexts inform perceptions of toxicity and offensiveness, published in the 2023 Proceedings of the Association for Computational Linguistics. I am interested in informing policy debates on online hate speech and content moderation and have spoken about my research at a policy dialogue organized by the Organization of American States and a working group on content moderation at the European Commission.

Computational methodology and artificial intelligence

In addition to my substantive interests, I also study how computational methods can be applied more generally in sociological research. Most recently, I have published several articles examining the uses of large language models (LLMs) and generative artificial intelligence for sociological research. My 2024 article in Socius showcases the methodological possibilities created by generative AI. In a forthcoming article in Sociological Methods & Research, Youngjin Chae and I evaluate the use of LLMs for text classification and provide a series of recommendations for best practices. Along with Daniel Karell at Yale, I organized the first workshop on Generative AI and Sociology, held at Yale University in April 2024, and guest edited a special issue of Sociological Methods & Research on the subject. Our editors' introduction and the ten articles comprising the August 2025 special issue are available online.

In earlier work, I explored the application of machine learning to study social processes. As part of the Fragile Families Challenge, I examined whether neural networks can accurately predict social outcomes and the extent to which these black box models can be amenable to sociological explanations. I found that these methods do not substantially outperform traditional approaches like linear regression but may allow us to use large amounts of data to inductively identify important variables. A paper based on my analysis was published in Socius, and our paper describing the results of the Fragile Families Challenge mass collaboration was published in PNAS.

Beyond my academic work, I have experience using computational methods in industry settings. In the summer of 2016 I was an Eric and Wendy Schmidt Data Science for Social Good Fellow at the University of Chicago. I worked on a project to develop an early-warning system to identify police misconduct and helped develop a new model to predict risks at the dispatch level. You can read about our work here, along with media coverage in The Chicago Tribune, NPR, Mother Jones, the Economist, and Forbes. I spent the summer of 2017 in the Data Science Research & Development group at Civis Analytics, a data science consulting and software company based in Chicago. I used natural language processing and machine learning techniques to build a tool to monitor political discussions on Twitter. In 2018, I was a Core Data Science intern at Facebook in Menlo Park. I studied misinformation sharing on the platform and helped evaluate and deploy a new tool related to their ongoing election integrity efforts.

Teaching

I teach undergraduate classes on Political Sociology, Sociology of Culture, and Data Science. At the graduate level, I teach Computational Sociology and a second-semester Statistics course. Code and slides for my graduate methods classes are available on Github.

Contact

Email me at thomas dot davidson at rutgers dot edu.