Maite is a member of the Steering Committee of the Digital Democracies Institute with good reason – much of her work and the work of the Discourse Processing Lab aligns with the research that we are doing, so her contributions are crucial to progressing our work. She presented on three key aspects of her work a couple of weeks ago, and we wanted to share the highlights.
1. Managing harmful content online
This project is building a machine learning text classification model to help moderate comments on news articles, though it could also be applied to any kind of comment, for example on social media platforms. Maite stressed that this is only suggested as part of the solution to online abuse, but it can hopefully play a part in making comment sections safe again.
In moderating comments on a news article, one of the key elements is determining what counts as constructive, because a constructive commenter is one who is attempting to enter into dialogue, not to abuse. For context, a lot of news media have stopped allowing comments at all because of the presence of hate speech and abusive language. One disturbing example comes from CBC Online in Canada, which had to close comments on any story to do with Indigenous people specifically, because those stories generated too many comments that violated its hate speech policy. This link also takes you to a huge project from The Guardian, which looked at all the comments on its site from 2006 to 2016 and discovered that of its 10 most abused writers, 8 were women and the 2 men were Black.
Initial research on comments on US election news articles found that constructive comments often include opinion, personal experience, and other linguistic markers indicating that a person is trying to engage in debate and present some kind of informed opinion. But how do you train a machine to recognize these kinds of comments? Using a much larger data set of 633,000 comments posted on The Globe & Mail’s site from 2012 to 2016, the Lab implemented computational methods to organize topics, extract sentiment, and classify comments. Some of the features identified are shown in this table:
When the machine learning system was tested against what humans identified, it was correct about 81-85% of the time in identifying constructiveness. This is pretty great, but not good enough for real-life applications.
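To make the general approach concrete, here is a minimal sketch of training a comment classifier on human-labelled examples and then measuring its agreement with human labels on held-out comments. Everything below – the data, the TF-IDF features, the logistic regression model – is an illustrative assumption, not the Lab’s actual system, which uses much richer linguistic features.

```python
# A minimal sketch (not the Lab's actual pipeline): train a classifier on
# human-labelled comments, then check its predictions against human labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical annotated training data: 1 = constructive, 0 = non-constructive.
train_comments = [
    "I disagree, but here is a source that supports the other view.",
    "You people are all idiots.",
    "In my experience as a teacher, this policy had the opposite effect.",
    "lol whatever",
]
train_labels = [1, 0, 1, 0]

# Bag-of-words features weighted by TF-IDF; the real system draws on richer
# linguistic cues (argumentation markers, sentiment, and so on).
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(
    vectorizer.fit_transform(train_comments), train_labels
)

# Held-out comments with their human labels, to measure agreement.
test_comments = ["Good point, though the data suggests otherwise.", "Shut up."]
test_labels = [1, 0]
predictions = model.predict(vectorizer.transform(test_comments))
print(f"Agreement with human labels: {accuracy_score(test_labels, predictions):.0%}")
```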
To improve on this, the next round of testing involved much more detailed annotation to determine what would train the NLP model: they asked people to analyze 12,000 comments from the corpus, answering much more detailed questions about each comment and looking for sarcasm and constructive characteristics. Key indicators emerged; for example, a comment that engaged in dialogue was more likely to be constructive, while one that included something irrelevant was more likely to be non-constructive. They also found that the longer the comment, the more likely it was to be constructive, but length is an attribute easily gamed, so it is not useful as a key indicator. You can read the forthcoming paper here. [Kolhatkar, V., N. Thain, J. Sorensen, L. Dixon and M. Taboada (to appear) Classifying constructive comments. First Monday.]
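As a rough illustration of what such annotation-derived features might look like in code, here is a toy sketch that extracts the two cues mentioned above: dialogue engagement and comment length. The marker list is invented for illustration and is not the Lab’s actual feature set.

```python
# A toy sketch of annotation-inspired features: whether a comment engages in
# dialogue, and its length (which, as noted above, is easily gamed and so
# shouldn't be relied on alone). The marker list is an illustrative guess.
DIALOGUE_MARKERS = {"agree", "disagree", "you said", "good point", "however", "@"}

def comment_features(text: str) -> dict:
    lowered = text.lower()
    return {
        "has_dialogue_marker": any(m in lowered for m in DIALOGUE_MARKERS),
        "length_in_words": len(text.split()),
    }

print(comment_features("Good point, but I disagree with your second claim."))
# {'has_dialogue_marker': True, 'length_in_words': 9}
```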
One really cool thing about this research is that you can go and play with the machine! Head to this link, add a comment from a news article or social media, and see if it is classified correctly. You are then helping to train the model!
2. Fake News
In a similar project, Maite’s Lab is also looking at using NLP for automatic detection of fake news; again, this is suggested as a contribution to the effort, not a final solution.
The hypothesis here is that the language of fake news differs from the language used in fact-based news. You can find the paper written on this here; essentially, the team gathered articles that fact-checkers had labelled as true or false and looked for distinguishing factors, as shown in the table below:
The LIWC system (Linguistic Inquiry and Word Count) looks at certain words and classifies them. Key findings included the prevalence of negative words, the use of ‘they,’ and the lack of apostrophes in fake news. This last feature likely stems from imitators’ lack of familiarity with fact-based news: in imitating it, they may adopt a more formal style, which is actually NOT how journalists write.
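To give a flavour of this kind of analysis, here is a rough LIWC-inspired sketch that counts the three cues just mentioned. The word list is a tiny stand-in for illustration only; LIWC itself uses large, validated dictionaries.

```python
# A rough, LIWC-inspired sketch: count a few cues the study found to differ
# between fake and fact-based news. The word list is a tiny stand-in.
import re

NEGATIVE_WORDS = {"bad", "terrible", "corrupt", "disaster", "lie", "hate"}

def style_cues(text: str) -> dict:
    tokens = re.findall(r"[\w']+", text.lower())
    return {
        "negative_words": sum(t in NEGATIVE_WORDS for t in tokens),
        "they_pronouns": tokens.count("they"),
        "apostrophes": text.count("'"),  # fewer tends to signal fake news
    }

print(style_cues("They don't want you to know about this terrible disaster!"))
# {'negative_words': 2, 'they_pronouns': 1, 'apostrophes': 1}
```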
You can ALSO contribute to this project by visiting this site and fact-checking your story!
3. Gender Gap Tracker
The last part of the research that Maite presented was the work the Lab is doing on the gender distribution of sources in news articles – who is mentioned and who is quoted? She did caveat that this research necessarily takes a binary approach, inferring gender from names and pronouns. They collect daily data from 7 news outlets and use machine learning programs to classify sources. The breakdown shows that around 70-75% of the sources quoted are men, and this has remained broadly the same for the two years that the project has been running. Expect further work on this project as it develops, but you can find more information on the project here.
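As a very simplified illustration of the idea, the sketch below tallies quoted sources by gender inferred from first names – the binary approach Maite mentioned. The lookup table and quote data are invented; the real Tracker applies far more robust NLP to daily articles from the seven outlets.

```python
# A simplified sketch of the Gender Gap Tracker idea: tally quoted sources
# by gender inferred from first names. Lookup table and data are invented.
from collections import Counter

# Hypothetical name-to-gender lookup (the necessarily binary approach).
NAME_GENDER = {"john": "man", "sarah": "woman", "david": "man", "maria": "woman"}

quoted_sources = ["John Smith", "David Lee", "Sarah Jones", "John Brown"]

counts = Counter(
    NAME_GENDER.get(name.split()[0].lower(), "unknown")
    for name in quoted_sources
)
total = sum(counts.values())
for gender, n in counts.items():
    print(f"{gender}: {n / total:.0%} of quoted sources")
```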
I hope that this short summary of Maite’s fascinating research does justice to just how crucial this work, and work like it, is to maintaining the Internet as a deliberative space that invites dialogue and democratic debate between interested people. The protection of these spaces is something both Maite’s Lab and the Digital Democracies Institute are working towards, and we are really looking forward to seeing how collaboration between the groups continues to be productive.