
Propaganda in Context: Eastern European Nationalist(ic) Symbols

The goal of this project was to train an artificial neural network to recognize specific nationalist(ic) symbols from Eastern Europe. Initial training annotations were funded by the German BMBF at the University of Passau, Germany. Further annotations were funded by the DI4DH initiative at the University of Innsbruck, Austria. Training was conducted using the infrastructure of the Research Center High Performance Computing in Innsbruck.

Our project can be considered a contribution to the emerging field of “distant viewing”, which uses quantitative methods to assess a corpus consisting of a large number of visual media. Currently, deep learning methods play a minor role in distant viewing, as most of the projects use pretrained networks. This is understandable, as training is not trivial. However, using pretrained networks significantly reduces the amount of possible research questions. Moreover, a better understanding of the training process allows us to contribute to the field of “critical machine learning”; more precisely, we try to point out some of the benefits and pitfalls of training an artificial neural network for a humanities research project.

We selected YouTube as an example because it has become the most important online media outlet in Russia. In 2020, 82% of Russians aged 14-64 used it daily, making it the most successful form of social media in the country. It is therefore of vital importance for Slavic cultural and media studies to develop analytic tools for this platform.

Three test cases were used in our project: Ukrainian nationalist Stepan Bandera (1909-1959), who was instrumentalized by both sides of the Ukraine conflict starting in 2013; prominent Russian opposition leader Aleksei Naval’nyi; and Belarusian president Aliaksandr Lukashenka, who was re-elected in fall 2020. In the cases of Naval’nyi and Lukashenka, YouTube clips with several million views helped bring protests to the streets. Demonstrations naturally rely heavily on visual symbols such as flags and thus allow us to test our theories. Protesters in Belarus, for example, do not use the official flag and coat of arms, which stem from Soviet times, but rather those of the first Belarusian republic founded in 1918.

A sample video showing protests in Minsk, automatically annotated by our artificial neural network

For these test cases, deep learning was used to train an artificial neural network (ResNet-101) to automatically detect 45 predefined nationalist(ic) symbols and 40 politicians from Eastern Europe. This network can now be applied to a wide variety of research questions. The trained networks, together with our scripts and performance metrics, are available on GitHub.