
Propaganda in Context: Eastern European Nationalist(ic) Symbols

The goal of this project was to train an artificial neural network to recognize specific nationalist(ic) symbols from Eastern Europe. Initial training annotations were funded by the German BMBF at the University of Passau, Germany. Further annotations were funded by the DI4DH initiative at the University of Innsbruck, Austria. Training was conducted using the infrastructure of the Research Center High Performance Computing in Innsbruck.

Our project can be considered a contribution to the emerging field of “distant viewing”, which uses quantitative methods to assess large corpora of visual media. Currently, deep learning methods play a minor role in distant viewing, as most projects use pretrained networks. This is understandable, as training is not trivial. However, using pretrained networks significantly reduces the range of possible research questions. Moreover, a better understanding of the training process allows us to contribute to the field of “critical machine learning”; more precisely, we try to point out some of the benefits and pitfalls of training an artificial neural network for a humanities research project.

We selected YouTube as an example because it has become the most important online media outlet in Russia. In 2020, 82% of Russians aged 14–64 used it daily, making it the most successful social media platform in the country. It is therefore of vital importance for Slavic cultural and media studies to develop analytic tools for this platform.

Three test cases were used in our project: the Ukrainian nationalist Stepan Bandera (1909–1959), who was instrumentalized by both sides of the Ukraine conflict starting in 2013; the prominent Russian opposition leader Aleksei Naval’nyi; and the Belarusian president Aliaksandr Lukashenka, who was re-elected in fall 2020. In the cases of Naval’nyi and Lukashenka, YouTube clips with several million views helped to bring the protests to the streets. Demonstrations naturally rely heavily on visual symbols such as flags, and thus allow us to test our theories. Protesters in Belarus, for example, do not use the official flag and coat of arms, which stem from Soviet times, but rather those of the first Belarusian republic founded in 1918.

A sample video showing protests in Minsk, automatically annotated by our artificial neural network

For these test cases, deep learning was used to train an artificial neural network (ResNet101) to automatically detect 45 predefined nationalist(ic) symbols and 40 politicians from Eastern Europe. This network can now be applied to a wide variety of research questions. The trained networks, together with our scripts and performance metrics, are available on GitHub.