
Q&A with Asta Zelenkauskaite, Author of Creating Chaos Online

By: Briana Johnson | Date: October 10, 2022 | Tags: Author Post, Q&A

This guest author post is a Q&A with Asta Zelenkauskaite, author of Creating Chaos Online: Disinformation and Subverted Post-Publics, from the University of Michigan Press. The book is available in hardcover, paperback, and open access.


What surprised you the most while conducting your research for the book?  

I am delighted to share some insights from Creating Chaos Online. I started the data analysis for this book with two premises: either there would be a systematic pattern of repeated Russian trolling justification across data sources, indicating that something was going on, or there would not. The book's finding is that the same Russian trolling justification was indeed repeated across user comments online. Uncovering this systematic pattern of Russian trolling justification tropes, repeated across comments under news stories that covered Russian trolling in both US and Lithuanian contexts, was an eye-opening moment for me, not only as a researcher but also as a person who contributes to online spaces every day, as do most of my readers. Let me emphasize that these justification arguments were found in the user comments of news portals across the political spectrum; yet the rate of justification in right-leaning sources (40%) was more than double that found in the analyzed comments on left-leaning news stories (18%).

As a result, my argument is that instilling doubt and uncertainty is a practice that can be dispatched systematically in today's online world, through online means that amplify such uncertainty. Thus, propaganda and disinformation, typically considered something of the past, detached from our times and belonging to the generation that experienced the Cold War, remain relevant today. Disinformation can be deployed through digital media, especially user-generated content, and poses a threat equal to that of misinformation.

This urges us to stay vigilant about how we make sense of information flows: to understand the intended purpose of a message, knowing that information can be amplified and modeled for us by bots and other automated tools. Finally, automation and propagation online can be weaponized by foreign operatives, as in the allegations of Russian trolling and interference in the US.


What inspired you to examine the characteristics of Russian trolling?

Over the past decade, I have been analyzing emergent practices in online spaces. Some of my recent work examines how Facebook live video streaming is embedded in mass media, or how other user-generated content, such as tweets, is included in news stories. However, the 2016 US presidential election and the allegations of a Russian trolling presence in user-generated content, which first escalated in European media, led me to look at Russian trolling as a systematic phenomenon across news portals. Thus, my focus on emergent practices shifted to what's known as dark participation online.

The project began in 2016, when I started analyzing comments on Lithuanian news portals. With my research assistant at the time, doctoral student Brandon Niezgoda, I came across numerous comments under stories, including ones unrelated to political content, where users were calling out Russian trolls for the comments they shared. There, users were fighting to debunk claims aimed at destabilizing democratic premises. Typically, these were comments on controversial topics challenging the government's ability to handle crises: for example, comments on the Syrian immigration crisis in Europe evoking racist and nationalist undertones, comments challenging NATO's legitimacy, and comments ridiculing political parties that were historically critical of Russia, to name a few. While debunking these claims, users openly called out Russian trolls, highlighting that some users online were not authentic but rather hired by Russian special operations to sway people's opinions in neighboring countries such as Lithuania. Fast-forward to 2018, and I saw the same phenomenon of Russian trolling allegations emerging in the US. My focus shifted to analyzing this phenomenon systematically, with the aim of uncovering the modus operandi of Russian trolling as projected in news portal comments.


What do you think gave way to the rise in Russian trolling? Is it easy to spot?

There are multiple reasons to argue that Russian trolling is rather opaque. Russian trolling allegations started with the revelation of the Internet Research Agency in Russia, uncovered in US journalist Adrian Chen's 2015 story, which showed how the agency's employees are paid by the Russian government to write comments in the online spaces of other countries and to spread specific agendas, sowing discord around causes relevant to foreign politics. When allegations of interference in the 2016 US presidential election arose, the evidence-gathering process took a while, leaving the question open to prolonged uncertainty.

Thus, Russian trolling has certain traits of invisibility, and my book set out to uncover that masking. In online spaces, digital traces and the text itself serve as entry points to such unmasking. Yet unmasking is challenging: in addition to identifying textual traces, it requires creative approaches to find indicators that allow the unmasking to take place, be they sociotechnical traces such as Internet Protocol addresses, which indicate where a message was written geographically, or the rhetorical nature of the message itself: how is the argument presented, what does it advocate, and who does it advocate for? To uncover rhetorical patterns of justification, I analyzed Russian trolling from the ground up, across platforms and sources.

Conceptually, to create such an uncovering mechanism, I had to employ an interdisciplinary approach: drawing on theoretical assumptions about how masking can be performed or staged online, and connecting them to propaganda and disinformation studies, from historical to computational propaganda, to tap into propaganda techniques. I also analyzed how specific rhetorical justification mechanisms affect journalism today, how information infrastructures have become new battlefields of influence, and how various democratic and non-democratic countries prepare their legislation and infrastructure, finally drawing on the psychology of denial as an explanatory mechanism for why Russian trolling justification was found online.


Can you summarize the idea of post-publics, and the role it plays in the book?

Post-publics is a concept that departs from the notion of publics; it deals in part with online spaces shaped by post-truth and with the consequences of misinformation and disinformation. Post-publics represent a state of confusion induced when a contested but impactful, high-stakes issue is being discussed and cannot be settled by unequivocal evidence. I argue that Russian trolling exploits the very notion of public debate. Typical trolling techniques include informational digression intended to paralyze rather than clarify. Such digression is similar to what's known as information flooding, a technique of feeding the reader an overabundance of information to sow confusion rather than clarity; it has been documented in authoritarian regimes, as argued by scholars like M. E. Roberts in her 2018 book Censored: Distraction and Diversion Inside China's Great Firewall. Debate is transformed into a simulated discussion fueled by futile arguments that can lead to disengagement rather than foster participation and democratic deliberation.

Post-publics as a lens is critical in the book for making sense of the state of the public sphere in light of the effects of post-truth. In the book, I discuss the consequences of post-truth for democratic settings in the context of disinformation. I argue that, through its affect and capacity to proliferate, disinformation resembles rumors, which are particularly destabilizing and circulate most effectively in times of uncertainty, when public perception is most vulnerable to manipulation. Chaos then spawns endless speculation, proving that the affective power of rumors exceeds that of merely factual information.


What do you hope readers will take away from reading Creating Chaos Online?

The goal of this book is not only to explain how justification works rhetorically, but also to provide readers with a toolkit for making sense of contradictory information and finding clarity in times of uncertainty, when contested issues coexist.

Since this is a new norm in online spaces, readers will be equipped to combat the exploitation of the democratic premises of the public sphere, where debate is subverted to stir discord and futile arguments, such as justification, rather than to provide clarity. While the allegations of Russian trolling interference in US elections may have seemed an isolated case, targeted, automated, and amplified influence, mediated through artificial intelligence and non-human actors, is a new reality and a threat to national security.

The heightened risk of such foreign influence is particularly relevant in times when high-stakes situations and potential crises emerge, and the effects of information warfare can be especially destabilizing, since vulnerabilities such as sensitive social issues can be capitalized on to stir further discord. Thus, it is critical in such times of crisis that citizens are equipped to understand the justification arguments that can be artificially pushed on them. Examples of such sensitive issues are not limited to Russian trolling justification, although that is the primary focus of the book. The same construction of justification arguments can be applied to debates over the safety of COVID-19 vaccination, or to side-taking in the Russian military invasion that led to the war in Ukraine in February 2022. Ultimately, the book is about the resilience readers should build so as not to succumb to the paralyzing state of chaos or paranoia that uncertainty typically brings.