Keynote 1: Towards Theoretically and Empirically Grounded Design of Behavior Change Technologies
Behavior Change Technologies (BCTs) have emerged in the field of Human-Computer Interaction (HCI) as a promising class of tools for addressing key societal problems. Human behavior is a key contributing factor to most of them, from global warming to the rising cost of healthcare worldwide, and to emerging concerns of the technological age, such as online privacy and the propagation of misinformation through social media. In the domain of health, non-communicable diseases, including heart disease, stroke, cancer, and diabetes, account for nearly 70% of deaths. Their causes are primarily behavioral, including smoking, physical inactivity, poor diet, and alcohol use. Physical inactivity, in particular, is one of the leading risk factors for death worldwide. BCTs, such as physical activity trackers, can become instrumental in the transition to a new healthcare landscape that stresses prevention and puts patients in control of their own health. Yet, recent studies have questioned the effectiveness of physical activity trackers and have shown high attrition rates among their users. While technological interventions have been shown to be more effective when grounded in theory, studies have found that the majority of activity trackers lack theoretical content. With an abundance of theories and behavior change techniques available, designers and researchers struggle to decide with confidence which theories and techniques to use. In this talk we will argue for the need for theoretically and empirically grounded design in the context of BCTs. We will present a number of recent projects in which we have attempted to make behavioral theory accessible to design teams, as well as empirical studies of the adoption of, engagement with, and impact of physical activity trackers on individuals’ behaviors.
Keynote 2: Detecting the “Fake News” Before It Was Even Written
Fake news generators use persuasion and manipulation, and they exploit the cognitive biases of the target audience. With fact-checking approaches, by the time a claim is finally verified, it may already have reached millions of users, and the harm caused can hardly be undone. An arguably more promising direction is to focus on analyzing entire news outlets, which can be done in advance; then we can fact-check a news article before it is even written, by checking how trustworthy the outlet that published it is (which is what journalists actually do). We will show how we do this in the Tanbih news aggregator (http://www.tanbih.org/), which aims to limit the impact of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking; these are arguably the best way to address disinformation in the long run and to resist persuasion and biases. In particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, the leading political ideology, the general frame of reporting, the stance with respect to various claims and topics, as well as audience reach and audience bias in social media.
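For illustration only, the sketch below shows one way such an outlet-level profile could be represented as a simple data structure, with a toy trust rule on top; the field names and thresholds are assumptions made here for clarity and do not reflect the actual Tanbih schema or models.

from dataclasses import dataclass

@dataclass
class MediaProfile:
    # Hypothetical fields mirroring the profile dimensions listed above
    outlet: str                # e.g., the news site's domain
    factuality: float          # general factuality of reporting, 0 (low) to 1 (high)
    propaganda: float          # degree of propagandistic content, 0 to 1
    hyper_partisanship: float  # 0 (balanced) to 1 (hyper-partisan)
    ideology: str              # leading political ideology, e.g., "left", "center", "right"
    framing: str               # general frame of reporting, e.g., "economic", "moral"
    audience_reach: int        # e.g., number of social media followers
    audience_bias: float       # audience leaning, -1 (left) to +1 (right)

def looks_trustworthy(p: MediaProfile) -> bool:
    # Toy decision rule with arbitrary placeholder thresholds, not the Tanbih method
    return p.factuality >= 0.7 and p.propaganda <= 0.3

profile = MediaProfile("example-news.com", 0.85, 0.10, 0.2, "center", "economic", 120000, -0.1)
print(looks_trustworthy(profile))  # True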
Another important observation is that the term “fake news” misleads people to focus exclusively on factuality and to ignore the other half of the problem: the potential malicious intent. Thus, we detect the use of specific propaganda techniques in text, such as appeals to emotions, fear, or prejudices, and various logical fallacies. We will show how we do this in the Prta system (https://www.tanbih.org/prta), another media literacy tool, which received the Best Demo Award (Honorable Mention) at ACL-2020; an associated shared task received the Best Task Award (Honorable Mention) at SemEval-2020.
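As a purely illustrative toy example (and not the trained models behind Prta), propaganda technique detection can be viewed as multi-label tagging of text fragments; the technique names and cue words below are placeholder assumptions.

TECHNIQUE_CUES = {
    # Hypothetical cue words for a small subset of techniques, for illustration only
    "appeal_to_fear": ["threat", "danger", "catastrophe"],
    "loaded_language": ["disgraceful", "outrageous", "evil"],
    "exaggeration": ["worst ever", "everyone knows", "always"],
}

def detect_techniques(sentence: str) -> list:
    # Return the techniques whose cue words occur in the sentence;
    # a real system would use a trained classifier instead of keyword matching.
    text = sentence.lower()
    return [name for name, cues in TECHNIQUE_CUES.items()
            if any(cue in text for cue in cues)]

print(detect_techniques("This disgraceful policy is a danger to us all."))
# ['appeal_to_fear', 'loaded_language']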