Although it is possible to correct misinformation, as we have shown in the COVID-19 Vaccine Communication Handbook, the Debunking Handbook 2020, and the Conspiracy Theory Handbook, this may not be sufficient to stop disinformation from gathering speed, particularly if it is disseminated by highly motivated people for political reasons. (Our page on the politics of COVID-19 disinformation explains this in more detail.)
We must therefore look for additional ways in which we can “flatten the curve of the infodemic, so that bad information can’t spread as far and as fast” (Ball and Maxmen, 2020). Two of those options are nudges and boosts.
Nudges use the context in which a decision takes place to systematically influence that decision. For example, employees at Google have access to free food and drink at work. Healthy options, such as sparkling water, are more visible and easier to access than less healthy options, such as sugary drinks. With this and a number of other nudges, Google successfully shifted employees' consumption towards healthier options.
Nudges of this kind have been applied successfully in a wide variety of settings.
Boosts resemble nudges in that they have the same goal of systematically affecting decisions. However, unlike nudges, boosts seek to “improve” decisions by reminding people of existing knowledge or beliefs and encouraging them to use that knowledge in a specific context. For example, a boost may be as simple as telling people that they should immediately call an ambulance when they suspect someone is having a heart attack; this is not obvious to many people (Grüne-Yanoff & Hertwig, 2016)!
Both approaches respect behavioural autonomy. They neither incentivize nor limit choice options: people remain free to choose. With respect to fighting misinformation specifically, the choice environment can be designed to facilitate the proper identification of, and interaction with, information, without limiting access to information or the right to free speech and a free media. Such nudges can increase resilience to misinformation and empower consumers to spot and counter it. One example is Facebook's recent introduction of semi-opaque masks that are overlaid onto material identified as false: these masks inhibit, but do not preclude, access to the questionable material. According to Facebook, 95% of people do not view the questionable content if they are warned in this way, a substantial reduction in exposure to misinformation compared to when no warning is presented.
Recent research provides evidence that people may fail to even consider whether news content is accurate before they share it. In consequence, they may share attention-grabbing but false information (e.g., Pennycook et al., 2020a).
One approach, therefore, is to subtly prompt or nudge people to consider accuracy before sharing content on social media, thereby increasing the salience of truth and changing the way that people interact with the content they encounter. This approach has been shown to increase the quality of news content that people intend to share, both in the context of COVID-19 misinformation (Pennycook et al., 2020b) and political misinformation (Pennycook et al., 2020a).
A similar approach asks people to commit to a Pro-Truth Pledge: to verify information before sharing it and to correct misinformation when they encounter it. Signatories pledge their "Earnest Efforts To: Share truth, Honor truth, and Encourage truth". People who took the pledge were found to post and share more accurate news stories on Facebook (Tsipursky et al., 2018).
Would you like to find out more about nudging? We created a search query specifically for this page that links you to other interesting resources, such as Twitter threads, blog posts, websites, and videos. Check out the search query here.
Would you like to know more about how we generated the search queries and how our underlying knowledge base works? Click here to learn more.
Page contributors: Stephan Lewandowsky, Dawn Holford