Interviewing is a fundamental methodology for both quantitative and qualitative social research and evaluation. Due to the Covid-19 restrictions it was not possible to conduct face-to-face interviews, which is why I conducted an online survey instead. The survey results can be found at the following link:
To summarize the outcome of the survey: almost all of the 22 participants have encountered misinformation at some point, and only a few are unsure about it. When it comes to labeled content, however, opinions differ. Some participants find these labels helpful, while others do not trust them, saying the labels are not transparent enough or doubting the correctness of the labelling itself.
However, when asked what additional information should be presented, some answered simply: “Author and Date”. Only a few said that sources and other indicators, such as the domain ending or an SSL certificate, should be included as well. When asked about the influence of design on trustworthiness, 14 out of 18 answers stated that design does make a difference. Surprisingly, one person said that information designed too boldly or strikingly seems less trustworthy. Another answer was that design does not matter, because false information can be well designed too.
To conclude: many people have encountered false or misleading information, and some of them do not trust online information at all. Even if additional information is given about a posting, or an independent fact-checking site is linked, considerable mistrust remains because the process is not transparent enough.
State of the art refers to the highest level of general development at a particular time. In this case we will look at the design methods and interaction processes currently used when dealing with misleading information. Some of the examples have already been shown in prior postings in a different context.
Since there are many social platforms, only the most popular ones will be analyzed. This statistic clearly shows which platforms have the most active users.
Clearly the market leader is Facebook, which is why it is the first social media platform in this analysis.
Facebook
Facebook is not only the market leader, it is also the biggest platform for misleading content in all its forms. As most people already know, Facebook played a part in a major scandal around the 2016 presidential election, but this was not at all unpredictable. Studies from 2012 had already shown that Facebook was a powerful, non-neutral force in electoral politics. Back then the “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.
The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome.
With this knowledge it is even more important to be very careful about design changes on this platform, because the previously described effect is not limited to Facebook. It is happening across all social platforms.
This is an example of content labeled as false information. Facebook places a dark or greyish overlay or a blur effect over the picture, so that you can still vaguely see it, and puts information on top of it. This information includes an icon, a clear bold headline with a subtitle, and a link that leads you to an independent fact-checker site or another article.
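The anatomy of such a label can be pictured as a small data model. The following is a minimal, hypothetical sketch in Python; the class name, fields, and example values are my assumptions for illustration, not Facebook’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FalseInfoLabel:
    """One possible data model for the overlay described above (hypothetical names)."""
    icon: str       # warning symbol shown on the overlay
    headline: str   # clear, bold headline, e.g. "False Information"
    subtitle: str   # short explanation below the headline
    link_text: str  # label of the link, e.g. "See Why"
    link_url: str   # leads to an independent fact-checker site or article

def render_overlay(label: FalseInfoLabel) -> str:
    """Return a plain-text approximation of what the blurred overlay shows."""
    return f"[{label.icon}] {label.headline}\n{label.subtitle}\n{label.link_text} -> {label.link_url}"

example = FalseInfoLabel(
    icon="!",
    headline="False Information",
    subtitle="Checked by independent fact-checkers",
    link_text="See Why",
    link_url="https://example.org/fact-check",
)
print(render_overlay(example))
```

Grouping the pieces into one structure like this is what makes the label look consistent wherever it appears on the platform.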
The research questionnaire should help understand how people interact with this kind of content, whether they really click the link, and how it makes them feel. Furthermore, it should show whether and how the design of information affects its perceived trustworthiness, and which key factors need to be in place to make information believable.
Instagram
Instagram uses the same labelling technique as Facebook. This is probably because Instagram belongs to Facebook, which is also why I am not going to analyze it any further.
Youtube
What about Youtube? Youtube is not only one of the largest social media platforms, it is also often used as a search engine. Most people love it because it offers so much, like tutorials, short explanations, and even some movies for free. However, this might not be as great as it once was because of all the commercials. Just remember the “good old days”.
Despite being a big player, Youtube is often overlooked when it comes to misleading information. This statistic, however, shows that a great share of the potentially false information circulating about the coronavirus actually has its roots on Youtube.
Not only does this mean that Youtube videos are by far the most frequent source of potentially false information reported to correctiv.org by readers. It also means that most readers share exactly this false information through other apps like WhatsApp.
So how does Youtube label their content?
Today, Jan 4th 2021, dailymail.co.uk wrote that YouTube has started displaying fact-check information panels to users in the UK, in an attempt to stop the spread of misinformation on the video platform.
UK users will start seeing the independent, fact-checked information from third-party organisations on the Google-owned platform from Thursday.
Panels will appear above search results, offering ‘more context’ and links to reputable sources of information relating to whatever users are searching for.
Conclusion
Most platforms use more or less the same technique or process for handling misleading or false content. Even the design looks similar across these platforms. The strategy most of them use looks like this:
An empirical user research questionnaire about how we interact with social media platforms and false or misleading content, how design influences us, and whether labeling content is helpful.
In 2018 the European Union conducted a survey on “The digital transformation of news media and the rise of disinformation and fake news”. In this report they stated that misinformation or fake news is nothing new; the first known case of fake news goes back to the 16th century. While some may use this as an argument, it is clear by now that social media and the spread of fake news and misinformation have become a problem. First of all we need to define some terms:
Falsity
Falsity refers to inconsistency in claimed facts (Spears, 2015), for instance, when a car manufacturer claims that the car’s gas mileage is higher than it actually is.
Misleading
Some content creates an impression about a product, a story or a news item that is untrue (fake), or about features, information or facts that do not exist.
Misleading and false content affects the choices of users and their opinions.
The Questionnaire
In this phase of the research, personal qualitative interviews based on a standardized questionnaire will be conducted with a few participants and analyzed. The main interest of this survey is how people interact with fake or misleading information and how we can change the appearance or interaction process through design. Therefore the following questions will be asked:
Demographic data like gender, age, education level, employment and family status.
Where do you usually watch/read news or get information on certain topics?
Which social media platforms do you use?
How often do you visit these platforms daily/weekly/monthly?
Have you ever experienced misleading content on these platforms? If so, please elaborate.
How do you interact with misleading or false content? Please elaborate.
What is your reaction when you find out the content you found is misleading or false?
What do you think about labeled content?
Which additional (background) information about a statement/fact is important for you as a viewer to see directly?
What makes a website/content/information trustworthy?
How trustworthy is social media in your personal opinion and why? (Scale 1 – 10)
Do you think the design of information or content has an effect? If so, please elaborate.
In times of new media and fake news it is hard to know which facts are actually true and which are not. But why is this a problem for humanity?
You might think that some misinformation is harmless, but a workshop hosted by Yale Law School and the Floyd Abrams Institute for Freedom of Expression showed that fake news can have a bad influence on our society. This problem exists not only in politics but also in our daily life. But what does “bad influence” mean, and how can we make the online world more transparent?
Never before have humans been exposed to as much data as we are today. At the beginning of the internet age people did not really use or understand the power of the world wide web. Around the turn of the century humans got connected, and since then connectivity has increased exponentially. Devices became easier to use and screen design improved as well. Originally most of the information online was limited to fun articles and early-stage websites with mostly bad usability, but that changed quickly. More and more humans became, as we call them, “users”, and at the same time the amount of misinformation rose and transparency decreased. Nowadays it is hard to distinguish which information is correct and which is only there to get our emotions out of control. Bots and people who distribute false stories for profit or engage in ideological propaganda are now part of our everyday life, as we spend up to seven hours a day in front of a screen. Since the beginning of the pandemic our daily screen time may have increased even more. The positive or negative health effects of screen time are influenced by the quality and content of exposure. The most salient danger associated with “fake news” is that it devalues and delegitimizes voices of expertise, authoritative institutions, and the concept of objective data – all of which undermines society’s ability to engage in rational discourse based upon shared facts.
In 2014 some researchers surveyed clustering algorithms, which have emerged as a powerful meta-learning tool to analyze the massive volumes of data generated by time-based media. They developed a categorizing framework and highlighted the set of clustering algorithms that performed best on big data. One of the major issues they found was confusion amongst practitioners, caused by a lack of consensus on the definition of the algorithms’ properties as well as a lack of formal categorization. Clustering data is the first step for finding patterns that may lead us to detecting misinformation, false stories, ideological propaganda or so-called fake news. It is also a method for unsupervised learning. Furthermore, it is a common technique for statistical data analysis used in many other fields of science, and if used correctly it could be a game changer for our online and offline society.
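To make the idea of clustering as unsupervised learning concrete, here is a minimal k-means sketch in plain Python. It illustrates the general technique only, not the specific algorithms benchmarked in the 2014 study; the data points and feature names are invented:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: partition points into k clusters without any labels."""
    centers = [points[i] for i in range(k)]  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # move each center to the mean of its assigned points
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# Two clearly separated groups of 2-D feature vectors (e.g. shares vs. reports)
data = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9), (8.0, 8.2), (7.9, 8.1), (8.2, 7.8)]
centers, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

The algorithm is given no labels at all; the two groups emerge purely from the structure of the data, which is exactly what makes clustering attractive as a first step in pattern detection.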
Why does Fake News exist?
A pretty important thing to know about social media is that the most recently published or shared content is always the first you will see. That means that if there is no reliable recent post on a topic, a so-called data void is left behind: as soon as somebody publishes something new on that topic, it will be shown first. This comes from the fact that we always long for “new” news, despite the fact that no one, no tool and no algorithm has ever screened this information to verify its accuracy.
What about Twitter?
Since May 2020 Twitter has been trying to make it easier to find credible information and to limit the spread of potentially harmful and misleading content. They introduced new labels and warning messages that provide additional context and information on some Tweets. These labels link to a Twitter-curated page or to external trusted sources containing additional information on the claims made within the Tweet.
So Twitter is one of the major social media platforms actually labeling content, even when the Tweet comes from the sitting president of the United States, Donald J. Trump. They are also actively trying to decrease the spread of misinformation by introducing an extra notice before you can share disputed content. Since content can take many different forms, they started clustering false or misleading content into three broad categories:
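Twitter’s announcement named the categories misleading information, disputed claims, and unverified claims. One way to picture such a category-based policy is a simple lookup from (category, propensity for harm) to an action. The category names follow the announcement, but the action in each cell below is purely illustrative and is not Twitter’s actual decision matrix:

```python
# Illustrative only: category names from Twitter's May 2020 announcement,
# but each cell's action is an assumption, not Twitter's real policy.
ACTIONS = {
    ("misleading information", "moderate"): "label",
    ("misleading information", "severe"): "remove",
    ("disputed claims", "moderate"): "label",
    ("disputed claims", "severe"): "warning",
    ("unverified claims", "moderate"): "no action",
    ("unverified claims", "severe"): "warning",
}

def moderate(category: str, harm: str) -> str:
    """Look up the (illustrative) moderation action for a piece of content."""
    return ACTIONS.get((category, harm), "no action")

print(moderate("misleading information", "severe"))  # -> remove
```

The appeal of such a matrix is that it separates classifying the content from deciding what to do about it, so either half can be tuned independently.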
Of course Twitter is not the only platform labeling false information or content going viral – Facebook and Instagram have started doing that too. Instagram has been working with third-party fact checkers, but until now the service was far less aggressive about misinformation than Facebook. Also, quality fact checking takes time, which can be problematic, and there is still some catching up to do.
Labeling or removing postings is a first step in the right direction, but it does not solve all the issues that come with false information and how we interact with it. This is why this topic is so important for the future and wellbeing of our society.
Sources:
Fighting Fake News Workshop Report hosted by The Information Society Project & The Floyd Abrams Institute for Freedom of Expression