Fake News on Social Media (Extended) | Big Ideas Faculty Research | McCombs School of Business

>>Hi everybody. I’m Catenya McHenry,
Communications Director here at the McCombs School of Business. And
you are joining us on our Big Idea show, where we look deeper into faculty
research happening here on campus. Joining me today is Tricia Moravec, she
is a new professor here in the McCombs School of Business. She
is an assistant professor of information management. We are focusing on her
new paper today, which everyone will be very interested in. It’s focusing on
fake news. The title of the paper is “Fake News on Social Media: People
Believe What They Want to Believe When It Makes No Sense at All.”
Thank you so much for joining us today. I’ve– ever since you got here, we’ve
been talking about this paper and fake news. I’ve been very interested in this
because now it feels like fake news has become a household buzz term, and it’s
used constantly.>>Right. But unfortunately, it’s not
being used in the correct way. The way that I’m using fake news, and the way it should be used, is to describe information that’s verifiably false and that’s intentionally created to mislead. So that’s my understanding of fake
news, but unfortunately, it’s currently being used in ways that don’t actually
describe false information.>>Right, right. I think what’s most
offensive, as a journalist, is the misuse of the term. So let’s talk a little bit
about just your motivation for wanting to even look into this topic.
>>Right. So it was in 2016. And if you’ll remember, 2016, that was
when our presidential election happened. And so leading up to the 2016 presidential
election, two of the three most engaged with pieces of news on social
media were false. And so–
>>Which were those?>>One was about “Pope endorses Donald Trump,” and one was that Hillary sold some guns to ISIS, something like that. So one was the first and the other the third most engaged with. “Engaged with” means shared, liked, whatever. But people saw these stories, and likely believed them for a bit. And you know, Facebook had this
problem, and continues to: they’re profit-driven. They want us to stay online, and to do that, they show us information that we like. And that means our confirmation bias is driving what we’re doing online.>>So information that we like, whether
it’s true or not. It doesn’t really matter to Facebook and you guys just focused
on Facebook in the research.>>We just focused on Facebook in the research to constrain it. But really, it was at this time, in 2016, that I needed a new research project, and I was very motivated by this relevant issue that was occurring. So I had this research question: can people detect fake news on social media? And does this disputed flag that Facebook debuted work? It had a little caution sign and it said “disputed by third-party fact checkers.” They were going to put this on fake articles after enough people had said “this might be fake.” And we won’t go into the issues with that design in general–
>>Yeah, because we’ll be here all day talking about that. (laughs)
>>Yeah, I could talk about this for hours. But I was curious, like “does this
work? Can people detect fake news?” And so I brought this to my adviser and we just took off and studied it in a lab with students, using electroencephalography, or EEG, which measures brain activity. The flag was on particular articles that could have been real or not; it was Facebook’s way of disclaiming that an article may or may not be true, and that it’s really up to you to decide what you believe about that information.
So I think that’s one of the reasons why it didn’t work: what people actually need is a flag that says, “this is fake,” not something that says, “this may or may not be fake.” The issue with the phrase “disputed by third-party fact checkers” is that it’s very soft language that doesn’t really get to the fact that this is fake news and you shouldn’t believe it. But that was the intention of the flag, as far as I know: to say “this is inaccurate information and you should not be consuming it as truth.”>>So when you brought the group in
to test whether or not your research would or wouldn’t work, what happened?>>So it was one person at a time. That’s one of the constraints of EEG research: you have to put a headset on each person, which is one interesting factor in getting enough subjects and actually doing research like this. But what happened is that they came into the lab, went through some demographic questions, and then I got the headset put on them. And then they went through 50 articles
that were either true or false, and they were actual articles that were circulating around December 2016 and January 2017, so they were verifiably true or false. They were real articles, and people only saw the headlines. The headlines were either flagged as false or not, which was the control condition. And so what we ended up seeing is
that when you ask people, you know “how believable is this article? How
credible is this headline?” People’s belief didn’t change when the
flag was present.>>Really?
>>So their stated belief didn’t change whether it was flagged or not. And that was an initial wow: the flag doesn’t work. Facebook had also figured this out, but I think we end up getting more into why it doesn’t work. So we found out that the flag doesn’t work, but then, alarmingly, we also checked whether people were correct in whether they–
>>And when you say “correct,”>>In whether they believed it or not.
>>Okay. So no matter how outrageous the headline was, your determination was
whether or not people believed what the–>>The headline.
>>The headline, yeah. Yeah.>>And the way I measured this was with three items on a seven-point Likert scale, with the midpoint being “I don’t know, unsure.” For someone to be classified as correct on a false article, they needed to say it was extremely unbelievable, not believable, or not very believable. If they said any of those, they would be correct. And likewise for truth: if it was true, as long as they said “this is extremely believable, moderately believable, slightly believable–” even if they said slightly. I mean, that’s not a very big commitment. But even if they said slightly, I would count them correct. But if someone was unsure or answered in the opposite direction, they were incorrect. And what I ended up finding is that
I think 17% of my sample was able to detect fake news better than chance.
So that means they were correct on more than 50%. Which is really, really
bad.>>Yeah. Only 17%! That’s just–
>>The best person got 66% correct. And that’s a D, or an F. That’s
essentially failing.>>That’s bad! So that– what does
that tell you? Does that tell you that people will– I mean, obviously in
the title of your paper, people believe what they want to believe, but was it
surprising to you that people believe what they already felt in
their moral being?>>So to capture that, we used a measure of confirmation bias, assessing what people already believed based on some conservatism items and whether they self-identified as a Democrat or Republican, and combining these two. Using that, we were able to see that if a headline aligned with someone’s confirmation bias, they believed it more. So
yes, essentially people were unable to detect fake news. The flag did
not work as Facebook had intended. And it was essentially just this
confirmation bias where people really just believed whatever they wanted to
believe, and that’s not entirely new. But the extent to which confirmation bias
drives what we do on social media and what we believe, that is a new
topic to be studied.>>So on the back end, did you look
at just Facebook and how they infiltrate information into their platform and how
they sort of categorize certain stories to target certain people because they
have those algorithms to figure out who people are and what their
interests are, depending on the part of the country and all these other variables?>>Right. And so that creates these
echo chambers where, you know, Facebook is financially motivated to
keep us online and keep us happy. And so they already know that they
want to show us information that we like so that we stay online because
they’re doing hundreds of micro-experiments every week. And so
they know what they’re doing. But the problem is that, you know, with
their financial motivation, they don’t really care about whether they’re
showing us true information or false. And I didn’t go into the details of
Facebook’s algorithm because it really is a black box. None of us really
know about it and Facebook doesn’t want to share those details.
>>Of course.>>But what is just… alarming about
this whole thing is that we really don’t know what we’re going to see next.
And so we are very susceptible to believing fake information. And
Facebook did not want to take responsibility for their part in the
spread of fake news for pretty much all of 2017. So when it started coming out that two of the three most engaged with articles were fake in the three months leading up to the presidential election, Mark Zuckerberg pretty much would just say, “this isn’t my problem. This isn’t Facebook’s problem. You know, we shouldn’t be censoring information.” And so slowly throughout the past three years
since then, they’ve come to take more responsibility for this because
they realized that as a platform, they need to be providing people with
actually valuable information, rather than false and misleading information.
And the Cambridge Analytica scandal, I think, has also helped push Facebook to take responsibility. But truly, one of the worst problems, as this has been getting worse and worse, is that there just hasn’t been anyone taking responsibility for it.
>>But the information, it’s still coming. It’s still out there. And people are
still engaging with it.>>One thing I’ll say is that we could train everyone in the world to use Facebook responsibly, and then it would be an issue we each take responsibility for ourselves, but that’s impossible. And people use social media in a
hedonic mindset. They use it to escape. This isn’t utilitarian. This isn’t work-
oriented or goal-oriented. People are using it as a passive
source of entertainment.>>And one of your stats in the paper,
you say “social media has become a common source of news, more than 50%
of American adults read news on social media.”
>>Yeah.>>That’s amazing.
>>And I think it’s actually gone up since then. And that’s people who get some amount of their news there. I don’t have a statistic for this, but when I talk to people, some say “oh, I get all of my news on social media.” That happens! There are people who use social media as a kind of curated source of information; rather than going to actual news sources themselves, they’re using Facebook. And at the end of all this, I’ve got a number of other studies looking at fake
news and how we can really help people become more responsible in
their social media use. So since this flag doesn’t work, can we design it
with stronger language to make it work? How else can we design it? What
are our options? And since Facebook seems loath to
add these to the platform and really take more responsibility in this way,
what people need to do is take responsibility for themselves. And since
we can’t force people to use social media with a goal in mind, we can’t
force them to think critically, and it’s also very hard for us to think
critically. We expend a lot of mental effort when we’re trying to think about
difficult problems and that tires us out, and then we switch back to our automatic
or gut-level cognition, which drives what we do. And so we can’t force
people to use social media in a different way. But what people can
do is go to actual news sites and get their news there. Try to be more–
>>So examples. Examples. Give examples of actual news sites.
>>Well, look. There are biases in the news. And I don’t really cover that. But
at least if you’re on a news site, you know you’re consuming news. And
so if you’re on CNN or even if you’re on Fox News, or NBC, whatever you’re
on, at least you know you’re consuming news. And so–
>>Do you think it’s more difficult for people these days– so I have two more questions. One, when you have a president who’s saying that news sources we’ve known to be reliable for many, many years are fake news, and when he is very critical of certain sites. He goes after the New York Times, he goes after all of them that don’t have positive things to say about him. How do you think that makes it more difficult for people to make a cognitive judgment about what they believe is real or not?>>So people who believe what our
president is saying, of course that makes it harder for them to trust
the news site that he says isn’t true. And that is a problem because it is
good to get news from a variety of different news sources with a variety
of biases. You want to do that so that you have a more neutral opinion
because you’ve been able to take information from all sides. And so when
we have this instance where people are encouraged to only get news from a couple of news sites, and the other reputable sources are being bashed and called not reputable, all fake news, that’s a problem for people who are in the camp of believing the president, because
then they might discount these actually reputable news sites when it would
benefit them to get news from a variety of different news sources.
>>So what’s next? What is the next step? Because obviously
there is so much more to uncover about just the idea of fake news.
>>Right.>>What is next? What’s your next piece
of research?>>So currently I am setting up the
behavioral lab in McCombs with EEG equipment, so that’s going to enable
me to do more of these EEG studies to see brain cognition as people are
on social media, as they’re looking at different types of news. But
additionally looking at a stronger flag to see how that works and whether
increased cognitive dissonance can really help people to better detect
fake news. Just different types of flags. I’m also fascinated by deep-fakes. The growth of our technology is just amazing, how quickly we’re learning to do different things. And these deep-fakes, these videos where people can make it look like someone is saying something else, or put someone else’s face on the person who’s actually doing the talking, I think that’s going to be fascinating to study as well, because since it’s a video, I think we’ve all learned to trust it more. It seems more credible. So what happens in the era of fake news and deep-fakes, and how is all of this going to keep progressing? The information credibility issue
online is just going to keep getting more interesting as we continue to do
more of our lives online. And so I see just unlimited potential.
>>It’s endless! Yeah, I mean I feel like there’s really
no end to it. As technology continues to advance and we spend more time
online, there’s really no cap! And there’s no threshold.
>>I mean this is what I love about studying information systems and
information management. It’s just that as technology continues to advance
and improve, my opportunities for research just continue to expand
as well. And so I think this is fascinating. People are fascinating.
We are not the rational actors we are often assumed to be. We’re really–
>>It’s not like in the movies.>>Right. And so, I love doing this
behavioral style research where I really understand what’s going on
with us and the way that we’re using social media. And yeah, so
I just plan to continue in that trajectory until I get tired of
it, and then pivot to something else.>>I don’t think you ever will.
There’ll be more to look at.>>I think so.
>>One last question. What has been the most interesting or surprising
discovery as part of your research with this particular paper?
>>With this paper?>>Yes.
>>I think — one of the things that was very interesting was that the flag
didn’t work, but cognitively, people had more activation in their frontal
cortex when they were shown news headlines that aligned with their
beliefs but were flagged as false. And additionally, they spent a lot
more time thinking about those. So even though we’re bad at detecting fake news and the flag didn’t work, behind that, people thought about it more. So what’s really going on there? Do we just discount it? Are
we taking it in and then saying, “mm. Okay. But could this be true or
false? Okay, well it doesn’t align with my beliefs so I think it’s false.”
>>Yeah. Wow.>>So just people believing what they want to believe when it makes no sense at all, I think that was the most fascinating part for me.>>Well, this has been so interesting.
I mean, like I said, we could talk about this for a whole day. (laughs)
>>Right. I could. I could talk about this forever.
>>Yes! It’s very fascinating. So thank you so much for joining us today.
I really appreciate your time and just digging deep into what this is and
what it’s all about. So thank you so much. If you want to read Tricia’s paper, it’s
called “Fake News On Social Media: People Believe What They Want to Believe
When It Makes No Sense at All.” Thank you for joining us on our Big Ideas.
We’ll have a link to the full paper right there at the bottom of the video.
We’ll see you next time on our Big Idea show.


About the Author: Oren Garnes
