The Click Trap (2024) Movie Script
We live in the digital universe,
surrounded by digital advertising.
Google, Meta, the parent company
of Facebook and Instagram,
these are basically
advertising companies.
The vast majority of their revenue
comes from advertising.
Online advertising
is now worth 400 billion dollars,
far exceeding
traditional advertising.
Thanks to the harvesting
of more and more data on web users,
this system has generated
huge amounts of revenue,
but it also has a dark side.
We know exactly
where this person is.
We know their real name.
We know their credit card number.
We know where they have travelled.
Imagine, Mr Advertiser,
what you can do to reach these people.
But traditional advertisers
aren't the only ones to benefit
from the platforms' targeting power.
Scammers spend billions
on advertising every year.
It's not one person out there
putting these ads online,
it's an ecosystem.
Sometimes those ads are obvious.
Sometimes they are sneaky
and deceptive and they trick us.
Sometimes they steal money from us
to the point of leading us
into financial ruin.
I couldn't breathe,
I couldn't eat, I couldn't sleep.
I've lost everything.
It destroys your life.
Dramatic on an individual scale,
the dangers of online ads reach
catastrophic proportions
when they affect thousands
or even millions of people.
The truth is social media companies
discovered prioritising hate,
misinformation, conflict
and anger is highly profitable.
Recent years have seen
the emergence of a new type of scam
that acts at a collective level:
disinformation.
I think the disinformation crisis
is one of the greatest threats to
humanity right now.
Democracies all over the world
are under threat
because of disinformation.
Misinformation is sticky,
it's addictive, it's chewy content
and for the companies
that monetise that content,
they see it as an opportunity.
Spread lies, spread hate
and you can make money.
Has online advertising become
an out-of-control machine
that is gradually bending
humanity to its will?
Senator, we run ads.
Every day,
without much thought, we accept
general terms and conditions
and cookies.
In doing so, we give websites,
apps and social media
access to our personal data.
They use it to offer us targeted ads
on topics that interest us.
This act seems banal and harmless,
but this customised advertising
is in fact the result
of a genuine revolution
caused by digital technology.
A revolution
with unexpected consequences.
Before digital ads,
if you wanted
to buy ads in a magazine,
if you wanted
to buy ads on a TV station,
you would at some point
be talking directly to the place
you wanted to buy ads from.
There would be
a salesperson at your newspaper
that would sell an advertisement.
They would say, "How big
do you want it to be?
When do you want it to appear?"
They'd work with you on that.
But we've really gone
far away from that.
In the 1990s,
the internet became more and more
present in our lives,
offering us
a space of freedom and sharing.
Quickly, it became expected
that everything online
would be available for free.
And this extended over time.
As we got into the late 1990s
and early 2000s,
massive amounts of piracy for video
and music made people think
"I should be able to get my music,
videos and other things for free"
and it also created a necessity
for the creators of information
to figure out a business model
that could work in this environment.
And so naturally, advertising
is the one that would make sense.
When Google started,
they didn't know that
advertising would be
the primary business model.
There was
no business model for Google.
It just lost
lots and lots of money
for a very long period of time.
One thing they were trying
to figure out with advertising
is that you can't make it work
with a sales team,
a human group of people
that takes calls
and then puts up the ad
on the website
because Google handles
about 99,000 searches a second.
Billions of searches
every single day.
And so the only way
to make it scalable
is to get rid of the human team
and really have a system which is
data driven, algorithm driven
and really moves at lightning speed.
We think about Silicon Valley,
you say, "Oh, computer programmers
working out of garages",
but lots of the people
who got involved early on,
they were former Wall Street people,
former ad sales people.
They worked
with a lot of economists
and people who had
previous experience on Wall Street
who really constructed
these types of markets.
The only difference is rather than
selling stocks or bananas, we sell
attention.
Shares of Google,
the company that makes the world's
most popular internet search engine,
went on sale to great fanfare today.
Google has doubled its profits
selling internet advertising.
Though Google hadn't expected
advertising to be its business model,
once they figured it out,
it was just this gold mine.
They made so much money
they instantly went
from a no name start up
to being one
of the richest companies.
They raised
1.7 billion dollars in cash
and made the stock of Google
as valuable as General Motors.
The template that Google created
influenced many other companies.
When Facebook got started,
they didn't have to innovate.
"We'll take
the Google business model,
figure out how it works here."
This new ad technology created
a whole new territory
for modern capitalism to conquer,
the attention economy.
Now, you don't buy
space in an outlet;
you literally just buy eyeballs.
Advertising became so successful
that it became very difficult
to create other business models
that were viable in this internet.
One result of that is that we assume
everything on the internet is free
because advertising has driven
the cost down to zero dollars.
You're not paying a dime
to use Facebook.
You're not paying a dime
to use Google's search engine.
There's an old saying,
"If you aren't the customer,
then you're the product."
We believe that we need to offer
a service that everyone can afford.
We're committed to doing that.
Well, if so, how do you sustain
a business model
in which users don't pay
for your service?
Senator, we run ads.
Behind the ads we see online
lies a complex story.
Whether they appear on a site
or in our social media feeds,
most of these commercial ads
are the result of a secret war
that the brands engage in
to capture web users' attention.
Every time you visit a website,
details that have been collected
about you
are sent by that website
into the open bidding market
so that advertisers who might want
to reach somebody like you
have a chance
to see that information,
see the website you're on and
place a bid to show an ad to you.
They are putting in the parameters
of "I wanna reach
men between 25 and 35
who might be interested
in buying a car
and who live in New York."
And then the system is supposed
to find those people
as they are looking on apps,
as they are visiting websites,
watching videos
and show the ads to them.
And what happens is there's
a process called real-time bidding
where algorithms representing
different media buyers will say,
"I'm willing to pay
a dollar for this person",
"Two dollars
for this person".
And it's literally a marketplace
where people competitively bid
for the value of your attention.
So if I am a very valuable consumer
that lots of people want to bid on,
it'll cost more money to do so.
So one classic thing
is that it turns out
that buying ad space
for people who have iPhones
tends to be more expensive
than people who have Android
because people who have iPhones
tend to have more money.
So this is a more valuable form of
attention you're trying to capture.
This is a process that happens
in the blink of an eye.
The bids are taken,
the winner gets to place the ad
and show it in front of you.
The systems that Google runs,
the systems that Facebook runs,
they operate at a scale that is
really unheard of in human history.
There are billions and billions
and billions of ads
and ad requests at any given time
throughout the day,
overnight, around the world.
But the problem
is that the system is so opaque,
it is so complex,
it is so confusing even to people
who work in digital advertising
that nobody actually knows
what's going on.
And that is to me
an absolutely insane scenario
where you have
roughly half a trillion dollars
transacting every year
in digital advertising
and people don't understand
where all the ads are going
and where all the money is going.
This complex automated system
is the central axis around which
the internet has evolved
into what we know today.
It may be a dream for advertisers,
but giving control
to algorithms is not without risk.
In 2016, I was just
a one-woman marketing team.
I was working
mostly in Europe at the time
and my boss suggested to me,
"Why don't we run
our first Google
ads campaign?"
I said, "OK."
I went into the Google Ads dashboard
and attempted to run
my first campaign
to see...
I was hoping
my ads would end up on like CNN
or New York Times
or Washington Post or something,
but then when I went to see
my ad placements,
I was surprised and shocked
to see none of that.
I saw a whole bunch of websites
that I'd never heard of.
There were all kinds of domains
with weird numbers in them
and I knew that nobody I know
was visiting those websites.
I just filed that away as strange,
but also with the understanding that
we don't really know
where our ads are going.
A few months later, I visited
Breitbart.com for the first time.
Breitbart in 2016
was the biggest source
of disinformation in the country.
They'd been more influential
than Fox News
and the Washington Post combined.
They had more Facebook followers,
more engagement.
They played a huge role
in getting Trump elected in 2016.
The first thing that I saw
when I visited the website was ads.
I was just assaulted with
all these brands and advertisers.
Ads that were following me around.
So they're brands
that I shop with, am familiar with
and to see those ads
against that kind of content
was something
that was impossible to ignore.
These brands probably
don't wanna be here
and are probably running their ads
like I did a couple of months ago.
They're probably just
not checking their ad placements.
Due to their reliance on algorithms,
advertisers are totally disconnected
from the sites
on which their ads appear.
Due to this disconnection,
sites such as Breitbart
can receive ad revenue
without the brands realising.
Online advertising is a machine
for communicating blindly.
I do remember
being in the room by myself
and I literally looked
right and left.
I was like, "Am I really
the only person seeing this?"
I wrote a blog post that day.
"Everybody, go and block
Breitbart now in your Google Ads.
If all of us do it at the same time,
we can put this outlet
out of business."
I co-founded Sleeping Giants
as a way to alert advertisers
that their ads were on this website
and we started
tweeting at companies
with screenshots
of their own ads on Breitbart
and asking them whether
they wanted their ads to be there.
And the brands were coming back
almost instantaneously.
Some of these guys were
coming back within minutes to say,
"We didn't know.
Thank you so much for alerting us
and we will take steps
to block it from our media buy."
And it grew super fast.
As we ran this campaign, over time,
we were able to collectively contact
thousands of brands.
Breitbart lost 90%
of its ad revenues
within the first three months
of our campaign.
They were on track to make
eight million dollars in ad revenue.
And...
we cut that off.
Within months, there were
other accounts popping up.
Just citizens of other countries
who saw what we were doing
and realised that they could
probably apply the same tactics
to their own local Breitbarts.
We realised that this was
something that we could take global.
This is Google's
location.
They are our main target for this.
OK, I feel like right here
is great, guys.
This is a good location.
Let's do it.
We are in New York City.
It's where Ad Week is taking place.
Google is also here at Ad Week.
They're one of the sponsors
of the event
and we're trying to get Google
to stop monetising disinformation
and to make
their publishers' data transparent.
Are you an advertiser?
Find out
where your ad money is going.
It's not going
where you think it's going.
So the plinko board is actually
a fun way for us
to engage advertisers on this issue.
So we want them to know
that they are unknowingly funding
disinformation and hate speech.
They put a coin into the board
and that represents their ad budget.
They wait to see
where their coin ends up.
Is it going to end up
on a good website
or is it going to end up on one
of these disinformation websites
that's spreading climate disinfo
and election disinfo
and all this other harmful stuff?
Hopefully, we'll be able to raise
awareness with the advertisers
and they will pressure
Google to take action.
But I think we can wrap it up now.
The automation
of online advertising has
real repercussions.
The way it works is at the heart
of what we now call
the "economy of disinformation".
One of the realities of false
and misleading information today
is that it has the potential
to be profitable
more than ever before in history
because it's easy to get
in the business of disinformation.
When we talk about
the "disinformation
economy",
most people think
of Facebook because
Facebook amplifies
and monetises disinformation,
but disinformation sites don't
make their money on Facebook itself.
They make money when you click
on the article on Facebook
and go to their website.
That's where they run ads
and that's where
they basically cash in.
That is where the real money lies.
They are able to set up websites,
write their conspiracy theories
and their disinformation articles
and then receive Google ads
on their websites
and those Google ads are putting
dollars in their pockets.
Let's split Google into three parts.
Google has Search,
with advertising
that is native to Google search,
so you can buy ads
against the search results.
Second bit is YouTube, which is big.
That's advertising that appears
and you can pay
to target people's eyeballs
and it will come before a video.
The third,
which most people don't know about,
is Google's ads platform
that goes on other people's websites
and will serve
that advertising to you.
And then it will give
a bit of money to that content site.
90% of all websites carrying
advertising carry Google Ads.
Google and ad companies make it
very difficult for brands
to know exactly
where their ads appear.
So advertisers don't realise
that their ad dollars are going
onto these websites
that are spreading disinformation.
And so you have major corporations,
big brands,
buying ads on sites
with content about terrorism,
with hate content,
with disinformation.
And it's basically created
a whole business model
for people who would never have made
money from advertising before
to suddenly be able to earn
lots of money from advertising.
There are
so many countries now around
the world
where the arrival of Google Ads
has then been followed
by a huge proliferation of websites
whose core business model
is around hateful clickbait,
pushing out hundreds
of toxic inflammatory stories
targeting minority groups
which they then monetise
through online advertising.
I think the disinformation crisis
is one of the greatest threats
to humanity right now.
Democracies are under threat
because of disinformation.
COVID VACCINES MODIFY DNA
One of the really crazy things
we found in our investigation,
it's an article claiming that
COVID-19 vaccines change your DNA.
This is from a Serbian website.
We have an ad for Amazon Prime Video
and Discovery Plus,
as well as an ad
for Spotify Premium on it.
Creating disinformation can
absolutely be a job
and it can be a job paid to you
by the biggest brands in the world
thanks to Google.
It's a good business.
What's so extraordinary about it
is that it encourages
low-quality content.
Why do you think it's so hard
for real journalism to prosper?
We have disinformation thriving.
It made more than
three billion dollars last year.
And meanwhile,
journalism is getting cut globally.
Newspaper ad revenue has been cut
by half in the last five years.
This is a crisis
where journalism declines
in power and revenue,
and disinformation rises.
BREAKDOWN OF WORLDWIDE
ADVERTISING SPENDING
The traffic controllers
of ads and money and data
are deciding
who gets revenue and who doesn't.
Here's the thing about fake news,
it's displacing good news,
useful news,
well-researched news,
evidence-driven thinking,
considered opinions
and that's a real problem
for a society
because it makes us
dumber as a society.
It's making our democracies weaker
and turning us into an idiocracy.
Clickbait is replacing thought.
3 SIMPLE TRICKS
FOR STAYING YOUNG
The new thing about clickbait
is that you have to click
on the title
to see what it's about
because that's
how the website gets paid.
When you write an article
for a normal newspaper,
the title must tell you
what it's about and ideally
tells you exactly
what you need to know.
Then you read the rest
of the article to find out about it.
Clickbait does precisely
the opposite.
It makes sure not to tell you
what you need to know,
so that you click on it
and generally end up disappointed.
THE NUMBER 1 CARB
YOU SHOULDN'T EAT
MAKE LOTS OF MONEY FAST
3 SIMPLE TRICKS
FOR STAYING YOUNG
EAT THIS FRUIT
AND LIVE AN EXTRA 20 YEARS
For these companies
to make a lot of money,
web users must spend lots of time
on their sites and social media.
The more time they spend there,
the more ads they see.
And the more ads they see,
the more money the company makes.
The aim is clear:
capture our attention
by any means possible.
Social media and the internet
in general are competing
with the world around us
for our attention.
They are winning
this competition.
Something that I've noticed
and we can all see
is that when you're
on public transport,
everyone is on their phone.
The only reason we don't notice
is we too are looking
at our phone
rather than the people around us.
Facebook's competition
isn't Twitter.
Facebook's competition
is spending time with your kids,
spending time with your wife,
reading a book.
They need to keep your eyeballs
engaged so they can pump ads to you.
One of the things
that keeps our eyeballs on a site
engaging with the content
is disinformation and hate.
Hyper-emotional states.
I'm Imran Ahmed.
I'm CEO and founder of the Centre
for Countering Digital Hate.
We're an organisation
that looks at the architecture
by which hate
and misinformation spread online,
how they affect the real world,
and we try and find
ways to disrupt that.
The algorithms pick it out.
They see the stuff
that gets the most reactions,
the most people staying
and watching for longer.
They think,
"That's where the money is."
You know, I think initially
it was an accident.
It was amoral.
There was no morality.
They weren't choosing
disinformation or hate.
But now,
if you look at the revelations
from whistleblowers,
if you look at the work
that my centre's done in showing
that there are super-spreaders
of disinformation,
their failure to act,
their failure to change
the algorithms, the business model.
That's where it stops being amoral
and it becomes immoral.
Tributes to the Labour MP Jo Cox
who's died after being stabbed
and shot in West Yorkshire.
She was 41,
married with two young children
and was elected to parliament
just over a year ago.
My friend and my colleague Jo Cox,
who was a Member of Parliament.
She was shot, stabbed
and beaten to death
by a white supremacist terrorist
radicalised online
in the middle of the EU referendum.
It just broke my heart
to see someone who had been fed
lies and disinformation
and decided that it was rational
to kill a young woman,
so yeah, I mean,
it's really personal to me.
When the guy killed her, he shouted,
"Britain First, death to traitors."
Britain First was the first
political movement in the UK
to achieve a million likes
on Facebook.
When it happened, I remember
my reaction was to say, "Who cares?
They've got a million likes.
They have clicks.
We've got members."
And I was wrong.
I was just wrong.
The main places
where information was being shared,
where people were deciding
on norms of behaviour and attitude,
negotiating what is true
and what is not,
were shifting
to these online spaces
and we didn't understand them.
We've made a deal with the devil.
The devil said to us,
"I can let you see
what your friends are up to.
I can let you be connected
with your family so you can
share with them, so you don't
actually have to call them
and it just makes things
more convenient for you."
For convenience, we've handed over
our data, our attention.
In 2016, when I said online harms
were causing offline harms,
I remember being treated
like I was crazy.
People would laugh at me.
I used to get asked the question,
"Yeah, but this is online.
How does this affect
the real world?"
But after 6 January,
I don't think anyone's asking
that question any more.
They got ten people
trying to stop us.
06/01/2021
ATTACK ON THE CAPITOL
WASHINGTON DC, USA
It's time that somebody did
something about it.
And Mike Pence,
I hope you're gonna stand up
for the good of our constitution
and for the good of our country
and if you're not, I'm gonna be
very disappointed in you,
I will tell you right now.
Facebook's algorithms
were amplifying harmful content
that led to 6 January.
They were showing
groups to people
that were interested
in the "stop the steal" narrative
and those groups were able
to catch on fire
and grow in hours
to hundreds of thousands
because of their algorithms
and their recommendation systems.
Hey!
F*cking hey, man.
Why is this happening?
Why are we here?
The truth is, social media companies
discovered prioritising hate,
misinformation, conflict
and anger is highly profitable.
It keeps users addicted
so they can serve them ads.
CCDH's research has documented
bad actors causing harm,
but also bad platforms encouraging,
amplifying and profiting
from that harm.
Leading up to the insurrection
and after the insurrection,
I've been using a dummy profile,
a sock puppet profile
that solely joined militia groups
and engaged with extremist groups
on the platform
and Facebook,
following the insurrection,
not only pushed in the feed
an array of false election claims,
but pushed those
alongside ads for essentially
military gear-style materials.
Body armour, bulletproof vests,
specific holsters,
sites to make
your weapon fire more carefully.
This is exactly the kind of material
we saw Facebook pushing to users
in the wake of 6 January.
We also looked at the ad interests
the company was using
to target this individual.
If you get on your Facebook profile
and click
through a whole layer of things,
you can eventually get
to why you're being shown that ad
and it'll come up with the interests
you're being targeted with.
And one of the ones this profile
was tagged with was "militia".
Months earlier, in August 2020,
Facebook made a big public show
of the fact that they had
banned militia
and domestic extremist movements,
but not only
were those movements operating,
there was
an entire ad interest category
that could be used to target
militia for the company to profit.
So, as far as we can tell
from the outside,
Facebook's algorithm
is single-minded
and nearly sociopathic.
It is designed to do one thing:
to maximise engagement,
to maximise activity,
and it's going to find the content
and promote the content
that does the best job of that.
And in many cases,
egregious types
of content get amplified
many, many times more
than they would in the real world.
And just as important,
in the real world,
these egregious types of speech
would be met with counterspeech.
Algorithmic amplification separates
the counterspeech
from the damaging speech
in a systematic way
that doesn't happen
in the real world.
I'm Guillaume Chaslot. I studied
at engineering school in Lille,
then did a thesis
on artificial intelligence.
I was hired by Google
and worked on the recommendations
of YouTube.
And we tested
different recommendation strategies,
strategy A versus strategy B,
and looked at whether strategy A
was more effective,
meaning did it keep the user
on the platform for longer
than strategy B did.
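An experiment like the one described here can be sketched very simply: serve each strategy to a group of users and compare how long they keep watching. The watch-time numbers and strategy names below are invented for illustration; this is the shape of the test, not YouTube's actual code.

```python
# Illustrative sketch only: comparing two recommendation strategies
# by average watch time, as in the A/B test described above.
# All data is made up.

# Minutes of watch time per session, split by which strategy
# (A or B) served each user's recommendations.
watch_time = {
    "strategy_A": [12, 35, 8, 41, 22],
    "strategy_B": [10, 15, 9, 14, 11],
}

def mean(xs):
    return sum(xs) / len(xs)

avg_a = mean(watch_time["strategy_A"])
avg_b = mean(watch_time["strategy_B"])

# The strategy that keeps users watching longer "wins" the test,
# regardless of what kind of content achieved it.
winner = "strategy_A" if avg_a > avg_b else "strategy_B"
print(winner, avg_a, avg_b)  # strategy_A 23.6 11.8
```

Note the metric: only time on platform counts. Nothing in the test asks whether the content that kept people watching was true.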
Everything we do online
is examined carefully
by artificial intelligence.
Every click,
every scroll, every pause,
every minute of video that we watch
feeds the algorithms
that customise the recommendations
displayed on our screens.
I was on a bus
with someone who was watching
one conspiracy theory after another.
He said, "But there are
so many videos like that.
There are so many videos saying it
that it must be true."
That's when I realised
that it's not the arguments
in the videos that are convincing,
it's the repetition of these videos.
The repetition
of these videos
was exactly what we were doing
at Google when we were trying
to make him watch
as many videos as possible.
So, in fact, he was convinced
by the way
the algorithm itself works.
So, when I saw that,
I said to myself,
"I have to see if it's just him
seeing conspiracy theories
or if it's the YouTube algorithm
that's recommending
conspiracy theories to everyone."
In 2016, I started creating
AlgoTransparency.
I made a robot that goes on YouTube
and follows the recommendations.
I discovered
that the YouTube algorithm
for recommendations does have
a huge tendency to recommend
these conspiracy theories
as they generate watch time.
Like this person who watched them
for six hours straight.
It's a gold mine,
so the algorithm will promote
this type of video.
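The idea behind such a robot can be sketched as follows: start from a seed video and repeatedly follow the top recommendation, logging where the chain leads. The recommendation graph here is a made-up stand-in; AlgoTransparency's real crawler queried YouTube itself.

```python
# Illustrative sketch only: an AlgoTransparency-style robot that
# follows the #1 recommendation from video to video. The graph
# below is hypothetical, standing in for YouTube's real system.

# Hypothetical recommendation graph: video -> ranked recommendations.
recommendations = {
    "news_clip": ["opinion_video", "cat_video"],
    "opinion_video": ["conspiracy_1", "news_clip"],
    "conspiracy_1": ["conspiracy_2"],
    "conspiracy_2": ["conspiracy_1"],
}

def follow_top_recommendation(seed, hops):
    """Follow the top recommendation `hops` times and log the path."""
    path = [seed]
    current = seed
    for _ in range(hops):
        recs = recommendations.get(current)
        if not recs:
            break
        current = recs[0]  # always take the #1 recommendation
        path.append(current)
    return path

# In this toy graph, three hops from an ordinary news clip
# already lands on conspiracy content.
print(follow_top_recommendation("news_clip", 3))
# ['news_clip', 'opinion_video', 'conspiracy_1', 'conspiracy_2']
```

Running many such walks from many seeds is what lets an outside observer measure which kinds of content the algorithm systematically steers towards.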
In my work at Google,
I tried to suggest
content recommendation systems
that allowed the user
to discover new content
and different points of view.
I realised
that whenever I suggested this,
and not only me,
but other Google engineers too,
Google wasn't interested.
My manager said,
"Be careful, I wouldn't continue
working on that if I were you."
For six months, I followed
his advice and then stopped.
When I started working on it
again, I got fired.
Conspiracy theories are driven
by something in our psychology
called epistemic anxiety.
It is that sense
of not just not knowing the answers,
it's not knowing
how to find the answers.
But then conspiracy theories
never satiate epistemic anxiety.
So that's another finding.
That they never fulfil that
desperate yearning for certainty.
And for the companies
that monetise that content,
they see it as an opportunity.
One of our research reports,
"Malgorithm",
it proved that Instagram,
if you liked
Covid conspiracy theories,
would give you QAnon,
would feed you anti-Semitism
in the recommendation algorithm
because it knew
the way that our brains work.
Based on trillions of clicks,
billions of people worth of data,
they knew
that core psychological insight.
They're constantly feeding
and all the time they're feeding
and feeding and you're clicking
and you're going and looking
and scrolling, every three posts,
there's an ad, money.
Every three posts,
there's another ad.
You become worth 100 billion dollars
by hacking our brains,
the worst tendencies in our brains.
Imagine that you're driving
down the highway,
you see that traffic is slowing
near to a stop and as you go past
you see that there was
a really bad accident.
It's off the road,
but everybody is slowing down
to see what happened.
It's a car crash.
You don't want to look,
but you can't look away.
Imagine if there were a bunch of ads
popping up around that car accident.
That's what we see
with these high-engagement, harmful
and often misinformation posts
on the platform.
You're gonna have
people commenting, engaging,
reacting with an angry face
or a thumbs up
and all of that engagement
just makes that post more valuable,
it lifts it up in people's feeds
and with that, advertising
that will be centred around it.
As you pause and slow
and look at that on your feed,
not only are you gonna get ads
around that harmful activity,
but that activity is gonna
get pushed in other feeds
because the platform knows
you're slowing down to look at it.
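The engagement-driven ranking described above can be sketched like this. The posts and the scoring are hypothetical; the point is only that every reaction, including an angry one, raises a post's rank.

```python
# Illustrative sketch only: a toy engagement-based feed ranker.
# Every reaction counts towards a post's score, so high-outrage
# posts float to the top. All numbers are invented.

posts = [
    {"id": "holiday_photos", "likes": 12, "angry": 0, "comments": 3},
    {"id": "calm_news_item", "likes": 20, "angry": 1, "comments": 5},
    {"id": "outrage_rumour", "likes": 15, "angry": 80, "comments": 60},
]

def engagement_score(post):
    # The ranker only counts engagement; it does not care
    # whether the reaction was positive or negative.
    return post["likes"] + post["angry"] + post["comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
# ['outrage_rumour', 'calm_news_item', 'holiday_photos']
```

The post that provoked the most angry reactions ranks first, and the ads sold around it are worth more because more people will pause on it.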
It's really concerning to think
about how many people are influenced
by the combination of ad activity
and harmful information that's
being pushed to them in their feeds.
The bad is more powerful
than the good.
We react more strongly
to negative stimuli
than positive stimuli.
We react more strongly
for good reasons,
as programmed by evolution.
So how do websites
and social media hold our attention?
They use negative stimuli,
they use fear,
they use anger,
they use scandal,
they use disgust.
It's the economic logic of a site
that wants us to spend time on it,
to get us to enter
this spiral of negative content.
More than
in any other line of business,
companies that provide online ads
must continually expand
their portfolio of clients
and when recruitment reaches
saturation point in one market,
they must explore new horizons.
But for someone to be able
to use social media,
there is one fundamental condition.
They must have internet access.
There are millions of people
across the world without access
to affordable internet.
And Mark Zuckerberg came up
with the idea
that you have to get people
who don't have internet online.
Facebook launched a program
that it called Internet.org
in countries like the Philippines,
Indonesia, Brazil.
Facebook was trying to get the rest
of the developing world online.
It partnered with mobile providers
in different countries
to offer Facebook for free.
Zero-rated data.
So anybody could use
these platforms and the internet
without incurring
very expensive data costs.
This was revolutionary.
And as part of that plan,
you could access Facebook for free
and a select number of apps,
usually half a dozen,
maybe a weather app,
maybe a job app.
And of course other Facebook apps,
Instagram, WhatsApp.
When you hear the phrase
"Facebook is the internet",
in most of the world,
this is what they mean.
There was no outside internet
beyond Facebook.
That was the universe
people could search.
You couldn't go to Google to find
if something was true or not.
So, when you pair that power
and that narrow internet with
digitally illiterate populations
and a wealth
of misinformation and harm
that is not being moderated
in foreign languages,
you have complete disaster.
Hello and welcome
to Access Asia.
Coming up, Amnesty International
accuses Facebook
of exacerbating
human rights violations in Myanmar.
It claims the social network
amplified anti-Rohingya content.
In Myanmar, Facebook at the time
had no Burmese-speaking moderators.
It launched this Internet.org
free basics programme.
Lots of people got online
and tonnes of misinformation
and hate speech just proliferated,
calling for lynchings,
essentially sparking a genocide.
The UN found Facebook partially
responsible for that activity
and around 2015, 2016,
Facebook quietly ended
that programme in Myanmar.
But what this did was allow
Facebook to create and collect
billions more data points
on the rest of the world
and push ads anywhere,
particularly elections ads
in countries like India
where people who had never before
really been mapped or been online,
now the government had data points
of people they could push ads to.
And we saw the same
in Kenya, in Ethiopia,
across the Middle East
in countries like Iraq.
You see this
boiling over into even more serious
real-world harms.
These free basics programmes
were essentially free internet,
but Facebook's version
of the internet,
which is an unmoderated,
disinformation-filled online landscape.
Senator, what's happening
in Myanmar is a terrible tragedy
and we need to do more.
- We all agree with that.
- OK.
But...
UN investigators have blamed you,
blamed Facebook
for playing a role in the genocide.
We all agree it's terrible.
How can you dedicate and will you
dedicate resources to make sure
such hate speech is taken down
within 24 hours?
Yes, we're working on this.
And there are three specific things
that we're doing.
One is we're hiring dozens more
Burmese-language content reviewers
because hate speech
is very language specific.
It's hard to do it without people
who speak the local language
and we need to ramp up
our effort there dramatically.
Second is we're working
with civil society in Myanmar
to identify specific hate figures,
so we can take down their accounts
rather than pieces of content.
And third is we're standing up
a product team
to do specific product changes
in Myanmar
and other countries that may have
similar issues in the future
to prevent this from happening.
The social media giants claim
that they are investing heavily
in identifying
and preventing
the dissemination of hate speech.
A combination of human moderators
and artificial intelligence tools
are being used to monitor
the content of posts,
whether or not they are ads.
But do these filters really work?
For the last three years,
we've been investigating
the way that social media companies
are perpetuating human rights abuses
and undermining democracy
around the world.
You can't talk
about human rights
or any of the major issues
that are affecting our world
without looking at the way we're
consuming and sharing information
because it has such a major impact.
So we started looking
at the Kenya election a few months
before the election
actually happened
and we were interested
in the election
because it was
quite a tightly fought race
and also because Kenya has
a really sad history of violence
before and after elections.
We decided to investigate
how well these platforms
were able to detect hate speech.
We investigated Facebook
in this case
and we tested ads that were inciting
violence ahead of the election
in both Swahili and in English.
According to Meta, every ad
published on Facebook
is examined at multiple levels
before publication.
This meticulous analysis is intended
to ensure that advertisers respect
Meta's rules and standards.
Among other things,
the platform prohibits profanity,
excessive nudity
and misinformation in ads.
Publishing ads on platforms
is really easy.
They want as many people
to be publishing ads as possible.
We find examples
of hate speech and disinformation
and then we turn them
into very simple ads.
We choose
this horrible content and think
surely this time
the platforms are not
gonna approve it.
It's so obvious or it's so extreme
or it's a crucial moment
the platforms say they care about.
We then upload it onto the platforms
and then we have to wait and see
if the platforms approve it.
And then we get these notifications
saying they've been approved
and your heart just sinks that
even in these dangerous contexts,
they're still happily approving
content that's
inciting violence that could
be putting real lives at risk.
And we expected the English ads
to be picked up immediately
and the Swahili ads
to just go straight through,
but that wasn't what happened.
All of the ads went through,
but first of all, a number
of the English ads were picked up
due to spelling mistakes.
And so the AI
was able to detect that
and once we corrected
those spelling mistakes,
they were all approved
and put through.
And then we reached out
to Facebook
to find out what they had
to say about this
and they said it was a mistake,
they shouldn't have been approved.
So we gave them another chance
and we tested some more ads
thinking that at this point,
they've been alerted
to the fact that this is a big risk,
it was two weeks
before the election,
they should be heavily resourcing
content moderation in this country.
And once again, all of the ads
were immediately approved.
We put a publication date
in the future.
So what we do is we pull the ads
just before they get published
because obviously,
as a human rights NGO,
we don't want to be putting out
more hate speech or disinformation.
As we know
with all of this type of content,
hatred and the organising
of this violence starts online
and then it very, very quickly
spreads into the real world.
We saw that after the US elections,
we've also seen that in Brazil,
we've seen in Myanmar, in Ethiopia,
people are literally being killed
because of hatred
that starts by spreading online.
Through tests
in different languages and places,
the investigations
by Global Witness showed
that content moderation tools
often struggle to identify
obvious examples of hate speech.
If you talk to any
of these companies, Facebook,
Instagram, Twitter, YouTube,
they will tell you
in a canned statement
that they have brand safety policies
and that "x" content is not allowed
on their platform.
They use the same statements
over and over again,
but just because it's not allowed,
doesn't mean it doesn't exist,
because they're not enforcing
their policies.
And when they're not enforcing
their policies,
those brands are threatened
because their ads are showing up
against harmful content.
There's a playbook
for big tech companies
and they learned it from Exxon,
from Philip Morris,
from every other bad actor
in the corporate sphere.
Deny.
So that's the first thing.
So, first of all, say, "No,
are you sure?
No, I don't think it's our ads.
It's probably someone else's.
That must have crept through."
Second, deflect and say,
"Look, it's
a really complicated system.
The ad stack's complicated.
We don't really know
how it works ourselves."
The third thing is delay.
Right now, most of them can
no longer deny there's a problem
because even
counterterrorism professionals,
senators, congresspeople,
members of parliament
are contacting us and saying,
"We know this is a problem.
What do we do about it?"
So now it's delay.
And they're delaying
and how do you delay?
PR, spin, lies,
a little bit of evasion,
take your time getting back
to someone in an e-mail.
It takes us weeks to get
an answer on something.
Why is there a problem?
It's because this is profitable.
If it wasn't profitable any more,
they wouldn't do it.
Spread lies, spread hate
and you can make money.
As you know, next week
we'll publish a new report
about how Twitter is profiting
from hate and disinformation.
We've been calculating
how much money Twitter makes
from accounts that spread hate
against the LGBTQ+ community.
So we've got examples
of companies' ads appearing next to
this really dangerous anti-trans,
anti-LGBTQ+ content.
Which companies?
We identified ads
from five big brands.
I've got Fortune magazine,
Kindle, Disney, NBA and T-Mobile.
All of them appear
next to anti-LGBTQ+ content.
We know a small number
of accounts are responsible for it.
They make about 6.4 million
dollars for Twitter in ad revenue.
Our hope is that by exposing that
to a wider audience,
it will create political, economic
and social reputational costs.
With Elon Musk though,
he is desperate for revenue
and so he's willing to accept
this kind of hateful rhetoric
if he can make a buck out of it.
While many organisations
and advertisers are begging
for more investment
in content moderation,
social media companies underline
the need to strike a balance
that preserves free speech.
Companies like X, formerly Twitter,
say that their users have the right
to express themselves freely
as long as they stay
within the law.
A good sign as to whether
there is free speech is,
is someone you don't like allowed
to say something you don't like?
And if that is the case,
then we have free speech.
It's annoying when someone you don't
like says something you don't like.
That's a healthy functioning
free speech situation.
Elon Musk picking a new legal fight
with a nonprofit that criticises X
formerly known as Twitter.
What's your response
to his attorney saying you're wrong?
He has been casting around
for a reason to blame us
because we all know
that when he took over,
he put up the bat signal
to racists, to misogynists,
to homophobes, to anti-Semites
saying Twitter
is now a free-speech platform.
He welcomed them back on.
He reinstated accounts suspended
for spreading that kind of stuff.
And now he's surprised
when people are able to quantify
that there has been
a resulting increase in hate
and disinformation on his platform.
The reason Elon's annoyed
is because the work
that we did to show that hate
had exploded
after he took over the platform
led 50% of advertisers
to leave the platform.
What we do is look
at hard data and prove
that these people are unleashing
a tidal wave
of hate and disinformation
that is scarring our world.
It's become the primary vector
for proselytising hate
and disinformation online.
The reason he's suing me
is because he wants me to be scared
to say anything about him.
So we are now in a dogfight
with a guy who keeps escalating.
So you have never looked
at a product online.
All you have done
is mention it in conversation
with your phone nearby
and magically,
in minutes or hours or days,
there is an ad for that exact brand
showing to you
and you are creeped the fuck out.
The question is,
is your phone and is Facebook
and Google etc. listening in
to you to target ads to you?
As far as all
of the research on this shows,
as far as everything
that's been gathered, no,
they are not listening to you
to target ads to you.
And I know people refuse
to accept this and believe it
because there have been times
where you have never mentioned
this brand before in your life
or your friend mentions it
and all of a sudden
it's in your Instagram feed
and it seems creepy and it must be
that they're listening to you,
but there is no evidence
that's ever backed that up
that that is what's going on.
And this has absolutely
been looked into in detail.
The answer to why this happens
is that they have so many
other data points on you
that the chance a brand
comes up in your conversation
is roughly the same as the chance
you'd be served an ad
for that brand anyway.
They're saying they know you
and your friend group so well,
you think they listen to you.
Companies that live
off online advertising
rely on the data of their users
to attract advertisers
and make money.
This has led to the creation
of a brand-new market
based on the commercialisation
of this data.
So thousands of companies compete
to collect,
organise and make sense
of the huge quantities of data
that our devices generate each day.
Data brokers are a global industry.
Pretty much if you
live in a country with
smartphones,
where there are mobile apps,
a data broker in some form
is collecting or very
soon is going to try
to collect data about you.
The challenge
of understanding data brokers
is that it's really opaque.
It's not a transparent industry.
And so part of what our project
at Duke University does
is we actually go out and buy data
from these data brokers.
You can buy pretty much anything
you want from a data broker.
You have lists of young people
who like fast cars or retirees
who really wanna go on a cruise.
There are data sets on truck drivers
and avid exercisers.
Do you wanna reach single mothers
between the ages of 30 and 35?
We have that list.
Do you wanna reach people
who just graduated from college
and who really like drinking coffee?
We have that list too.
Data brokers sell
demographic information,
so race, religion, gender,
sexual orientation, marital status,
your favourite food
or your phone location
and tracking where people go
and shop and sleep at night.
If you have one smartphone
that's next to another smartphone
on a nightstand
six nights out of the week
and the seventh, it's near
a different phone, it's an affair.
There's all kinds of things
that are intimate to people's lives
that you can find out
if you watch them eat and sleep
and drive and walk
and sit down 24 hours a day.
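The nightstand inference described above boils down to a trivial co-occurrence count over location pings. As a minimal sketch (all device IDs and the one-set-per-night input format are hypothetical, purely for illustration):

```python
from collections import Counter

def overnight_companions(night_logs, target):
    """night_logs: one set of device IDs per night, holding every
    device observed at the target's overnight location.
    Returns a Counter of how many nights each other device
    co-occurred with the target."""
    counts = Counter()
    for devices in night_logs:
        if target in devices:
            for device in devices:
                if device != target:
                    counts[device] += 1
    return counts

# Hypothetical week of pings: six nights next to one phone,
# the seventh next to a different one.
week = [{"me", "partner"}] * 6 + [{"me", "other"}]
print(overnight_companions(week, "me"))  # Counter({'partner': 6, 'other': 1})
```

Nothing here is sophisticated; the point is that once the raw location data is for sale, this kind of intimate inference is a few lines of code.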
All of this is out there
and it's a lot cheaper
than most of us would expect.
In 2023, our team at Duke published
a study by Joanne Kim
about data brokers and the sale
of people's mental health data.
This student contacted
a few dozen of these companies
asking to buy information
about people with depression,
about people with anxiety,
about people who might be dealing
with many mental health conditions.
We wanted to say,
"What's out there?
If I'm an advertiser, I'm a scammer,
I'm a pharmaceutical company
and I want to buy data
about mental health conditions,
can I do that?
Where do I get it from?
How much does it cost?"
And what she found
was pretty disturbing.
There were
about a dozen companies
that offered to sell her
data about people
with mental health conditions,
data about people
suffering from depression.
The specific prescriptions
that they might take for depression.
Do they have bipolar disorder?
Do they have
obsessive compulsive disorder?
Is this someone that has
a particular kind of anxiety?
When our team has asked
data brokers to buy data,
they're always trying to upsell you.
So you say, "I want
mental health data."
They'll say, "Great,
and we also have data on race
and age and net worth and zip code
and how many children
are in the home."
And so this was also our experience
in the mental health study.
Alongside what type
of anxiety someone had
or the particular antidepressant
they were on, you could also get
information about their finances,
about their location,
about their demographics
and pair that all together
in a package.
All of that together gives
a really fine-grained look
at a person,
all without them knowing it.
Often, these brokers who've sold us
really sensitive data on people
haven't even asked who we are,
they don't ask
what we're doing with it.
They don't ask why we want it.
They don't ask our intentions.
They get an e-mail and a credit card
and that's enough
to e-mail an Excel spreadsheet
with people's information.
You can buy data
about people's health conditions
for 10 or 20 or 30 cents a person.
So if I wanna pick a town
and buy data on everything they have
on the people in that town,
I might spend just a few hundred
or a few thousand dollars.
The aim of exploiting
all this data on us
is to make digital advertising
more effective
by displaying ads
that are relevant to us.
But this surveillance system
can also lead to personal data
becoming publicly available,
even when we least expect it.
I had been doing a lot of reporting
on the ways in which governments
are using advertising data.
One thing led to another.
I started looking at privacy impacts
to real people
of having this kind of data
available for sale.
While conducting my investigations,
I began looking into a company
called Near Intelligence.
Are consumers flying
or driving on vacation?
Are they staying in hotels,
campgrounds or a friend's home?
Are they shopping online,
in store or both?
Are they eating in or out?
That's where Near comes in.
We are the data storytellers.
Near is an Indian-based
data intelligence company.
It's one of many companies out there
that make data available about
the movement of phones
for sale to their customers.
Near has data
on more than a billion devices.
They have hundreds of companies,
generally advertising clients,
who use Near's data
to understand the movement
of devices
or the interests of consumers
and help target them with ads
based on their location.
Geofencing
is a common advertising technique.
It's essentially simply
drawing invisible lines on a map
and seeing what devices appear
within those lines.
So if you're trying
to reach people who like pizza,
geofence a pizza parlour.
To reach high school students,
you could geofence a high school.
If you're trying
to reach religious people,
you could geofence
a mosque or a church.
It's essentially an invisible fence
that you draw around a spot on Earth
and see what devices appear
within that shape.
Near has data
that can help you do that.
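The geofencing described above, the "polygonning" the Veritas Society site refers to, is at its core a point-in-polygon test: draw a shape on a map, then check which device pings fall inside it. A minimal sketch using the standard ray-casting algorithm (the fence coordinates and device IDs are invented for illustration, not taken from any real data set):

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is the point (lon, lat) inside the
    polygon given as a list of (lon, lat) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does the horizontal ray from the point cross this edge?
        if (yi > lat) != (yj > lat):
            x_cross = (xj - xi) * (lat - yi) / (yj - yi) + xi
            if lon < x_cross:
                inside = not inside
        j = i
    return inside

def build_audience(pings, fence):
    """pings: iterable of (device_id, lon, lat) observations.
    Returns the set of device IDs seen inside the fence,
    i.e. the 'audience' a broker could then sell for ad targeting."""
    return {dev for dev, lon, lat in pings
            if point_in_polygon(lon, lat, fence)}

# Hypothetical fence around one location, and two device pings.
fence = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
pings = [("device-a", 0.5, 0.5), ("device-b", 2.0, 2.0)]
print(build_audience(pings, fence))  # {'device-a'}
```

Real systems use far noisier GPS data and more robust geometry libraries, but the mechanism is this simple: an invisible shape, a stream of coordinates, and a growing list of IDs to target.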
And in doing that kind
of investigation into Near,
I had started to hear from people
close to the company
about this ad campaign
that had been run
using their data by
a conservative group in Wisconsin.
Veritas Society is
a conservative anti-abortion group
that aims to convince women
who are thinking about an abortion
not to do that.
And in this case, what they wanted
to do was run digital ads
and the best place to reach
those people was by trying to see
who was visiting abortion clinics.
And that's where
this data provider, Near, came in.
Their data was used
to target this particular campaign.
They were drawing geofences
around Planned Parenthood clinics
and the parking lots
next to Planned Parenthood clinics.
And so for women
who showed up at these clinics,
as they pulled into the parking lot,
that's when they would cross
this invisible fence
that had been drawn by a data broker
around these clinics.
That's when the data broker would
log them as visiting this place
and create
what they call an "audience".
In an archived old version
of the Veritas Society website,
they explain in pretty great detail
exactly how they ran this campaign.
It says,
"Utilising our advanced
Veritas Society digital technology,
otherwise known as 'polygonning',
we identify and capture
the cell phone IDs of women
that are coming and going
from Planned Parenthood.
We then reach these women on apps,
social feeds and websites
like Facebook,
Instagram and Snapchat
with pro-life content."
So that's an awful lot of data
they're using to target people.
All of that is because of the ways
advertising technology is built
to collect this kind of data
and serve us ads.
Imagine you're a woman who visited
a Planned Parenthood clinic.
If you were targeted
in this ad campaign
in the hours and days
after leaving the clinic
as you were on Instagram,
as you were on Snapchat,
on Facebook,
you might start to see ads
telling you there was a chance
to reverse the abortion procedure.
What those ads essentially said
was something like
"Took the first pill at the clinic?
It may not be too late
to save your pregnancy."
So in the 30 days
after these women's devices were
seen at an abortion clinic,
these ads would follow them
around the web.
Near, the data broker, has policies
against geofencing sensitive sites.
So at some point, they discovered
that Veritas Society
had been drawing
geofences or polygons
around abortion clinics
and told them that was not
an appropriate use of Near's data
and stopped the ad campaign.
Veritas Society's campaign lasted
two years,
disseminating over 14 million ads
without Near or the platforms
where the ads appeared noticing.
Meta, the parent company
of Instagram and Facebook,
said that the ads had been wrongly
classified as apolitical,
whereas content relating to abortion
should have been classified
as a political ad.
This is a 21st century version
of being outside clinics with signs.
The technology in this case was used
to target people seeking abortions,
but it could just as easily be used
for many other things.
The data is there
and groups all across
the political spectrum can use it
for whatever they like.
Thanks to the data shared
by our devices,
platforms have achieved
an unprecedented level
of precision
in their targeting.
This targeting power doesn't only
interest traditional advertisers.
With programmatic advertising,
you can target people.
Wherever they are on the internet,
if the right person for your ad
is there looking at that website
or opening that app,
you can reach them.
But there's always a downside,
another edge of the sword,
which is that if you are a scammer,
if you are a person
maliciously trying to target people,
then those same
powerful targeting abilities
to figure out the exact type
of person who will click on that ad
and pay for your product
or give their credit card number,
that is available
to scammers as well.
And so digital advertising
has revolutionised scamming.
It has enabled people
to scam at a massive scale.
They can run a lot of ads,
they can test them,
they can see which messages, often
false claims about celebrities
for example, resonate most
with people in a certain country
and be able to optimise to steal
as much money as possible.
Creating an opaque, complex system
with pinpoint targeting
is basically
a scammer's dream come true.
In the year of 2019,
I saw an ad on Facebook.
RETIRED RECEPTIONIST
It looked like it came
from a newspaper in Sweden.
It was a picture
of two Swedish celebrities.
They promised,
"If you pay 250 euros,
the money will increase."
I had seen these ads
sometimes before.
But this time,
I clicked on that link.
In the online world,
digital ads don't always
keep their promises.
Things that seem too good to be true
can have catastrophic consequences.
I'm Eric van den Berg.
I'm a Dutch journalist.
I do investigative stories in tech.
I think, like most people,
I saw these ads online all the time.
The most common one
was "Bitcoin makes celebrity rich".
And I was curious, I thought
this was quite an interesting story.
So I investigated to find out
who was really behind these ads.
So I visited the online forums
where they hang out
and I pretended to be interested
in working for them
to make a little extra money.
And it wasn't very difficult
to get people talking.
They're always looking
for new people to put
these dodgy ads on social media,
to do their dirty work.
I contacted them and we exchanged
Skype contact information.
So I got a call and
at the other end of the line
is this guy called Vladislav,
very good English
and he was based in Kyiv.
And then he said, "OK,
I'm gonna send you a package."
And he sent me a zipped file.
It contains 48 fake news articles
from 11 markets.
And the idea was
I would put them on social media
and each time I bring someone in
who gives them their credit card,
I would get paid 600 dollars.
Digital scams are often created
in multiple languages
and adapted to specific markets.
Each country's inhabitants see
ads with celebrities they know
and the targeting abilities
of the platforms give swindlers
the means to focus
on the most vulnerable people.
One of the reasons this works
is because
digital advertising is so cheap.
If you target a million people,
there's always gonna be
a couple of people
that are gonna fall prey
to this scam.
That's why it's so successful.
So let's say you're somebody
that's not aware of these ads,
you're scrolling down
your Facebook feed
and all of a sudden you see
a famous person from your country
who apparently has done
some sort of crazy investment
and gotten even more rich.
"This is interesting,
I'm gonna click it."
Then you're asked
to put in your personal data.
So your name, your phone number,
your email address.
Once you do that,
somebody will call you
and try to interest you in Bitcoin.
I think it took three, four hours
before I had a telephone call
from a man.
He explained how
they worked together with the banks
where you could have loans
which they would pay back
in two months.
I was told by him
that I could expect a lot of money.
He started to call me
several times a day
telling me we had work to do
seeking these loans from the banks.
I was stressed, but I didn't worry.
I believed him.
Strongly.
I was convinced.
I was totally clean,
so I had many loans easily.
I could see
my account on the platform.
The money was increasing every day.
It went up and up and up.
It was a lot of money.
But all of a sudden,
my account went
from millions down to zero.
It was gone.
I had invested 360,000 euros
and I lost everything.
One of the big problems
is that it becomes very difficult
for a consumer
scrolling on the internet
to spot the difference
between a scam and a genuine advert.
It's really easy for scammers
to use digital advertising.
All they have to do
is pay
for an advert
to be placed on a platform
and because of the lack
of regulation on those platforms,
what eventually happens is that
any form of criminal can create
a fake advert
that's going to encourage consumers
to engage with them.
And then they can put it anywhere
on the internet for anyone to see.
Fake e-commerce sites,
investment scams,
thefts of bank details.
All over the world,
police forces
specialising in cybercrime
are inundated with reports
of victims of online scams.
The size of the problem has become
so alarming
that the FBI issued
a public information message
advising web users
to use an ad blocker.
If you get caught
putting these ads up on Facebook,
the penalty
is your account gets taken down
so you can just start over
and try again.
We're still seeing
scam adverts all the time.
And even when we report
these adverts to the platforms,
they may disappear briefly,
then something else will reappear
in its place very quickly.
This is a big industry.
These people spend billions
on advertising every year.
It's not one person out there
putting these ads online.
It's an ecosystem.
Sometimes these ads
don't just appear on social media,
but you can see them, for instance,
on the website of your newspaper.
They've been seen
on the website of Le Monde,
Der Spiegel,
even the New York Times.
Really mainstream
and trusted news reporting websites.
Now the news outlets don't have
really anything they can do about it
because they don't have control
over who's placing ads there.
But we're seeing huge numbers of ads
that appear
to be legitimate news stories
that when you click through to them,
they're actually there either to try
and harvest information about you
or to encourage you to invest
or share details or money.
These scammers are paying
Google and Facebook.
The financial interests of big tech
run counter to policing this thing
too strictly.
They do something
to take these ads down.
It's not like they're just
letting it happen.
But you can ask the question,
are they doing all they could?
We believe platforms can control
this flood of scam advertising.
They'll take ads down
if they're reported,
but that's not stopping ads
from being shown in the first place
and that's where
the damage is being done.
We hear from people
with enormous psychological impacts.
Some people have said they've been
suicidal as a result of these scams,
they can't sleep at night,
they have nightmares.
So many people have told us
that they feel silly
or stupid or embarrassed.
I couldn't breathe,
I couldn't eat, I couldn't sleep.
It was devastating.
I had a lawyer.
She said, "Inger, you must prepare
for losing your house."
I was thinking, "No, no,
over my dead body.
I'm not going to leave my house."
But no one had any understanding for
my situation.
So I had to give them my house.
It was awful.
When I took down
the pictures of my grandchildren
from the wall,
that, no...
It destroys your life.
Whether at an individual
or collective level,
the question
is how to prevent the dangers
created by the online ads.
In Europe, a new legal framework
is on the horizon.
It's known
as the Digital Services Act.
Its aim is to make
the digital world safer
by tackling illegal content,
disinformation
and harmful online activities.
It's a wrap.
We have the political agreement
on the Digital Services Act.
Now it is a real thing.
Democracy is back,
helping us to get our rights
and to feel safe when we are online.
The European Commission wants to put
order into the digital economy.
Big tech firms
are firmly in the spotlight.
Whether from our streets
or from our screens,
we should be able to do
our shopping in a safe manner.
Whether we turn pages
or we just scroll down,
we should be able to choose
and to trust the news that we read.
A lot of things
are illegal in our society.
You cannot stand
in front of city hall
and hand out child pornography.
EXECUTIVE VICE PRESIDENT OF THE EUROPEAN COMMISSION
But that is not
the same thing online.
Who is to police the "city square"
when you are online?
So the purpose of the Digital
Services Act is to make sure
that what is illegal offline is also
illegal online and treated as such.
Tech companies do what we have seen
industries do time and time again
with regulation, which is at first
they are completely opposed.
They don't want any regulation.
It's about self-regulation, right?
This really great-sounding scenario
where we don't impose
any costs on companies
and they're really responsible
and they handle our data well.
Then as soon as they realise
that regulation is coming,
they change their tune
and then they say,
"We love regulation."
We have had these calls
from big tech for regulation.
But then when we come up with it,
I have this feeling, "Well,
not that kind of legislation.
Something else that you would
only do in the future."
If you say, "I'm Facebook.
I love a privacy law",
that might get you in the room.
Regulators are pushing
for more controls
and companies are heavily lobbying
against those controls happening.
Consumers can write in
with their complaints,
families can voice
how they've been scammed,
but you have
to do that loudly enough
to go up against millions
of dollars in corporate money
to try and stop those kinds
of regulations from happening.
What are you gonna do?
You're gonna stop using Google?
Stop using Facebook?
That's the problem.
There's no competition.
They capture the market.
Facebook used to have competition.
Instagram. It bought it.
It used to have competition.
WhatsApp. It bought it.
What are you gonna do?
Use Yahoo again?
With the European regulation
being put in place,
Brussels is becoming
a crucial battlefield.
Activists and internet giants know
that the results achieved here
could set the tone
for new rules all over the world.
We have a pretty packed schedule.
Tomorrow, we have
about four back-to-back meetings
and then we'll be moving on
to give a webinar.
They've asked us to speak at length
about the "Toxic Twitter" report
that we've just done.
They're really interested
in the way we calculated
how Twitter ads
are monetised compared
to mis- and disinformation actors.
This research that we did
proving that Twitter profits
from hate and disinformation
may not be replicable in future
because Twitter is moving fast
to make itself a more opaque,
less transparent platform.
And unless the EU forces disclosure,
we're gonna be muted.
We won't be able to do our jobs.
No one will be able to.
We've got
the Digital Services Act, right?
The job of the regulators is
to come up with the best regulations
possible to clearly lay out
what they're going to tolerate
and what they won't tolerate,
then to enforce them.
The platforms know that,
which is why in the last year
they've massively upped
their spending in Europe.
Of the top five firms spending
money on lobbying in Brussels,
four are now big tech:
Amazon, Facebook, Google and TikTok,
spending hand over fist in Europe
because they want
to protect their companies
by corrupting
the regulation-writing process.
My fear now
is that our job never stops
because we're gonna have
to watch the watchdogs.
Who guards the guardians?
We have to make sure that people
meant to guard the public interest
actually do it.
2016 is really the breaking point
between my old career
as just a marketer
and coming across this information
for the first time,
that ads were appearing
on Breitbart.
And then I never left.
We've never solved the problem,
so I've just retooled
my entire career
to dedicate myself to solving this.
Following Breitbart
over these past few years
has given us so much insight
into how bad actors work.
We've been able to scale our impact
in ways I never thought possible.
The ad tech industry is slated to be
the second largest criminal industry
after drug trafficking.
It's going to be
a disinformation tsunami.
The disinformation crisis is serious
and that looks very scary.
But I think the reason to hope is
this is not just a society problem,
it's not just a politics problem,
it's not even just
a community safety problem,
what we're dealing with
is a business problem.
The ad industry controls our world
just as much, if not more
than our government
at this point.
You're not powerless at all.
This entire system was built for you
in order to reach you,
in order to connect with you.
So if you as a user,
if you as a consumer go out
to the advertiser
and say, "This isn't working,
this is actually having
the opposite effect for me",
then there's nothing more important
in this world to the advertiser
than to correct that issue.
That's how we create change.
The money that funds
the internet has the power
to change the world
for good or for bad.
So the money that at the moment
has been a part of the problem
could be part of the solution
if advertisers start
to become more careful
about where their money is going.
Newspapers like the Daily Mail
or websites like Breitbart,
they can say what they want,
but the rest of us also have
the rights of free speech
and we have the right
to say "not with my money".
We know there's a problem,
but we don't know what it is.
And we're kind of terrified.
We're lying there in bed,
we're covering our eyes
with the duvet
and saying, "There's a monster,
but I don't know what it is."
And it's really important
that we have the courage
to take that duvet down
and understand what it is
that's causing those noises,
what it is
that's making us so scared
and then deal with it
because we need to build a movement.
It won't get better
until we make it better.
We are dependent
on these tech platforms, right?
We are dependent
on these advertising systems.
These approaches that suggest
that we can all just
turn our devices off and unplug
really ignore how entrenched
they are, right? We need them
for jobs,
we need them to apply for loans
and get banking
and access medical information.
So those risks
and those benefits exist entangled
and you have to figure out
how do we get the benefits
that we want out of this
while really making sure
we're not harming people
and we're not harming society
in the process.
We're not anti-technology.
I use all of these platforms,
I use this technology,
but in the same way I don't want
clothes made with sweatshop labour,
I don't want technology platforms
I use to communicate with family
and rely on to be harmful actors
in the broader
sociocultural ecosystem.
Public opinion polling has shown
time and again
that average people
all over the world
want to see tech accountability
because people don't want
the internet to be a harmful place.
They don't want scam ads,
they don't want content
that makes them feel unsafe.
The financial motivation
behind scammers,
behind disinformation peddlers,
behind a wide range
of malicious online actors,
that financial motivation
is what puts them in this business.
And so until the big players apply
transparency to their business,
this will always be
a murky black box of a system,
even as half a trillion dollars
flow through it every year.
Thanks to the economic system
of digital advertising,
the online world has changed
completely.
As a result, it's changed us too.
As individuals and as a society.
Will we be able to change
this system in turn?
OK, so what kind of ads
do you see?
I do get adverts for,
I mean, if I want to be honest,
cashmere jumpers because
it knows that I spend
far too much time browsing websites
with cashmere jumpers on them.
I would see ads for outdoor life.
I don't have a real outdoor life,
I work quite a lot,
but I have a dream
of an outdoor life
and I think
it has been sensed there.
I just bought a shower head
as the result of a Facebook ad.
Is it still working?
Yes, it's great,
I'm very happy with it.
But it's one thing
in return for hundreds of hours
lost looking at silly things.
I cover national security
and law enforcement,
so I'm actually pretty careful
with my device.
They get very strange
if you don't give them data.
So, for example, I see a lot of ads
for women's clothing,
all because they don't know
that much about me.
I constantly see
ads for phone chargers.
I must have bought one once
and since then
I've been inundated
with ads for phone chargers.
I must be a great customer
for phone chargers.
These days,
I've learned to train my algorithm
so I don't get all horrible things.
So on Instagram, for instance,
I get a lot of ads
for cat toys for my cat.
It's a nice break from my day job
when I'm in my militia profile
and I'm getting ads for body armour
and weapons holsters,
then I go home and I get to see
the ads for cat toys.
That's how I give myself
my mental break.
Senator, we run ads.
Every day,
without much thought, we accept
general terms and conditions
and cookies.
In doing so, we give websites,
apps and social media
access to our personal data.
They use it to offer us targeted ads
on topics that interest us.
This act seems banal and harmless,
but this customised advertising
is really the result
of a real revolution
caused by digital technology.
A revolution
with unexpected consequences.
Before digital ads,
if you wanted
to buy ads in a magazine,
if you wanted
to buy ads on a TV station,
you would at some point
be talking directly to the place
you wanted to buy ads from.
There would be
a salesperson at your newspaper
that would sell an advertisement.
They would say, "How big
do you want it to be?
When do you want it to appear?"
They'd work with you on that.
But we've really gone
far away from that.
In the 1990s,
the internet became more and more
present in our lives,
offering us
a space of freedom and sharing.
Quickly, it became a requirement
that everything online
be available for free.
And this extended over time.
As we got into the late 1990s
and early 2000s,
massive amounts of piracy for video
and music made people think
"I should be able to get my music,
videos and other things for free"
and it also created a necessity
for the creators of information
to figure out a business model
that could work in this environment.
And so naturally, advertising
is the one that would make sense.
When Google started,
they didn't know that
advertising would be
the primary business model.
There was
no business model for Google.
It just lost
lots and lots of money
for a very long period of time.
One thing they were trying
to figure out with advertising
is that you can't make it work
with a sales team,
a human group of people
that takes calls
and then puts up the ad
on the website
because Google, they get
about 99,000 searches a second.
Billions of searches
every single day.
And so the only way
to make it scalable
is to get rid of the human team
and really have a system which is
data driven, algorithm driven
and really moves at lightning speed.
We think about Silicon Valley,
you say, "Oh, computer programmers
working out of garages",
but lots of the people
who got involved early on,
they were former Wall Street people,
former ad sales people.
They worked
with a lot of economists
and people who had
previous experience on Wall Street
who really constructed
these types of markets.
The only difference is rather than
selling stocks or bananas, we sell
attention.
Shares of Google,
the company that makes the world's
most popular internet search engine,
went on sale to great fanfare today.
Google has doubled its profits
selling internet advertising.
Despite not expecting advertising
to be the business model of Google,
when they figured it out,
it was just this gold mine.
They made so much money
they instantly went
from a no name start up
to being one
of the richest companies.
They raised
1.7 billion dollars in cash
and made the stock of Google
as valuable as General Motors.
The template that Google created
influenced many other companies.
When Facebook got started,
they had nothing to innovate.
"We'll take
the Google business model,
figure out how it works here."
This new ad technology created
a whole new territory
for modern capitalism to conquer,
the attention economy.
Now, you don't buy
space in an outlet;
you literally just buy eyeballs.
Advertising became so successful
that it became very difficult
to create other business models
that were viable in this internet.
One result of that is that we assume
everything on the internet is free
because advertising has driven
the cost down to zero dollars.
You're not paying a dime
to use Facebook.
You're not paying a dime
to use Google's search engine.
There's an old saying,
"If you aren't the customer,
then you're the product."
We believe that we need to offer
a service that everyone can afford.
We're committed to doing that.
Well, if so, how do you sustain
a business model
in which users don't pay
for your service?
Senator, we run ads.
Behind the ads we see online
lies a complex story.
Whether they appear on a site
or in our social media feeds,
most of these commercial ads
are the result of a secret war
that the brands engage in
to capture web users' attention.
Every time you visit a website,
details that have been collected
about you
are sent by that website
into the open bidding market
so that advertisers who might want
to reach somebody like you
have a chance
to see that information,
see the website you're on and
place a bid to show an ad to you.
They are putting in the parameters
of "I wanna reach
men between 25 and 35
who might be interested
in buying a car
and who live in New York."
And then the system is supposed
to find those people
as they are looking on apps,
as they are visiting websites,
watching videos
and show the ads to them.
And what happens is there's
a process called real-time bidding
where algorithms representing
different media buyers will say,
"I'm willing to pay
a dollar for this person",
"Two dollars
for this person".
And it's literally a marketplace
where people competitively bid
for the value of your attention.
So if I am a very valuable consumer
that lots of people want to bid to,
it'll be more money to do so.
So one classic thing
is that it turns out
that buying ad space
for people who have iPhones
tends to be more expensive
than people who have Android
because people who have iPhones
tend to have more money.
So this is a more valuable form of
attention you're trying to capture.
This is a process that happens
in the blink of an eye.
The bids are taken,
the winner gets to place the ad
and show it in front of you.
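The auction described above can be sketched in a few lines of code. This is purely an illustrative toy, not any platform's actual system: the bidder names, targeting rules and prices are all invented, and real exchanges run auctions like this across thousands of bidders in well under a tenth of a second.

```python
# Toy sketch of a real-time bidding (RTB) auction, as described above.
# All bidders, targeting rules and prices are invented for illustration.

def run_auction(user, bidders):
    """Collect bids from every bidder whose targeting matches the user,
    then sell the ad slot to the highest bidder at the runner-up's
    price (a common 'second-price' auction design)."""
    bids = []
    for bidder in bidders:
        if bidder["targeting"](user):          # does this user match?
            bids.append((bidder["bid"], bidder["name"]))
    if not bids:
        return None
    bids.sort(reverse=True)
    winner = bids[0][1]
    # The winner pays the second-highest bid (or their own if unopposed).
    price = bids[1][0] if len(bids) > 1 else bids[0][0]
    return winner, price

user = {"age": 30, "city": "New York", "interests": {"cars"}, "device": "iPhone"}

bidders = [
    {"name": "CarBrand", "bid": 2.00,
     "targeting": lambda u: 25 <= u["age"] <= 35 and "cars" in u["interests"]},
    {"name": "Retailer", "bid": 1.00,
     "targeting": lambda u: u["city"] == "New York"},
    {"name": "GameStudio", "bid": 1.50,
     "targeting": lambda u: u["device"] == "Android"},  # no match for this user
]

print(run_auction(user, bidders))  # ('CarBrand', 1.0)
```

Note what the user "sees" of all this: nothing. The matching, bidding and pricing happen invisibly, which is exactly the opacity the speakers go on to describe.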
The systems that Google runs,
the systems that Facebook run,
they operate at a scale that is
really unheard of in human history.
There are billions and billions
and billions of ads
and ad requests at any given time
throughout the day,
overnight, around the world.
But the problem
is that the system is so opaque,
it is so complex,
it is so confusing even to people
who work in digital advertising
that nobody actually knows
what's going on.
And that is to me
an absolutely insane scenario
where you have roughly
about half a trillion dollars
transacting every year
in digital advertising
and people don't understand
where all the ads are going
and where all the money is going.
This complex automated system
is the central axis around which
the internet has evolved
into what we know today.
It may be a dream for advertisers,
but giving control
to algorithms is not without risk.
In 2016, I was just
a one-woman marketing team.
I was working
mostly in Europe at the time
and my boss suggested to me,
"Why don't we run
our first Google
ads campaign?"
I said, "OK."
I went into the Google Ads dashboard
and attempted to run
my first campaign
to see...
I was hoping
my ads would end up on like CNN
or New York Times
or Washington Post or something,
but then when I went to see
my ad placements,
I was surprised and shocked
to see none of that.
I saw a whole bunch of websites
that I'd never heard of.
There were all kinds of domains
that had weird numbers in them
that I knew that nobody I know
is visiting those websites.
I just filed that away as strange,
but also with the understanding that
we don't really know
where our ads are going.
A few months later, I visited
Breitbart.com for the first time.
Breitbart in 2016
was the biggest source
of disinformation in the country.
They'd been more influential
than Fox News
and the Washington Post combined.
They had more Facebook followers,
more engagement.
They played a huge role
in getting Trump elected in 2016.
The first thing that I saw
when I visited the website was ads.
I was just assaulted with
all these brands and advertisers.
Ads that were following me around.
So they're brands
that I shop with, am familiar with
and to see those ads
against that kind of content
was something
that was impossible to ignore.
These brands probably
don't wanna be here
and are probably running their ads
like I did a couple of months ago.
They're probably just
not checking their ad placements.
Due to their reliance on algorithms,
advertisers are totally disconnected
from the sites
on which their ads appear.
Due to this disconnection,
sites such as Breitbart
can receive ad revenue
without the brands realising.
Online advertising is a machine
for communicating blind.
I do remember
being in the room by myself
and I literally looked
right and left.
I was like, "Am I really
the only person seeing this?"
I wrote a blog post that day.
"Everybody, go and block
Breitbart now in your Google Ads.
If all of us do it at the same time,
we can put this outlet
out of business."
I co-founded Sleeping Giants
as a way to alert advertisers
that their ads were on this website
and we started
tweeting at companies
with screenshots
of their own ads on Breitbart
and asking them whether
they wanted their ads to be there.
And the brands were coming back
almost instantaneously.
Some of these guys were
coming back within minutes to say,
"We didn't know.
Thank you so much for alerting us
and we will take steps
to block it from our media buy."
And it grew super fast.
As we ran this campaign, over time,
we were able to collectively contact
thousands of brands.
Breitbart lost 90%
of its ad revenues
within the first three months
of our campaign.
They were on track to make
eight million dollars in ad revenue.
And...
we cut that off.
Within months, there were
other accounts popping up.
Just citizens of other countries
who saw what we were doing
and realised that they could
probably apply the same tactics
to their own local Breitbarts.
We realised that this was
something that we could take global.
This is Google's
location.
They are our main target for this.
OK, I feel like right here
is great, guys.
This is a good location.
Let's do it.
We are in New York City.
It's where Ad Week is taking place.
Google is also here at Ad Week.
They're one of the sponsors
of the event
and we're trying to get Google
to stop monetising disinformation
and to make
their publishers' data transparent.
Are you an advertiser?
Find out
where your ad money is going.
It's not going
where you think it's going.
So the plinko board is actually
a fun way for us
to engage advertisers on this issue.
So we want them to know
that they are unknowingly funding
disinformation and hate speech.
They put a coin into the board
and that represents their ad budget.
They wait to see
where their coin ends up.
Is it going to end up
on a good website
or is it going to end up on one
of these disinformation websites
that's spreading climate disinfo
and election disinfo
and all this other harmful stuff?
Hopefully, we'll be able to raise
awareness with the advertisers
and they will pressure
Google to take action.
But I think we can wrap it up now.
The automation
of online advertising has
real repercussions.
The way it works is at the heart
of what we now call
the "economy of disinformation".
One of the realities of false
and misleading information today
is that it has the potential
to be profitable
more than ever before in history
because it's easy to get
in the business of disinformation.
When we talk about
the "disinformation
economy",
most people think
of Facebook because
Facebook amplifies
and monetises disinformation,
but disinformation peddlers
don't make their money
on Facebook itself.
They make money when you click
on the article on Facebook
and go to their website.
That's where they run ads
and that's where
they basically cash in.
That is where the real money lies.
They are able to set up websites,
write their conspiracy theories
and their disinformation articles
and then receive Google ads
on their websites
and those Google ads are putting
dollars in their pockets.
Let's split Google into three parts.
Google has the search,
which has advertising
which is native to Google search,
so you can buy the search results.
Second bit is YouTube, which is big.
That's advertising that appears
and you can pay
to target people's eyeballs
and it will come before a video.
The third is,
what most people don't know about,
is Google's ads platform
that goes on other people's websites
and will serve
that advertising to you.
And then it will give
a bit of money to that content site.
90% of all websites carrying
advertising carry Google Ads.
Google and ad companies make it
very difficult for brands
to know exactly
where their ads appear.
So advertisers don't realise
that their ad dollars are going
onto these websites
that are spreading disinformation.
And so you have major corporations,
big brands,
buying ads on sites
with content about terrorism,
with hate content,
with disinformation.
And it's basically created
a whole business model
for people who would never have made
money from advertising before
to suddenly be able to earn
lots of money from advertising.
There are
so many countries now around
the world
where the arrival of Google Ads
has then been followed
by a huge proliferation of websites
whose core business model
is around hateful clickbait,
pushing out hundreds
of toxic inflammatory stories
targeting minority groups
which they then monetise
through online advertising.
I think the disinformation crisis
is one of the greatest threats
to humanity right now.
Democracies are under threat
because of disinformation.
COVID VACCINES MODIFY DNA
One of the really crazy things
we found in our investigation
is an article claiming that
COVID-19 vaccines change your DNA.
This is from a Serbian website.
We have an ad for Amazon Prime Video
and Discovery Plus,
as well as an ad
for Spotify Premium on it.
Creating disinformation can
absolutely be a job
and it can be a job paid to you
by the biggest brands in the world
thanks to Google.
It's a good business.
That's what's
so extraordinary about it
is that it encourages
low-quality content.
Why do you think it's so hard
for real journalism to prosper?
We have disinformation thriving.
It made more than
three billion dollars last year.
And meanwhile,
journalism is getting cut globally.
Newspaper ad revenue has been cut
by half in the last five years.
This is a crisis
where journalistic standards decline
in power and revenue,
and disinformation rises.
BREAKDOWN OF WORLDWIDE
ADVERTISING SPENDING
The traffic controllers
of ads and money and data
are deciding
who gets revenue and who doesn't.
Here's the thing about fake news,
it's displacing good news,
useful news,
well-researched news,
evidence-driven thinking,
considered opinions,
and that's a real problem
for a society
because it makes us
dumber as a society.
It's making our democracies weaker
and turning us into an idiocracy.
Clickbait is replacing thought.
3 SIMPLE TRICKS
FOR STAYING YOUNG
The new thing about clickbait
is that you have to click
on the title
to see what it's about
because that's
how the website gets paid.
When you write an article
for a normal newspaper,
the title must tell you
what it's about and ideally
tells you exactly
what you need to know.
Then you read the rest
of the article to find out about it.
Clickbait does precisely
the opposite.
It makes sure not to tell you
what you need to know,
so that you click on it
and generally end up disappointed.
THE NUMBER 1 CARB
YOU SHOULDN'T EAT
MAKE LOTS OF MONEY FAST
3 SIMPLE TRICKS
FOR STAYING YOUNG
EAT THIS FRUIT
AND LIVE AN EXTRA 20 YEARS
For these companies
to make a lot of money,
web users must spend lots of time
on their sites and social media.
The more time they spend there,
the more ads they see.
And the more ads they see,
the more money the company makes.
The aim is clear:
capture our attention
by any means possible.
Social media and the internet
in general are competing
with the world around us
for our attention.
They are winning
this competition.
Something that I've noticed
and we can all see
is that when you're
on public transport,
everyone is on their phone.
The only reason we don't notice
is we too are looking
at our phone
rather than the people around us.
Facebook's competition
isn't Twitter.
Facebook's competition
is spending time with your kids,
spending time with your wife,
reading a book.
They need to keep your eyeballs
engaged so they can pump ads to you.
One of the things
that keeps our eyeballs on a site
engaging with the content
is disinformation and hate.
Hyper-emotional states.
I'm Imran Ahmed.
I'm CEO and founder of the Centre
for Countering Digital Hate.
We're an organisation
that looks at the architecture
by which hate
and misinformation spread online,
how they affect the real world,
and we try and find
ways to disrupt that.
The algorithms pick it out.
They see the stuff
that gets the most reactions,
the most people staying
and watching for longer.
They think,
"That's where the money is."
You know, I think initially
it was an accident.
It was amoral.
There was no morality.
They weren't choosing
disinformation or hate.
But now,
if you look at the revelations
from whistleblowers,
if you look at the work
that my centre's done in showing
that there are super-spreaders
of disinformation,
their failure to act,
their failure to change
the algorithms, the business model.
That's where it stops being amoral
and it becomes immoral.
Tributes to the Labour MP Jo Cox
who's died after being stabbed
and shot in West Yorkshire.
She was 41,
married with two young children
and was elected to parliament
just over a year ago.
My friend and my colleague Jo Cox,
who was a Member of Parliament.
She was shot, stabbed
and beaten to death
by a white supremacist terrorist
radicalised online
in the middle of the EU referendum.
It just broke my heart
to see someone who had been fed
lies and disinformation
and decided that it was rational
to kill a young woman,
so yeah, I mean,
it's really personal to me.
When the guy killed her, he shouted,
"Britain First, death to traitors."
Britain First was the first
political movement in the UK
to achieve a million likes
on Facebook.
When it happened, I remember
my reaction was to say, "Who cares?
They've got a million likes.
They have clicks.
We've got members."
And I was wrong.
I was just wrong.
The main places
where information was being shared,
where people were deciding
on norms of behaviour and attitude,
negotiating what is true
and what is not,
were shifting
to these online spaces
and we didn't understand them.
We've made a deal with the devil.
The devil said to us,
"I can let you see
what your friends are up to.
I can let you be connected
with your family so you can
share with them, so you don't
actually have to call them
and it just makes things
more convenient for you."
For convenience, we've handed over
our data, our attention.
In 2016, when I said online harms
were causing offline harms,
I remember being treated
like I was crazy.
People would laugh at me.
I used to get asked the question,
"Yeah, but this is online.
How does this affect
the real world?"
But after 6 January,
I don't think anyone's asking
that question any more.
They got ten people
trying to stop us.
6 JANUARY 2021
ATTACK ON THE CAPITOL
WASHINGTON DC, USA
It's time that somebody did
something about it.
And Mike Pence,
I hope you're gonna stand up
for the good of our constitution
and for the good of our country
and if you're not, I'm gonna be
very disappointed in you,
I will tell you right now.
Facebook's algorithms
were amplifying harmful content
that led to 6 January.
They were showing
groups to people
that were interested
in the "stop the steal" narrative
and those groups were able
to catch on fire
and grow in hours
to hundreds of thousands
because of their algorithms
and their recommendation systems.
Hey!
F*cking hey, man.
Why is this happening?
Why are we here?
The truth is, social media companies
discovered prioritising hate,
misinformation, conflict
and anger is highly profitable.
It keeps users addicted
so they can serve them ads.
CCDH's research has documented
bad actors causing harm,
but also bad platforms encouraging,
amplifying and profiting
from that harm.
Leading up to the insurrection
and after the insurrection,
I've been using a dummy profile,
a sock puppet profile
that solely joined militia groups
and engaged with extremist groups
on the platform
and Facebook,
following the insurrection,
not only pushed in the feed
an array of false election claims,
but pushed those
alongside ads for essentially
military gear-style materials.
Body armour, bulletproof vests,
specific holsters,
sights to make
your weapon fire more accurately.
This is exactly the kind of material
we saw Facebook pushing to users
in the wake of 6 January.
We also looked at the ad interests
the company was using
to target this individual.
If you get on your Facebook profile
and click
through a whole layer of things,
you can eventually get
to why you're being shown that ad
and it'll come up with the interests
you're being targeted with.
And one of the ones this profile
was tagged with was "militia".
Months earlier, in August 2020,
Facebook made a big public show
of the fact that they had
banned militia
and domestic extremist movements,
but not only
were those movements operating,
there was
an entire ad interest category
that could be used to target
militia for the company to profit.
So, as far as we can tell
from the outside,
Facebook's algorithm
is single-minded
and nearly sociopathic.
It is designed to do one thing:
to maximise engagement,
to maximise activity,
and it's going to find the content
and promote the content
that does the best job of that.
And in many cases,
egregious types
of content get amplified
many, many times more
than they would in the real world.
And just as important,
in the real world,
these egregious types of speech
would be met with counterspeech.
Algorithmic amplification separates
the counterspeech
from the damaging speech
in a systematic way
that doesn't happen
in the real world.
I'm Guillaume Chaslot. I studied
at an engineering school in Lille,
then did a thesis
on artificial intelligence.
I was hired by Google
and worked on the recommendations
of YouTube.
And we tried to make
certain recommendations,
strategy A versus strategy B
and looked at whether strategy A
was more effective,
meaning did it keep the user
on the platform for longer,
than strategy B.
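The experiment Guillaume describes is a standard A/B test on watch time. This sketch is illustrative only, with invented numbers, but it shows the decision rule: whichever strategy keeps users watching longer on average wins, regardless of what the recommended videos actually contain.

```python
# Toy A/B test, as described above: compare two recommendation
# strategies by mean session watch time. All numbers are invented.
from statistics import mean

# Minutes watched per session by users in each experimental group.
strategy_a = [34, 41, 52, 60, 47]   # e.g. recommends more of the same
strategy_b = [22, 30, 25, 28, 31]   # e.g. recommends diverse viewpoints

def pick_winner(a, b):
    """Ship whichever strategy yields more watch time on average.
    Note what is NOT measured: accuracy, diversity, user wellbeing."""
    return "A" if mean(a) > mean(b) else "B"

print(mean(strategy_a), mean(strategy_b), pick_winner(strategy_a, strategy_b))
```

Under this metric, a strategy that recommends one conspiracy video after another looks strictly better than one that broadens a viewer's horizons, which is exactly the tension Guillaume ran into.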
Everything we do online
is examined carefully
by artificial intelligence.
Every click,
every scroll, every pause,
every minute of video that we watch
feeds the algorithms
that customise the recommendations
displayed on our screens.
I was on a bus
with someone who was watching
one conspiracy theory after another.
He said, "But there are
so many videos like that.
There are so many videos saying it
that it must be true."
That's when I realised
that it's not the arguments
in the videos that are convincing,
it's the repetition of these videos.
The repetition
of these videos
was exactly what we were doing
at Google when we were trying
to make him watch
as many videos as possible.
So, in fact, he was convinced
by the way
the algorithm itself works.
So, when I saw that,
I said to myself,
"I have to see if it's just him
seeing conspiracy theories
or if it's the YouTube algorithm
that's recommending
conspiracy theories to everyone."
In 2016, I started creating
AlgoTransparency.
I made a robot that goes on YouTube
and follows the recommendations.
I discovered
that the YouTube algorithm
for recommendations does have
a huge tendency to recommend
these conspiracy theories
as they generate watch time.
Like this person who watched them
for six hours straight.
It's a gold mine,
so the algorithm will promote
this type of video.
In my work at Google,
I tried to suggest
content recommendation systems
that allowed the user
to discover new content
and different points of view.
I realised
that whenever I suggested this,
and not only me,
but other Google engineers too,
Google wasn't interested.
My manager said,
"Be careful, I wouldn't continue
working on that if I were you."
For six months, I followed
his advice and then stopped.
When I started working on it
again, I got fired.
Conspiracy theories are driven
by something in our psychology
called epistemic anxiety.
It is that sense
of not just not knowing the answers,
it's not knowing
how to find the answers.
But then conspiracy theories
never satiate epistemic anxiety.
So that's another finding.
That they never fulfil that
desperate yearning for certainty.
And for the companies
that monetise that content,
they see it as an opportunity.
One of our research reports,
"Malgorithm",
proved that on Instagram,
if you liked
Covid conspiracy theories,
it would give you QAnon,
it would feed you anti-Semitism
through the recommendation algorithm
because it knew
the way that our brains work.
Based on trillions of clicks,
billions of people worth of data,
they knew
that core psychological insight.
They're constantly feeding
and all the time they're feeding
and feeding and you're clicking
and you're going and looking
and scrolling, every three posts,
there's an ad, money.
Every three posts,
there's another ad.
You become worth 100 billion dollars
by hacking our brains,
the worst tendencies in our brains.
Imagine that you're driving
down the highway,
you see that traffic is slowing
near to a stop and as you go past
you see that there was
a really bad accident.
It's off the road,
but everybody is slowing down
to see what happened.
It's a car crash.
You don't want to look,
but you can't look away.
Imagine if there were a bunch of ads
popping up around that car accident.
That's what we see
with these high-engagement, harmful
and often misinformation posts
on the platform.
You're gonna have
people commenting, engaging,
reacting with an angry face
or a thumbs up
and all of that engagement
just makes that post more valuable,
it lifts it up in people's feeds
and with that, advertising
that will be centred around it.
As you pause and slow
and look at that on your feed,
not only are you gonna get ads
around that harmful activity,
but that activity is gonna
get pushed in other feeds
because the platform knows
you're slowing down to look at it.
It's really concerning to think
about how many people are influenced
by the combination of ad activity
and harmful information that's
being pushed to them in their feeds.
The bad is more powerful
than the good.
We react more strongly
to negative stimuli
than positive stimuli.
We react more strongly
for good reasons,
as programmed by evolution.
So how do websites
and social media hold our attention?
They use negative stimuli,
they use fear,
they use anger,
they use scandal,
they use disgust.
It's the economic logic of a site
that wants us to spend time on it,
to get us to enter
this spiral of negative content.
More than
in any other line of business,
companies that provide online ads
must continually expand
their portfolio of clients
and when recruitment reaches
saturation point in one market,
they must explore new horizons.
But for someone to be able
to use social media,
there is one fundamental condition.
They must have internet access.
There are millions of people
across the world who lack access
to the internet or can't afford it.
And Mark Zuckerberg came up
with the idea
that you have to get people
who don't have internet online.
Facebook launched a program
that it called Internet.org
in countries like the Philippines,
Indonesia, Brazil.
Facebook was trying to get the rest
of the developing world online.
It partnered with mobile providers
in different countries
to offer Facebook for free.
Zero-rated data.
So anybody could use
these platforms and the internet
without incurring
very expensive data costs.
This was revolutionary.
And as part of that plan,
you could access Facebook for free
and a select number of apps,
usually half a dozen,
maybe a weather app,
maybe a job app.
And of course other Facebook apps,
Instagram, WhatsApp.
When you hear the phrase
"Facebook is the internet",
in most of the world,
this is what they mean.
There was no outside internet
beyond Facebook.
That was the universe
people could search.
You couldn't go to Google to find
if something was true or not.
So, when you pair that power
and that narrow internet with
digitally illiterate populations
and a wealth
of misinformation and harm
that is not being moderated
in foreign languages,
you have complete disaster.
Hello and welcome
to Access Asia.
Coming up, Amnesty International
accuses Facebook
of exacerbating
human rights violations in Myanmar.
It claims the social network
amplified anti-Rohingya content.
In Myanmar, Facebook at the time
had no Burmese-speaking moderators.
It launched this Internet.org
free basics programme.
Lots of people got online
and tonnes of misinformation
and hate speech just proliferated,
calling for lynchings,
essentially sparking a genocide.
The UN found Facebook partially
responsible for that activity
and around 2015, 2016,
Facebook quietly ended
that programme in Myanmar.
But what this did was allow
Facebook to create and collect
billions more data points
on the rest of the world
and push ads anywhere,
particularly elections ads
in countries like India
where people who had never before
really been mapped or been online,
now the government had data points
of people they could push ads to.
And we saw the same
in Kenya, in Ethiopia,
across the Middle East
in countries like Iraq.
You see this
boiling over into even more serious
real-world harms.
These free basics programmes
were essentially free internet,
but Facebook's version
of the internet,
which is an unmoderated,
disinformation-filled onlinescape.
Senator, what's happening
in Myanmar is a terrible tragedy
and we need to do more.
- We all agree with that.
- OK.
But...
UN investigators have blamed you,
blamed Facebook
for playing a role in the genocide.
We all agree it's terrible.
How can you dedicate and will you
dedicate resources to make sure
such hate speech is taken down
within 24 hours?
Yes, we're working on this.
And there are three specific things
that we're doing.
One is we're hiring dozens more
Burmese-language content reviewers
because hate speech
is very language specific.
It's hard to do it without people
who speak the local language
and we need to ramp up
our effort there dramatically.
Second is we're working
with civil society in Myanmar
to identify specific hate figures,
so we can take down their accounts
rather than pieces of content.
And third is we're standing up
a product team
to do specific product changes
in Myanmar
and other countries that may have
similar issues in the future
to prevent this from happening.
The social media giants claim
that they are investing heavily
in identifying
and preventing
the dissemination of hate speech.
A combination of human moderators
and artificial intelligence tools
are being used to monitor
the content of posts,
whether or not they are ads.
But do these filters really work?
For the last three years,
we've been investigating
the way that social media companies
are perpetuating human rights abuses
and undermining democracy
around the world.
You can't talk
about human rights
or any of the major issues
that are affecting our world
without looking at the way we're
consuming and sharing information
because it has such a major impact.
So we started looking
at the Kenya election a few months
before the election
actually happened
and we were interested
in the election
because it was
quite a tightly fought race
and also because Kenya has
a really sad history of violence
before and after elections.
We decided to investigate
how well these platforms
were able to detect hate speech.
We investigated Facebook
in this case
and we tested ads that were inciting
violence ahead of the election
in both Swahili and in English.
According to Meta, every ad
published on Facebook
is examined at multiple levels
before publication.
This meticulous analysis is intended
to ensure that advertisers respect
Meta's rules and standards.
Among other things,
the platform prohibits profanity,
excessive nudity
and misinformation in ads.
Publishing ads on platforms
is really easy.
They want as many people
to be publishing ads as possible.
We find examples
of hate speech and disinformation
and then we turn them
into very simple ads.
We choose
this horrible content and think
surely this time
the platforms are not
gonna approve it.
It's so obvious or it's so extreme
or it's a crucial moment
the platforms say they care about.
We then upload it onto the platforms
and then we have to wait and see
if the platforms approve it.
And then we get these notifications
saying they've been approved
and your heart just sinks that
even in these dangerous contexts,
they're still happily approving
content that's
inciting violence that could
be putting real lives at risk.
And we expected the English ads
to be picked up immediately
and the Swahili ads
to just go straight through,
but that wasn't what happened.
All of the ads went through,
but first of all, a number
of the English ads were picked up
due to spelling mistakes.
And so the AI
was able to detect that
and once we corrected
those spelling mistakes,
they were all approved
and put through.
And then we reached out
to Facebook
to find out what they had
to say about this
and they said it was a mistake,
they shouldn't have been approved.
So we gave them another chance
and we tested some more ads
thinking that at this point,
they've been alerted
to the fact that this is a big risk,
it was two weeks
before the election,
they should be heavily resourcing
content moderation in this country.
And once again, all of the ads
were immediately approved.
We put a publication date
in the future.
So what we do is we pull the ads
just before they get published
because obviously,
as a human rights NGO,
we don't want to be putting out
more hate speech or disinformation.
As we know
with all of this type of content,
hatred and the organising
of this violence starts online
and then it very, very quickly
spreads into the real world.
We saw that after the US elections,
we've also seen that in Brazil,
we've seen in Myanmar, in Ethiopia,
people are literally being killed
because of hatred
that starts by spreading online.
Through tests
in different languages and places,
the investigations
by Global Witness showed
that content moderation tools
often struggle to identify
obvious examples of hate speech.
If you talk to any
of these companies, Facebook,
Instagram, Twitter, YouTube,
they will tell you
in a canned statement
that they have brand safety policies
and that "x" content is not allowed
on their platform.
They use the same statements
over and over again,
but just because it's not allowed,
doesn't mean it doesn't exist,
because they're not enforcing
their policies.
And when they're not enforcing
their policies,
those brands are threatened
because their ads are showing up
against harmful content.
There's a playbook
for big tech companies
and they learned it from Exxon,
from Philip Morris,
from every other bad actor
in the corporate sphere.
Deny.
So that's the first thing.
So, first of all, say, "No,
are you sure?
No, I don't think it's our ads.
It's probably someone else's.
That must have crept through."
Second, deflect and say,
"Look, it's
a really complicated system.
The ad stack's complicated.
We don't really know
how it works ourselves."
The third thing is delay.
Right now, most of them can
no longer deny there's a problem
because even
counterterrorism professionals,
senators, congresspeople,
members of parliament
are contacting us and saying,
"We know this is a problem.
What do we do about it?"
So now it's delay.
And they're delaying
and how do you delay?
PR, spin, lies,
a little bit of evasion,
take your time getting back
to someone in an e-mail.
It takes us weeks to get
an answer on something.
Why is there a problem?
It's because this is profitable.
If it wasn't profitable any more,
they wouldn't do it.
Spread lies, spread hate
and you can make money.
As you know, next week
we'll publish a new report
about how Twitter is profiting
from hate and disinformation.
We've been calculating
how much money Twitter makes
from accounts that spread hate
against the LGBTQ+ community.
So we've got examples
of companies' ads appearing next to
this really dangerous anti-trans,
anti-LGBTQ+ content.
Which companies?
We identified ads
from five big brands.
I've got Fortune magazine,
Kindle, Disney, NBA and T-Mobile.
All of them appear
next to anti-LGBTQ+ content.
We know a small number
of accounts are responsible for it.
They make about 6.4 million dollars
for Twitter in ad revenue.
Our hope is that by exposing that
to a wider audience,
it will create political, economic
and social reputational costs.
With Elon Musk though,
he is desperate for revenue
and so he's willing to accept
this kind of hateful rhetoric
if he can make a buck out of it.
While many organisations
and advertisers are begging
for more investment
in content moderation,
social media companies underline
the need to strike a balance
that preserves free speech.
Companies like X, formerly Twitter,
say that their users have the right
to express themselves freely
as long as they stay
within the law.
A good sign as to whether
there is free speech is,
is someone you don't like allowed
to say something you don't like?
And if that is the case,
then we have free speech.
It's annoying when someone you don't
like says something you don't like.
That's a healthy functioning
free speech situation.
Elon Musk picking a new legal fight
with a nonprofit that criticises X
formerly known as Twitter.
What's your response
to his attorney saying you're wrong?
He has been casting around
for a reason to blame us
because we all know
that when he took over,
he put up the bat signal
to racists, to misogynists,
to homophobes, to anti-Semites
saying Twitter
is now a free-speech platform.
He welcomed them back on.
He reinstated accounts suspended
for spreading that kind of stuff.
And now he's surprised
when people are able to quantify
that there has been
a resulting increase in hate
and disinformation on his platform.
The reason Elon's annoyed
is because the work
that we did to show that hate
had exploded
after he took over the platform
led 50% of advertisers
to leave the platform.
What we do is look
at hard data and prove
that these people are unleashing
a tidal wave
of hate and disinformation
that is scarring our world.
It's become the primary vector
for proselytising hate
and disinformation online.
The reason he's suing me
is because he wants me to be scared
to say anything about him.
So we are now in a dogfight
with a guy who keeps escalating.
So you have never looked
at a product online.
All you have done
is mention it in conversation
with your phone nearby
and magically,
in minutes or hours or days,
there is an ad for that exact brand
showing to you
and you are creeped the fuck out.
The question is,
is your phone and is Facebook
and Google etc. listening in
to you to target ads to you?
As far as all
of the research on this shows,
as far as everything
that's been gathered, no,
they are not listening to you
to target ads to you.
And I know people refuse
to accept this and believe it
because there have been times
where you have never mentioned
this brand before in your life
or your friend mentions it
and all of a sudden
it's in your Instagram feed
and it seems creepy and it must be
that they're listening to you,
but there is no evidence
that's ever backed that up
that that is what's going on.
And this has absolutely
been looked into in detail.
The answer to why this happens
is that they have so many
other data points on you
that the likelihood of that brand
coming up in conversation
is roughly the same
as the likelihood of you being
targeted with an ad for that brand.
They're saying they know you
and your friend group so well,
you think they listen to you.
Companies that live
off online advertising
rely on the data of their users
to attract advertisers
and make money.
This has led to the creation
of a brand-new market
based on the commercialisation
of this data.
So thousands of companies compete
to collect,
organise and make sense
of the huge quantities of data
that our devices generate each day.
Data brokers are a global industry.
Pretty much if you
live in a country with
smartphones,
where there are mobile apps,
a data broker in some form
is collecting or very
soon is going to try
to collect data about you.
The challenge
of understanding data brokers
is that it's really opaque.
It's not a transparent industry.
And so part of what our project
at Duke University does
is we actually go out and buy data
from these data brokers.
You can buy pretty much anything
you want from a data broker.
You have lists of young people
who like fast cars or retirees
who really wanna go on a cruise.
There are data sets on truck drivers
and avid exercisers.
Do you wanna reach single mothers
between the ages of 30 and 35?
We have that list.
Do you wanna reach people
who just graduated from college
and who really like drinking coffee?
We have that list too.
Data brokers sell
demographic information,
so race, religion, gender,
sexual orientation, marital status,
your favourite food
or your phone location
and tracking where people go
and shop and sleep at night.
If you have one smartphone
that's next to another smartphone
on a nightstand
six nights out of the week
and the seventh, it's near
a different phone, it's an affair.
There's all kinds of things
that are intimate to people's lives
that you can find out
if you watch them eat and sleep
and drive and walk
and sit down 24 hours a day.
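The nightstand example described above boils down to counting the nights on which two devices ping from the same spot. A minimal sketch of that co-location heuristic, with entirely made-up device IDs, coordinates and distance units:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical pings: (device_id, night_index, x, y) on a flat grid.
# Real brokers would use lat/lon and a proper geographic distance.
PINGS = (
    [("phone_a", n, 0.0, 0.0) for n in range(7)]
    + [("phone_b", n, 0.001, 0.0) for n in range(6)]  # same nightstand 6 nights
    + [("phone_c", 6, 0.001, 0.0)]                    # a different phone on night 7
)

def colocated_nights(pings, threshold=0.01):
    """Count, per device pair, how many nights both devices
    were seen within `threshold` distance of each other."""
    by_night = defaultdict(dict)          # night -> {device: (x, y)}
    for dev, night, x, y in pings:
        by_night[night][dev] = (x, y)
    counts = defaultdict(int)
    for devices in by_night.values():
        for (d1, p1), (d2, p2) in combinations(sorted(devices.items()), 2):
            dist = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
            if dist <= threshold:
                counts[(d1, d2)] += 1
    return dict(counts)

counts = colocated_nights(PINGS)
# phone_a and phone_b share six nights; phone_a and phone_c share one.
```

Nothing in this sketch needs anything more than raw location pings, which is exactly why this kind of inference is so cheap to run at scale.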
All of this is out there
and it's a lot cheaper
than most of us would expect.
In 2023, our team at Duke published
a study by Joanne Kim
about data brokers and the sale
of people's mental health data.
This student contacted
a few dozen of these companies
asking to buy information
about people with depression,
about people with anxiety,
about people who might be dealing
with many mental health conditions.
We wanted to say,
"What's out there?
If I'm an advertiser, I'm a scammer,
I'm a pharmaceutical company
and I want to buy data
about mental health conditions,
can I do that?
Where do I get it from?
How much does it cost?"
And what she found
was pretty disturbing.
There were
about a dozen companies
that offered to sell her
data about people
with mental health conditions,
data about people
suffering from depression.
The specific prescriptions
that they might take for depression.
Do they have bipolar disorder?
Do they have
obsessive compulsive disorder?
Is this someone that has
a particular kind of anxiety?
When our team has asked
data brokers to buy data,
they're always trying to upsell you.
So you say, "I want
mental health data."
They'll say, "Great,
and we also have data on race
and age and net worth and zip code
and how many children
are in the home."
And so this was also our experience
in the mental health study.
Alongside what type
of anxiety someone had
or the particular antidepressant
they were on, you could also get
information about their finances,
about their location,
about their demographics
and pair that all together
in a package.
All of that together gives
a really fine-grained look
at a person,
all without them knowing it.
Often, these brokers who've sold us
really sensitive data on people
haven't even asked who we are,
they don't ask
what we're doing with it.
They don't ask why we want it.
They don't ask our intentions.
They get an e-mail and a credit card
and that's enough
to e-mail an Excel spreadsheet
with people's information.
You can buy data
about people's health conditions
for 10 or 20 or 30 cents a person.
So if I wanna pick a town
and buy data on everything they have
on the people in that town,
I might spend just a few hundred
or a few thousand dollars.
The aim of exploiting
all this data on us
is to make digital advertising
more effective
by displaying ads
that are relevant to us.
But this surveillance system
can also lead to personal data
becoming publicly available,
even when we least expect it.
I had been doing a lot of reporting
on the ways in which governments
are using advertising data.
One thing led to another.
I started looking at privacy impacts
to real people
of having this kind of data
available for sale.
While conducting my investigations,
I began looking into a company
called Near Intelligence.
Are consumers flying
or driving on vacation?
Are they staying in hotels,
campgrounds or a friend's home?
Are they shopping online,
in store or both?
Are they eating in or out?
That's where Near comes in.
We are the data storytellers.
Near is an India-based
data intelligence company.
It's one of many companies out there
that make data available about
the movement of phones
for sale to their customers.
Near has data
on more than a billion devices.
They have hundreds of companies,
generally advertising clients,
who use Near's data
to understand the movement
of devices
or the interests of consumers
and help target them with ads
based on their location.
Geofencing
is a common advertising technique.
It's essentially simply
drawing invisible lines on a map
and seeing what devices appear
within those lines.
So if you're trying
to reach people who like pizza,
geofence a pizza parlour.
To reach high school students,
you could geofence a high school.
If you're trying
to reach religious people,
you could geofence
a mosque or a church.
It's essentially an invisible fence
that you draw around a spot on Earth
and see what devices appear
within that shape.
Near has data
that can help you do that.
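The "invisible lines on a map" described above come down to a standard point-in-polygon test: a device is in the audience if its ping falls inside the fence. A minimal sketch using the classic ray-casting method, with hypothetical coordinates and device names:

```python
def in_geofence(point, polygon):
    """Ray-casting point-in-polygon test: cast a ray to the right
    and count how many polygon edges it crosses (odd = inside)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical fence drawn around one spot on the map (a unit square).
fence = [(0, 0), (1, 0), (1, 1), (0, 1)]
device_pings = {"phone_1": (0.5, 0.5), "phone_2": (2.0, 2.0)}

# Devices whose pings fall inside the fence form the "audience".
audience = [dev for dev, p in device_pings.items() if in_geofence(p, fence)]
# audience == ["phone_1"]
```

Once a device crosses the fence, its ID goes into an audience list that ad platforms can then target directly, which is the mechanism the Veritas Society campaign relied on.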
And in doing that kind
of investigation into Near,
I had started to hear from people
close to the company
about this ad campaign
that had been run
using their data by
a conservative group in Wisconsin.
Veritas Society is
a conservative anti-abortion group
that aims to convince women
who are thinking about an abortion
not to do that.
And in this case, what they wanted
to do was run digital ads
and the best place to reach
those people was by trying to see
who was visiting abortion clinics.
And that's where
this data provider, Near, came in.
Their data was used
to target this particular campaign.
They were drawing geofences
around Planned Parenthood clinics
and the parking lots
next to Planned Parenthood clinics.
And so for women
who showed up at these clinics,
as they pulled into the parking lot,
that's when they would cross
this invisible fence
that had been drawn by a data broker
around these clinics.
That's when the data broker would
log them as visiting this place
and create
what they call an "audience".
In an archived old version
of the Veritas Society website,
they explain in pretty great detail
exactly how they ran this campaign.
It says,
"Utilising our advanced
Veritas Society digital technology,
otherwise known as 'polygonning',
we identify and capture
the cell phone IDs of women
that are coming and going
from Planned Parenthood.
We then reach these women on apps,
social feeds and websites
like Facebook,
Instagram and Snapchat
with pro-life content."
So that's an awful lot of data
they're using to target people.
All of that is because of the ways
advertising technology is built
to collect this kind of data
and serve us ads.
Imagine you're a woman who visited
a Planned Parenthood clinic.
If you were targeted
in this ad campaign
in the hours and days
after leaving the clinic
as you were on Instagram,
as you were on Snapchat,
on Facebook,
you might start to see ads
telling you there was a chance
to reverse the abortion procedure.
What those ads essentially said
was something like
"Took the first pill at the clinic?
It may not be too late
to save your pregnancy."
So in the 30 days
after these women's devices were
seen at an abortion clinic,
these ads would follow them
around the web.
Near, the data broker, has policies
against geofencing sensitive sites.
So at some point, they discovered
that Veritas Society
had been drawing
geofences or polygons
around abortion clinics
and told them that was not
an appropriate use of Near's data
and stopped the ad campaign.
Veritas Society's campaign lasted
two years,
disseminating over 14 million ads
without Near or the platforms
where the ads appeared noticing.
Meta, the parent company
of Instagram and Facebook,
said that the ads had been wrongly
classified as apolitical,
whereas content relating to abortion
should have been classified
as a political ad.
This is a 21st century version
of being outside clinics with signs.
The technology in this case was used
to target people seeking abortions,
but it could just as easily be used
for many other things.
The data is there
and groups all across
the political spectrum can use it
for whatever they like.
Thanks to the data shared
by our devices,
platforms have achieved
an unprecedented level
of precision
in their targeting.
This targeting power doesn't only
interest traditional advertisers.
With programmatic advertising,
you can target people.
Wherever they are on the internet,
if the right person for your ad
is there looking at that website
or opening that app,
you can reach them.
But there's always a downside,
another edge of the sword,
which is that if you are a scammer,
if you are a person
maliciously trying to target people,
then those same
powerful targeting abilities
to figure out the exact type
of person who will click on that ad
and pay for your product
or give their credit card number,
that is available
to scammers as well.
And so digital advertising
has revolutionised scamming.
It has enabled people
to scam at a massive scale.
They can run a lot of ads,
they can test them,
they can see which messages, often
false claims about celebrities
for example, resonate most
with people in a certain country
and be able to optimise to steal
as much money as possible.
Creating an opaque, complex system
with pinpoint targeting
is basically
a scammer's dream come true.
In the year of 2019,
I saw an ad on Facebook.
RETIRED RECEPTIONIST
It looked like it came
from a newspaper in Sweden.
It was a picture
of two Swedish celebrities.
They promised,
"If you pay 250 euros,
the money will increase."
I had seen these ads
sometimes before.
But this time,
I clicked on that link.
In the online world,
digital ads don't always
keep their promises.
Things that seem too good to be true
can have catastrophic consequences.
I'm Eric van den Berg.
I'm a Dutch journalist.
I do investigative stories in tech.
I think, like most people,
I saw these ads online all the time.
The most common one
was "Bitcoin makes celebrity rich".
And I was curious, I thought
this was quite an interesting story.
So I investigated to find out
who was really behind these ads.
So I visited the online forums
where they hang out
and I pretended to be interested
in working for them
to make a little extra money.
And it wasn't very difficult
to get people talking.
They're always looking
for new people to put
these dodgy ads on social media,
to do their dirty work.
I contacted them and we exchanged
Skype contact information.
So I got a call and
at the other end of the line
is this guy called Vladislav,
very good English
and he was based in Kyiv.
And then he said, "OK,
I'm gonna send you a package."
And he sent me a zipped file.
It contains 48 fake news articles
from 11 markets.
And the idea was
I would put them on social media
and each time I bring someone in
who gives them their credit card,
I would get paid 600 dollars.
Digital scams are often created
in multiple languages
and adapted to specific markets.
Each country's inhabitants see
ads with celebrities they know
and the targeting abilities
of the platforms give swindlers
the means to focus
on the most vulnerable people.
One of the reasons this works
is because
digital advertising is so cheap.
If you target a million people,
there's always gonna be
a couple of people
that are gonna fall prey
to this scam.
That's why it's so successful.
So let's say you're somebody
that's not aware of these ads,
you're scrolling down
your Facebook feed
and all of a sudden you see
a famous person from your country
who apparently has done
some sort of crazy investment
and gotten even more rich.
"This is interesting,
I'm gonna click it."
Then you're asked
to put in your personal data.
So your name, your phone number,
your email address.
Once you do that,
somebody will call you
and try to interest you in Bitcoin.
I think it took three, four hours
before I had a telephone call
from a man.
He explained how
they worked together with the banks
where you could have loans
which they would pay back
in two months.
I was told by him
that I could expect a lot of money.
He started to call me
several times a day
telling me we had work to do
seeking these loans from the banks.
I was stressed, but I didn't worry.
I believed him.
Strongly.
I was convinced.
My record was totally clean,
so I had many loans easily.
I could see
my account on the platform.
The money was increasing every day.
It went up and up and up.
It was a lot of money.
But all of a sudden,
my account went
from millions down to zero.
It was gone.
I had invested 360,000 euros
and I lost everything.
One of the big problems
is that it becomes very difficult
for a consumer
scrolling on the internet
to spot the difference
between a scam and a genuine advert.
It's really easy for scammers
to use digital advertising.
All they have to do
is pay
for an advert
to be placed on a platform
and because of the lack
of regulation on those platforms,
what eventually happens is that
any form of criminal can create
a fake advert
that's going to encourage consumers
to engage with them.
And then they can put it anywhere
on the internet for anyone to see.
Fake e-commerce sites,
investment scams,
thefts of bank details.
All over the world,
police forces
specialising in cybercrime
are inundated with reports
from victims of online scams.
The size of the problem has become
so alarming
that the FBI issued
a public information message
advising web users
to use an ad blocker.
If you get caught
putting these ads up on Facebook,
the penalty
is your account gets taken down
so you can just start over
and try again.
We're still seeing
scam adverts all the time.
And even when we report
these adverts to the platforms,
they may disappear briefly,
then another one appears
in their place very quickly.
This is a big industry.
These people spend billions
on advertising every year.
It's not one person out there
putting these ads online.
It's an ecosystem.
Sometimes these ads
don't just appear on social media,
but you can see them, for instance,
on the website of your newspaper.
They've been seen
on the website of Le Monde,
Der Spiegel,
even the New York Times.
Really mainstream
and trusted news reporting websites.
Now the news outlets can't really
do anything about it
because they don't have control
over who's placing ads there.
But we're seeing huge numbers of ads
that appear
to be legitimate news stories
that when you click through to them,
they're actually there either to try
and harvest information about you
or to encourage you to invest
or share details or money.
These scammers are paying
Google and Facebook.
The financial interests of big tech
run counter to policing this thing
too strictly.
They do something
to take these ads down.
It's not like they're just
letting it happen.
But you can ask the question,
are they doing all they could?
We believe platforms can control
this flood of scam advertising.
They'll take ads down
if they're reported,
but that's not stopping ads
from being shown in the first place
and that's where
the damage is being done.
We hear from people
with enormous psychological impacts.
Some people have said they've been
suicidal as a result of these scams,
they can't sleep at night,
they have nightmares.
So many people have told us
that they feel silly
or stupid or embarrassed.
I couldn't breathe,
I couldn't eat, I couldn't sleep.
It was devastating.
I had a lawyer.
She said, "Inger, you must prepare
for losing your house."
I was thinking, "No, no,
over my dead body.
I'm not going to leave my house."
But no one had any understanding
of my situation.
So I had to give them my house.
It was awful.
When I took down
the pictures of my grandchildren
from the wall,
that, no...
It destroys your life.
Whether at an individual
or collective level,
the question
is how to prevent the dangers
created by online ads.
In Europe, a new legal framework
is on the horizon.
It's known
as the Digital Services Act.
Its aim is to make
the digital world safer
by tackling illegal content,
disinformation
and harmful online activities.
It's a wrap.
We have the political agreement
on the Digital Services Act.
Now it is a real thing.
Democracy is back,
helping us to get our rights
and to feel safe when we are online.
The European Commission wants to put
order into the digital economy.
Big tech firms
are firmly in the spotlight.
Whether from our streets
or from our screens,
we should be able to do
our shopping in a safe manner.
Whether we turn pages
or we just scroll down,
we should be able to choose
and to trust the news that we read.
A lot of things
are illegal in our society.
You cannot stand
in front of city hall
and hand out child pornography.
EXECUTIVE VICE PRESIDENT
OF THE EUROPEAN COMMISSION
But that is not
the same thing online.
Who is to police the "city square"
when you are online?
So the purpose of the Digital
Services Act is to make sure
that what is illegal offline is also
illegal online and treated as such.
Tech companies do what we have seen
industries do time and time again
with regulation, which is at first
they are completely opposed.
They don't want any regulation.
It's about self-regulation, right?
This really great-sounding scenario
where we don't impose
any costs on companies
and they're really responsible
and they handle our data well.
Then as soon as they realise
that regulation is coming,
they change their tune
and then they say,
"We love regulation."
We have had these calls
from big tech for regulation.
But then when we come up with it,
I have this feeling, "Well,
not that kind of legislation.
Something else that you would
only do in the future."
If you say, "I'm Facebook.
I love a privacy law",
that might get you in the room.
Regulators are pushing
for more controls
and companies are heavily lobbying
against those controls happening.
Consumers can write in
with their complaints,
families can voice
how they've been scammed,
but you have
to do that loudly enough
to go up against millions
of dollars in corporate money
to try and stop those kinds
of regulations from happening.
What are you gonna do?
You're gonna stop using Google?
Stop using Facebook?
That's the problem.
There's no competition.
They capture the market.
Facebook used to have competition.
Instagram. It bought it.
It used to have competition.
WhatsApp. It bought it.
What are you gonna do?
Use Yahoo again?
With the European regulation
being put in place,
Brussels is becoming
a crucial battlefield.
Activists and internet giants know
that the results achieved here
could set the tone
for new rules all over the world.
We have a pretty packed schedule.
Tomorrow, we have
about four back-to-back meetings
and then we'll be moving on
to give a webinar.
They've asked us to speak at length
about the "Toxic Twitter" report
that we've just done.
They're really interested
in the way we calculated
how Twitter ads
are monetised compared
to mis- and disinformation actors.
This research that we did
proving that Twitter profits
from hate and disinformation
may not be replicable in future
because Twitter is moving fast
to make itself a more opaque,
less transparent platform.
And unless the EU forces disclosure,
we're gonna be muted.
We won't be able to do our jobs.
No one will be able to.
We've got
the Digital Services Act, right?
The job of the regulators is
to come up with the best regulations
possible to clearly lay out
what they're going to tolerate
and what they won't tolerate,
then to enforce them.
The platforms know that,
which is why in the last year
they've massively upped
their spending in Europe.
Of the top five firms spending
money in Brussels on lobbying,
four of the top five now
are big tech. Amazon, Facebook,
Google, TikTok
spending hand over fist in Europe
because they want
to protect their companies
by corrupting
the regulation-writing process.
My fear now
is that our job never stops
because we're gonna have
to watch the watchdogs.
Who guards the guardians?
We have to make sure that people
meant to guard the public interest
actually do it.
2016 is really the breaking point
between my old career
as just a marketer
and coming across this information
for the first time,
that ads were appearing
on Breitbart.
And then I never left.
We've never solved the problem,
so I've just retooled
my entire career
to dedicate myself to solving this.
Following Breitbart
over these past few years
has given us so much insight
into how bad actors work.
We've been able to scale our impact
in ways I never thought possible.
Ad fraud is slated to be
the second largest criminal industry
after drug trafficking.
It's going to be
a disinformation tsunami.
The disinformation crisis is serious
and that looks very scary.
But I think the reason to hope is
this is not just a society problem,
it's not just a politics problem,
it's not even just
a community safety problem,
what we're dealing with
is a business problem.
The ad industry controls our world
just as much as, if not more than,
our government
at this point.
You're not powerless at all.
This entire system was built for you
in order to reach you,
in order to connect with you.
So if you as a user,
if you as a consumer go out
to the advertiser
and say, "This isn't working,
this is actually having
the opposite effect for me",
then there's nothing more important
in this world to the advertiser
than to correct that issue.
That's how we create change.
The money that funds
the internet has the power
to change the world
for good or for bad.
So the money that at the moment
has been a part of the problem
could be part of the solution
if advertisers start
to become more careful
about where their money is going.
Newspapers, like the Daily Mail
or websites like Breitbart,
they can say what they want,
but the rest of us also have
the rights of free speech
and we have the right
to say "not with my money".
We know there's a problem,
but we don't know what it is.
And we're kind of terrified.
We're lying there in bed,
we're covering our eyes
with the duvet
and saying, "There's a monster,
but I don't know what it is."
And it's really important
that we have the courage
to take that duvet down
and understand what it is
that's causing those noises,
what it is
that's making us so scared
and then deal with it
because we need to build a movement.
It won't get better
until we make it better.
We are dependent
on these tech platforms, right?
We are dependent
on these advertising systems.
These approaches that suggest
that we can all just
turn our devices off and unplug
really ignore how entrenched
they are, right? We need them
for jobs,
we need them to apply for loans
and get banking
and access medical information.
So those risks
and those benefits exist entangled
and you have to figure out
how do we get the benefits
that we want out of this
while really making sure
we're not harming people
and we're not harming society
in the process.
We're not anti-technology.
I use all of these platforms,
I use this technology,
but in the same way I don't want
clothes made with sweatshop labour,
I don't want technology platforms
I use to communicate with family
and rely on to be harmful actors
in the broader
sociocultural ecosystem.
Public opinion polling has shown
time and again
that average people
all over the world
want to see tech accountability
because people don't want
the internet to be a harmful place.
They don't want scam ads,
they don't want content
that makes them feel unsafe.
The financial motivation
behind scammers,
behind disinformation peddlers,
behind a wide range
of malicious online actors,
that financial motivation
is what puts them in this business.
And so until the big players apply
transparency to their business,
this will always be
a murky black box of a system,
yet half a trillion dollars flow
through it every year.
Thanks to the economic system
of digital advertising,
the online world has changed
completely.
As a result, it's changed us too.
As individuals and as a society.
Will we be able to change
this system in turn?
OK, so what kind of ads
do you see?
I do get adverts for,
I mean, if I want to be honest,
cashmere jumpers because
it knows that I spend
far too much time browsing websites
with cashmere jumpers on them.
I would see ads for outdoor life.
I don't have a real outdoor life,
I work quite a lot,
but I have a dream
of an outdoor life
and I think
it has been sensed there.
I just bought a shower head
as the result of a Facebook ad.
Is it still working?
Yes, it's great,
I'm very happy with it.
But it's one thing
in return for hundreds of hours
lost looking at silly things.
I cover national security
and law enforcement,
so I'm actually pretty careful
with my device.
They get very strange
if you don't give them data.
So, for example, I see a lot of ads
for women's clothing,
all because they don't know
that much about me.
I constantly see
ads for phone chargers.
I must have bought one once
and since then
I've been inundated
with ads for phone chargers.
I must be a great customer
for phone chargers.
These days,
I've learned to train my algorithm
so I don't get all horrible things.
So on Instagram, for instance,
I get a lot of ads
for cat toys for my cat.
It's a nice break from my day job
when I'm in my militia profile
and I'm getting ads for body armour
and weapons holsters,
then I go home and I get to see
the ads for cat toys.
That's how I give myself
my mental break.