Monday 15 July 2019

#TrueStory: It isn't easy to build a business model for truth

Startups are dreaming up ways to train algorithms to weed out fake news. The intent is noble, but making a business of it could open up a whole new can of worms.

How much are you willing to pay for the truth?

It’s a question Lyric Jain has been puzzling over. The 22-year-old mechanical engineering graduate from Cambridge University is the founder of Logically, a company backed by Massachusetts Institute of Technology (MIT). Logically is building an artificial intelligence (AI)-based model to “detect political bias, misleading information, and logical fallacy” that people are exposed to from “current distributed content platforms”.

In other words, Jain’s task is to help defang one of the deadliest threats to democracy and civility, aka fake news.

Jain says Logically's AI has been trained on over 20 million sentences from more than 700,000 articles across 100,000 websites, online archives, and social-media platforms (Twitter and Reddit). When needed, the output is verified by a human fact-checking team, whose corrections are then incorporated into the AI. The aim is to significantly reduce the burden on humans in the arduous process of fact-checking.
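Logically has not published its pipeline, but the loop Jain describes, in which the model's uncertain calls are routed to humans and their verdicts folded back into the training data, is a standard pattern. Here is a minimal, hypothetical sketch in Python; the articles, labels, and confidence threshold are all invented for illustration.

```python
# Illustrative only: a generic human-in-the-loop triage, not Logically's
# actual pipeline. All articles, labels, and thresholds are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Aliens endorse candidate, campaign insiders say",      # fake
    "Election commission announces dates for state polls",  # real
    "Drinking bleach cures flu, viral post claims",         # fake
    "Court adjourns land-dispute hearing to next month",    # real
]
labels = [1, 0, 1, 0]  # 1 = fake

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

CONFIDENCE = 0.8  # hypothetical cut-off for trusting the model alone

def classify_or_escalate(article: str) -> str:
    """Return the model's verdict, or escalate uncertain cases to humans."""
    p_fake = model.predict_proba([article])[0][1]
    if p_fake >= CONFIDENCE:
        return "fake"
    if p_fake <= 1 - CONFIDENCE:
        return "real"
    # A human fact-checker's verdict would be appended to texts/labels
    # and the model refit, closing the feedback loop Jain describes.
    return "escalate to human fact-checkers"

print(classify_or_escalate("Minister denies resignation rumours"))
```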

Jain, an earnest, well-intentioned young man, claims his algorithms are able to weed out nearly nine out of 10 fake articles. He wants this power in the hands of the people once Logically goes live (the plan is later this year) and is building teams across major markets, including India. Here, a team of journalism graduates is adapting the AI to the country's cultural and linguistic idiosyncrasies. “We feel local citizens would be best placed to alter our existing models to make sure we don't exhibit any cultural biases,” Jain says.

The problem is, all this costs money. Logically is not intended to be a not-for-profit entity. And in India, according to his survey, the propensity for individuals to pay for a fake-news app is at a scrawny 2.7%, against 14.6% in the UK and 16.9% in the US. The total number of Indians willing to pay for an ad-free news-verification service is a sixth of the number of Americans, his survey shows.

That’s why, between putting up a paywall and opening the platform to advertisers, in India Jain is choosing the latter.

Of course, paying for truth is hardly new; it is what people have historically paid the media for. What is new is the dedicated truth-telling service. The business model of trust, as embodied in the old news business, has been imperilled by the rise of fakery. The watchdog, it seems, now needs a watchdog.

As Jain’s tribe — there are more like him, as we shall see — figures out the economics of distilling truth, the opposite trade of selling falsehoods and misinformation already has a mature business model, thanks to bustling underground marketplaces in Russia, China, and West Asia.
A sample rate card for these services:
- Year-long fake-news campaign: USD400,000
- Discrediting a journalist by promoting fake counter-stories, bots attacking their social-media feeds, etc.: USD50,000
- Manipulating online petitions by signing up paid petitioners: USD2,664 for 25,000 petitioners
- Erasing undesirable news: USD50 for Russian users, USD100 for English speakers
- Press-release distribution to news outlets: USD802
- Making a video appear on YouTube's main page for 2 minutes: USD621
- Making 20 videos trend on YouTube's main page for 2 minutes (up to 6 minutes on request): USD7,992
(Source: BBC, Trend Micro)

We are all fake busters
While #fakenews is the trendier hashtag, a coalition of algo-armed truth-seekers has quietly been taking shape. The mechanics of these tools are broadly similar, especially for the text-based ones (the process for checking images is slightly different). Broadly, they work like this:

- They process data, think news clippings, to map biases or variations, and then filter out the ones that deviate wildly.
- Higher values are assigned to content from sources, both publications and individuals, that are considered to practise greater journalistic rigour versus, say, a blog.
- They also watch how news spreads on social media, how well the headline connects with the article text, and whether numbers look suspect (a crude sketch of the headline signal follows this list).
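None of these firms has published its models, but the headline-to-body signal above is easy to illustrate. The following hypothetical sketch scores coherence with TF-IDF cosine similarity; a production system would use far richer features and training data.

```python
# Illustrative only: a crude headline-vs-body coherence score using
# TF-IDF cosine similarity. Real systems use far richer signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def headline_body_coherence(headline: str, body: str) -> float:
    """Cosine similarity between headline and body vectors; values near
    zero suggest the headline has little to do with the article."""
    vectors = TfidfVectorizer().fit_transform([headline, body])
    return float(cosine_similarity(vectors[0], vectors[1])[0][0])

headline = "Government bans all private cars from Monday"
body = ("The transport ministry on Tuesday released routine quarterly "
        "vehicle-registration statistics, noting steady growth in "
        "two-wheeler sales across most states.")
print(f"coherence: {headline_body_coherence(headline, body):.2f}")
```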
Roy Azoulay, founder and CEO at Serelay, a company based in Oxford, is developing software that scans through pixels and metadata to determine if digital media — videos or photographs — have been manipulated.
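Serelay has not disclosed its methods, but one classic pixel-level check for JPEGs is error level analysis (ELA): recompress the image at a known quality and look for regions whose error levels stand apart from the rest of the frame. A rough sketch using the Pillow library (the threshold and file name are illustrative):

```python
# Illustrative only: error level analysis (ELA), a classic manipulation
# check; Serelay's actual methods are proprietary. Requires Pillow.
import io
from PIL import Image, ImageChops

def ela_peak(path: str, quality: int = 90) -> int:
    """Recompress a JPEG and return the peak per-band pixel difference.
    Edited regions often recompress differently from the rest."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    return max(band_max for _, band_max in diff.getextrema())

# Threshold and file name are hypothetical; real systems calibrate
# against the camera model and original compression quality.
if ela_peak("photo.jpg") > 40:
    print("Parts of this image may have been edited after capture.")
```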

Dhruv Ghulati, co-founder, CEO, and research scientist at London-based Factmata, has raised USD1 million to build a “fact-checking community” based on AI.

MachineBox, a company headquartered in San Rafael, California, has launched FakeBox, which has been trained on thousands of real and fake articles. MachineBox’s tools are available for a fee to anyone who wants to build a fake-news-detection programme. Aaron Edell, the company’s CEO and co-founder, tells ET Prime it is important to let “anyone build their own fake news detectors and let people decide what fake news is.” Edell says while his product is new, a news-aggregation site based in Los Angeles and a company in Germany have been using it and have reached “90% accuracy really quickly”.
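FakeBox ships as a self-hosted container exposing an HTTP API. As a sketch of what building your own detector on top of it might look like, the request below assumes the endpoint and field names from Machine Box's documentation; treat them as assumptions and verify before use.

```python
# A sketch of querying a locally running FakeBox container. The endpoint
# and field names are assumptions based on Machine Box's documented
# conventions; check the current FakeBox docs before relying on them.
import requests

# Assumes FakeBox is running locally, e.g.:
#   docker run -p 8080:8080 -e "MB_KEY=..." machinebox/fakebox
FAKEBOX_URL = "http://localhost:8080/fakebox/check"

response = requests.post(FAKEBOX_URL, data={
    "title": "Miracle cure banned by doctors, insiders claim",
    "content": "Full article text goes here...",
})
response.raise_for_status()
print(response.json())  # scores the title and body separately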

Closer home, there’s Ponnurangam Kumaraguru, a professor at the Indraprastha Institute of Information Technology, Delhi. Kumaraguru earlier developed TweetCred and Facebook Inspector, browser extensions meant to assess the credibility of social-media content in real time. These days, he is developing an algorithm to filter out fake news. While he is not looking to create a business out of it, he says he has had conversations with Twitter, a platform bogged down by allegations of proliferating fakes, about letting it use his creation.

Algorithms are only as good as the humans feeding them
While the profusion of such tools is a good thing, the reality is that the fight against fake news isn't black and white. Several hurdles stand between these innovators and ensuring that the fruits of their labour deliver what is expected of them.

The biggest question is around the credibility of the algorithms themselves. Ultimately, any algorithm is only as good as the training data fed into it. Those writing the code also need to demonstrate that they themselves are free of bias.

The tendency of algorithms to distort or produce prejudiced output is well documented. Even Jack Dorsey, Twitter's CEO, recently had to acknowledge as much while being grilled by lawmakers in the US.


In a thread posted on September 5, 2018, Dorsey (@jack) wrote: “Our technology was using a decision making criteria that considers the behavior of people following these accounts. We decided that wasn't fair, and corrected. We'll always improve our technology and algorithms to drive healthier usage, and measure the impartiality of outcomes.”

He added: “Bias in algorithms is an important topic. Our responsibility is to understand, measure, and reduce accidental bias due to factors such as the quality of the data used to train our algorithms. This is an extremely complex challenge facing everyone applying artificial intelligence.”
Depending on the political, economic, cultural, religious, or social perceptions of those creating it, fake-busting tools can fall into the same trap. This is especially pertinent since platforms such as MachineBox have given people the opportunity to create their own version of a fake-news detector.

Then there is the problem of usability. Apps like the one Logically is building require users to input stories one by one to determine whether they are fake. This isn't exactly a simple routine. For a better user experience, such products may need to integrate into, say, a messaging app like WhatsApp, and allow for checking the veracity of content within that environment. As Kumaraguru, the professor from Delhi, puts it, users should ideally be able to press a button from within WhatsApp to figure out how credible a forward is.

But this is easier said than done. Logically has been trying, but it hasn’t yet signed up any messaging platform.

One reason is that messaging platforms have no direct incentive to open themselves up any more than they have to, especially when it comes to letting these apps read messages and forwards in the background, which could invite allegations of diluted privacy standards.

To really catalyse change, you need the scale to spot and weed out fake news in real time, both at the point of origin and at the point of virality. Only those who own the big pipes that connect people (think Facebook, Google, and Twitter) have that kind of reach.

But again, that would mean these monoliths taking on some editorial responsibility for what is shared on their platforms. Their position as neutral platforms, indifferent to the content users share on them, has thus far largely immunised them from liability. That immunity could be challenged if they take on more editorial authority.

The question also is whether you would want, say, Facebook to have the power to determine the authenticity of what is shared. After all, making both the good and the bad viral has been core to social networks’ business model of converting engagement into cash.

As Ghulati of Factmata points out, “This [fake] content spreads well on existing platforms, which unfortunately still promote it because the content tends to also be popular/viral and this is how their algorithms work. The content often makes money if it is also catchy or arresting, and so there are incentives to produce it.”

With pressure from governments around the world likely to intensify, Facebook, Google, Twitter, et al need to be seen to be taking the necessary steps. Starting with the US, Facebook has initiated a literacy campaign to educate users on spotting false news; the tips appear at the top of users' news feeds.

Facebook has also announced an initiative to fund scholars researching misinformation on the platform, with peer review determining who receives the funding. As recently reported by ET, Google is also training 8,000 journalists in English and six Indian languages on fact-checking, online verification, and digital hygiene.

The tech giants will also likely have some of the startups on their radar. According to Forbes, Facebook’s recent acquisition of the London-based AI company Bloomsbury.ai may have had “more to do with the challenges of fake news”. Bloomsbury.ai’s CTO Sebastian Riedel has worked closely with Factmata.

Speak to folks like Jain and Kumaraguru and their earnestness and smarts are evident. But ultimately, this is a fight that is difficult for small startups to win on their own, and the urgency of the problem means they don't have much time. So, whether we like it or not, the big technology companies will likely play the central role in determining which way the war against fake news goes.

The truth haves and have-nots
The business of truth is loaded with a moral challenge: will it create an even wider chasm in access to credible information between those who can afford to pay for it and those who cannot?

As Roy Azoulay of Serelay asks, “[Are we] closer to solving the problem, or rather that we have a class divide over access to information?”

“To me the more pertinent questions are, do we reach a tipping point where people walk out on platforms due to misinformation, or do we reach one where a regulator steps in? I think either of these constitutes a more effective change catalyst than users’ propensity to pay,” Azoulay says.

Jain says the target audience for Logically is young and middle-aged consumers who already subscribe to various journalism platforms. Trouble is, that is not exactly the bull's-eye of the audience swayed by the falsehoods and lies that spread on social media. Far from it. While the menace of misinformation affects everyone, the worst hit are the less media-savvy and the less media-literate.

If Jain's tribe is to create the impact their intent demands, their services cannot be limited to those who can afford to pay. That means taking a good, hard look at the balance between subscriptions and alternative forms of monetisation, and perhaps not just in India.
