
Invest in Factmata

Scalable AI to combat the spread of fake news online

Highlights

☑️ On a mission to give everyone a better understanding of online content
🚀 Raised from Mark Cuban, Biz Stone (co-founder of Twitter), and Craig Newmark (founder of Craigslist)
🤝 Built by top PhDs from the University of Cambridge, Amazon, Microsoft, UCL, Google, and more
👨‍💻 The first startup to block fake news in the ad tech market, for both SSPs and DSPs
🧠 First UK government-approved tool to detect disinformation (G-Cloud 11 framework)
10 US patents pending on methods to score content for reliability and safety using experts + AI

Our Team

Founder & CEO
Dhruv was the first PM at import.io, a Silicon Valley startup building auto-scraping AI. He studied Economics at LSE and took an MSc in Computer Science at University College London, with a thesis on automated fact-checking using NLP. He is a Forbes 30 Under 30 and Techstars alum.
Unsafe, misleading, hateful, and untrustworthy online content is eroding our trust in civil institutions, communities, governments, and democracies. We have to spend time, money, and resources on technology that tackles it scalably and efficiently before it's too late.
Co-CEO & COO (joining in April)
Ant brings 21+ years of PR, media, and tech experience. He countered Taliban propaganda for the MOD in Afghanistan in 2008, developed counter-terrorism narratives at the Home Office, and has been building tech since 2000 and AI products since 2014.
CTO & Cloud Architect
Tomas was Cloud Architect at the European Institute for Energy Research (EIFER) developing tools for cloud analytics and worked at the University Institute of Intelligent Systems and Numeric Applications in Engineering (SIANI).

A pioneer in tackling disinformation online

The internet is out of control. Hate speech. Trolls. Coordinated Disinformation. Factmata is tackling the biggest issues on the internet, and doing it at scale, using AI.

Hate speech on online forums. Coordinated rumors spread by bots about anyone. Fake news about vaccines, COVID-19, the elections, and even major brands. The problem is getting worse. 

It’s estimated “fake news” costs the world $78 billion a year. 

  • Commercially, brands lose $39 billion a year from social media attacks and fake news.
  • Misinformation about health remedies and vaccines costs authorities $9 billion a year.
  • $9 billion is spent globally on online reputation management.
  • At least $235 million is spent by brands advertising on known propaganda sites, damaging their brand alignment.
  • $3 billion a year is spent on social media content moderation.

When you add hate speech, disinformation, and misinformation, the indirect costs on society are immeasurable. These include:

  • Making it harder to make sensible, informed decisions about our health, the environment, the climate, urban policy, and more
  • Loss of trust in our governments, public institutions, and public initiatives
  • Inciting violence, hatred, and tension, evidenced by a rise in hate crimes against different racial and ethnic groups, e.g. refugees
  • Psychological harm caused by racism, sexism, cyberbullying, and narrative manipulation

Misinformation and disinformation are eroding our ability to trust anything, and threaten the very fabric of civilized society. The World Economic Forum ranks the spread of misinformation and fake news as among the world's top global risks. Half of Americans view fake news as a bigger threat than terrorism.

In a world where disinformation flows virally, knowing who and what to trust is more important than ever.

Current methods to tackle fake news aren't working. These include:

  • Fact-checking sites, which do great work but struggle to achieve reach, distribution, and popularity for their content
  • Human teams of 20-30 corporate analysts, who cannot keep up with all the rumors out there, read all the underlying opinions, and form strategies to counter them
  • Existing sentiment analysis tools, which can detect negative sentiment but cannot detect coordinated fake news/disinformation campaigns by bots and trolls
  • Website rating sites, which use human teams to rate websites but cannot keep up, cannot assess content at the page level, and carry implicit biases in their methodologies
  • “Blacklists” of harmful websites, which have to be manually maintained by human teams and often go out of date

Social media platforms are making efforts to remove false news, but they are weakly incentivized to do so: moderation adds to their operating costs, and because fake news is often highly engaging and entertaining, platforms are reluctant to take it down.

Factmata has been building patent-pending AI technology since 2016 to detect hate speech, disinformation, and misinformation.

Factmata has built an engine that can extract all the key claims and assertions made on the internet, and cluster them together, even if they are expressed in slightly different ways. We call these “narratives”.
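As a minimal illustration (this is not Factmata's actual pipeline; the example claims, the character-level similarity measure, and the threshold are all hypothetical), grouping near-duplicate claims into a shared "narrative" can be sketched with stdlib string similarity:

```python
from difflib import SequenceMatcher

def cluster_claims(claims, threshold=0.6):
    """Greedy clustering: each claim joins the first cluster whose
    representative (the cluster's first claim) it resembles closely
    enough, otherwise it starts a new cluster of its own."""
    clusters = []  # each cluster is a list of claims; index 0 is the representative
    for claim in claims:
        for cluster in clusters:
            similarity = SequenceMatcher(None, claim.lower(), cluster[0].lower()).ratio()
            if similarity >= threshold:
                cluster.append(claim)
                break
        else:
            clusters.append([claim])
    return clusters

# Hypothetical example claims, phrased slightly differently:
claims = [
    "Vaccines cause autism in children",
    "vaccines are causing autism in kids",
    "5G towers spread COVID-19",
]
narratives = cluster_claims(claims)
# The two vaccine claims end up in one narrative; the 5G claim in another.
```

A production system would use semantic similarity (e.g. sentence embeddings) rather than character-level matching, so that claims with the same meaning but very different wording still cluster together.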

We then built a series of algorithms that score and classify content across 12 major dimensions, including controversiality, non-objectivity, hate speech, racism, sexism, toxicity, obscenity, verbal threats, hyper-partisanship, and clickbait. All of these are strong linguistic markers of content that might also be propaganda, disinformation, or fake news.
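Purely as an illustrative sketch (the dimension names come from the list above, but the scoring function, scores, and threshold are hypothetical, not Factmata's API), flagging content whose per-dimension risk scores cross a threshold might look like:

```python
# Dimension names taken from the text; everything else here is hypothetical.
DIMENSIONS = [
    "controversiality", "non_objectivity", "hate_speech", "racism", "sexism",
    "toxicity", "obscenity", "verbal_threats", "hyper_partisanship", "clickbait",
]

def risk_flags(scores: dict, threshold: float = 0.7) -> list:
    """Return the dimensions whose score in [0, 1] meets or exceeds the threshold."""
    return [d for d in DIMENSIONS if scores.get(d, 0.0) >= threshold]

# A piece of content with (made-up) per-dimension model scores:
article_scores = {"clickbait": 0.92, "toxicity": 0.15, "hyper_partisanship": 0.81}
flags = risk_flags(article_scores)  # -> ["hyper_partisanship", "clickbait"]
```

In practice each score would come from a trained classifier, and the thresholds would be tuned per dimension against expert-labeled data.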

To build these algorithms, we enlisted experts in relevant subjects in science, public policy, health, and more, to label thousands of articles and Tweets. Even though this method was more expensive and difficult than traditional methods of training machine learning algorithms, it has led to our algorithms being fairer, more trustworthy, and more accurate. This allows us to analyze any online narrative, including news, Tweets, and online comments, and give it a score across multiple dimensions of safety, threat, and risk.

Using this powerful engine, we can track fake news and hate speech about any brand, product, or issue online, and assist those who need help to identify it, flag it, and take it down.

We have 10 patents pending on our technology and collaborated with experts at universities like University College London, Imperial College London, Hasso Plattner Institute, and Technische Universität Darmstadt. We’ve won multiple R&D grants to build our solution worth over £1m, and spent over $3.8m perfecting the engine.

We’ve worked with multiple social media platforms, programmatic advertising agencies, news companies, ad tech firms, PR agencies, and more.

In 2018, we built the first supply-side ad platform integration fully focused on fake news, with Sovrn. We completed trials with more than 15 ad exchanges, including AppNexus and The Trade Desk, and found that up to 7% of sampled URLs contained toxic language, hate speech, or propaganda.

In 2019, we became the first listed tool on the UK government agency contracting system, G-Cloud 11, to offer a disinformation monitoring service. We will be renewing this service in 2021.

We helped Taboola, one of the biggest ad networks in the world, remove hundreds of propaganda sites from their network. We help Social Sweethearts, the largest publisher of family-friendly content online, to check their content for clickbait headlines before it gets published on Facebook.

In 2020, we launched the first integration with an ad platform, Launch4D (owned by Silver Bullet), that lets brands avoid advertising on fake news pages in real time via its ad bidding infrastructure/demand-side platform.

We’ve also experimented with public tools that help readers know what they can trust online, annotate news content, and hold platforms accountable for the content they publish. Our browser extension Trusted News hit #4 on Product Hunt, and our hate speech monitoring tool Bleepr hit #5 this year. You can try our core technology on try.factmata.com - it’s free to use.

We just launched our Narrative Monitoring Platform, which integrates all our R&D in one place, and helps monitor and detect fake news about any topic online.

In 2021, after 3 years of deep R&D, we’ve built a new platform for brand & government intelligence analysts. The platform allows analysts to track fake news across Twitter, news articles, Reddit, and other sources. It brings all of our AI into one product and offers the following ground-breaking features:

  • It flags new narratives that are trending and emerging about any topic online, from COVID-19 (if you are a major government communications agency) to Adidas’ new product launch (if you are Adidas’ communications/marketing team)
  • It allows analysts to dig in and see what sub-narratives and ideas are evolving within each piece of fake news, and how it is trending
  • It displays key influencers and bots driving the conversation, as well as influencers that could be advocates to help counter the fake news based on what they say online.

Within 2 months of launch, we started working with communications analysts and are generating recurring revenues. We’ve been tracking topics as diverse as labor unions, climate change, COVID-19, QAnon, Adidas, L’Oreal, and Bitcoin. We’ve detected narratives alleging that Adidas uses Uighur labor camps to make its shoes, and tracked the evolution of the claim that COVID-19 was a strategic hoax by a collection of 5G telecoms companies in the US.

This new platform has a potential market to sell into worth over $83bn.

Join us and build the future of the web.

We are on a mission to make social media healthier, more accurate, and more positive, in 3 stages:

  • Sell high-powered tools that help brands, media agencies, and PR agencies tackle rumors and fake news about them, and help platforms remove harmful content more effectively
  • Release public tools, powered by the same technology, to achieve mass-market scale
  • Integrate our engine into other platforms, search engines, news feeds, and more, so any company can make use of our core technology.

We’ve been building our technology since 2016, and after multiple experiments and iterations, we’re ready to launch our Narrative Monitoring Platform.

We plan to use the capital raised to continue building core features within our new Narrative Monitoring product that aid analysts in tackling fake news. These include:

  • Enabling clients to create feeds on any topic without any manual set-up by Factmata
  • Analyzing key influencers spreading fake news, including their age, demographics, and interests
  • Analyzing which words and phrases reappear across narratives, to identify which are most effective at countering misinformation/disinformation
  • Analyzing YouTube transcripts, comments, Parler, Gab, and even the dark web
  • Enabling the AI to get smarter the more our product is used by analysts
  • Analyzing content in French, Italian, Spanish, Portuguese, Russian, Arabic

We will also add personnel in sales, marketing, and operations, to transition from a heavily technical, R&D-focused company to one hitting $1m+ in recurring revenues by Dec 2021.

We’re backed by Mark Cuban, and the founders of Craigslist, Twitter, and Zynga, some of the most pioneering internet companies of our time. 

Join us, and make the internet a safer, better place.

