Spot misleading info while you read

Public Editor is a machine-learning-based system that labels specific reasoning mistakes in the daily news, so we all learn to avoid biased thinking.

READ SMARTER

Get in-line media training to identify misleading information, including issues with probability, language, reasoning, evidence and sources. If we haven’t reviewed a news article yet, you can send it our way.

Understand the news where you read

Newsreaders view labeled articles using our Chrome extension, newsfeed, or newsletter. Each article is given a credibility score and is layered with in-line explanations detailing each type of reasoning error.

A reasoning checker, not a fact checker.

We evaluate how claims are made, ensuring they use sound reasoning and solid evidence from credible sources, without misleading rhetoric.

Bipartisan Annotation

Misleading information is labeled by a consensus of several machine learning models designed to avoid political bias.

Annotating is a team effort!

Annotations are created by a consensus of several machine learning models that have been trained as specialists in different types of misinformation. We train volunteer annotators to check the machine learning outputs, ensuring that there is a human in the loop as we improve the models.

Labels Use Consensus

Public Editor has a built-in process to find consensus among the machine learning models: at least 2 of 3 models must independently apply the same label before it is displayed on an article.
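
For readers curious about the mechanics, the sketch below shows one way a 2-of-3 vote like this could work in Python. The function, label names, and data format are illustrative assumptions, not Public Editor's actual code.

```python
from collections import Counter

# Minimal sketch (hypothetical, not Public Editor's pipeline) of a 2-of-3 consensus vote:
# a label is displayed only if at least `min_agreement` models applied it independently.
def consensus_labels(model_outputs, min_agreement=2):
    counts = Counter(label for labels in model_outputs for label in set(labels))
    return {label for label, count in counts.items() if count >= min_agreement}

# Hypothetical outputs from three specialist models for one article.
outputs = [
    {"faulty probability"},
    {"faulty probability", "loaded language"},
    set(),
]

print(consensus_labels(outputs))  # {'faulty probability'} — only two of three models agreed on it
```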

Constantly Improving

Our tests show that our machine learning models reliably identify the same issues as our trained annotators. As we continue to train the models, we welcome your feedback; it is invaluable in ensuring that we are constantly improving.

Sign up for our newsletter

Join our mailing list to receive news and updates about what's happening at Public Editor!

FIGHT MISINFORMATION & TRAIN AI

Join our community of annotators!

Help us stop the spread of misinformation. Train as a citizen scientist to identify misinformation through a self-directed online curriculum! You will learn to identify issues with probability, language, reasoning, evidence and sources. You will then review the outputs of our machine learning models, playing a critical role in improving the labels our system displays.

CAN’T FIND WHAT YOU WERE LOOKING FOR?

Explore other ways you can get involved!

HELP US CORRECT MISINFORMATION

Do your part to preserve our democratic values!

We need your help to ensure we can continue identifying misinformation and building trust in a shared reality.

Donate online

Sponsors helping us improve media literacy across the globe!

Schmidt Futures
Alfred P. Sloan Foundation
McCune Foundation
National Science Foundation (NSF)
Sage Publishing

Partners

Berkeley Institute for Data Science (BIDS)
Alliance4Europe
World Economic Forum
SciStarter
hypothes.is
Media Smarts
Massachusetts Institute of Technology (MIT)
Harvard University
The Hong Kong University of Science and Technology
Simon Fraser University
Library of Congress