Artificial Intelligence

Countering bad information with good logic

December 21, 2022
10 min read

The promise of the internet is that a world of information is at your fingertips. At its best, it enables you to find both mundane facts (like who starred in that comedy you watched last year) and arcana (like the mechanisms that enable drugs to fight off disease) in seconds. At its worst, it becomes a marketplace for conspiracy theories, junk science, and fake news.

Worse yet, by providing lots of facts in seconds, it has had a perverse impact. The democratization of knowledge has led to a devaluation of expertise. Access to facts has created the illusion of the understanding that, in reality, comes only from years of study and practice. Suddenly, everyone feels qualified to diagnose diseases or to explain the economic basis of a currency's value.

This uncomfortable juxtaposition of significant truths and bald-faced lies poses a problem not just for people browsing the internet, but also for powerful algorithms that aim to develop human-level abilities to reason. Over the past decade, machine learning techniques with roots in cognitive science research from the '80s and '90s have demonstrated some remarkable abilities. These techniques find subtle statistical relationships among elements in large bodies of data, such as images or text, enabling programs to detect patterns that make them appear to have deep knowledge about the world.

The most recent marvel is ChatGPT, which formulates fluent answers to almost any query a user types. Not only that, but the answers seem remarkably plausible. They exploit an age-old bias, one that conmen and snake-oil sellers have used to their advantage for generations: people tend to believe things that are stated fluently and confidently. The system seems to be an expert on almost every topic. Unfortunately, many of the statements the program generates about complex topics are woefully incomplete or flat-out untrue.

Because these algorithms function by finding statistical relationships hidden in data, there is no clear way for these systems to justify the conclusions they reach. As a result, users have to accept the statements the program spits out as true, without being able to check where those statements come from.

Part of the reason the results churned out by programs like ChatGPT seem so reasonable is that much of what they say is true. The statistical relationships they find enable such systems to reach a reasonable level of accuracy quickly. The ability to solve 80% (or even 90%) of problems quickly gives rise to the illusion that it is only a matter of time before these programs achieve near-perfect performance.

If you look carefully at human intelligence, though, you will quickly notice that we have two different systems that support our amazing ability to navigate the information environment. Like the deep learning algorithms underlying ChatGPT, we are sensitive to statistical relationships in the data we encounter. That is one reason we are often able to finish the sentences of loved ones we have spent years with and to predict where to look next when driving on a busy city street. This intuitive, statistically based reasoning ability was (somewhat unhelpfully) called System 1 reasoning by Keith Stanovich and Richard West in an influential psychology paper and popularized by Daniel Kahneman in his widely read book Thinking, Fast and Slow.
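To make the distinction concrete, here is a minimal sketch of the kind of purely statistical prediction System 1 performs: a bigram model that guesses the next word from nothing but co-occurrence counts. The tiny corpus and the word choices are invented for illustration.

```python
from collections import Counter, defaultdict

# Count, in a toy corpus, how often each word follows each other word.
corpus = (
    "look before you leap . "
    "he who hesitates is lost . "
    "look before you decide ."
).split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("before"))  # prints "you": pure statistics, no understanding
```

A model like this can sound eerily fluent on familiar material, yet it has no notion of whether what it predicts is true.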

But we also have more rule-based reasoning systems that enable us to use logic to think through problems and to detect contradictions. We use this effortful process (called System 2 reasoning by Stanovich and West) to go the last mile in our intelligent behavior. When someone says “Look before you leap,” and then a little while later, “He who hesitates is lost,” we can notice that these two proverbs appear to make opposing recommendations. We can then reason about situations in which it is useful to be cautious versus those in which it is worth diving in before it is too late. If we had only System 1 (or could reason only the way ChatGPT does), we would never notice the contradiction at all, and as a result we would not learn to distinguish when we should be cautious from when it is okay to take a gamble.
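A System 2 process, by contrast, operates over explicit rules and can notice when two of them clash. The toy checker below is only a sketch; the way each proverb is encoded as a topic plus a recommendation is invented for illustration.

```python
# Each proverb is encoded (by hand, for illustration) as a topic
# plus the advice it gives about that topic.
proverbs = {
    "Look before you leap": ("acting without deliberation", "avoid"),
    "He who hesitates is lost": ("acting without deliberation", "prefer"),
}

def find_contradictions(rules):
    """Yield pairs of rules that give opposing advice on the same topic."""
    items = list(rules.items())
    for i, (name_a, (topic_a, advice_a)) in enumerate(items):
        for name_b, (topic_b, advice_b) in items[i + 1:]:
            if topic_a == topic_b and advice_a != advice_b:
                yield name_a, name_b

for a, b in find_contradictions(proverbs):
    print(f"'{a}' and '{b}' make opposing recommendations")
```

Noticing the clash is what lets a reasoner go on to ask the more useful question: in which situations does each piece of advice actually apply?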

There is a long tradition within the artificial intelligence community of focusing on the power of systems that reason with rules, symbols, and even formal logic to solve problems. The cognitive scientists who work on these systems recognize that bridging the gap between the output of statistically based systems like ChatGPT and the kind of expert performance true artificial general intelligence will require means integrating statistical (System 1) algorithms with more rule-based, logical (System 2) programs.
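What might that integration look like? One common pattern, sketched below with hypothetical stand-ins for both components (neither function reflects any particular system's actual design), is to let a statistical model propose fluent candidate answers and a rule-based layer veto any that violate explicit knowledge.

```python
# Hypothetical System 1 stand-in: fluent guesses with confidence scores.
def statistical_candidates(question):
    return [("Lyon is the capital of France", 0.4),
            ("Paris is the capital of France", 0.9)]

# Hypothetical System 2 stand-in: explicit facts an answer must respect.
KNOWN_FACTS = {"capital of France": "Paris"}

def passes_logic_check(answer):
    return answer.startswith(KNOWN_FACTS["capital of France"])

def answer(question):
    # Try the most confident candidates first, but keep only those
    # that survive the symbolic check.
    ranked = sorted(statistical_candidates(question), key=lambda p: -p[1])
    for candidate, _ in ranked:
        if passes_logic_check(candidate):
            return candidate   # fluent *and* verified
    return "I don't know"      # better than a confident falsehood

print(answer("What is the capital of France?"))
```

The point of the pattern is not the toy lookup table but the division of labor: statistics supplies plausible candidates quickly, and logic supplies the justification that statistics alone cannot.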

One reason this recognition matters is that there have been a number of bold claims about how close we are to having AI programs that can replace people in tasks ranging from city driving to making crucial financial decisions. These claims are rooted in how well System 1 type programs have gotten roughly 90% of the way toward human-level performance. But they acknowledge neither how hard the last 10% is going to be nor the role that System 2 type programs will play in bridging that gap.

In recognition of the power of logic to solve key aspects of the problems faced by humanity, in 2019 UNESCO declared that January 14 of each year would be World Logic Day. This date was selected because of its association with two of the most prominent logicians of the 20th century: it is the date of Kurt Gödel's death and of Alfred Tarski's birth.

World Logic Day celebrates the importance of logic in education and reasoning. It is marked by meetings and conferences that bring together experts and the general public to renew our interest in the System 2 abilities that enable people to continue to surpass the best machines.

These celebrations also give us the opportunity to highlight the achievements of programs that make use of logic to support intelligent decision making.

One of those programs is Logic.Wiki, a wiki of actionable knowledge that turns complex questions into simple, interactive "help me decide" tools. These tools walk users step by step through expert logic to guide them toward the best decision for themselves. Most importantly, the content is built by experts and academics, providing verifiable, trusted logic powered by HI (Human Intelligence).

Get started for FREE
