AI blooper — 1080 Particles in the Universe?

Dominic Widdows
3 min read · Jan 28, 2020


I'm using this as a place to share “AI bloopers”, and hopefully as a not-too-obnoxious way to file a “bug report”, in the hope that some friend at Google might know where to route it.

I happened to be looking up the estimated number of particles in the universe. Imagine my surprise to see the answer “1080”.

I remember this being quoted as 10⁷² when I was young. And I know that the answer is a lot, lot more than 1080, because I can see more blades of grass than that right now! So what’s going on? It turns out that the answer in the Google answer box is scraped from text that says:

“The commonly accepted answer for the number of particles in the observable universe is 10⁸⁰.”

Aha! The superscript that makes the difference between 1080 and 10⁸⁰ is rather crucial in this context :)
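I have no visibility into Google’s actual pipeline, but here is a minimal sketch, in Python, of one way an exponent can quietly disappear: Unicode “compatibility” normalization (NFKC), a very common and normally harmless text-cleaning step, folds superscript digits into their plain ASCII counterparts.

```python
import unicodedata

answer = "10⁸⁰"  # ten to the eightieth power, written with superscript digits

# NFKC ("compatibility") normalization maps superscript digits such as
# U+2078 (⁸) and U+2070 (⁰) onto the ordinary digits 8 and 0.
flattened = unicodedata.normalize("NFKC", answer)

print(flattened)            # prints: 1080
print(flattened == "1080")  # prints: True
```

Whether or not this is the actual culprit, it shows how easily a standard cleaning step can change the meaning of a number by seventy-seven orders of magnitude.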

This struck me for a few reasons, not because it’s surprising, but quite the opposite, because it’s familiar.

Firstly, I’d like to praise Google’s question answering service — it’s usually right, and even in this case where it made a silly mistake, it was easy for me to follow the pointer to the source and figure out the answer I was looking for.

Secondly — it’s a good example of where AI hasn’t “solved everything”. The bundle of AI, Information Extraction, and NLP technologies has done a wonderful job of pattern matching and extrapolation, but it lacks a whole collection of “failsafe” checks that tell humans that an output just can’t be right. These failsafes in the back of our minds take years to develop; they are hugely varied, and so second nature to us that we barely notice ourselves using them. They need to be available the whole time to avoid mishaps, but only a few are activated in any situation, and knowing which ones to apply is part of that second nature. (Any parent who has taught their children to fry an egg and wash up without flooding or setting fire to the kitchen knows this all too well!) In short, AI is good at being smart, but lousy at knowing when it’s being silly.

Thirdly — specifically in text processing, as more and more tools become available for semantic modeling, I increasingly spend my own coding time dealing with “trivial” issues like character sets, normalization, punctuation, and whitespace. These decisions often matter more for programming languages and mathematics than for paragraph text. (Try searching for “python } vs ]” at the moment and you’ll see from the suggested answers that the brackets aren’t recognized at all; the sketch below illustrates the kind of stripping that can cause this.)
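To make the brackets example concrete, here is a toy sketch (entirely my own, not a claim about how any real search engine works) of what a crude character stop-list does to that query: once punctuation is stripped, “python } vs ]” no longer mentions brackets at all.

```python
import re

def naive_tokenize(query: str) -> list[str]:
    # A toy tokenizer that keeps only alphanumeric "words": every
    # punctuation character, including } and ], is simply thrown away.
    return re.findall(r"[A-Za-z0-9]+", query)

print(naive_tokenize("python } vs ]"))  # prints: ['python', 'vs']
```

The awkward part is that in this query the brackets are the content, not the noise, so a smarter tokenizer would have to recognize when punctuation is worth keeping.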

This will come as no surprise to machine learning practitioners, because it’s part of the generalization that “we spend more time gathering, cleaning, analyzing and evaluating data than building new learning algorithms.” It’s just a particular kind of data cleaning and preprocessing that is especially germane to text and language processing (which are not the same thing!).

In general, if we really want to improve AI, we need to know a lot more about context, appropriateness, and how to avoid silly gaffes. Getting ever better results on encapsulated problems that systems are already good at solving won’t be the key challenge here. On the upside, if we could improve our humble character-stop-lists to make them more dynamic and context-aware, programmers might be able to get good results from general-purpose search engines even when our queries involve curly braces!


Written by Dominic Widdows

Works at IonQ on AI and quantum computing, particularly natural language processing. See http://puttypeg.net for more.
