The questions I’m asking here are aimed at how we might arm people with stronger signals about how much to trust an answer machine’s response.

This approach suggests a certain faith in human beings’ capacity to kick in with critical thinking in the face of ambiguous information. I’d like to be optimistic about this, to believe that we can get people thinking about the facts they receive if we give them the proper prompts.

We’re not in a good place here, though. One study found that only 19% of college faculty can even give a clear explanation of what critical thinking is—let alone teach it. We lean hard on the answer machines and the news-entertainment industrial complex to get the facts that guide our personal and civic decisions, but too many of us are poorly equipped to evaluate those facts.

So the more Google and other answer machines become the authorities of record, the more their imperfect understanding of the world becomes accepted as fact. Designers of all data-driven systems have a responsibility to ask hard questions about proper thresholds of data confidence—and how to communicate ambiguous or tainted information.
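To make that design question concrete, here’s a minimal sketch of the kind of gate that could sit between a data-driven answer and its presentation. The names (`Answer`, `CONFIDENCE_THRESHOLD`, `present`) and the 0.85 cutoff are hypothetical, not any real system’s API; the point is simply that above some confidence threshold the system asserts an answer with its sources, and below it the system says so and hands judgment back to the person.

```python
# A sketch, not a real answer-engine API: frame a response according to
# how much the system trusts it, instead of asserting everything as fact.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff; a real system would tune this


@dataclass
class Answer:
    text: str
    confidence: float          # 0.0-1.0, as estimated by the underlying model
    sources: list[str] = field(default_factory=list)


def present(answer: Answer) -> str:
    """Return a user-facing framing of the answer based on confidence."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        cited = ", ".join(answer.sources) if answer.sources else "no sources listed"
        return f"{answer.text} (sources: {cited})"
    # Below the threshold: say so plainly and signal that human judgment is needed.
    return (
        "I'm not confident about this one. Here's my best guess, "
        f"but please check it yourself: {answer.text}"
    )


if __name__ == "__main__":
    print(present(Answer("Water boils at 100°C at sea level.", 0.97, ["NIST"])))
    print(present(Answer("The answer is probably 42.", 0.40)))
```

The interesting design work, of course, is everything this sketch glosses over: where the confidence number comes from, how honest it is, and what the low-confidence framing should actually look and sound like in the interface.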

How can we make systems that are not only smart enough to know when they’re not smart enough… but smart enough to say so and signal that human judgment has to come into play?
