Knowledge
Knowledge Without a Knower
How do we define knowledge in the age of AI? Can new knowledge even be created if we’re outsourcing our thinking to the models, or to the systems we’ve built around them?
Let’s start with what knowledge actually is. Traditionally, to know something, you have to believe it’s true and have some justification for that belief. It’s knowledge earned through experience, study, or reasoning.
AI doesn’t work that way. To a large language model, knowledge is a probabilistic map of patterns extracted from massive amounts of text. There’s no belief, no understanding in the human sense. It’s knowledge without a knower.
That distinction matters more than we might think.
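To see “knowledge without a knower” in miniature, here is a toy bigram model. This is purely my own sketch; real LLMs are vastly more sophisticated, but the underlying principle of pattern statistics is the same. The model “knows” which word tends to follow which, and nothing in it believes anything.

```python
from collections import Counter, defaultdict

# A toy bigram model: the smallest possible "probabilistic map of patterns."
# It predicts the next word purely from co-occurrence counts in its training
# text. Nothing in here believes or understands anything; it only tallies.

corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1  # tally every observed word-to-word transition

def next_word_distribution(word):
    """Turn the raw counts into probabilities for the word that follows."""
    following = counts[word]
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

print(next_word_distribution("the"))  # roughly {'cat': 0.67, 'mat': 0.33}
```

Scale those counts up by trillions of tokens and billions of parameters and you get something far more capable, but the relationship to truth stays the same: statistics, not belief.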
From Retention to Curation
The way we work with knowledge is shifting. For centuries, the paradigm was retention: memorize facts, write things down, build personal libraries of information.
Now we have tools that can do that for us, often better and faster than we ever could.
So what’s our new role?
Curation.
The skills that matter now are retrieval, verification, and synthesis.
We don’t need to remember everything; we need to know how to find it, evaluate it, and combine it in useful ways.
The Skills We Actually Need
If we’re not going to be the primary repositories of knowledge anymore, what should we focus on?
Spotting bullshit. This might be the most important skill of the next decade. When the tool outputs something that doesn’t match what we know to be true, can we catch it? AI systems sound confident even when they’re wrong. They rarely hedge. They rarely say “I’m not sure about this.” So we need an internal alarm that goes off when something doesn’t add up.
Asking good questions. This has always been important, but it’s now essential. Understanding the problem means knowing where the gaps in your knowledge actually lie. A well-formed question is half the answer. An AI can give you a thousand responses, but only a good question will get you a useful one.
Reasoning about reasoning. How did the system arrive at that answer? What steps did it take? Why does it think that’s the case? We need to be able to trace the logic, not just accept the output. This is meta-cognition applied to our tools.
The Human in the Loop
New knowledge will continue to need humans. Not for the grunt work of data processing or pattern matching; AI can handle that better than we ever could.
Instead, our role is to identify the anomalies. We need to become detectives, finding the errors in the data. Skepticism will be extremely valuable in the times ahead.
Being a critical thinker. We need to evaluate the evidence, weigh the pros and cons, and make informed decisions.
In computing, we see error correction at work in the semiconductor industry, with ECC memory as the everyday example, while the quantum computing industry relies on its own, very different family of codes. The sketch below shows the classic idea in miniature.
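Here is a minimal Hamming(7,4) encoder/decoder, one of the classic single-error-correcting codes and an ancestor of the schemes used in ECC memory. The function names and bit layout are my own choices for illustration, not any particular hardware’s implementation.

```python
# A minimal Hamming(7,4) sketch: 4 data bits are protected by 3 parity
# bits, and any single flipped bit can be located and repaired.
# This is a toy illustration of the idea behind ECC memory, not production code.

def encode(d1, d2, d3, d4):
    """Return a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers bit positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers bit positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers bit positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(bits):
    """Correct a single-bit error, then return the 4 data bits."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]  # recheck parity over positions 1, 3, 5, 7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]  # recheck parity over positions 2, 3, 6, 7
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]  # recheck parity over positions 4, 5, 6, 7
    error_pos = s3 * 4 + s2 * 2 + s1  # 0 means no error; else 1-based position
    if error_pos:
        b[error_pos - 1] ^= 1  # flip the bad bit back
    return [b[2], b[4], b[5], b[6]]

codeword = encode(1, 0, 1, 1)
codeword[5] ^= 1                         # simulate a single-bit fault
assert decode(codeword) == [1, 0, 1, 1]  # the original data survives
```

The trick is that each parity bit watches an overlapping subset of positions, so the pattern of failed checks spells out exactly which bit flipped.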
And while reducing the number of errors in a given system will continue to be important, what are we really after here?
Well, the truth, right?
I propose we come up with a new name for this kind of truth. I think it should be called a “HAT,” a “human-accepted truth.”
The aggregate of HATs is what we shall call “knowledge.” Knowledge is the sum of all human-accepted truths.
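If we want to be playful about it, the definition fits in a few lines of code. This is purely a toy formalization of the idea; the threshold, names, and structure are all invented for illustration.

```python
# A toy formalization of the HAT idea: a claim becomes a "human accepted
# truth" once enough independent reviewers accept it, and knowledge is
# simply the set of all such claims. The threshold is an arbitrary choice.

HAT_THRESHOLD = 3  # how many independent acceptances a claim needs

acceptances: dict[str, set[str]] = {}  # claim -> humans who accept it

def accept(claim: str, reviewer: str) -> None:
    """Record one human's acceptance of a claim."""
    acceptances.setdefault(claim, set()).add(reviewer)

def knowledge() -> set[str]:
    """The aggregate of HATs: every claim with enough acceptances."""
    return {c for c, who in acceptances.items() if len(who) >= HAT_THRESHOLD}

accept("water boils at 100 C at sea level", "ada")
accept("water boils at 100 C at sea level", "grace")
accept("water boils at 100 C at sea level", "alan")
accept("the model's summary is accurate", "ada")  # one vote isn't enough

print(knowledge())  # only the fully accepted claim appears
```

The hard arguments, of course, are about who counts as a reviewer and where the threshold should sit, and no amount of code settles those.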