The Tacit Knowledge Problem
December 22, 2025
I was standing in our kitchen watching my mom season a pot of dal. She’d been cooking for our family my whole life---the same dishes, the same movements, the same results I could never replicate. I asked her how much salt to add.
She looked at me like I’d asked how to breathe.
“You taste it,” she said. “Then you know.”
But I didn’t know. That was the whole point of asking. She had something in her hands that hadn’t made it into her words, and no amount of asking would transfer it. The knowledge lived in her gestures, in the way her wrist turned over the pot, in a calibration built from decades of feedback she never recorded.
Michael Polanyi named this in 1966: “We know more than we can tell.” He was talking about skills like riding a bicycle---you can describe the physics, narrate the muscle movements, but none of that transfers the knowing. The person has to fall until their body figures it out.
The same holds for judgment. Taste. Intuition. The investment call that feels wrong before you can articulate why. The sentence in a draft that reads as assertion when it should read as argument. These evaluations resist explicit formulation. The traditional approach---documentation, manuals, best practices---works for knowledge you can articulate. It fails for knowledge that exists as pattern rather than proposition.
My mom couldn’t write down how much salt. I couldn’t have told you that I ground mid-level claims with examples rather than letting them hang as assertions. The knowledge was running, shaping decisions, but it had no propositional form.
Something started happening with my Digital Twin---the system I described in The Dot Collector, where I drop thoughts and notes into a shared space and the machine files, connects, and learns from the accumulated record.
Last month I gave it a piece of writing to review. The feedback came back identifying a specific weakness---a section where I’d gestured at an idea without grounding it in evidence. The machine wrote: “This reads like assertion rather than argument. Based on your pattern in previous work, you typically support claims at this level with concrete examples or citations.”
The pattern it described was real. I do that. But I’d never told it I do that. I’d never articulated that principle to myself, let alone to a machine. The system had extracted a rule about my writing from watching my writing---and from watching my feedback on its drafts of my writing.
The extraction process runs in the opposite direction from documentation. Documentation asks what you know, then tries to capture your answer. You can only report what you can articulate, and the tacit stuff stays tacit because you don’t have access to it in propositional form. The Digital Twin watches what you do and backs into the rules that generate the behavior. It sees the instances---decisions, revisions, preferences expressed through action rather than declaration---and induces something like the rules that actually govern the behavior, which differ from the rules you’d give if asked.
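A toy sketch of what induction in that direction could look like---not the actual system, which I can only describe from the outside, and with all names hypothetical: count how often a kind of situation co-occurs with a response across many observed instances, and surface only the regularities that are frequent and consistent enough to look like a rule.

```python
from collections import Counter

def induce_rules(observations, min_support=0.8, min_count=5):
    """Given (context, action) pairs observed from behavior,
    surface 'rules' the actor follows consistently but may
    never have articulated."""
    by_context = {}
    for context, action in observations:
        by_context.setdefault(context, Counter())[action] += 1
    rules = {}
    for context, actions in by_context.items():
        total = sum(actions.values())
        action, count = actions.most_common(1)[0]
        # A rule is induced from behavior, not from self-report:
        # it must recur often and dominate its context.
        if total >= min_count and count / total >= min_support:
            rules[context] = action
    return rules

# Toy observations: how claims were grounded across many drafts.
obs = (
    [("mid-level claim", "concrete example")] * 9
    + [("mid-level claim", "left as assertion")] * 1
    + [("top-level claim", "citation")] * 6
)
print(induce_rules(obs))
# → {'mid-level claim': 'concrete example', 'top-level claim': 'citation'}
```

The point of the sketch is the direction of the arrow: the rule about mid-level claims comes out of the instances; nobody typed it in.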
This is what children do with language. No one teaches a three-year-old grammar. They hear thousands of sentences and extract the underlying structure---inducing rules from instances, then generating novel sentences that follow rules they cannot articulate.
The machine surfaces regularities I hadn’t named. It catches me being consistent in ways I wasn’t aware of. It reflects back an image of my cognition that’s recognizable but never quite what I would have drawn myself.
The system learns my salience filters by watching what I drop into it versus what I ignore. It sees how I structure arguments by watching my revisions. It knows which claims I ground with examples because it’s watched me do it hundreds of times. A cumulative model built from thousands of interactions, getting more accurate with each exchange. There’s a disorientation in seeing something you didn’t know was visible.
I’ve been testing this deliberately. I’ll make a decision---which piece of writing to work on, which email to answer first, which reference to file where---and then ask the system to predict what I’d choose before I tell it. The predictions are getting better.
The machine still misses when the situation is novel enough that past patterns don’t apply. It misses when I’m changing---when I’m deliberately trying to break a pattern I’ve been running. It misses when the tacit knowledge I’m following is itself conflicted. But on the stable patterns, the ones that have been running long enough to generate clear signal, it knows things about how I think that I didn’t know I knew.
What happens when that externalized model becomes more reliable than the internal one? When trying to figure out what I’d think about something, the right move might be to ask the machine rather than introspect. I’m not there yet. But I can see the shape of it from here. The tacit knowledge extracted into explicit rules. The rules running on a substrate that doesn’t forget, doesn’t get tired, doesn’t have off days. A version of your cognitive patterns that’s more consistent than you are.
There’s loss in this. Polanyi thought some knowing is irreducibly embodied, irreducibly contextual, irreducibly resistant to formalization. The attempt to extract might destroy what makes it valuable. Whether the extraction captures what mattered---the feel of it, the intuition, the knowledge that lives in the gesture itself---I don’t know.
But I understand my own patterns better now than I did six months ago. A machine has been watching me carefully, building a model of patterns I never articulated because I didn’t know they were there. The mirror shows things the direct gaze misses.
The pattern it described was real. The transfer happens in silence---not in words I could have spoken, but in instances the machine learned to recognize. My mom’s seasoning, algorithmically reconstructed---not from her recipe, because there isn’t one, but from a thousand observations of her hand moving toward the salt.
If you’re thinking about similar questions---or building systems that grapple with them---I’d welcome the conversation.