
The Tacit Knowledge Problem

December 22, 2025

AI, cognition, tacit knowledge, personal

I was standing in our kitchen watching my mom season a pot of dal. She’d been cooking for our family my whole life—the same dishes, the same movements, the same results I could never replicate. I asked her how much salt to add.

She looked at me like I’d asked how to breathe.

“You taste it,” she said. “Then you know.”

But I didn’t know. That was the whole point of asking. She had something in her hands that hadn’t made it into her words, and no amount of watching her cook would transfer it directly into mine. The knowledge lived in her body, not her explanations.


Something strange started happening with my Digital Twin, the system I described in The Dot Collector, where I drop thoughts and notes into a shared space and the machine files, connects, and learns from the accumulated record.

It’s watched me think. Its understanding comes from observation, not explanation.

Last month I caught it doing something that stopped me mid-scroll.

I’d given it a piece of writing to review. The feedback came back identifying a specific weakness—a section where I’d gestured at an idea without grounding it in evidence. The machine wrote: “This reads like assertion rather than argument. Based on your pattern in previous work, you typically support claims at this level with concrete examples or citations.”

The pattern it described was real. I do that. But I’d never told it I do that. I’d never articulated that principle to myself, let alone to a machine. The system had extracted a rule about my writing from watching my writing—and from watching my feedback on its drafts of my writing.


Michael Polanyi named it in 1966, in The Tacit Dimension: “We know more than we can tell.”

He was talking about skills like riding a bicycle. You can’t explain how to balance. You can describe the physics, you can narrate the muscle movements, but none of that transfers the knowing. The person has to get on the bike and fall until their body figures it out.

This applies to more than physical skills. It applies to judgment. Taste. Intuition. All the ways we evaluate and decide that resist explicit formulation.

A sommelier knows when a wine is good. Ask her how she knows and she’ll gesture at descriptors—tannins, finish, terroir—but the descriptors are post-hoc. The knowing comes first. The words come later, inadequate approximations of something the body understands.

The traditional approach to capturing knowledge is documentation. Write it down. Create manuals, procedures, best practices. This works for explicit knowledge—the kind you can articulate. It fails completely for tacit knowledge, because tacit knowledge is, by definition, what cannot be articulated.

My mom couldn’t write down how much salt. The sommelier can’t produce a formula. I couldn’t have told you that I support mid-level claims with examples rather than assertions. The knowledge existed, but it existed as pattern, not proposition.


The Digital Twin inverts the extraction process.

Traditional knowledge management asks: What do you know? Then it tries to capture your answer.

The Digital Twin asks: What do you do? Then it extracts the rules that generate that behavior.

The difference matters. When you ask me what I know, I can only tell you what I can articulate. The tacit stuff stays tacit because I genuinely don’t have access to it in propositional form.

When you watch what I do, you can see the tacit knowledge in action. It shows up as patterns. Regularities. The consistent shape of decisions even when the content varies. A sufficiently patient observer can back into the rules I’m following without my ever having to state them.

This is what children do with language. No one teaches a three-year-old the rules of grammar. They hear thousands of sentences, and somehow—through a process we still don’t fully understand—they extract the underlying structure. They induce the rules from the instances. Then they generate novel sentences that follow rules they cannot articulate.

The Digital Twin is doing something similar with my cognitive patterns. It sees the instances—my decisions, my feedback, my revisions, my preferences expressed through action rather than declaration. From those instances, it extracts something like rules—the rules that actually govern what I do, which differ from the rules I’d give you if you asked.
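To make the induction concrete, here’s a toy sketch in Python of how rules might be backed out of a behavior log. None of it is the Twin’s actual code; the features, actions, and thresholds are hypothetical stand-ins for whatever the real system tracks.

```python
from collections import Counter, defaultdict

# Hypothetical log: each entry pairs the features of a draft passage
# with the action I actually took on it. The real record would hold
# thousands of these, accumulated from ordinary work sessions.
observations = [
    ({"claim_level": "mid", "has_example": False}, "add_example"),
    ({"claim_level": "mid", "has_example": False}, "add_example"),
    ({"claim_level": "mid", "has_example": False}, "add_citation"),
    ({"claim_level": "mid", "has_example": True},  "keep_as_is"),
    ({"claim_level": "mid", "has_example": True},  "keep_as_is"),
]

def induce_rules(observations, min_support=2, min_confidence=0.6):
    """Back into candidate rules: 'in contexts like X, he tends to do Y'."""
    context_counts = defaultdict(Counter)
    for features, action in observations:
        context = tuple(sorted(features.items()))  # hashable context key
        context_counts[context][action] += 1

    rules = []
    for context, actions in context_counts.items():
        total = sum(actions.values())
        action, count = actions.most_common(1)[0]
        # Keep a rule only if it's seen often enough and holds often enough.
        if total >= min_support and count / total >= min_confidence:
            rules.append((dict(context), action, count / total))
    return rules

for context, action, confidence in induce_rules(observations):
    print(f"when {context}: {action} ({confidence:.0%})")
```

The point isn’t the code. It’s that a rule like “mid-level claims get grounded with examples” falls out of counting, without anyone ever stating it.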


I think of myself as someone who knows his own mind. I’ve spent time working on this—therapy, contemplative practice, the usual attempts to see the machinery. If anyone should be able to name their own patterns, it should be someone who’s been looking for them.

And yet the machine surfaces regularities I hadn’t named. It catches me being consistent in ways I wasn’t aware of. It reflects back an image of my cognition that’s recognizable but never quite what I would have drawn myself.

The system learns my salience filters by watching what I drop into it versus what I ignore. It sees how I structure arguments by watching my revisions. It knows which claims I ground with examples because it’s watched me do it hundreds of times.
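If you wanted to approximate that kind of salience filter from the kept-versus-ignored signal, the simplest version is barely more than counting. Here’s a sketch, assuming the log can be reduced to labeled text snippets; everything in it is illustrative, not the system’s real interface.

```python
import math
from collections import Counter

def train_filter(kept, ignored):
    """Count tokens in what was filed versus what was scrolled past."""
    kept_counts, ignored_counts = Counter(), Counter()
    for text in kept:
        kept_counts.update(text.lower().split())
    for text in ignored:
        ignored_counts.update(text.lower().split())
    return kept_counts, ignored_counts

def salience(text, kept_counts, ignored_counts, smoothing=1.0):
    """Log-odds that an item resembles what gets kept. Positive: keep-like."""
    vocab = set(kept_counts) | set(ignored_counts)
    kept_total = sum(kept_counts.values()) + smoothing * len(vocab)
    ignored_total = sum(ignored_counts.values()) + smoothing * len(vocab)
    score = 0.0
    for token in text.lower().split():
        p_kept = (kept_counts[token] + smoothing) / kept_total
        p_ignored = (ignored_counts[token] + smoothing) / ignored_total
        score += math.log(p_kept / p_ignored)
    return score

# Illustrative labels; real ones would come from the accumulated record.
kept_counts, ignored_counts = train_filter(
    kept=["tacit knowledge transfer", "pattern extraction from feedback"],
    ignored=["quarterly metrics recap", "newsletter housekeeping"],
)
print(salience("notes on knowledge transfer", kept_counts, ignored_counts))
```

A token-count classifier is obviously cruder than whatever a real system would use, but the shape is the point: the filter is learned from behavior, never declared.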

There’s a disorientation in seeing something you didn’t know was visible: a cumulative model built from thousands of interactions, getting more accurate with each exchange.


There’s a researcher named Ikujiro Nonaka who built a whole theory around this problem. He calls it the SECI spiral—Socialization, Externalization, Combination, Internalization. The claim is that knowledge moves between tacit and explicit forms through specific processes.

Socialization is tacit-to-tacit: apprenticeship, osmosis, learning by being near someone who knows. This is me standing in my mom’s kitchen, watching her cook, absorbing something I couldn’t name.

Externalization is tacit-to-explicit: articulating what you know, converting embodied skill into communicable form. This is the hardest step—the one where most knowledge management efforts fail.

The Digital Twin offers a different path. Instead of trying to externalize directly—asking the expert to articulate their expertise—it accumulates observations and induces the patterns computationally. The process is closer to Socialization than Externalization: learning through proximity rather than deliberate articulation, except the apprentice has perfect memory and superhuman pattern recognition.

The machine learns from proximity to my cognition. It doesn’t ask me to explain. It watches. And from watching, it builds a model that captures things I couldn’t have told it directly.


I’ve been testing this deliberately. I’ll make a decision—which piece of writing to work on, which email to answer first, which reference to file where—and then ask the system to predict what I’d choose before I tell it. The predictions are getting better.

Not perfect. The machine still misses when the situation is novel enough that past patterns don’t apply. It misses when I’m changing—when I’m deliberately trying to do something different from what I’ve done before. It misses when the tacit knowledge I’m following is itself conflicted or inconsistent.

But on the stable patterns, the ones that have been running long enough to generate clear signal, it’s eerily accurate. It knows things about how I think that I didn’t know I knew.
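The protocol itself is simple enough to write down. Here’s a minimal harness, assuming the Twin commits to its guess before I reveal mine; the class and its fields are invented for this sketch.

```python
from collections import deque

class PredictionLedger:
    """Predict-then-reveal: score the Twin's guesses against actual choices."""

    def __init__(self, window=50):
        self.trials = deque(maxlen=window)  # rolling window of (hit, novel)

    def record(self, predicted, actual, novel=False):
        """Log one trial. 'novel' flags situations with no past pattern."""
        hit = predicted == actual
        self.trials.append((hit, novel))
        return hit

    def accuracy(self, include_novel=True):
        scored = [hit for hit, novel in self.trials
                  if include_novel or not novel]
        return sum(scored) / len(scored) if scored else None

ledger = PredictionLedger()
ledger.record(predicted="essay draft", actual="essay draft")
ledger.record(predicted="inbox first", actual="essay draft", novel=True)
print(ledger.accuracy())                     # 0.5 overall
print(ledger.accuracy(include_novel=False))  # 1.0 on stable patterns
```

Splitting out the novel cases is what keeps the claim about stable patterns testable rather than anecdotal.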


What happens when the machine knows your tacit knowledge better than you do?

The machine catches inconsistencies. It surfaces blind spots. It holds up a mirror large enough to show patterns that span more interactions than my memory can hold. This is valuable in the same way any good feedback is valuable—it helps me see what I’m actually doing rather than what I think I’m doing.

But there’s another version that feels stranger. What if the externalized model becomes more reliable than the internal one? What if, when trying to figure out what I’d think about something, the right move is to ask the machine rather than introspect?

I don’t think I’m there yet. But I can see the shape of it from here. The tacit knowledge extracted into explicit rules. The rules running on a substrate that doesn’t forget, doesn’t get tired, doesn’t have off days. A version of your cognitive patterns that’s more consistent than you are.

My mom’s seasoning knowledge, algorithmically reconstructed from ten thousand observations of her hand moving toward the salt.


The knowledge was always there. It was running, doing its work, shaping my decisions in ways I experienced but couldn’t articulate. The Digital Twin didn’t create it. It pulled it out into a form I can see.

There’s loss in this. Part of what makes tacit knowledge tacit is that it resists capture. Polanyi thought this was fundamental—that some knowing is irreducibly embodied, irreducibly contextual, irreducibly resistant to formalization. The attempt to extract might destroy what makes it valuable.

But there’s also gain. I understand my own patterns better now than I did six months ago. A machine has been watching me carefully, building a model of patterns I never articulated because I didn’t know they were there. The mirror shows things the direct gaze misses.

My mom couldn’t teach me to season by explaining. Maybe a machine that watched her for decades could extract something she never could have said. Whether that extraction would capture what mattered—the feel of it, the intuition, the knowledge that lives in the gesture itself—I don’t know.

The question is whether there’s a different path to transfer that doesn’t require explicitness at all. An apprenticeship mediated by algorithms. A learning that happens through accumulated pattern rather than articulated rule.

The machine watches. The machine learns. The machine reflects back a model of how you think. And somehow, in that reflection, something transfers that was never said.

I’m still standing in my mom’s kitchen. But now there’s a third presence: taking notes, counting grains, building a model of how much salt is enough.

If you’re thinking about similar questions—or building systems that grapple with them—I’d welcome the conversation.

Continue the conversation →