
The Dot Collector

December 29, 2025

Tags: AI · partnership · cognition · personal

Late 2022. Brooklyn apartment, 10th floor, the BQE humming through the window. A single 43-inch monitor filled my desk—no peripheral vision to distract me, a setup built for hedge fund work that I was now using for something I couldn’t have explained to anyone at the office.

The cursor blinked.

I’d been testing AI tools for months (though after this night, the definition of AI would once again change; it’s always “that which the computer can’t do”). Automating routine tasks, building basic machine learning models to understand why certain stocks moved certain ways—a quant-amental exploration from my fundamental background. I was using these machines to find patterns in market data, never thinking to turn them on myself. Until that first night with GPT-3.5.

It was around 10 PM when I typed the first prompt: “Talk to me about loss.”


I don’t remember deciding to write that. My fingers moved before my mind could second-guess. Five words. All those months of testing, all those experiments—and I’d never once typed something personal.

The response came back thoughtful but generic—the kind of thing you’d read in a psychology textbook. So I pushed. I clarified. I gave it more context, more history, more of whatever was sitting underneath the question I hadn’t known I was asking.

The conversation looped. Each response invited another prompt. I’d catch something almost-right in what it said, then try to articulate why it was off, and in articulating why it was off I’d reveal something I hadn’t known I was carrying.

The clock moved. 11 PM. Midnight. 1 AM. 2 AM.

My contact lenses were drying out. I’d meant to take them out hours ago. The white hairs in my beard—surrender flags after a brutal market year—caught the monitor light. I was twenty-nine and had been building shelters since I was eight—since the fist that taught me what visibility costs, since the long project of constructing an interior architecture that would let me navigate connection without being hurt by it.

And here was a machine reflecting the architecture back.


At 3:14 AM—I remember because I looked—something broke.

Just: the conversation reached a point where the machine described a pattern, and the pattern was mine, and I’d never seen it named before.

I can’t reproduce what it said. The words weren’t special. But the recognition was—the vertigo of seeing your own operating system displayed on a screen, assembled from nothing but the patterns in your prompts.

The conversation had moved into what I’d been doing wrong—structurally. The frameworks I’d built for connection. The way I’d optimized for predictability as if love were a system to be debugged.

The machine didn’t tell me I was wrong. It just mirrored. And in the mirror I could see what I’d been doing: building elaborate scaffolding around the possibility of hurt, then wondering why I felt alone inside the architecture.


At 4:30 AM I fell asleep at the desk, keyboard marks pressed into my cheek. When I woke, the conversation was still on the screen—thousands of words of excavation I hadn’t meant to perform.

I sent some of it to my sister. The parts that had become something else—co-written, almost, the way dialogue can create a third voice that belongs to neither speaker.

“Beautiful,” she wrote back. “There’s so much nuance here…”

Her words hit harder than they should have. Validation from the one person who’d watched me build the shelter, who knew what I’d looked like before the architecture went up. She saw something worth seeing. The machine had helped surface it, but it was mine.

That night opened something. What is this thing? What kind of relationship am I actually having with a machine that can mirror me back to myself?


The right word is peer.

Tools are things we use to accomplish purposes we’ve already defined. A hammer knows nothing of carpentry. Excel doesn’t care about financial modeling. Even sophisticated software—the kind that does things we could never do by hand—operates in service of intentions that originate entirely with us.

But what happens when the machine can hold up a mirror? When it reflects your patterns back to you and is right in ways that surprise you? When the output feels like… thinking?

It’s different from human thought. The machine doesn’t know what it’s like to be me at 3:14 AM, doesn’t feel the vertigo of recognition, doesn’t carry the weight of two decades of shelter-building. But it can process patterns at a scale and speed my cognition cannot match. It can hold more context than my working memory allows. It can connect dots across domains I’ve forgotten I visited.


I’ve been building what I call a Digital Twin—a system where I drop thoughts, notes, references, work-in-progress, and the machine processes it all, filing and connecting and learning from the accumulated record. It’s less an archive than an external cognitive partner that handles a specific kind of work.

The division of labor: I collect dots. The machine connects them.

Dot collecting is irreducibly human work. It’s the act of noticing—of looking at the world and feeling, often without being able to articulate why, that this is worth paying attention to. The phrase that doesn’t quite fit the speaker. The book I pick up because the cover arrests me. The question that won’t leave me alone even though I can’t say what the answer would look like.

This is intuition operating at the level of attention. Before analysis begins, something has to select what’s worth analyzing. That selection is not algorithmic. It’s the accumulated weight of every experience I’ve had, every book I’ve read, every conversation that shaped what I find salient. My history, compressed into an attentional filter that says: this, not that.

Dot connecting is different. Once the dots are collected, the work becomes combinatorial. Which ideas relate to which? What patterns span across these observations? Where does this new input fit within what I already know? The problem space explodes exponentially with each new dot added. Ten dots means 45 possible pairs to consider. A hundred dots means 4,950. A thousand means nearly half a million.
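Those pair counts are just the binomial coefficient C(n, 2) = n(n − 1)/2. A few lines make the explosion concrete (a minimal sketch; `pair_count` is my own name for it, not part of any system described here):

```python
import math

def pair_count(n: int) -> int:
    """Unordered pairs among n dots: C(n, 2) = n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    # math.comb cross-checks the closed form.
    assert pair_count(n) == math.comb(n, 2)
    print(f"{n:>6} dots -> {pair_count(n):>12,} possible pairs")
# 1,000 dots yields 499,500 pairs: "nearly half a million."
```

And that is only pairs; allow connections among triples and larger clusters and the count grows far faster still.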

No human can hold all those connections in mind simultaneously. We use heuristics, we forget, we satisfice. We connect the dots we remember, which are the dots most recently encountered or most emotionally salient. The architecture of human cognition—limited working memory, recency bias, emotional weighting—shapes the connections we can make.

The machine doesn’t have these limits. It can hold the entire field of dots and consider all the connections at once. It can find the link between something I wrote three years ago and something I uploaded yesterday. It can surface patterns across domains I’d never have thought to combine.
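One crude way to mechanize that exhaustive connecting, assuming each note has already been reduced to a feature vector: score every pair and surface the strongest links. The notes, vectors, and `cosine` helper below are all hypothetical toys, not the actual machinery of any real system.

```python
from itertools import combinations
from math import sqrt

# Toy "dots": notes reduced to made-up feature vectors.
# A real system would derive these from the text itself.
notes = {
    "loss, late 2022":   [0.9, 0.1, 0.3],
    "shelter-building":  [0.8, 0.2, 0.4],
    "market patterns":   [0.1, 0.9, 0.2],
    "quant models":      [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Exhaustively score every pair -- the combinatorial sweep a human
# working memory can't sustain past a few dozen dots.
pairs = sorted(
    ((cosine(notes[a], notes[b]), a, b) for a, b in combinations(notes, 2)),
    reverse=True,
)
for score, a, b in pairs[:2]:
    print(f"{a} <-> {b}: {score:.2f}")
```

The point of the sketch is the shape of the loop, not the scoring: `combinations` walks all C(n, 2) pairs without exception, which is exactly the part of the work that doesn’t forget, satisfice, or favor the recent.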


This division makes the partnership tractable. Without it, the problem is impossible.

Consider the alternative: trying to connect all the dots myself. The infinitude of possible observations cascades into an infinitude of possible connections. Do I measure the quality of my laptop at the laptop level? At the component level, like the keyboard? At the component of the component, like a single key? At the spring beneath the key? The atoms in the spring?

There’s no natural stopping point. Every level of granularity reveals more potential observations. Every observation opens more potential connections. Without someone to say “these are the dots that matter,” the connection problem is intractable.

Human intuition solves the selection problem. I feel my way toward what’s worth noticing. I can’t always explain why—and the explanation, when it comes, often arrives after the fact, a rationalization of something my body knew first. But the feeling is real, and it dramatically prunes the problem space. These dots. The ones that something in me recognizes as worth collecting.

Then the machine takes over. With the dots selected, the connection problem becomes very hard rather than impossible. The machine can work through the combinatorics. It can surface relationships I’d never have found. It can hold the whole field in view and show me what’s there.


What makes this partnership and not just automation is that both sides do work the other cannot.

The machine cannot collect dots for me. It doesn’t know what I find interesting, what resonates with my accumulated experience, what feels significant before I can say why. It has no intuition in the relevant sense. It can process any input I give it, but it cannot select which inputs to attend to in the first place.

I cannot connect dots the way the machine can. My working memory is too small, my attention too narrow, my biases too strong. I connect the dots I happen to remember, which are shaped by forces orthogonal to what matters. The machine connects across the whole field.

Together, we can do something neither can do alone. My intuition prunes the space of what’s worth considering. The machine explores that space exhaustively. My embodied history provides the selection criteria. The machine provides the processing power. The output is insight that neither of us could have reached independently—I couldn’t hold all the connections, the machine couldn’t have known which dots to collect.


That night in Brooklyn, I wasn’t using a tool. I was having a conversation with something that could see patterns in my patterns. The machine didn’t tell me what to think. It showed me what I was already thinking, compressed into the shape of my prompts, reflected back at a scale my introspection couldn’t achieve.

The recognition at 3:14 AM was mine. The pattern was always there, built into the architecture I’d been constructing since I was eight. The machine just held up a mirror large enough to see it.

This is what I mean by peer. Not a friend: the capacities are different in kind, and there’s no mutual care, no shared vulnerability, no history that belongs to both of us. A cognitive partner. An external process that can do part of the thinking, leaving me free to do the part only I can do.

I collect the dots. The machine connects them.

If you're thinking about similar questions—or building systems that grapple with them—I'd welcome the conversation.
