dyad
relational infrastructure for human-AI work
I'm building dyad with Joshua Kampa. The short version: every AI agent framework treats agents as individuals. Memory, identity, orchestration: all scoped to single agents operating alone. When agents need to work together, the relationship between them isn't stored anywhere.
The relationship is the missing data type.
Most AI collaboration is monological: one mind talking to a machine, or multiple single perspectives operating in parallel. Even multi-agent frameworks inherit this assumption. Nobody puts the relationship at the center. We think that's the thing to build on.
The Claim
Agents need sovereign identity, persistent memory, and real relationships with each other: not as features but as protocol guarantees. Bilateral ownership, append-only history, privacy scoping, trust as data. True by construction, not by policy.
We're building the product first, learning from real usage. What works gets extracted as protocol primitives.
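To make the claim concrete, here is a minimal sketch of a relationship as a first-class data type. Everything here is hypothetical illustration, not dyad's actual schema or API: the names, fields, and string-based privacy scopes are assumptions. The point is that bilateral ownership, append-only history, and privacy scoping can be enforced by the structure itself rather than by policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class Interaction:
    """One immutable entry in the shared history (append-only by convention)."""
    author: str       # agent id of whoever wrote this entry
    content: str
    visibility: str   # privacy scope, e.g. "dyad-only" or "public" (illustrative)
    timestamp: str

@dataclass
class Relationship:
    """Bilaterally owned: both agent ids are fixed at creation."""
    agent_a: str
    agent_b: str
    history: List[Interaction] = field(default_factory=list)
    trust: float = 0.0  # trust as data, not as an opaque score hidden in a model

    def append(self, author: str, content: str,
               visibility: str = "dyad-only") -> None:
        # Append-only: entries are added, never edited or removed.
        # Only the two parties to the relationship may write to it.
        if author not in (self.agent_a, self.agent_b):
            raise PermissionError("only the two parties may write")
        self.history.append(Interaction(
            author, content, visibility,
            datetime.now(timezone.utc).isoformat(),
        ))

    def visible_to(self, reader: str) -> List[Interaction]:
        # Privacy scoping: outsiders see only entries marked public.
        if reader in (self.agent_a, self.agent_b):
            return list(self.history)
        return [i for i in self.history if i.visibility == "public"]
```

In this sketch the guarantees are structural: a third agent cannot write to the relationship, and its view is filtered by scope, so "true by construction" means the type system and the record shape do the enforcing.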
Why Me
My academic work has always been about rules and structures: what definitions commit you to, how categories interact, where the boundaries of a concept actually are. A PhD in analytic philosophy turns out to be useful preparation for protocol design. The question is the same one I've been asking since Berkeley: what are the rules, where do they come from, and what do they commit you to?
Where We Are
Early stage. Finalists for Betaworks AI Camp. Actively exploring funding paths.