Tenet is an operating system for founders building exceptional companies. It provides tools for high-intent, mission-driven work in the early stages of company building: when you're validating hypotheses, iterating rapidly, and need accountability more than advice.

The first thing I explored was memory. I realised that this underpins everything. Memory saves you from having to repeat information, and it provides continuity across conversations.

You can design an extraction, retrieval, and injection-based memory structure that gives the AI proper, durable context about what the user is doing over time. What kind of information deserves to persist, in what shape, and under what conditions it should be reintroduced into the model's context is governed by what the AI tool is used for, which means this structure is not universal.
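To make the extract-retrieve-inject loop concrete, here is a minimal sketch in Python. Everything is an assumption for illustration: in a real system an LLM pass would decide what to extract, and retrieval would use embeddings rather than tag overlap; the class and field names are hypothetical, not Tenet's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str  # the persisted fact or state
    kind: str     # e.g. "goal", "constraint", "assumption"
    tags: set[str] = field(default_factory=set)  # conditions for reintroduction

class MemoryStore:
    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def extract(self, message: str, kind: str, tags: set[str]) -> None:
        # In a real system an LLM pass would decide what deserves to persist;
        # here the caller supplies the extraction directly.
        self.items.append(MemoryItem(message, kind, tags))

    def retrieve(self, context_tags: set[str]) -> list[MemoryItem]:
        # Reintroduce only items whose conditions overlap the current context.
        return [m for m in self.items if m.tags & context_tags]

    def inject(self, prompt: str, context_tags: set[str]) -> str:
        # Prepend relevant durable context to the model prompt.
        header = "\n".join(f"[{m.kind}] {m.content}"
                           for m in self.retrieve(context_tags))
        return f"{header}\n\n{prompt}" if header else prompt
```

The point of the shape is that persistence (extract), relevance (retrieve), and context-building (inject) are three separate decisions, each of which can be tuned to the domain.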

ChatGPT’s memory cannot assume a single domain: it works for general assistance, but it can't capture the state of complex, evolving work. It is shallow by design, useful for recall but limited in how deeply it can shape behavior.

Tenet's memory is explicitly domain-specific. Every memory primitive is shaped around startups, founders, and the arc of building a company. Tenet’s memory captures intent, direction, constraints, assumptions, and work-in-progress state, stored as structured representations extracted from ongoing interactions. It maintains a living model of the company’s current state. This structured memory is what makes everything else possible.
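A domain-specific memory like this is easiest to picture as a typed record that structured extractions get merged into, rather than a log of raw chats. The sketch below is illustrative only; the field names mirror the primitives named above but are not Tenet's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CompanyState:
    # Hypothetical founder-domain memory primitives (illustrative names).
    intent: str = ""       # what the founder is trying to achieve
    direction: str = ""    # the current strategic bet
    constraints: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    work_in_progress: dict[str, str] = field(default_factory=dict)  # task -> status

    def apply_update(self, update: dict) -> None:
        # Merge one interaction's structured extraction into the living
        # model of the company, instead of appending raw conversation text.
        if "intent" in update:
            self.intent = update["intent"]
        if "direction" in update:
            self.direction = update["direction"]
        self.constraints.extend(update.get("constraints", []))
        self.assumptions.extend(update.get("assumptions", []))
        self.work_in_progress.update(update.get("work_in_progress", {}))
```

Because the state is typed, downstream features (nudges, canvases, onboarding) can read specific fields instead of re-parsing chat history.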

You can solve the problem of having to manage chats, which is not something I want people to focus on. In AI workspaces, chat is the wrong control plane. Coding agent sessions work because the real state lives in the codebase and its git history. Companies are still figuring out the nuances of merging chat, docs, and AI to reduce context-switching. When you have infinite memory, you don’t need to worry about sending each message in the right context, and the system manages your work sessions. I’ve been trying to capture that frictionless feeling.

There’s a cool thing this new memory implementation grants me: two-way communication, the best mechanism I’ve found for accountability in this founder-mode workspace. If you don’t chat for a while, it will reach out to you. It will make you close your loops.

It applies a subtle, increasing pressure that mimics being in an accelerator. It doesn’t just send an in-app notification you can ignore; it will start sending you emails that prompt you to reply, all of it specific to where you left off. The world’s first two-way AI!

To solve the cold start problem, Tenet uses a guided onboarding flow designed to elicit intent, constraints, and context rather than surface-level preferences. Instead of asking generic questions, the onboarding narrows the user into a specific company stage, goal horizon, and problem space, giving the system enough structured signal to be immediately useful. Early pilot users consistently struggled without this grounding, which led to fragmented interactions and shallow outputs. Introducing a more directed onboarding significantly improved coherence and continuity from the first session.
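One way to picture the directed onboarding is as a fixed sequence of steps, each eliciting one structured field (stage, goal horizon, problem space, constraints) that seeds the memory system. The questions and field names below are hypothetical, written only to show the shape of "structured signal" versus surface-level preferences.

```python
# Hypothetical onboarding steps: each elicits one structured field.
ONBOARDING_STEPS = [
    ("stage", "Where is the company today? (idea / pre-seed / seed)"),
    ("goal_horizon", "What must be true in 90 days for this to be working?"),
    ("problem_space", "Whose problem are you solving, in one sentence?"),
    ("constraints", "What is the hardest constraint right now (time, money, team)?"),
]

def run_onboarding(answers: dict[str, str]) -> dict[str, str]:
    """Collect answers into the structured profile the memory system seeds from."""
    return {name: answers.get(name, "") for name, _question in ONBOARDING_STEPS}

def is_grounded(profile: dict[str, str]) -> bool:
    # Enough signal to be immediately useful: every field has an answer.
    return all(profile.get(name, "").strip() for name, _ in ONBOARDING_STEPS)
```

The gate at the end is the point: the system refuses to treat a half-answered profile as grounding, which is what fragmented early-pilot sessions lacked.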

The vision came from conversations with founders in the pilot: they wanted an AI that didn't just chat, but actually did work. Much of the work involved reverse-engineering the desired experience into systems-level primitives that could actually be built.

The core interaction loop of the app is built on four layers. Tackling it as one feature is wrong; there are four large, multi-week, sprint-scale efforts that together culminate in the app itself. Keep in mind, this is all on top of the basic persona-based AI chat MVP, assuming the scaffolding and the fundamentals are in place.

The first layer is a new interaction modality, best described as a layered control mechanism above the standard request-response chat style of using an AI.

It stems from the idea of canvases. ChatGPT Canvas is a co-pilot where your conversation slides to the side and you can write documents and get assistance, or ask it to write code and have it simultaneously render React components. Google recently made this a lot more useful with Generative UI. If you ask Gemini to explain some scientific principle, for example the mechanics of photosynthesis or solar system orbits, it will supplement the text response with immersive visual experiences and interactive tools and simulations. You get two modalities to understand through at once, and users strongly prefer this.

The difference between Generative UI and Tenet’s Layer 1 is that Tenet offers a handful of founder tools and experiences rather than UI generated on the fly. These tools are pre-built, opinionated, and stable. The LLM does selection and orchestration, not UI invention. I identified potent semantic surfaces for founder work: a visual board, synthetic users, validation testing, and so on.

These canvases are exposed to the main LLM via tool calls. It can open a canvas when one is relevant to the current scope. The chat becomes a router, removing from the user the burden of deciding where work belongs.
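"The LLM does selection, not invention" falls out naturally from function calling: the model can only pick from an enum of pre-built canvases. The sketch below uses an OpenAI-style tool schema as one plausible shape; the canvas names and the `open_canvas` tool are illustrative, not Tenet's actual API.

```python
# Hypothetical tool definition in OpenAI-style function-calling format.
CANVAS_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "open_canvas",
            "description": "Open a founder canvas when one suits the current work.",
            "parameters": {
                "type": "object",
                "properties": {
                    "canvas": {
                        "type": "string",
                        # Selection, not invention: the model can only choose
                        # from the pre-built, opinionated surfaces.
                        "enum": ["board", "validation", "synthetic_users"],
                    },
                    "reason": {"type": "string"},
                },
                "required": ["canvas"],
            },
        },
    }
]

def route_tool_call(name: str, args: dict) -> str:
    # The chat acts as a router: the model picks the surface, the app renders it.
    if name == "open_canvas":
        return f"rendering {args['canvas']} canvas"
    raise ValueError(f"unknown tool: {name}")
```

Because the enum is closed, every canvas the model can reach is a stable, hand-built experience; adding a new surface means adding one enum value and one render branch.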

An example is the board, a visual scaffolding and brainstorming whiteboard. You can draw on it, so you can wireframe pages, add inspiration like a mood board, and structure notes spatially, and you can also use the main LLM to interact with the board, to analyze it, or to draw on it itself.

Another is the validation canvas. It is designed for founders to test assumptions, reason through uncertainty, and compress messy external signals into clarity. The AI can populate it with synthesized insights from user conversations, competitor research, market signals, or synthetic user feedback, and continually update it as new information arrives. Instead of scattering validation across notes, chats, and spreadsheets, this canvas becomes a living surface where hypotheses, evidence, and conclusions coexist, and where both the founder and the AI can iteratively refine understanding and seek truth.
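A "living surface where hypotheses, evidence, and conclusions coexist" suggests a small data model underneath: hypotheses accumulate evidence from different sources, and the verdict is recomputed as signals arrive. The types and the majority-vote verdict rule below are my own illustrative assumptions, not the canvas's actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str     # e.g. "user interview", "competitor research", "synthetic user"
    summary: str
    supports: bool  # does this signal support the hypothesis?

@dataclass
class Hypothesis:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

    def verdict(self) -> str:
        """Compress messy signals into a current read, revised as evidence arrives."""
        if not self.evidence:
            return "untested"
        supporting = sum(e.supports for e in self.evidence)
        # Simple illustrative rule: a strict majority of supporting signals.
        return "supported" if supporting * 2 > len(self.evidence) else "challenged"
```

The key property is that the verdict is derived, never stored: both the founder and the AI can append evidence, and the canvas's conclusions stay consistent with the underlying signals.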