Building an AI Agent With Real Memory
I hesitate to say it, but I think I’m building a real AI agent.
No single platform does 100% of what I need out of the box, but the combination of Zapier Agents, Supabase, and a custom Lambda function is getting me there. My agent is actually three agents and two dedicated database tables.
The architecture
The biggest hurdle when building an AI agent today is giving it the tools it needs to actually do useful things. Zapier Agents makes that easy with thousands of integrations. But tools alone aren’t enough - the agent needs memory.
In Supabase, I set up two tables: Memory and Priorities.
The Memory table stores arbitrary records - AI-generated JSON objects with key info about contacts, clients, appointments, dates, counts, whatever matters. The Priorities table is a smaller list of my priorities across varying timelines: today, this month, this quarter.
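In code, the two record shapes might look like this (the field names are my illustrative assumptions, not the actual schema):

```python
from dataclasses import dataclass

# Hypothetical shapes for the two tables; field names are illustrative.
@dataclass
class MemoryRecord:
    key_info: dict          # AI-generated JSON: contacts, clients, appointments, dates, counts
    embedding: list[float]  # vector used for semantic retrieval

@dataclass
class Priority:
    text: str      # the priority itself
    timeline: str  # "today", "this month", or "this quarter"
```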
Each table has a dedicated agent whose sole job is to maintain it.
The Memory agent scans every email I send or receive, determines whether it’s important or spam, queries the memory table for relevant records, and decides whether to create a new record or update an existing one. Because the memory is stored in a vector database, the AI can send a simple keyword list - “doctor, appointment, schedule, May 22” - and get back the most semantically relevant records, even when no exact keywords match.
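Under the hood, that retrieval step ranks stored embeddings by similarity to the query’s embedding. A minimal sketch of cosine-similarity ranking, with toy two-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: list[float], records: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the ids of the k records whose embeddings best match the query."""
    ranked = sorted(records, key=lambda r: cosine(query, r[1]), reverse=True)
    return [rec_id for rec_id, _ in ranked[:k]]
```

In practice the vector store does this ranking server-side; the agent only supplies the keyword text, which gets embedded before the search runs.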
The Priorities agent sends me a daily message asking me to confirm my priorities, then handles revising them in the table.
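The revision step could be sketched like this, assuming the agent keeps what I confirm, drops what I don’t, and appends anything new (the function name and record shapes are hypothetical):

```python
def revise_priorities(
    current: list[dict],
    confirmed: set[str],
    new_items: list[tuple[str, str]],
) -> list[dict]:
    """Keep confirmed priorities, drop unconfirmed ones, append new (text, timeline) pairs."""
    kept = [p for p in current if p["text"] in confirmed]
    added = [{"text": text, "timeline": timeline} for text, timeline in new_items]
    return kept + added
```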
The Tasks agent runs four times a day, queries both Memory and Priorities, and based on that data determines if it needs to remind me of anything, act on my behalf, or ask for my manual approval on something important.
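That triage reduces to three outcome classes. A toy sketch, assuming simplified record fields (`due`, `high_stakes`, `summary`) that the real agent infers through plain-text reasoning rather than hard-coded rules:

```python
def decide(memory_records: list[dict], priorities: list[dict], today: str) -> list[tuple[str, str]]:
    """Map today's relevant memories to one of three actions:
    ask_approval (important), act (matches a priority), or remind (everything else)."""
    priority_texts = {p["text"] for p in priorities}
    actions = []
    for rec in memory_records:
        if rec.get("due") != today:
            continue  # not relevant to this run
        if rec.get("high_stakes"):
            actions.append(("ask_approval", rec["summary"]))
        elif rec["summary"] in priority_texts:
            actions.append(("act", rec["summary"]))
        else:
            actions.append(("remind", rec["summary"]))
    return actions
```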
Is this an agent?
Technically, if you break down all the pieces, you could probably replicate this in a pure workflow automation platform. But I don’t think it would work as well. The power is that these agents can decide on their own whether to act once, five times, or not at all - without any explicit path defined beyond plain-text instructions on how to reason.
Let’s stop debating “agent” semantics and start focusing on how to build systems that are more robust, flexible, and useful than simple automations.
The punchline
The day after I posted about this system, Anthropic announced that Claude could now create and maintain “memory files” if given access to local files.
I was happy to see the problem was well-known and on its way to being solved. But the pace of change was - and continues to be - staggering. The systems I was building by hand are rapidly becoming native capabilities. The question is shifting from “how do I give my agent memory?” to “how do I make sure that memory is high quality?”
That second question is what I’m focused on now.