Context Can Poison Your AI Design Process
I’ve noticed something interesting when using AI to design products, specifically user interfaces.
When I’m looking for a refreshed design, I find that having my code in the model’s context actually poisons the process. The LLM takes my code into account and is statistically nudged toward an iteration of my current design instead of creating something novel.
Conversely, giving it screenshots of my designs doesn’t have the same negative effect.
Why this happens
My guess is that because an LLM doesn’t read a screenshot as code - it makes sense of the elements in the image - it isn’t bound to the rigid structures in my current codebase. It sees the visual intent without inheriting the implementation constraints. A screenshot says “here’s what it looks like.” Code says “here’s how it’s built” - and the model instinctively tries to build on top of that structure rather than rethinking it.
This matters because the assumption in AI-assisted development is generally that more context is better. Give the model your full codebase, your design system, your component library - and it’ll produce better output. That’s true when you’re iterating on an existing design. The constraints help. They keep the output consistent and compatible.
But when you’re trying to rethink the design from scratch, those same constraints become a cage. The AI can’t break free of patterns it’s been told to respect.
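In practice, this means keeping the source code out of the request entirely and sending only a rendered screenshot plus a brief. A minimal sketch of assembling such a request, assuming the Anthropic Messages API’s base64 image content-block format (the model name and prompt wording are placeholders, and the request is only built here, not sent):

```python
import base64

def build_redesign_request(screenshot_path: str, brief: str) -> dict:
    """Assemble a vision request that includes a screenshot of the
    current UI but deliberately omits the source code, so the model
    reasons from visual intent rather than implementation structure."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode("ascii")

    return {
        "model": "claude-sonnet-4-5",  # placeholder model name
        "max_tokens": 2048,
        "messages": [{
            "role": "user",
            "content": [
                # Anthropic-style base64 image content block.
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_b64}},
                {"type": "text",
                 "text": f"Here is what the current design looks like. {brief} "
                         "Propose a fresh layout; do not assume anything about "
                         "how the existing version is implemented."},
            ],
        }],
    }
```

The key design choice is what the payload *lacks*: no component source, no design tokens, no file tree - only the pixels and the ask.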
The broader principle
This is a specific instance of something I keep running into: context quality matters more than context quantity, and the right context for one task can be the wrong context for another.
The research backs this up. Levy et al. found that irrelevant context degrades LLM performance by anywhere from 13.9% to 85%. Chroma’s “context rot” research showed that feeding Claude a full conversation history (~113K tokens) drops accuracy by 30-60% compared to a focused ~300-token version. More context doesn’t just waste tokens. It actively makes the output worse.
Giving an AI agent every document in your knowledge base feels like the safe choice - it has everything it could possibly need. But “everything” includes information that actively pushes the model toward the wrong output. A brand voice guide that emphasizes “professional and formal” will poison a request to write casual social copy. Last quarter’s strategy doc will anchor the model when you’re trying to brainstorm a new direction.
The hard problem isn’t getting information into the context window. It’s knowing which information belongs there for a specific task - and having the discipline to exclude everything else. That’s a retrieval problem, and I think it’s one of the most underappreciated problems in applied AI right now.
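One way to make that discipline concrete is to score each candidate document against the specific task and exclude everything below a threshold, rather than loading the whole knowledge base. A toy sketch - the scoring here is naive keyword overlap, standing in for a real embedding-based retriever, and the document names are invented:

```python
def score(task: str, doc: str) -> float:
    """Naive lexical overlap: fraction of task words found in the doc.
    A real system would use embeddings, but the shape of the decision
    is the same: relevance to *this* task, not general usefulness."""
    task_words = set(task.lower().split())
    doc_words = set(doc.lower().split())
    return len(task_words & doc_words) / max(len(task_words), 1)

def select_context(task: str, docs: dict[str, str],
                   threshold: float = 0.2) -> list[str]:
    """Return only the documents relevant enough to this specific task.
    Everything below the threshold is excluded on purpose - "might be
    useful someday" is exactly the context that poisons the output."""
    return [name for name, text in docs.items()
            if score(task, text) >= threshold]

docs = {
    "brand_voice": "our voice is professional and formal in all communications",
    "q3_strategy": "last quarter strategy focused on enterprise expansion",
    "social_examples": "casual social copy examples with short punchy lines",
}
print(select_context("write casual social copy", docs))
# → ['social_examples']
```

The brand voice guide and the strategy doc both score zero against this task and stay out of the context window - which is the point: exclusion is an active choice, not an oversight.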