A personality-driven usability simulation tool. Upload a PRD, generate personas, run a simulation, and get findings for every room in the building: Designer, Product, Engineering, and Leadership, all from a single run.
Why I built this
I've been in too many design reviews where the feedback came down to "we don't know how users will respond to this." And the answer was always: run a test. Schedule a session. Find participants. Wait two weeks. By then the design had already moved. The moment had passed.
Signal started as a question: what if you could get directional usability signal in the time it takes to make coffee? Not to replace real research, but to catch the obvious problems before they become real problems, and to give every stakeholder findings in language they actually understand.
Who it's for
One simulation run generates findings for every room in the building. The friction in cross-functional alignment isn't a lack of information. It's a translation problem. Signal fixes the translation.
Output 01
Frame-level findings mapped back to specific screens. Interaction issues, flow gaps, and persona friction points with enough specificity to act on immediately.
Output 02 + 03
Two separate outputs, each framed for its audience. Product gets requirements gaps and user impact. Engineering gets edge cases and technical concerns.
Output 04
Risk summary, confidence level, and strategic flags. The signal that belongs in a steering meeting, not a design review.
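The four outputs above can be sketched as one structured payload. This is a hypothetical TypeScript shape of my own, under the assumption that each tab carries the fields described in this section; it is not Signal's actual schema.

```typescript
// Hypothetical shape for the four-tab report. Field names are
// illustrative assumptions, not Signal's real schema.
type Severity = "low" | "medium" | "high";

interface SignalReport {
  designer: {
    // Frame-level findings mapped back to specific screens.
    findings: { frame: string; issue: string; severity: Severity }[];
  };
  product: { requirementGaps: string[]; userImpact: string[] };
  engineering: { edgeCases: string[]; technicalConcerns: string[] };
  leadership: {
    riskSummary: string;
    confidence: Severity; // confidence level surfaced to steering
    strategicFlags: string[];
  };
}

// Example instance with one finding per tab (invented content).
const report: SignalReport = {
  designer: {
    findings: [
      { frame: "Checkout / Step 2", issue: "No back affordance", severity: "medium" },
    ],
  },
  product: {
    requirementGaps: ["Guest checkout undefined"],
    userImpact: ["Drop-off at signup"],
  },
  engineering: {
    edgeCases: ["Empty cart on refresh"],
    technicalConcerns: ["Session expiry mid-flow"],
  },
  leadership: {
    riskSummary: "Moderate flow risk",
    confidence: "medium",
    strategicFlags: ["Launch-blocking"],
  },
};
```

One payload, four framings: each audience reads only the slice shaped for how it acts on information.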
How it works
01
Upload your PRD
Signal parses the requirements document to understand the intended experience, user goals, and success criteria before generating any personas.
Web App + Plugin
02
Build your personas
You define the persona details and map each one to an industry. Signal generates a structured persona card from your input, which Claude uses as the lens for the simulation.
Claude API
03
Run the simulation
Claude receives the actual Figma frames as images via the plugin, sees what's on screen, and maps findings back to specific frames in the flow.
Claude Vision
04
Get the report
Four simultaneous outputs: Designer, Product, Engineering, Leadership. Each formatted for how that audience acts on information, not a single report everyone decodes on their own.
4-Tab Output
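The frame-to-Vision handoff in step 03 can be sketched as below, under two assumptions: the export settings are illustrative, and `exportAsync` only runs inside the Figma plugin sandbox, so it is shown in a comment rather than called.

```typescript
// Pure helper: wrap exported PNG bytes as a Claude Messages API
// image content block (base64 source).
function toImageBlock(bytes: Uint8Array) {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return {
    type: "image" as const,
    source: {
      type: "base64" as const,
      media_type: "image/png",
      data: btoa(binary),
    },
  };
}

// Inside the plugin sandbox (not executed here), each selected frame
// would be exported and wrapped before being sent to the API:
//
//   for (const node of figma.currentPage.selection) {
//     const png = await node.exportAsync({
//       format: "PNG",
//       constraint: { type: "SCALE", value: 2 },
//     });
//     blocks.push(toImageBlock(png));
//   }
```

Because Claude receives the rendered pixels rather than a text description of the design, findings can be mapped back to the exact frame they came from.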
AI Collaboration
I treated Claude as a senior technical partner throughout the build: not for code generation, but for architecture decisions, edge case pressure-testing, and the kind of questions that usually require a room full of people. These are three conversations that shaped what Signal became.
"How should I model personality as a structured input so the simulation produces meaningfully different outputs per persona, not just paraphrased versions of the same feedback?"
A three-axis persona framework: goal orientation, tolerance for ambiguity, prior mental model. Each axis produces genuinely divergent simulation output. Each persona reads as a person, not a stereotype.
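A minimal sketch of that three-axis model. The axis values and the prompt rendering are my own illustrative assumptions, not Signal's exact schema.

```typescript
// Three axes, each constrained to a small set of values so two
// personas can differ in ways the simulation can actually act on.
// All names and values here are illustrative assumptions.
type GoalOrientation = "task-completion" | "exploration" | "verification";
type AmbiguityTolerance = "low" | "medium" | "high";

interface Persona {
  name: string;
  industry: string;
  goalOrientation: GoalOrientation;
  ambiguityTolerance: AmbiguityTolerance;
  priorMentalModel: string; // e.g. "expects spreadsheet-style editing"
}

// Render a persona as the simulation lens for the system prompt.
// Keeping the three axes explicit is what pushes each run toward
// genuinely different behavior, not paraphrases of one reviewer.
function personaLens(p: Persona): string {
  return [
    `You are simulating ${p.name}, a user in ${p.industry}.`,
    `Goal orientation: ${p.goalOrientation}.`,
    `Tolerance for ambiguous UI: ${p.ambiguityTolerance}.`,
    `Prior mental model: ${p.priorMentalModel}.`,
  ].join("\n");
}
```

Two personas that differ on even one axis get different lenses, which is where the divergent simulation output starts.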
"What edge cases should I stress-test in the four-tab report before assuming the structure is solid — specifically the Engineering and Leadership tabs?"
14 edge cases I hadn't considered: conflicting signals across persona types, PRD gaps that only manifest in simulation, and findings that are technically correct but strategically irrelevant to leadership. Each became a test case.
"The web app requires exporting screens manually from Figma. That friction is killing the use case. What are my options for getting real frame data directly into Claude without that step?"
The Figma plugin architecture. Claude outlined the plugin API's image export capabilities, sandbox constraints, and how to pass frame data to Claude Vision. The pivot from web app to plugin traces directly to this conversation.
Where it stands
Live
Fully deployed. PRD upload, persona generation, screen upload, four-tab report output. The core loop is complete and running real API calls.
View live app
In Progress
Functional locally. Frame selection, PRD parsing, and Claude Vision analysis all working. Visual design pass and persona reuse in progress before public release.
Planned
Reflections
Testing Signal against real Clover work surfaced issues I hadn't caught in my own designs. The tool became its own proof of concept before it was finished. Using the thing you're building to improve the work you're doing is something I had read about but not experienced until Signal.
It would have been easier to ship the web app and call it done. The concept worked, the demo was strong, and the story was clean. But I kept asking why we couldn't use the real screenshots. The answer wasn't good enough. The pivot to the Figma plugin added months and required learning a new build environment. It was the right call.
The quality of what Claude contributed was directly proportional to how well I framed the problem. Vague prompts produced vague answers. Specific, constrained, well-structured questions produced architecture decisions and edge cases I genuinely hadn't considered. Learning to collaborate with AI well is its own design discipline.