
From Signal to Solution

Andrew Hillier

Every engineering team I've been on has had some version of the same problem: users tell us what's painful, and somewhere between that feedback and a real fix, most of the signal gets lost. It gets triaged, summarized, re-summarized, slotted behind other priorities, and by the time someone sits down to work on it, a lot of the original context has evaporated.

I've been experimenting with a feature I built called Feedback Builder to see if we can close that gap.

The idea is pretty straightforward on paper. We capture user feedback at key moments in their journey — the points where people are most likely to have a strong signal about what's working and what isn't. An AI layer parses and classifies that feedback into recurring themes. And once a theme crosses a certain threshold, something more interesting happens: the problem gets handed off to our in-house agentic development tool, Nitro, which builds out a working solution.
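As a rough sketch, the loop's core logic is simple: count feedback by theme and hand a theme off once it crosses a threshold. The snippet below is a hypothetical illustration, not Fullscript's implementation — the `classify` function stands in for the AI layer, and `hand_off` stubs the Nitro handoff; the threshold value and all names are assumptions.

```python
from collections import Counter

BUILD_THRESHOLD = 5  # assumed: reports needed before a theme is handed off


def classify(feedback: str) -> str:
    """Stand-in for the AI classification layer: maps raw feedback
    to a recurring theme. Here, a trivial keyword match."""
    keywords = {"slow": "performance", "confusing": "navigation"}
    for keyword, theme in keywords.items():
        if keyword in feedback.lower():
            return theme
    return "uncategorized"


class FeedbackLoop:
    def __init__(self, threshold: int = BUILD_THRESHOLD):
        self.threshold = threshold
        self.counts = Counter()
        self.handed_off = set()

    def record(self, feedback: str):
        theme = classify(feedback)
        self.counts[theme] += 1
        # Once a theme crosses the threshold, hand it off exactly once.
        if self.counts[theme] >= self.threshold and theme not in self.handed_off:
            self.handed_off.add(theme)
            return self.hand_off(theme)
        return None

    def hand_off(self, theme: str) -> str:
        # Placeholder for the Nitro handoff: in the real system this would
        # kick off an agentic build and stand up a preview environment.
        return f"build-requested:{theme}"
```

The interesting design choice in a real system is everything this sketch elides — what counts as "the same theme" and where the threshold sits — but the overall shape is just an accumulator with a one-time trigger per theme.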

The output isn't a ticket. It isn't a summary doc. It's a full environment of our platform with the proposed solution actually implemented — something a team can click through, react to, and decide what to do with.

It's live in production now, and the early signs are genuinely encouraging. I want to be careful not to oversell it — this is still an experiment, and it's entirely possible the themes it surfaces turn out to be less useful over time, or that the shape of the loop needs significant rework as we learn more. I don't know yet. What I do know is that the loop works end-to-end, and the quality of what's coming out the other side has been higher than I expected. A couple of the solutions have been close to shippable with only minor adjustments.

There's still plenty this isn't trying to do. Feedback Builder isn't making product decisions — it's producing a starting point. Humans still own the judgment calls: Is this the right problem to solve? Is this the right shape of solution? Does it fit where the product is going? What's changed is how quickly we can get to the part of the conversation where those decisions actually happen.

The part I keep thinking about isn't really any one piece of this. We already had ways to collect user feedback. We already had AI that could classify and summarize. We already had Nitro, which can write code and stand up environments. What I was curious about was whether stitching those existing capabilities into a single loop would produce something more than the sum of its parts.

So far, it seems to. The more interesting question to me is whether this shape — chaining the AI capabilities we already have into systems that can carry a problem from observation to a tangible proposed solution with very little human work in the middle — is a pattern worth repeating elsewhere. Humans still do the hard parts: deciding what matters, shipping the final thing, and owning the outcome. But the distance between "a user said something" and "we have something concrete to react to" gets dramatically shorter.

Feedback Builder is one experiment in that direction. I'm curious to see how it holds up, and whether there are other loops worth trying.

Andrew Hillier, Manager, Engineering @ Fullscript
