What Happens When Designers Can Build, Not Just Design
Most product teams rely on pre-built charting libraries. They're practical, well-documented, and they solve a lot of common problems. But they also constrain what you can build. When your data doesn't fit neatly into a bar chart, a line graph, or a standard scatterplot, you're stuck — either forcing the data into a format that doesn't serve it, or writing a spec and hoping engineering can figure out something custom.
The chart above illustrates what this difference looks like in practice — the same feature built two ways, with the traditional design-development cycle taking roughly twice as long. Most of that extra time is rework caused by discovering problems late.
AI-assisted development tools like Cursor have changed this equation for designers. Not because AI is good at visual problem-solving — it isn't, and that's an important distinction. But because once you know what you want to build, AI can help you build it fast enough to validate the idea before committing engineering resources.
This article walks through two examples from my work at NFTfi, a peer-to-peer NFT lending protocol: one where AI-assisted prototyping let me build a bespoke solution that no off-the-shelf library provides, and a simpler one where the value was in speed and compatibility rather than visual invention.
Example 1: A bubble chart that needed to do something bubble charts don't do
We needed a tool for lenders to explore the existing loan landscape — which loans were active, when they were expiring, and at what APR. The goal was to help lenders spot opportunities: loans about to mature, underserved market segments, gaps in the offer landscape.
A bubble chart was the right starting point. Timeline along the x-axis, APR on the y-axis, bubble size representing loan amount. Three dimensions in a single view. But the standard implementation fell apart immediately with real data: collection offers in NFT lending generate many loans at identical coordinates — same amount, same APR. On a standard bubble chart, these stack invisibly. You see one bubble where there are thirty.
Pre-built charting libraries don't solve this. You can't configure your way out of it with Chart.js or Recharts. This needed something bespoke.
Here's where it's important to be honest about what AI can and can't do. I didn't ask Cursor to "find me a chart that handles overlapping bubbles." AI is not good at visual discernment. It's the equivalent of that viral clip where someone asks AI to help them with fashion and it tells them they look great while they're wearing the most ridiculous outfit imaginable. If you rely on AI to make visual judgments for you, you're going to end up in a bad place.
What I did was go to Observable — the showcase and documentation site for D3 — and spend time browsing examples. Looking at how other people had solved density and overlap problems. I came across D3's packSiblings method, a circle-packing algorithm that forces overlapping circles to spread apart and cluster around each other rather than stacking. It was designed for a different purpose, but I could see that the underlying behaviour was exactly what I needed.
That's the part that requires a designer's eye and judgment. No AI suggested this. I saw it, understood the visual behaviour, and recognised how it could be adapted to solve my specific problem.
Once I knew what I wanted to build, AI became extremely useful. Implementing a D3 force simulation with packSiblings, managing the packing algorithm's edge cases, handling tooltip interactions on individually packed bubbles — this is where Cursor accelerated the work dramatically. The implementation mechanics that would have taken me days of reading D3 documentation and debugging were compressed into hours.
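To make the overlap problem concrete, here is a minimal sketch of the behaviour described above (hypothetical code and data shape, not NFTfi's actual implementation): loans that land on identical chart coordinates get spread apart so each one stays individually visible. In the real chart, `d3.packSiblings` handles the packing; this sketch mimics the effect with a simple ring layout around the shared anchor point.

```javascript
// Sketch: loans sharing the same projected (x, y) coordinates are
// spread onto a ring around their shared anchor so every bubble
// remains individually inspectable. In production, d3.packSiblings
// does the packing; this ring layout just illustrates the idea.
// The loan data shape here is hypothetical.
function spreadOverlaps(loans, radius) {
  // Group loans by their projected chart coordinates.
  const groups = new Map();
  for (const loan of loans) {
    const key = `${loan.x},${loan.y}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(loan);
  }

  const placed = [];
  for (const group of groups.values()) {
    if (group.length === 1) {
      // A lone loan keeps its exact position.
      placed.push({ ...group[0], cx: group[0].x, cy: group[0].y });
      continue;
    }
    // Ring radius chosen so adjacent circles of `radius` just touch:
    // the chord between neighbours, 2 * R * sin(pi / n), must be
    // at least 2 * radius.
    const n = group.length;
    const ringR = radius / Math.sin(Math.PI / n);
    group.forEach((loan, i) => {
      const angle = (2 * Math.PI * i) / n;
      placed.push({
        ...loan,
        cx: loan.x + ringR * Math.cos(angle),
        cy: loan.y + ringR * Math.sin(angle),
      });
    });
  }
  return placed;
}
```

With thirty collection-offer loans at identical coordinates, this produces thirty distinct positions instead of one stacked bubble. `d3.packSiblings` does the same job more compactly, clustering circles of varying radii rather than spacing them on a fixed ring.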
The result is a chart where clustered loans spread apart visually, each bubble remains individually inspectable, and the density of clusters communicates where market activity is concentrated. It's a bespoke solution that no pre-built library offers, but it was built and validated in under a week — from first concept to working interactive prototype with real loan data.
The takeaway for designers: AI won't tell you what to build. It won't make good visual choices for you. But it frees you from the implementation bottleneck that used to prevent you from exploring bespoke ideas. Previously, if you had a concept that went beyond what a standard charting library offered, you either had to be a strong D3 developer yourself or write a spec and wait for engineering. Now, if you do the design research — browse Observable, study examples, understand the visual behaviour you need — you can prototype the solution yourself and validate it with real users before anyone else touches it.
Not every problem requires inventing something new. Sometimes the value of AI-assisted prototyping is simply in how fast you can validate a known pattern and confirm it works with your specific data and tools.
Example 2: A responsive table component — 20 minutes from idea to validated code
Not every prototype needs to be a complex visualisation. This one earned its keep by answering a small UX question quickly, using the exact tools our development team uses in production.
We had a data-dense table in the NFTfi interface showing loan details — asset name, loan amount, APR, duration, status, and an action button at the end of each row. On a wide monitor, the layout worked fine. On a laptop screen, the action button consumed disproportionate space, forcing either horizontal scroll or cramped columns.
The idea was straightforward: below a certain screen width, replace the full-text action button with a compact dropdown icon that expands into a menu. Standard responsive pattern. But the question was whether it would actually work with our data density and layout — and whether the right component existed in our library.
This is where a specific detail matters: at NFTfi, our front-end is built on MUI Minimal — a specific component library within the MUI ecosystem. I didn't just ask Cursor to build a generic responsive table. I specified that it should use the MUI Minimal library, the same library our developers use in production.
This is important because what's easy to implement in one component library can be surprisingly difficult in another. By prototyping with the exact same tools the dev team uses, I could be confident that what I built was actually feasible in production — not a concept that would need to be re-engineered with different components. When I showed the working example to our developers, they could look at the actual code, see which MUI components I'd used, and reference it directly in their implementation.
I pasted a screenshot of the actual table with real loan data into Cursor, asked it to implement the responsive behaviour using MUI Minimal, and had a working example in about twenty minutes. Real column widths, real data density, the correct component identified and implemented.
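For flavour, here is a sketch of the pattern (hypothetical component and prop names, not the production code): below the theme's breakpoint, the full-text action button collapses into an icon that opens a menu. It uses MUI's standard `useMediaQuery` hook and `Menu` component, which Minimal builds on.

```jsx
// Hedged sketch of a responsive action cell (hypothetical names).
// Below the theme's "md" breakpoint, the full-text button is
// replaced by a compact icon that expands into a menu.
import { useState } from 'react';
import { Button, IconButton, Menu, MenuItem, useMediaQuery } from '@mui/material';
import { useTheme } from '@mui/material/styles';
import MoreVertIcon from '@mui/icons-material/MoreVert';

function LoanActionCell({ onRepay, onExtend }) {
  const theme = useTheme();
  const isNarrow = useMediaQuery(theme.breakpoints.down('md'));
  const [anchorEl, setAnchorEl] = useState(null);

  if (!isNarrow) {
    // Wide screens: the full-text button fits comfortably.
    return <Button variant="contained" onClick={onRepay}>Repay loan</Button>;
  }

  // Narrow screens: compact icon trigger plus dropdown menu.
  return (
    <>
      <IconButton onClick={(e) => setAnchorEl(e.currentTarget)}>
        <MoreVertIcon />
      </IconButton>
      <Menu
        anchorEl={anchorEl}
        open={Boolean(anchorEl)}
        onClose={() => setAnchorEl(null)}
      >
        <MenuItem onClick={onRepay}>Repay loan</MenuItem>
        <MenuItem onClick={onExtend}>Extend loan</MenuItem>
      </Menu>
    </>
  );
}
```

Because the sketch uses the same hooks and components the production codebase does, a developer can lift the breakpoint logic and component choices directly rather than reinterpreting a static mockup.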
The time saving isn't just in the twenty minutes. It's in what doesn't happen downstream. The developer receiving this doesn't need to guess which component to use, whether the responsive breakpoint works with real data, or whether the layout holds up at different screen widths. Those questions are answered. The handoff friction that normally stretches small UX decisions across multiple review cycles is almost eliminated.
User testing without the development bottleneck
There's another dimension to this that's easy to overlook. In a traditional cycle, it's not just bad data that can send you back to the drawing board — user testing can do the same thing. You design a feature, engineering builds it, you put it in front of users, and they tell you it doesn't solve their problem the way you expected. Now you're back in design, then back in engineering, burning another cycle.
With AI-assisted prototyping, you can test with real users before developers are ever involved. The prototype is functional enough — with real data, real interactions, real components — that meaningful user feedback comes in at the cheapest possible moment. If users don't understand the interface, or want something different, or use it in a way you didn't anticipate, iterating costs you hours, not weeks. And critically, your developers never spent time building something that needed to be rethought. They only start work once the concept has already survived contact with actual users.
For the bubble chart, we shared working versions with Calvin — a power user who runs a serious lending operation on NFTfi. Within minutes of using the prototype with real data, he identified clustering patterns that confirmed the packSiblings approach was working, and flagged edge cases in how expiring loans were displayed that we hadn't considered. That kind of feedback is only possible when someone is interacting with a functional tool using their own data — not reviewing a static mockup.
What designers should understand about AI's limitations
The two examples above illustrate different sides of working with AI as a designer.
In the bubble chart case, AI was useless for the creative problem-solving. It couldn't browse Observable for me, recognise the visual potential of packSiblings, or decide that circle packing was the right approach for overlapping loan data. That required design research, visual judgment, and the ability to see an algorithm's behaviour and imagine it applied to a different context. AI accelerated the implementation of that idea, but the idea itself came from doing the work that designers have always done — looking at how other people solve visual problems and adapting those approaches to your own context.
In the table case, AI was useful end to end — because the problem was well-defined, the component library was established, and the task was essentially "implement this known pattern using these specific tools." There was no visual invention required. The value was in speed and specificity.
Knowing when AI will help and when it won't is a skill in itself. As a general rule: if the problem is about what to build or how something should look, do your own research. If the problem is about implementing a decision you've already made, AI can compress that work dramatically.
What this means for teams
For founders and product managers evaluating how to structure teams, this represents a meaningful shift in what a product-oriented designer can deliver.
A designer working with AI tools can now validate concepts against real data before writing a spec, reducing the risk that engineering builds something that doesn't hold up in practice. They can build with the exact component libraries the dev team uses, ensuring compatibility rather than producing mockups that need reinterpretation. They can explore bespoke solutions — beyond what pre-built libraries offer — and validate them in days rather than sprint cycles. And they can deliver working prototypes that developers reference directly, reducing the ambiguity that usually fills the gap between a Figma file and production code.
The financial impact is straightforward. A single wasted sprint cycle — where engineering builds something that doesn't survive contact with real data or real users — can cost a startup tens of thousands in developer time. Multiplied across a product's lifecycle, the cumulative cost of late discovery is one of the biggest drains on early-stage budgets. AI-assisted prototyping doesn't eliminate all risk, but it moves the most expensive discoveries to the cheapest possible moment.
This doesn't replace engineering. Production code needs architecture, testing, and performance work that a prototype doesn't provide. But it means that by the time engineering starts building, the core product questions have already been answered. What to build, which components to use, how it behaves with real data, whether users understand it — all validated before the expensive work begins.
Tools used: D3.js (with packSiblings), Cursor (AI-assisted development), MUI Minimal component library, Observable (D3 showcase and documentation)
