D3 Was for Engineers. AI Changed That. Here's What I Learned (The Hard Way)

    NFTfi / Cerebral · Product Developer · 2025

    D3.js is the most powerful data visualisation library available. It's also historically been inaccessible to anyone who isn't a serious developer. Unlike Chart.js or Recharts, where you pass data to a pre-built component and get a chart, D3 gives you raw control over every pixel — but expects you to understand scales, axes, data binding, SVG manipulation, force simulations, and a dozen other concepts before you can draw a single line.

    AI-assisted development tools like Cursor have changed who can use D3. Over the past year, I've used AI to build ten distinct D3 projects — from interactive NFT lending scatterplots to animated Voronoi tessellations to bar chart races to D3-generated SVGs baked into video via Remotion. None of these would have been feasible for me in a traditional workflow. I'm a product designer, not an engineer.

    But I want to be honest about what this actually looks like in practice, because the "I built this amazing thing in 20 minutes with AI" narrative is mostly fiction. My largest D3 project involved 274 prompts over several weeks, included multiple moments where the AI broke the codebase beyond repair, and ended with me scrapping the whole thing and starting from a clean spec document. I still shipped. And I learned a lot about how to work with AI effectively — and where it will reliably let you down.

    This article is a practical guide for designers who want to use D3 with AI assistance. It covers what's genuinely possible, the specific traps you'll fall into, and the techniques I developed to get out of them.

    What I built

    Before getting into process, here's the range of what's possible when a designer works with D3 and AI. These are all projects I built over the past year:

    NFTfi Offer Landscape Scatterplot — Interactive chart with draggable "your offer" point, live on-chain data, IQR outlier filtering, and density heatmaps. Five days, 42 commits. React + TypeScript + D3.

    NFTfi Loan Depth Chart — LTV ratio visualisation with cumulative/non-cumulative toggle, custom log scaling for outlier-heavy distributions, Cloudflare Workers backend. This is the 274-prompt project. It taught me the most about where AI fails.

    Cerebral Animation Gallery — 14 independent D3 animations for a blockchain technical whitepaper: force-directed graphs, Voronoi tessellations, hex-grid propagation animations, a zoomable sunburst chart, radial cluster layouts. Built as a scrollable gallery with IntersectionObserver-triggered rendering.

    Gallery illustration, "Security through cost asymmetry": a radial cluster layout showing the hierarchical validator structure with an attack-cost narrative. One of the whitepaper illustrations built with D3's tree/cluster pipeline.

    UAE Population Bar Chart Race — Built for a talk I gave at EO Dubai in 2026 on using AI for data visualisation. The point was to show the audience something I'd built specifically for them — a bar chart race of UAE population data by emirate over time, with smooth rank transitions and playback controls. Building a chart with locally relevant data as a live demo made the capability tangible in a way a generic example wouldn't have.

    Cerebral Node Security Video — D3 radial tree of 192 validator nodes across 7 geographic clusters, baked as static SVG into a Remotion video composition with attack path animations, zoom, and kinetic text.

    Each of these used a different subset of D3's capabilities — forceSimulation, packSiblings, Delaunay, hierarchy, partition, arc, scaleLinear, scaleLog, scaleBand, transition, timer, zoom. The point isn't the specific methods. It's that D3's full toolkit is now accessible to someone who understands what they want to build visually, even if they couldn't write the implementation from scratch.
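To give a sense of how small these primitives actually are conceptually, here is a toy version of what a linear scale does: map a value from a domain interval onto a range interval. This is an illustration only, not d3's source (d3's `scaleLinear` also handles clamping, ticks, inversion, and interpolators).

```javascript
// Toy reimplementation of the core idea behind d3.scaleLinear:
// map a value from [d0, d1] proportionally onto [r0, r1].
function makeLinearScale([d0, d1], [r0, r1]) {
  return (value) => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// e.g. loan amounts in ETH mapped to pixel positions (numbers illustrative)
const x = makeLinearScale([0, 500], [0, 800]);
console.log(x(250)); // 400
```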

    The biggest trap: AI doesn't think in D3

    This is the single most important thing to understand, and I learned it the hard way across multiple projects.

    When you tell AI "animate these circles," it will often reach for direct SVG DOM manipulation — document.querySelector, setAttribute, element.style.opacity — instead of using D3's own methods. This seems harmless. The circles animate. It looks like it's working.

    Then everything breaks.

    D3 maintains its own internal model of what's in the DOM through its selection and data-binding system. When you bypass that system with raw DOM manipulation, D3 loses track of its elements. Transitions conflict. Data joins produce duplicates. Tooltips attach to the wrong elements. The chart renders fine on first load and then falls apart on any state change — filtering, currency toggle, resize.
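The bookkeeping D3 does can be pictured as a partition of old and new data keys into enter, update, and exit sets. Here is a plain-JavaScript sketch of that idea (my own illustration, not d3's implementation): bypassing D3 with raw DOM calls means this partition no longer matches what is actually on screen.

```javascript
// Sketch of the enter/update/exit partition behind D3's data join.
// "boundKeys" stands for data D3 believes is bound to existing elements;
// "newKeys" is the incoming dataset.
function dataJoin(boundKeys, newKeys) {
  const bound = new Set(boundKeys);
  const incoming = new Set(newKeys);
  return {
    enter: newKeys.filter((k) => !bound.has(k)),    // needs a new element
    update: newKeys.filter((k) => bound.has(k)),    // reuse existing element
    exit: boundKeys.filter((k) => !incoming.has(k)) // element to remove
  };
}

const join = dataJoin(["a", "b", "c"], ["b", "c", "d"]);
// join.enter is ["d"], join.update is ["b", "c"], join.exit is ["a"]
```

When you create or mutate elements behind D3's back, the "boundKeys" side of this picture drifts from reality, which is exactly why the chart works on first render and breaks on the next state change.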

    The fix is simple but you have to know it matters: at the start of every D3 project, set an explicit rule.

    Use ONLY D3 native methods for all DOM manipulation and rendering.
    No document.createElement, getElementById, innerHTML, or setAttribute.
    All elements created through D3's selection.append() or .data().enter().append() pattern.
    All transitions use D3's .transition().duration() pattern.
    If you need to check whether an element exists, use d3.select() not document.querySelector().

    This rule should go in your project context file, your CLAUDE.md, your cursor rules — wherever your AI reads instructions. And you should still check, because AI will drift back to DOM manipulation over long sessions. One of the most useful debugging prompts I use is simply: "Are you using D3 methods to manipulate these elements or are you using direct SVG DOM manipulation?"

    AI makes aggressive, uninvited changes

    This was the most consistently frustrating pattern across every project. You ask AI to fix one error, and it rewrites three other functions, removes a feature you didn't mention, renames variables, or restructures code you explicitly told it not to touch.

    From my actual prompt history:

    "I only want you to fix the error, you're way out of line removing the tooltip implementation and you're being far too aggressive in your code changes. ONLY FOCUS ON THE ERROR I PRESENTED TO YOU"
    "just do the most basic and simple changes to make the app work again, who told you the debug panel wasn't needed?"
    "you are making massive changes and removing huge parts of the code for something as simple as historical data being null? just keep it as simple as possible"

    This isn't a one-off. It happened consistently. AI treats every prompt as an opportunity to "improve" surrounding code, which often means breaking things that were working.

    How to manage it: Be explicit about scope in every prompt. "Fix ONLY this error. Do not modify any other files or functions." Review the full diff after every change, not just the part you asked for. And when you notice it's gone off-script, call it out immediately — the longer bad changes sit in the codebase, the harder they are to untangle.

    AI ignores rules you've already set

    In my NFTfi depth chart project, I was working with blockchain data — immutable, on-chain, must be used exactly as-is. I told the AI explicitly: no mock data, no toLowerCase, no data normalisation, no aggressive fallbacks. Simple rules.

    It ignored them. Repeatedly. Across dozens of prompts.

    "please I already mentioned never to use data normalisation, toLowerCase, mock data or aggressive fallbacks"
    "don't ever normalise any data"
    "NEVER EVER USE MOCK DATA"
    "why is there a toLowerCase in the code when there are explicit instructions in the code to not use toLowerCase?"

    The problem is that AI doesn't have persistent memory within a long coding session the way you'd expect. It processes each prompt with limited context, and if your rule was set 50 prompts ago, it may have effectively forgotten it.

    The solution I eventually found: embed the rules as comments at the top of every file the AI touches.

    "add a comment to yourself at the start of every single file instructing you to never use toLowerCase, remind you that you are working with blockchain data and that your approach should be applicable to that"

    This works because AI reads the file before editing it. If the rule is right there in the code, it's part of the immediate context. This is essentially what an evergreen project prompt does — it's a persistent set of instructions that stays visible regardless of how long the session runs.

    AI creates duplicates and inconsistencies

    Over the course of a long project, AI will create duplicate files that do the same thing, duplicate data structures that should be centralised, and functions that overlap in purpose but differ in implementation.

    "you are using multiple reservoir.js — how is that even possible?"
    "why do you have featured collections and top collections? this seems like a duplication"
    "why do you have duplication of data like topCollections list? surely this should be centralised, otherwise you will introduce errors"

    This is the classic DRY (Don't Repeat Yourself) violation, but it's worse with AI because you don't always realise the duplication exists. You asked for a feature, AI created a new utility function — but there was already one doing the same thing three files away. Now you have two sources of truth, and when one changes, the other doesn't.

    How to manage it: Periodically ask AI to audit the codebase: "Search for any duplicate files, unused files, or duplicate functions that do the same thing. List them all." Do this every few sessions, not just when things break.
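The centralisation fix itself is mundane but worth stating: one frozen constant, referenced everywhere, never re-declared. The collection names below are illustrative.

```javascript
// Single source of truth for the collection list (names illustrative).
// Object.freeze also catches accidental in-place edits in strict mode.
const TOP_COLLECTIONS = Object.freeze(["Azuki", "Pudgy Penguins", "Milady"]);

// Consumers reference the constant; they never keep their own copy.
function isTopCollection(name) {
  return TOP_COLLECTIONS.includes(name);
}
```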

    AI goes in circles on the same bug

    This was the most time-consuming pattern. In my depth chart project, there was a bug where the Reservoir API URL was being constructed as undefined?name=Azuki instead of the actual endpoint. The AI tried to fix it across dozens of prompts — switching between ../ and ../../ relative paths, rewriting the import, creating new files — and kept failing in exactly the same way.

    "clearly you're unable to do something pretty basic... maybe you can analyse your own thought process that leads you to keep making this mistake over and over and over again and think a bit deeper"
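The undefined?name=Azuki symptom is worth understanding, because it is a whole class of bug: JavaScript template strings happily stringify undefined instead of throwing, so a missing config value or bad import surfaces three steps later as a mangled URL. A minimal reproduction with a fail-fast guard (variable names are illustrative, not from my actual codebase):

```javascript
// Reproduction of the `undefined?name=Azuki` class of bug:
// template strings stringify undefined rather than throwing.
const config = {}; // apiBase never set, e.g. a missing env var or bad import

const brokenUrl = `${config.apiBase}?name=Azuki`;
console.log(brokenUrl); // "undefined?name=Azuki"

// Fail fast instead, so the diagnosis points at the config, not the fetch.
function buildUrl(base, name) {
  if (typeof base !== "string") {
    throw new Error(`API base is ${base}; check config/imports before fetching`);
  }
  return `${base}?name=${encodeURIComponent(name)}`;
}
```

A guard like this would have turned dozens of circular fix attempts into one clear error message about where the base URL was supposed to come from.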

    When AI is stuck in a loop, changing the prompt phrasing alone usually doesn't help. What works is changing the approach entirely:

    1. Stop and diagnose first. "Don't write code yet — just explain what you think the problem is." If the AI's mental model is wrong, no amount of code iteration will fix it.

    2. Verify outside the code. "Before making code changes, use curl to verify the API call works first." I learned this the hard way — have the AI test the actual API call independently before touching any code. This separates "is the API wrong?" from "is the code wrong?"

    3. Isolate the problem. "There are five steps where this could be failing. Let's step through each one and test independently." Don't let AI try to fix everything at once.

    4. The nuclear option: start fresh from a spec. When my depth chart project became irreparably tangled, I had the AI write a comprehensive spec document describing exactly what the application should do, including every API call (verified with curl), every data structure, and every architectural decision. Then I started a clean project and fed the spec back in. This turned out to be the most productive decision of the entire project.

    The prompts that actually work

    Through ten projects and hundreds of prompts, these are the patterns that consistently produced better results:

    Before any coding begins:

    "Before we start, I want you to fully explore and understand the existing codebase. Don't write code yet — just deeply understand what's currently happening."

    This prevents AI from making assumptions about your codebase structure. It's especially important when you're resuming a session or switching to a new task within the same project.

    When you need a creative solution:

    "Generate three possible approaches to this problem. Critique each one, evaluate the trade-offs, then recommend the best and explain why."

    Asking for multiple approaches produces better results than asking for one solution. AI tends to default to its first idea; forcing it to generate alternatives surfaces better options.

    When debugging:

    "Before making any code changes, use curl to verify the API call works. Once confirmed, then make the minimum change needed."

    When AI has drifted:

    "Take a step back. We keep going in circles. Think about this far more comprehensively before making another change."

    Use AI as a reviewer, not just a generator:

    Once you have working code, ask AI to critique it: "What's fragile in this code? What would break if the data changed?" This catches issues that generative prompts don't surface.

    AI can't make visual decisions for you

    For the NFTfi bubble chart, I needed to solve a problem where identical loan offers stacked invisibly on top of each other. I didn't ask AI to solve this. AI couldn't have solved it — it has no ability to look at a chart and recognise that the visual output is misleading.

    Instead, I went to Observable — the documentation and showcase site for D3 — and browsed examples of how other people handled density and overlap in visualisations. I found D3's packSiblings method, recognised the visual behaviour, and realised it could be adapted for my problem.

    Where AI excels is in the implementation. Once I knew I wanted packSiblings circle packing, AI handled the force simulation setup, edge cases, tooltip positioning, and responsive behaviour far faster than I could have done manually. The creative decision was mine. The mechanical execution was AI's.

    As a general rule: if the problem is about what to build or how something should look, do your own research. If the problem is about implementing a decision you've already made, AI can compress that work dramatically.

    The evergreen project prompt

    The single most effective practice I adopted was maintaining a persistent project context document. Every D3 project now starts with a file containing:

    ## D3 Rules
    - Use ONLY D3 native methods for all DOM manipulation
    - No document.createElement, getElementById, innerHTML, or setAttribute
    - All elements through D3's .data().enter().append() pattern
    - All transitions use D3's .transition().duration()
    
    ## Data Rules
    - No mock data ever — real data only
    - No normalisation or transformation of source data
    - Data from a single source of truth, no duplication
    
    ## Code Quality
    - DRY: consolidate duplicate functions immediately
    - No unnecessary abstractions or defensive edge-case code
    - Do not modify, rename, or restructure code not mentioned in the prompt
    - After every change, summarise what you changed and why
    
    ## Debugging Protocol
    - Explain the problem before writing any fix
    - Verify API calls with curl before making code changes
    - Isolate problems: test one step at a time
    - If stuck after 3 attempts, stop and reassess the approach entirely

    This file gets referenced in every session. It doesn't prevent every problem — AI will still drift — but it dramatically reduces the frequency of the most common issues.

    It gets easier

    The good news is that it gets easier. My first D3 project — the 274-prompt depth chart — was messy, frustrating, and full of dead ends. By project ten, I had a working evergreen prompt, a reliable debugging protocol, and a much better instinct for when AI was heading in the wrong direction. The problems don't disappear, but you learn to catch them earlier and recover faster. Each project builds on the last — not just in terms of what you know about D3, but in terms of how you work with AI. The guardrails you develop become second nature, the prompts get sharper, and the ratio of productive time to debugging time shifts steadily in your favour.

    For a look at how AI-assisted prototyping compresses the product cycle and what it means for teams, see What Happens When Designers Can Build, Not Just Design.

    Tools used: D3.js v7, Cursor (AI-assisted development), Observable (D3 research and examples), Remotion (video rendering), Cloudflare Workers (API proxy), Vite (build tooling)