I want to tell you about a Monday morning from about a year ago. I had a client presentation at 10am. The deck needed eight new campaign mockups — different aspect ratios, different copy, same brand system. In the old world, that was four hours of work. Duplicate artboard, resize, paste new copy, re-kern, check alignment, export. Repeat eight times. Show up tired.
Instead, I spent forty-five minutes writing a script. Not a Figma plugin with a settings panel and a help menu — just a small program that read from a spreadsheet of copy variants and pushed values into named layers via the Figma API. When it ran, it produced all eight mockups in eleven seconds. I showed up at 10am having had coffee and thought time. That gap — between the old way and the new way — is what this article is about.
The Toolsmith Tradition
There is a long tradition of craftspeople making their own tools. Woodworkers build jigs. Photographers build rigs. Surgeons request custom instrument modifications. The logic is always the same: the general-purpose tool gets you 80% of the way. The last 20% — the part that matches your specific hand, your specific workflow, your specific set of recurring problems — requires something made for you.
Designers have been slower to claim this tradition than engineers. Partly because design tools were historically closed systems. Partly because the discipline trained us to think in deliverables rather than processes. Partly, I think, because there was an unspoken assumption that building tools was the developer's job and using them was ours.
That assumption was always questionable. It is now obsolete. The combination of open APIs, capable language models, and tools like Claude Code means a designer who can describe a workflow in plain English can now automate it. The code is not the hard part anymore. Understanding the problem well enough to specify a solution — that is still entirely design work.
Building tools is not the developer's job and using them is not ours. That division was always a convention. Conventions change.
Where Custom Tools Pay Off
Before getting into the how, it is worth being precise about the where. Not every problem benefits from custom tooling. The highest-value targets share a few characteristics: they recur frequently, they are tedious rather than creative, they involve structured data in some form, and the cost of getting them wrong is real.
In product design, the recurring high-friction problems I have run into are roughly: keeping a component library synchronized with its documentation, running visual regression checks between design versions, generating realistic-but-varied data for tables and lists, translating design tokens from Figma into code-ready formats, and auditing designs for accessibility before handoff. None of these are creative problems. They are correctness problems — the kind where humans get bored and make mistakes, and where a script gets it right every single time.
In marketing design, the list is different but the shape is the same: generating on-brand social assets in bulk from a content calendar, resizing campaign visuals for every required ad format, applying seasonal updates to templated materials, and producing copy variations for A/B tests without manually touching every file. These are production problems dressed up as design problems. Automate the production; preserve your hours for the design.
The Figma API Is Your Inbox
The Figma REST API is the entry point for almost everything I am about to describe. It is well-documented, reasonably stable, and gives you programmatic access to the full structure of any file you can access — every frame, layer, component, and design token, expressed as a JSON tree.
Here is the practical shape of a typical tool: you write a Node.js script that authenticates with a personal access token, calls the Figma API to read the current state of a file, transforms the data in some way, and either writes it back to Figma or exports it somewhere else. The reads use GET endpoints. The writes use the Plugin API (which runs inside Figma's sandbox) or, for simpler cases, the Variables API for design tokens.
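If you want that shape in code, here is a minimal sketch of the read-and-transform half, using Node's built-in fetch. The endpoint and the X-Figma-Token header are the real REST API; the token, the file key, and the `collectTextLayers` helper are placeholders for illustration, not part of any library.

```javascript
// Sketch of the read half of a typical tool. FIGMA_TOKEN and FILE_KEY
// are placeholders you supply; the endpoint and header are the real API.
const FIGMA_TOKEN = process.env.FIGMA_TOKEN; // personal access token
const FILE_KEY = process.env.FILE_KEY;       // from the file's URL

async function fetchFile(key) {
  const res = await fetch(`https://api.figma.com/v1/files/${key}`, {
    headers: { "X-Figma-Token": FIGMA_TOKEN },
  });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  return res.json(); // { document: { ...the full node tree... }, ... }
}

// The transform step is usually a plain walk over that JSON tree.
// Illustrative example: collect the names of every TEXT layer in a subtree.
function collectTextLayers(node, out = []) {
  if (node.type === "TEXT") out.push(node.name);
  for (const child of node.children ?? []) collectTextLayers(child, out);
  return out;
}
```

From there, the "write it back or export it" half is whatever your problem needs: a file on disk, a POST to the Variables API, or a payload handed to a plugin.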
A concrete example: design token sync. Your design system has color, spacing, and typography tokens defined in Figma. Your codebase has CSS custom properties that should match. The manual process is: open Figma, read the values, open your code editor, update the variables, commit. With a script, the process is: run the script, commit the diff. The script reads the current token values from the Figma Variables API and writes them to a tokens.css file. The whole thing is about sixty lines of JavaScript. Once it exists, token drift — the slow divergence between design and code that plagues almost every design system — becomes a CI problem rather than a human problem.
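Here is a sketch of the transform at the heart of that script. The input shape is simplified for illustration (the real Variables API response nests values by collection and mode, which you would resolve first), but the core logic is the same: map Figma token names to CSS custom properties and emit a stylesheet.

```javascript
// Simplified sketch of the token-sync transform. The flat { name, value }
// shape is illustrative; the real Variables API nests values by mode.
function tokensToCss(tokens) {
  const lines = tokens.map(({ name, value }) => {
    // Figma names tokens with slashes ("color/bg/primary");
    // CSS custom properties want dashes.
    const prop = "--" + name.toLowerCase().replace(/[\/\s]+/g, "-");
    return `  ${prop}: ${value};`;
  });
  return `:root {\n${lines.join("\n")}\n}\n`;
}

// In the real script these come from GET /v1/files/:key/variables/local,
// with each mode's value resolved. Sample values for illustration:
const tokens = [
  { name: "color/bg/primary", value: "#1a1a2e" },
  { name: "spacing/md", value: "16px" },
];

// The real script ends with fs.writeFileSync("tokens.css", tokensToCss(tokens)).
```

Run that in CI, fail the build when the diff is non-empty, and drift stops being something anyone has to remember to check.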
Token drift — the slow divergence between design and code — becomes a CI problem rather than a human problem. That is a significant upgrade.
Adding Intelligence: When to Reach for a Language Model
Scripts are powerful for deterministic transformations: resize this, rename that, extract these values. But a growing class of design problems is not deterministic. They require judgment — the kind of judgment that, until recently, could only come from a senior designer with pattern recognition built up over years.
This is where language models enter the workflow. I have been experimenting with using Claude as a critique layer — a step in the process that reads a design description (exported from Figma as structured JSON, converted to a natural language summary) and evaluates it against a set of criteria: accessibility contrast ratios, hierarchy clarity, copy conciseness, component consistency. The model does not replace the senior designer. It does the first pass, surfaces the obvious issues, and lets the human focus on the harder calls.
The setup looks like this in practice. Export a Figma frame as JSON using the API. Write a prompt that gives the model context about your design system and asks it to evaluate the frame against specific criteria. Pipe the output into a Slack message or a GitHub comment. What you get is a structured review — not a replacement for human judgment, but a reliable first filter that catches the things that always get caught, every time, without anyone having to remember to check.
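To make that concrete, here is a hedged sketch of the two deterministic steps: summarizing the frame JSON and assembling the critique prompt. The node shape and the criteria list are illustrative; the model call itself and the Slack or GitHub delivery are omitted.

```javascript
// Sketch of the "frame JSON -> natural-language summary" step that feeds
// the model. Node fields are simplified from the real API response.
function summarizeFrame(frame) {
  const parts = [`Frame "${frame.name}" (${frame.width}x${frame.height}).`];
  for (const node of frame.children ?? []) {
    if (node.type === "TEXT") {
      parts.push(`Text layer "${node.name}": "${node.characters}" at ${node.fontSize}px.`);
    } else {
      parts.push(`${node.type} layer "${node.name}".`);
    }
  }
  return parts.join("\n");
}

// The summary is embedded in a critique prompt. The criteria are whatever
// your team actually checks; these two are examples.
function buildCritiquePrompt(summary, criteria) {
  return [
    "You are reviewing a product design frame. Evaluate it against:",
    ...criteria.map((c, i) => `${i + 1}. ${c}`),
    "",
    "Frame description:",
    summary,
    "",
    "Return one finding per criterion, flagged pass/fail, with a one-line reason.",
  ].join("\n");
}
```

Everything up to the model call is deterministic and testable; only the judgment step is delegated, which keeps the pipeline debuggable.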
A Marketing Design Use Case: The Brief-to-Asset Pipeline
Let me get specific about a workflow I have built and actually use. Most marketing campaigns start with a brief: campaign name, key message, target audience, visual direction, and a list of required deliverables (sizes, formats, channels). The brief lives in Notion or a spreadsheet. The deliverables need to end up in Figma, then exported as production-ready files.
The old workflow: read the brief, open Figma, duplicate templates, update copy, adjust for each format, export. Two to four hours depending on complexity. The new workflow: read the brief, run the pipeline. A script reads the brief from Notion via the Notion API, extracts the structured data (copy variants, required dimensions, brand tier), calls a language model to generate headline and body copy variations within the tone-of-voice guidelines, uses the Figma API to populate a template file with those variations and sizes, and triggers an export job. The output is a folder of named, sized, ready-to-review files.
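The fan-out step, where one brief becomes one render job per copy variant and format, is the part worth sketching, because it is where the brief's structure pays off. The field names here are illustrative, not the Notion API's actual property names.

```javascript
// Sketch of the fan-out: one job per (copy variant x ad format).
// Brief fields are illustrative, not Notion's property names.
function expandBrief(brief) {
  const jobs = [];
  for (const variant of brief.copyVariants) {
    for (const format of brief.formats) {
      jobs.push({
        file: `${brief.campaign}-${variant.id}-${format.name}.png`,
        headline: variant.headline,
        width: format.width,
        height: format.height,
      });
    }
  }
  return jobs;
}

// Example brief, already extracted from Notion and the copy step:
const brief = {
  campaign: "spring-sale",
  copyVariants: [
    { id: "a", headline: "Spring into savings" },
    { id: "b", headline: "Fresh deals, fresh season" },
  ],
  formats: [
    { name: "story", width: 1080, height: 1920 },
    { name: "feed", width: 1080, height: 1080 },
  ],
};
// expandBrief(brief) yields four jobs, one per variant x format.
```

Each job then drives one Figma template population and one export call, so adding a fifth ad format or a third copy variant is a data change, not a design task.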
This is not a fully automated creative process. The templates are designed by a human. The brand guidelines the model reasons against are written by a human. The review step is human. What is automated is the mechanical production work — the part that consumes time without producing insight. That is the right thing to automate.
How to Start: A Practical Sequence
If you have never built a design tool and want to start, here is the sequence I would follow. First, identify one specific recurring task that meets the criteria I described: frequent, tedious, structured, consequential if done wrong. Do not start with a grand vision. Start with the most annoying Tuesday afternoon.
Second, get a Figma personal access token and spend one hour just reading the JSON output of one of your files. Understand the shape of the data. Notice where your components live in the tree, how the properties are named, how the API represents the things you care about. This is the most important step. The code is easy once you understand the data.
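A tiny inspector helps with that first hour. This sketch walks the `document` node of the file JSON you fetched, tallies node types, and lists component names so the shape of the tree becomes visible; the helper name is mine, not part of any API.

```javascript
// Walk a Figma node tree, tallying node types and collecting the names
// of components and instances. Run it on the file JSON's document node.
function inspect(node, stats = { types: {}, components: [] }) {
  stats.types[node.type] = (stats.types[node.type] ?? 0) + 1;
  if (node.type === "COMPONENT" || node.type === "INSTANCE") {
    stats.components.push(node.name);
  }
  for (const child of node.children ?? []) inspect(child, stats);
  return stats;
}
```

Ten lines, but the output answers the questions that matter: how deep the nesting goes, where the components actually sit, and which names you can rely on.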
Third, describe the transformation you want in plain English — input, process, output — and give that description to Claude Code or Cursor. Ask it to write the script. Review the output. Run it on a test file, not a production file. Iterate. The first version will almost certainly be wrong about something. The second version will probably work.
Fourth, add the intelligence layer only if you need it. Most tooling does not require a language model. A language model is the right tool when the transformation requires judgment rather than rules. If you find yourself writing a very long if-else tree trying to cover every edge case, that is a signal that a model might handle the ambiguity better than your logic will.
If you find yourself writing a very long if-else tree to cover every edge case, that is a signal that a model might handle the ambiguity more gracefully than your logic will.
What You Become in the Process
I want to end on something that is less about tools and more about what happens to you when you build them. There is a particular kind of leverage that comes from understanding your own workflow well enough to automate parts of it. It forces a clarity of thinking that you cannot get any other way.
To write a script that does what you do manually, you have to describe what you do manually with precision. You have to notice the steps you skip over, the heuristics you apply without naming them, the edge cases you handle by feel. That process of articulation is, independently of the automation, enormously valuable. It is the same muscle that produces good process documentation, good design critiques, and good feedback. The tool is almost beside the point.
The designers I find most interesting right now are the ones who are building in this way — not replacing their judgment with automation, but using automation to protect the hours where their judgment is irreplaceable. They show up to the hard problems without the easy ones already depleting them.
That Monday morning from a year ago — the one with the presentation at 10am — I did not just save four hours. I arrived at the meeting having actually thought about the campaign, not just executed it. The work was better. The conversation was better. The outcome was better. That is what the toolsmith tradition is actually about. Not the script. The thinking it protects.