The medium

I open the terminal before I open Figma now. Not to write code. To think. I'll start a conversation with Claude Code about a product idea, react to what it surfaces, let it poke holes in my assumptions. What's the data model? What happens when there's nothing to show? Where are the hard edges? A PRD crystallises out of that back-and-forth. The building comes later. But the thinking starts in the terminal, and I didn't expect that.

Something similar is happening across the whole field. Figma is trying to get closer to code. Paper and Pencil are building design canvases that are code. Claude Code and Cursor let designers build directly. The tool landscape is converging on a single idea: design and code shouldn't be separate workflows. And the people I see shipping the most considered product work aren't the ones with the tightest Figma files. They're the ones who understand the medium they're designing for.

The web has properties that shape every design decision you make on it. Content is dynamic. Layouts reflow. Interfaces have states. Interactions happen in time. Data arrives asynchronously. Performance affects experience. Understanding these things changes what you notice, which changes what you prioritise, which changes what you ship.

If you know that data arrives asynchronously, you think about loading states as a core part of the design, not something that gets figured out during implementation. If you understand that layouts reflow, you design compositions that work across viewport widths rather than just at the one you happened to pick in Figma. If you think about state, you design for the empty case, the error case, the edge case where someone has five hundred items instead of five. None of this requires writing code. It requires understanding the nature of the thing you're designing for.
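That way of thinking can be made concrete in code. Here's a minimal TypeScript sketch (the names `ListState` and `render` are invented for illustration, not from any particular framework) of what "interfaces have states" looks like when it's explicit: a discriminated union forces the loading, error, and empty cases to be handled up front rather than discovered during implementation.

```typescript
// Every state an interface can be in, named explicitly. The compiler will
// refuse a render function that forgets one of them.
type ListState<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "loaded"; items: T[] };

// Hypothetical render function; it returns plain strings for illustration.
function render(state: ListState<string>): string {
  switch (state.kind) {
    case "loading":
      return "Loading...";
    case "error":
      return `Something went wrong: ${state.message}`;
    case "loaded":
      // The empty case is a design decision, not an implementation detail.
      return state.items.length === 0
        ? "Nothing here yet"
        : state.items.join(", ");
  }
}
```

The point isn't the syntax; it's that the structure of the code mirrors the structure of the design problem. The empty state and the error state exist whether or not anyone designed them.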

This has always been true, but it used to matter less. An engineer sat between your design and the user. They'd catch the missing loading state, handle the narrow viewport, figure out what happens at scale. Now, AI has collapsed that barrier. A designer who knows what they want can build it directly. No tickets, no handoff, no weeks of back-and-forth. But AI will build whatever you ask for, including an interface with no loading states, a layout that only works at one width, a list that breaks at scale. The gap that the engineer used to fill is yours now. The quality of what you ship is bounded by how well you understand the medium.

Understanding the medium changes more than what you catch, though. It changes what you imagine. If you understand that components compose, you design interfaces as systems of parts rather than collections of screens. If you think about data relationships before visual layout, you design screens that actually fit the data they'll display. An architect doesn't calculate structural loads. But they understand how materials behave, how people move through spaces, how light works. Those properties of the physical medium shape every decision they make. The web is the same.
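"Components compose" can be sketched in a few lines of TypeScript (all names here are invented for illustration): small parts, each responsible for one thing, assembled into larger parts, so a screen is a system rather than a one-off layout.

```typescript
// A small data shape and three tiny "components", each a pure function.
type User = { name: string; email: string };

const avatar = (u: User) => `[${u.name[0]}]`;
const label = (u: User) => `${u.name} <${u.email}>`;

// The row composes the two smaller parts; the list composes rows.
// Change avatar once and every row, in every list, picks it up.
const userRow = (u: User) => `${avatar(u)} ${label(u)}`;
const userList = (users: User[]) => users.map(userRow).join("\n");
```

Strings stand in for markup here, but the property is the same one the paragraph describes: design the parts and their relationships, and the screens fall out of them.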

This doesn't mean designers are replacing engineers. But the floor of what a designer can ship is rising fast. The design engineer sits in that expanding gap. Not a hybrid role bolted together from two job descriptions, but a way of working where design judgement and technical understanding live in the same person. Knowing that a 200ms ease-out feels right not because a spec says so, but because you understand both the interaction rationale and the rendering cost. Knowing your data model will create a UX problem before you've designed a single screen.

Not every designer needs to work this way. Research, strategy, brand, service design... these disciplines don't involve building interfaces, and the medium-understanding argument doesn't apply to them. But for product designers, this is where things are heading. When AI can generate a working prototype faster than you can lay out the same screens in Figma, the static mockup stops being the most efficient way to explore an idea. The source of truth moves to tools and codebases that speak the language of the web natively.

There's a tension here though. A lot of the specific technical knowledge that feels important right now will likely become less important over time. AI models will get better. The implementation details I use to direct AI today will matter less when AI can figure them out on its own. But you learn that the web is stateful by working with state. You learn that layouts reflow by seeing them reflow. The implementation details are temporary. The intuition they build is not.

At least, that's where things stand right now. Whether the intuition stays valuable or whether the tools eventually make it unnecessary... no one knows. But learning the medium is the most useful thing a product designer can do today. And what you learn by doing it changes how you think in ways that are hard to unlearn. That feels like it's worth something, even if the landscape keeps shifting underneath.