Estimated Read Time: 8 minutes

The best product teams have always understood something counterintuitive: the walls between design and engineering are really just suggestions. At companies like Airbnb or Stripe, designers prototype in code and developers obsess over pixel alignment.

Now those walls are vanishing entirely.

Kris Rasmussen, CTO of Figma, has a front-row seat to this transformation. His company's design platform serves millions of users, from scrappy startups to Fortune 500 enterprises. But here's the thing: only a third of them are actually designers. The rest? Product managers, marketers, executives, engineers—basically everyone involved in shipping software.

And that ratio is about to get even more lopsided.

In a recent conversation, Kris laid out his vision for where software development is heading: toward AI coding agents that understand design intent, neural networks that generate entire interfaces on the fly, and a future where "code" might become an implementation detail rather than the product itself. It sounds like science fiction. Except it's already starting to happen.

The Gap That Never Really Closed

Here's a problem that's plagued software teams since forever: designers create beautiful mocks in tools like Figma. Developers receive those mocks. Then they spend days or weeks translating visual intent into actual working code, inevitably losing something in translation.

Various tools have promised to solve this over the years. Remember Dreamweaver? Or those "design-to-code" platforms that spit out unusable React components? They all shared the same fatal flaw: they treated design files as complete specifications, ignoring the messy reality that production code needs to integrate with existing systems, follow established patterns, and actually, you know, work.

"We're always trying to meet users where they're at," Rasmussen explained, and this simple statement contains more wisdom than it might initially suggest. "We have designers creating incredible visual context in Figma, and we have developers working in massive existing codebases. The question is: how do we make that handoff seamless?"

Enter Figma's MCP (Model Context Protocol) server, a deceptively simple tool that fundamentally changes the game.

When Context Becomes Currency

The technical implementation is clever: when Anthropic's team (yes, the folks who make Claude) designs a new feature in Figma, they explore multiple directions, create variations, annotate everything. Then, when it's time to build, a developer can take the Figma URL and feed it into GitHub Copilot.

Copilot pings Figma's MCP server. The server sends back the design context: not just how things should look, but annotations, component relationships, responsive breakpoints, the whole nine yards. Crucially, Copilot then reconciles this with the existing codebase.
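To make the mechanics concrete, here's a minimal sketch of that round trip from a client's perspective, using the open-source MCP TypeScript SDK. To be clear about what's invented: the endpoint URL and the get_design_context tool name are placeholders for illustration, Figma's actual server defines its own tools and address.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to a locally running design MCP server.
// The URL is a placeholder -- check the server's docs for the real endpoint.
const transport = new StreamableHTTPClientTransport(
  new URL("http://127.0.0.1:3845/mcp")
);
const client = new Client({ name: "design-handoff-demo", version: "0.1.0" });
await client.connect(transport);

// Ask the server for the design context behind a Figma URL.
// "get_design_context" is a hypothetical tool name for illustration.
const result = await client.callTool({
  name: "get_design_context",
  arguments: { url: "https://www.figma.com/design/FILE_KEY/...?node-id=1-2" },
});

// A coding agent would now reconcile this context -- annotations, component
// relationships, breakpoints -- with the patterns in the existing codebase.
console.log(result.content);
```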

"It's not just about translating designs to code," Rasmussen clarified. "It's about reusing existing code at the same time. That's what makes it practical."

This matters more than it might seem. Previous attempts at automated design-to-code failed because they created greenfield implementations—pristine, isolated, and utterly incompatible with real codebases that have accumulated years of patterns, conventions, and yes, technical debt. The MCP approach acknowledges that most software development isn't building from scratch. It's integrating, adapting, evolving.

The Macro Hard Future

Elon Musk has floated an audacious idea: software becomes "macro hard," meaning everything from the UI down to the operating system layer gets generated on demand by neural networks. No traditional code. No static components. Just prompts, preferences, and probability distributions somehow conjuring entire applications into existence.

It sounds insane. Rasmussen doesn't dismiss it.

"I think it is possible," he said. "But in that world, it makes everything we're talking about even more important. If there's no code defining what users experience, then all you're left with is context and taste."

Think about what this means. Right now, code serves as a shared reference point. Designers and developers can point to a React component and debate whether it should accept certain props. QA can file bugs against specific functions. But if software becomes an on-demand hallucination from a language model, what's the source of truth?

Rasmussen has clearly been wrestling with this. "We have to figure out what design looks like when you no longer have code in between," he noted. "That's a super fun problem to think about, and a very exciting opportunity."

The optimism is genuine, but so are the challenges. Current models are too slow for responsive interfaces. Consistency across sessions remains unsolved. And there's a deeper philosophical question: if every user's interface can be personalized to their exact preferences and context, how do you maintain brand identity? How do you preserve the muscle memory that makes software learnable?

Great Taste Doesn't Scale, Or Does It?

One of the more fascinating threads in our conversation was around design quality in a generated world. If interfaces are being created dynamically by AI rather than explicitly coded by humans, how do you ensure they don't all look generic? How do you encode taste?

Rasmussen's answer brings things back to fundamentals: great design has always been situational. What works for Salesforce's complex enterprise dashboards isn't what works for Anthropic's conversational interfaces. Companies optimize for different constraints, serve different users, have different opinions about what "good" means.

"Great design comes from exploration," Rasmussen said. "We have to understand the context, build shared mental models, and manage each other's expectations. Ultimately, we're building these things in service of ourselves and other humans, not just what the model can understand."

This gets at something important. Design systems—those meticulously maintained libraries of components and patterns—aren't going away in an AI future. They're evolving. The question becomes less about creating rigid components and more about defining flexible constraints that models can operate within.

Think of it like directing actors versus writing their dialogue word-for-word. You still need a coherent vision, consistent characterization, thematic through-lines. But you're orchestrating rather than dictating.
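Here's a rough sketch of what "flexible constraints" might look like in practice. None of this is Figma's schema; the type and the specific rules below are invented purely to illustrate a design system a model can operate within rather than copy from.

```typescript
// Illustrative only: a design system expressed as constraints, not components.
// A generating model is free to vary layout and copy, but anything it
// produces must validate against rules like these.
type DesignConstraints = {
  brand: {
    palette: string[];        // allowed colors, e.g. ["#0D0D0D", "#FF5C35"]
    typefaces: string[];      // e.g. ["Inter", "Söhne Mono"]
    voice: "playful" | "neutral" | "formal";
  };
  layout: {
    spacingScale: number[];   // the only spacing values allowed, in px
    minTouchTarget: number;   // e.g. 44 for mobile accessibility
    maxContentWidth: number;
  };
  accessibility: {
    minContrastRatio: number; // e.g. 4.5 for WCAG AA body text
    requiresFocusStyles: boolean;
  };
};

// A validator, not a renderer: the model proposes, the system checks.
function validateSpacing(c: DesignConstraints, value: number): boolean {
  return c.layout.spacingScale.includes(value);
}
```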

How Figma Builds Figma

Naturally, Rasmussen's team uses their own tools extensively. They prototype in Figma, use AI coding assistants like Cursor and Claude Code, and yes, they use their own MCP server to translate designs into working features.

But the workflow still centers on human collaboration. Product briefs and explorations start in FigJam, where teams can generate ideas on an infinite canvas without stepping on each other's toes. Designers and engineers evaluate the solution space together before committing to a direction. Sometimes they use Figma Make to prototype. Sometimes they go straight to code.

"We want our team using all the latest and greatest AI tools," Rasmussen said. When pressed on whether he has a preferred model, he stayed diplomatically neutral: "We're very open. We're kind of tool agnostic within the company."

They also experiment with open-source models and do some post-training in-house, designing systems to adapt to whatever comes next. It's a pragmatic approach—bet on capabilities improving, not on any single vendor.

The Customer You Didn't Expect

It's worth repeating the stat from the top, because it genuinely surprised me: only one-third of Figma's users are designers. The rest? Everyone else.

"We've really focused on product development teams and the entire company as our customer from day one," Rasmussen explained. "The design tool was a way to help people build better experiences, but ultimately it's a cross-functional process."

AI is accelerating this role-blending. Product managers create mockups to align on strategy. Marketers ensure visual consistency across campaigns. Executives build their own slide decks using Figma Slides, prompting AI to keep everything on-brand.

"We're just going to see a lot more role bending," Rasmussen said. "And we're embracing that."

The implications are profound. If tools become accessible enough that non-designers can contribute meaningfully to visual work, and non-developers can shape code, then the entire bottleneck in product development shifts. It's no longer about having the right specialist available at the right time. It's about context, taste, and collaboration infrastructure.

Which, honestly, is what it always should have been.

Beyond the Chat Box

Dylan Field, Figma's CEO, has described the current moment as the "MS-DOS era of AI"—primitive but pregnant with possibility. Rasmussen agrees completely.

"The future is not literally just linear text history or chatbots," Rasmussen said. "Chatbots also need to embed more visual outputs."

You can already see this happening. ChatGPT and Claude's interfaces are evolving, bringing back traditional UI components rather than staying purely conversational. The pendulum is swinging toward hybrid experiences that blend natural language with visual structure.

And this makes sense, right? Text is an incredible interface for certain tasks—explaining concepts, generating content, having a back-and-forth. But try debugging a layout issue or reviewing design variations through pure conversation. It's maddening. Some things just need to be visual.
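As a back-of-the-napkin sketch, "hybrid" might mean a model returning structured UI specs alongside prose, with the client rendering them as real components. The shape below is invented for illustration; every product defines its own schema.

```typescript
// Illustrative shape for a hybrid chat response: prose plus renderable UI.
type ChatPart =
  | { kind: "text"; markdown: string }
  | {
      kind: "ui";
      component: "color-picker" | "table" | "diff-view";
      props: Record<string, unknown>;
    };

// The client walks the parts in order, rendering text as text and
// UI specs as interactive components instead of more conversation.
function render(parts: ChatPart[]): void {
  for (const part of parts) {
    if (part.kind === "text") {
      console.log(part.markdown);
    } else {
      console.log(`<mount ${part.component}>`, part.props);
    }
  }
}
```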

"I'm really excited to see how we adapt with this technology and embrace all the different input modalities and experiences," Rasmussen said, "rather than just the low-hanging fruit where we're at right now."

Reimagining Everything

When I asked what he's most excited about for Figma's next six to twelve months, Rasmussen didn't name a specific feature or product launch. Instead: "I'm just really excited about reimagining all software as a result of this new technology."

For a company built on the premise that collaboration is the bottleneck in software development, AI represents a rare opportunity to remove friction while amplifying human creativity.

The technical challenges are real: models need to get faster, design patterns need to evolve, and questions about consistency and control remain unsolved. But Rasmussen seems energized rather than daunted by the complexity ahead.

Because here's the thing: software has never actually been about code. Not really. Code is just how we've historically instructed computers to create experiences. If there's a better way, whether that's more context-aware AI agents or full end-to-end neural networks or something we haven't imagined yet, the tool matters less than what we build with it.

"It's no longer about the code," Rasmussen said at one point. "It's about what you actually want the models to output."

Or to put it another way: it's about the experiences we create and the people we create them for. AI is just the latest tool in service of that fundamentally human project.

Kris Rasmussen is the Chief Technology Officer at Figma, where he oversees engineering and infrastructure for the company's design and collaboration platform used by millions worldwide.

Nick Wentz

I've spent the last decade+ building and scaling technology companies—sometimes as a founder, other times leading marketing. These days, I advise early-stage startups and mentor aspiring founders. But my main focus is Forward Future, where we’re on a mission to make AI work for every human.

👉 Connect with me on LinkedIn
