Introducing our Dev Mode MCP server: Bringing Figma into your workflow
Problem: every team approaches their codebase differently—with a different structure, framework, vocabulary, and workflow—and makes decisions that evolve their codebase based on their specific needs. All these differences compound into a unique fingerprint that’s difficult for LLMs to infer.
Solution: gather context—from reading existing code, examining repository history, accessing documentation, and understanding database schemas—and feed it to LLMs so they can generate code that fits your codebase.
Figma knows which specific token is used, and can provide the name of that variable to the LLM via MCP. Even better, if you have provided code syntax in Figma for that variable, the Dev Mode MCP server can provide that exact code to the LLM.
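As an illustration of why this matters, a variable definition surfaced over MCP can pair a raw value with the token name and any code syntax you've configured, so generated code references the token instead of a hard-coded value. The shape below is a hypothetical sketch, not the server's actual response schema:

```ts
// Hypothetical shape of a variable definition an MCP client might receive.
// Field names here are illustrative, not the Dev Mode MCP server's schema.
interface VariableDef {
  name: string;        // variable name as defined in Figma
  value: string;       // resolved value for the current mode
  codeSyntax?: string; // code syntax configured in Figma, if any
}

const spacingMd: VariableDef = {
  name: "spacing/md",
  value: "16px",
  codeSyntax: "var(--spacing-md)",
};

// With the token name available, generated styles can use the variable
// rather than repeating the raw pixel value.
const cardStyle = {
  padding: spacingMd.codeSyntax ?? spacingMd.value, // "var(--spacing-md)"
};
```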
The right code is aligned to design intent, not just pixels. We like to think of screenshots as supplemental information for the code response; a screenshot combined with Figma’s code outputs performs better than either on its own.
Content such as text, SVGs, images, layer names, and annotations can help LLMs map design placeholders to your data model’s properties when generating code. A few examples that make this mapping easier for an AI agent (see the sketch after these examples):
Title [user.name]
Rating [review.rating]
/* onClick: openUserModal */
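From the layer names and annotation above, an agent could plausibly produce a component along these lines. Only the field names and handler name come from the placeholders; the component name, prop types, and markup are hypothetical:

```tsx
// Hypothetical component an agent might generate from the placeholders above.
// "user.name", "review.rating", and "openUserModal" come from the design;
// everything else (names, types, structure) is illustrative.
import React from "react";

interface ReviewCardProps {
  user: { name: string };
  review: { rating: number };
  openUserModal: (userName: string) => void;
}

export function ReviewCard({ user, review, openUserModal }: ReviewCardProps) {
  return (
    <div>
      {/* Title [user.name] — annotated onClick: openUserModal */}
      <h3 onClick={() => openUserModal(user.name)}>{user.name}</h3>
      {/* Rating [review.rating] */}
      <span>{review.rating}</span>
    </div>
  );
}
```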
Today, there are four tools in the MCP server that allow you to get context from Figma for the current selection or a specific node ID: one for code, one for variable definitions, one for Code Connect mappings, and one for images.
get_code
get_variable_defs
get_code_connect_map
get_image
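As a rough sketch of how an MCP client could call these tools, the example below uses the MCP TypeScript SDK. The local endpoint URL and the tool argument names are assumptions; check Figma's documentation for the values your setup actually exposes.

```ts
// Minimal sketch of an MCP client talking to the Dev Mode MCP server.
// Endpoint URL and tool arguments below are assumptions, not documented values.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  const client = new Client({ name: "example-client", version: "1.0.0" });

  // Assumed local endpoint for the Dev Mode MCP server.
  const transport = new SSEClientTransport(new URL("http://127.0.0.1:3845/sse"));
  await client.connect(transport);

  // List the tools the server exposes (get_code, get_variable_defs, ...).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Ask for code for a specific node; "nodeId" is an assumed argument name.
  const result = await client.callTool({
    name: "get_code",
    arguments: { nodeId: "1:23" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```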