MCP servers are the missing layer that turns a chat assistant into something that can actually do work. Instead of asking Copilot to guess at your environment, you give it a controlled way to talk to tools, resources, prompts, and even small UI surfaces through a standard interface.
That matters because the Copilot experience is strongest when the model has real context and a safe way to act on it. In VS Code, MCP gives you a structured path for that integration. You can install an existing server, point Copilot at a workspace configuration, or build a local server for a specific internal system without inventing a one-off integration for every client.
What an MCP server is
MCP stands for Model Context Protocol. It is an open standard for connecting AI applications to external systems. The point is not just to call APIs. The point is to expose a consistent contract that an assistant can discover and use.
1. Tools perform actions. Tools can open a page, query an API, trigger a job, or take some other concrete action on behalf of the user.
2. Resources supply context. Resources can expose read-only data such as files, tables, or API responses that Copilot can use while answering.
3. Prompts and apps extend the flow. Servers can also provide reusable prompts and interactive, app-like experiences for richer workflows.
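Underneath, that contract is ordinary JSON-RPC: a client can ask any server what it offers and get back a machine-readable answer. As an illustrative sketch of tool discovery (the `search_docs` tool and its schema are hypothetical, not part of any real server):

```json
// request
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [{
      "name": "search_docs",
      "description": "Search internal documentation",
      "inputSchema": {
        "type": "object",
        "properties": { "query": { "type": "string" } },
        "required": ["query"]
      }
    }]
  }
}
```

Because every server answers this same question the same way, an assistant can discover and call tools it has never seen before.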
Why Copilot cares about MCP
Copilot is useful when it can connect the conversation to the task. MCP gives it a path to do that without hard-coding every service into the editor. In VS Code, the server configuration lives in mcp.json, which can be scoped to a workspace (at .vscode/mcp.json) or to your user profile.
That is a practical difference. A workspace configuration can be committed for a team. A user profile configuration can follow you across projects. In both cases, VS Code discovers the server, confirms trust, and then exposes the server's tools in chat.
```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    },
    "playwright": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```
That example shows the basic pattern. A remote server can be exposed over HTTP, while a local server can be launched as a stdio process. Copilot does not care which transport you use as long as the server speaks MCP and the editor can trust it.
The fastest way to start
If you do not want to author your own server yet, start with the gallery. In VS Code, search for @mcp in the Extensions view, install a server such as Playwright, and confirm the trust prompt when VS Code asks.
Go to code.visualstudio.com, decline the cookie banner, and give me a screenshot of the homepage.
That prompt is simple on purpose. It makes the tool usage obvious. Copilot can open the site, navigate the page, and invoke the Playwright tools the server exposes. The point is not the screenshot. The point is that chat can now do something concrete instead of merely describing what it would do.
When to build your own server
Installing a public server is the fast path. Building one is the better path when the work is tied to your own systems: internal documentation, ticketing and issue trackers, deployment systems, or database queries.
The minimal shape is still the same. You define a local server in mcp.json, start it with a command, and let Copilot discover the tools it provides.
```json
{
  "servers": {
    "docs-search": {
      "type": "stdio",
      "command": "node",
      "args": ["./mcp/docs-search-server.mjs"]
    }
  }
}
```
With that in place, the assistant can ask your server to search internal docs or return structured data that is hard to reach through a normal chat prompt. That is where MCP becomes useful instead of merely interesting.
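To make the server side concrete, here is a minimal sketch of what a docs-search server could look like in plain Node, assuming the newline-delimited JSON-RPC framing of the MCP stdio transport. The `search_docs` tool, its tiny in-memory index, and the `--serve` flag are all illustrative inventions; in practice you would build on the official @modelcontextprotocol/sdk package rather than hand-rolling the protocol.

```javascript
// Hypothetical in-memory stand-in for a real internal documentation index.
const DOCS = [
  { id: "deploy", text: "How to deploy the api service" },
  { id: "oncall", text: "On-call rotation and escalation" },
];

function searchDocs(query) {
  const q = query.toLowerCase();
  return DOCS.filter((d) => d.text.toLowerCase().includes(q));
}

// Dispatch a single JSON-RPC request to a response object.
function handleMessage(msg) {
  switch (msg.method) {
    case "initialize":
      return {
        jsonrpc: "2.0",
        id: msg.id,
        result: {
          protocolVersion: "2025-03-26",
          capabilities: { tools: {} },
          serverInfo: { name: "docs-search", version: "0.1.0" },
        },
      };
    case "tools/list":
      return {
        jsonrpc: "2.0",
        id: msg.id,
        result: {
          tools: [{
            name: "search_docs",
            description: "Search internal documentation",
            inputSchema: {
              type: "object",
              properties: { query: { type: "string" } },
              required: ["query"],
            },
          }],
        },
      };
    case "tools/call": {
      const hits = searchDocs(msg.params.arguments.query);
      return {
        jsonrpc: "2.0",
        id: msg.id,
        result: { content: [{ type: "text", text: JSON.stringify(hits) }] },
      };
    }
    default:
      return null; // notifications get no response
  }
}

// Wire stdin/stdout only when started as a server process.
if (process.argv.includes("--serve")) {
  let buffer = "";
  process.stdin.on("data", (chunk) => {
    buffer += chunk;
    let nl;
    while ((nl = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, nl);
      buffer = buffer.slice(nl + 1);
      if (!line.trim()) continue;
      const response = handleMessage(JSON.parse(line));
      if (response) process.stdout.write(JSON.stringify(response) + "\n");
    }
  });
}
```

The shape is the important part: the editor launches the process, performs the initialize handshake, lists the tools, and calls them with JSON arguments. Everything specific to your systems lives inside the tool handlers.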
Keep the server tight and safe
Local servers are powerful because they can touch your machine and your network. That also means you should keep the surface area small and predictable.
```json
{
  "servers": {
    "docs-search": {
      "type": "stdio",
      "command": "node",
      "args": ["./mcp/docs-search-server.mjs"],
      "sandboxEnabled": true,
      "sandbox": {
        "filesystem": {
          "allowWrite": ["${workspaceFolder}"]
        },
        "network": {
          "allowedDomains": ["api.example.com"]
        }
      }
    }
  }
}
```
Sandboxing is a good default for local stdio servers on macOS and Linux. It limits file-system and network access to what you explicitly allow, which is the right balance for a tool that can execute code on your behalf.
| Approach | Best for | Why it helps |
|---|---|---|
| Install a gallery server | Quick wins | You can start using Copilot with real tools in minutes. |
| Add a workspace server | Team workflows | The configuration can live in source control and travel with the repo. |
| Build a local server | Internal systems | You expose exactly the tools your team needs, and nothing extra. |
| Enable sandboxing | Higher-risk servers | You keep local access constrained while still letting the server run. |
The pattern is simple: reuse existing servers when you can, add workspace configuration when you need shared setup, and build your own server only when the task truly requires it.
Where this leaves you
In short, MCP is what makes Copilot more than a chat box. It gives the assistant a standard way to reach tools, pull in context, and perform work in a controlled environment. That is why the combination matters: Copilot supplies the reasoning and interaction layer, while MCP supplies the connection to the systems you already use.
If you only need a quick start, install an existing server and try it in chat. If you need a durable team setup, put the configuration in the workspace. If you need deep access to internal systems, build a small local server and keep it sandboxed. In all three cases, the value is the same: less hand-rolled glue, more useful automation.