If you have been building GitHub Copilot agents and skills for any length of time, you eventually run into the same problem: creating them is the easy part. Sharing them properly is where the real friction starts.
At first, the workaround feels reasonable. You build a useful agent for Terraform provider upgrades. You add a skill for architecture reviews. Maybe you create another for diagram generation. Then someone on another repository wants the same setup, so you point them at the repo, tell them which files to copy into .github/agents and .github/skills, or keep a separate “shared skills” repository around and ask people to pull from that when they need something.
That works for a while. Then the second or third repository starts drifting. One repo has the newer version of a skill. Another is still using an older prompt. A third has a local tweak that seemed harmless at the time but now changes behaviour in ways nobody remembers. At that point, the question is no longer whether GitHub Copilot can be useful. It becomes whether that usefulness can stay predictable once more than one team is involved.
That is the part APM solves.
APM, or Agent Package Manager, gives you a packaging model for agents, skills, and supporting configuration so they can be installed, versioned, and reused instead of copied around manually. What I found interesting about it was not just that it made installation cleaner. It reframed the problem properly.
This is not really a GitHub Copilot problem. It is a packaging and distribution problem.
Most teams do not sit down on day one and design a reusable GitHub Copilot setup. It grows the same way most internal tooling grows: one useful thing at a time. Someone creates an agent for a repetitive task. Someone else adds a skill for architecture review. Another person adds instructions to steer GitHub Copilot around internal standards or preferred ways of working. That is usually a good sign. It means the tooling has become useful enough that people want to keep using it.
The trouble starts when that setup needs to move beyond the repository where it was created. Manual copying stops being a shortcut and starts becoming a source of drift. APM gives that setup a dependency model instead. Rather than treating agents and skills as repo-local files that just happen to live in .github, you can start treating them as something declared, installed, versioned, and composed.
That is a much better operating model than a README telling people which folders to copy.
Using .github as the source of truth
One of the design choices that took a few iterations to settle on was where the real content should live. The model that ended up working best was simple: keep the actual agents and skills in .github, and treat that as the source of truth.
.github/
├── agents/
└── skills/
Everything lives once. Packages then reference subsets of that content.
That matters more than it might seem at first. If the same skill exists in both a package directory and a development directory, you have created a sync problem immediately. You can manage that for a while, but it is the same kind of low-grade operational mess that appears everywhere in engineering once duplicate sources of truth are tolerated for too long.
The thin-package model avoids that. You build and test against the same .github content that eventually gets distributed. There is no second copy to maintain, no packaging mirror to keep aligned, and no later argument over which version is the real one when somebody notices behaviour changed between repositories.
That distinction matters at platform scale. Once multiple repositories depend on shared engineering context, duplicating the content itself is usually the wrong abstraction. The package should describe how that context is consumed, not become another place where the context has to be maintained.
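To make the thin-package idea concrete, a scenario package's manifest can simply list the .github/skills paths it consumes, using the same dependencies shape as the diagramming manifest shown later in this post. A minimal sketch — the azure-architecture-review skill name here is illustrative, not an actual file in the repo:

```yaml
# Minimal sketch of a thin scenario package: it owns no content of its own,
# it only references the shared .github source of truth.
# The skill path below is illustrative, not a real file in the repo.
name: architect-skills
version: 1.0.0
dependencies:
  apm:
    - thomast1906/github-copilot-agent-skills/.github/skills/azure-architecture-review
```

The package is then just a view over .github, which is what keeps the sync problem from ever existing.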
Scenario packages are a better fit than one shared bundle
The structure that emerged from that approach was a thin root package with three scenario packages underneath it: architect, terraform, and diagramming.
That turned out to be much closer to how engineers actually want to consume shared tooling.
Someone working on Terraform provider upgrades does not need the same GitHub Copilot context as someone reviewing an Azure architecture. A repository focused on diagrams should not have to pull in workflow helpers it will never use just because they happen to exist in the same source repo. Too much context is not neutral. It makes the setup feel less focused, and over time it increases the likelihood that people either ignore parts of it or start editing it locally to make it fit their own workflow.
The structure ended up looking like this:
github-copilot-agent-skills/
├── .github/
│   ├── agents/
│   └── skills/
└── packages/
    ├── architect/
    ├── terraform/
    └── diagramming/
And the install model becomes much easier to reason about:
apm install thomast1906/github-copilot-agent-skills/packages/architect
apm install thomast1906/github-copilot-agent-skills/packages/terraform --runtime vscode
apm install thomast1906/github-copilot-agent-skills/packages/diagramming --runtime vscode
apm install thomast1906/github-copilot-agent-skills
The root install gives you the full set. The scenario packages give you a focused workflow. That is a better answer than expecting every team to carry the same monolithic .github folder around and hoping nobody tweaks it locally.
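On the consuming side, the install log later in this post shows apm recording what was installed in an apm.yml. Assuming it reuses the same dependencies shape as the package manifests, a repository that installed two scenario packages might end up with something like this — the field names are an assumption on my part, not a documented schema:

```yaml
# Hypothetical consumer-side apm.yml after installing two scenario packages.
# The exact schema is whatever `apm` writes; this mirrors the manifest shape.
dependencies:
  apm:
    - thomast1906/github-copilot-agent-skills/packages/terraform
    - thomast1906/github-copilot-agent-skills/packages/diagramming
```

Either way, the point is that the consuming repo carries a declaration, not a copy.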
This is one of those places where platform engineering judgement matters more than the mechanics of the tool. Shared context only stays useful if it is shaped around the work people are actually doing. A package boundary is not just a technical convenience. It is a way of deciding what should travel together and what should not.
Diagramming is a good example of where packaging starts to matter
The diagramming package is probably the clearest example of why this approach works.
I had already been doing diagram generation through Draw.io MCP and Excalidraw MCP, and the useful part was never simply that AI could draw a diagram. The value was being able to turn architecture and delivery thinking into something reproducible inside the engineering workflow instead of producing yet another artefact that gets created once, exported somewhere, and then quietly goes stale.
That makes diagramming a strong fit for a dedicated package.
Not every repository needs Terraform upgrade agents. Not every repository needs architecture review skills. But some absolutely benefit from a focused package that gives engineers the right context for Azure architecture diagrams, Draw.io workflows, and Excalidraw-based design sketches without dragging in unrelated tooling.
On main, the install now looks like this:
apm install thomast1906/github-copilot-agent-skills/packages/diagramming --runtime vscode
While I was building out the package structure, the working branch install looked like this:
apm install thomast1906/github-copilot-agent-skills/packages/diagramming#feature/apm-packages --runtime vscode
The output was useful because it showed the package doing something more meaningful than just copying files:
[*] Created apm.yml
[*] Validating 1 package...
[+] thomast1906/github-copilot-agent-skills/packages/diagramming#feature/apm-packages
[*] Updated apm.yml with 1 new package(s)
[>] Installing 1 new package...
[+] github.com/thomast1906/github-copilot-agent-skills/packages/diagramming#feature/apm-packages (cached)
[+] github.com/thomast1906/github-copilot-agent-skills/.github/skills/azure-drawio-mcp-diagramming (cached)
|-- Skill integrated -> .github/skills/
[+] github.com/thomast1906/github-copilot-agent-skills/.github/skills/drawio-mcp-diagramming (cached)
|-- Skill integrated -> .github/skills/
[+] github.com/thomast1906/github-copilot-agent-skills/.github/skills/excalidraw-mcp-diagramming (cached)
|-- Skill integrated -> .github/skills/
[i] Added apm_modules/ to .gitignore
Trusting direct dependency MCP 'drawio' from 'diagramming-skills'
Trusting direct dependency MCP 'excalidraw' from 'diagramming-skills'
+- MCP Servers (2)
Targeting specific runtime: vscode
| > drawio (self-defined, http)
| +- Configuring for Vscode...
| Installing drawio...
| Successfully configured MCP server 'drawio' for VS Code
| + drawio -> Vscode (configured)
| > excalidraw (self-defined, http)
| +- Configuring for Vscode...
| Installing excalidraw...
| Successfully configured MCP server 'excalidraw' for VS Code
| + excalidraw -> Vscode (configured)
+- Configured 2 servers
── Diagnostics ──
[i] 3 dependencies have no pinned version -- pin with #tag or #sha to prevent drift
[*] Installed 4 APM dependencies and 2 MCP servers.
That tells a much better story than a vague success message ever could.
First, the package installs the skills into the location the runtime expects. Second, it recognises that the package also depends on MCP servers and configures those for the target runtime. Third, it surfaces operational details that matter once other teams depend on the setup, such as unpinned dependencies and the possibility of drift.
That is a much better model than handing somebody a folder of prompts and a separate README telling them which MCP servers they need to wire up afterwards.
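The unpinned-dependency warning in those diagnostics is worth acting on. The same #ref syntax used for the feature-branch install above also works for pinning, so in a manifest the fix could look like this — the v1.0.0 tag is hypothetical; any tag or SHA the repo actually has would do:

```yaml
# Pinning a dependency to a tag so installs are reproducible.
# The v1.0.0 tag is hypothetical; use a tag or SHA that exists in the repo.
dependencies:
  apm:
    - thomast1906/github-copilot-agent-skills/.github/skills/drawio-mcp-diagramming#v1.0.0
```

Unpinned references are fine for a single author iterating quickly; once other teams depend on the package, they are exactly the drift the diagnostics are warning about.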
The manifest itself stays small enough to reason about:
name: diagramming-skills
version: 1.0.0
description: >
  Draw.io, Azure Draw.io, and Excalidraw diagramming skills for GitHub Copilot.
  Covers generic architecture diagrams, Azure-specific icons, and live canvas
  diagramming via Excalidraw MCP.
author: thomast1906
license: MIT
target: vscode
dependencies:
  apm:
    - thomast1906/github-copilot-agent-skills/.github/skills/drawio-mcp-diagramming
    - thomast1906/github-copilot-agent-skills/.github/skills/azure-drawio-mcp-diagramming
    - thomast1906/github-copilot-agent-skills/.github/skills/excalidraw-mcp-diagramming
  mcp:
    - name: drawio
      registry: false
      transport: http
      url: "https://mcp.draw.io/mcp"
    - name: excalidraw
      registry: false
      transport: http
      url: "https://mcp.excalidraw.com"
From a platform point of view, that is where this becomes more interesting. You are no longer just packaging guidance. You are packaging a usable workflow: the skills, the runtime target, and the MCP dependencies needed to support them.
Once MCP is packaged as well, the model changes
This is the point where it stops feeling like simple content distribution.
The real shift is not that APM can install skills. It is that the package can now describe a runtime-aware workflow. The architect package is a good example of keeping scope deliberate. It is not pretending to be an everything-Azure bundle. It is a focused design and review package. The terraform package goes broader in a different direction by bundling workflow context with Terraform MCP wiring. The diagramming package does the same for Draw.io and Excalidraw.
That is a more serious model than “copy these files into .github/skills and see how you get on”.
It also changes the meaning of --runtime vscode. That flag is not just a convenience. It is what allows APM to materialise the workflow for the runtime you actually use rather than dropping files into a repo and leaving the rest as manual setup.
There is a subtle detail here that only really shows up once you start doing this properly: not every MCP-backed package is configured the same way. Some dependencies are wired through mcp.json. Others rely on tooling the runtime already understands. That stops being an edge case once you are trying to make the setup reproducible for other engineers. It becomes part of the design.
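For the mcp.json path specifically, VS Code reads MCP server definitions from .vscode/mcp.json. A hand-written equivalent of what the diagramming install configures might look roughly like this — whether APM writes this exact file, and with these exact field names, is an assumption on my part:

```json
{
  "servers": {
    "drawio": { "type": "http", "url": "https://mcp.draw.io/mcp" },
    "excalidraw": { "type": "http", "url": "https://mcp.excalidraw.com" }
  }
}
```

Packages that rely on tooling the runtime already understands skip this file entirely, which is exactly why the configuration path has to be part of the package design rather than left to the installer to discover.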
That is usually the point where lightweight experimentation either matures into a platform pattern or falls apart. Once runtime assumptions, package scope, and external dependencies all matter, you are no longer dealing with a neat editor customisation problem. You are dealing with workflow design.
The trade-offs are operational, not theoretical
A few practical details only become obvious once you start using this for real.
Every package still needs a .apm/ directory, even if it is acting as a thin manifest. Testing installs repeatedly in the same working directory can also make changes look correct when you are really just seeing cached or previously deployed state. Fresh test directories are still the least confusing way to validate changes properly.
And then there is the usual YAML tax. Frontmatter looks harmless until a description with a colon in it quietly breaks parsing.
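The colon failure mode is easy to reproduce: an unquoted scalar containing ": " is parsed as a second mapping key, and the load fails with something like "mapping values are not allowed here". Quoting fixes it:

```yaml
# Breaks: YAML treats the second colon as the start of a nested mapping
# description: Skill: reviews Azure architecture

# Works: quoting keeps the colon inside the string value
description: "Skill: reviews Azure architecture"
```

It is a small thing, but it is the kind of small thing that costs someone else an afternoon if it ships in a shared package.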
None of that is especially dramatic. It is just the difference between something that works once and something that is actually worth sharing with other teams.
That tends to be where people underestimate the effort. The initial setup is rarely the hard part. The harder part is getting the install path, dependency handling, and runtime behaviour into a shape that other engineers can rely on without needing the original author standing over their shoulder to explain what is supposed to happen.
Why this matters more for platform teams than for individual repos
For an individual engineer, the value is fairly obvious: less manual copying, faster setup, and a more consistent GitHub Copilot environment from repository to repository.
For platform teams, the more interesting benefit is control without centralised friction.
You can publish scenario packages that match the kind of work a repository is doing. You can keep the source of truth in one place. You can improve a shared workflow through an actual dependency model rather than through memory, documentation, and luck. That starts to look a lot like platform engineering, because it is.
You are not just packaging files. You are packaging reusable engineering context, runtime expectations, and, in some cases, the MCP dependencies needed to make that context actually usable.
That is a more important distinction than it sounds. Context is now part of how work gets done. If it is shared, versioned, and depended on across repositories, then it deserves the same design thinking as the rest of the platform. One of the easiest mistakes to make is to treat prompt files and agent instructions as if they sit outside normal engineering concerns. Once multiple teams rely on them, they do not. They become another part of the platform surface area, with all the same concerns around ownership, drift, consistency, and change control.
That is where APM feels useful to me. It gives those concerns a more suitable operating model.
What I like about it is not just that it makes sharing easier. It changes how you think about the problem in the first place. Instead of agents and skills being a pile of markdown that happens to live in one repository, they become something structured, installable, and composable. The root package can stay thin. Scenario packages can stay focused. Runtime assumptions can be made explicit. MCP configuration that used to live in side notes and setup docs can move into the package where it belongs.
That is a much better model once more than one team is involved.
Copying files works right up until it does not, and when it fails it fails in the usual ways: drift, inconsistency, unclear ownership, and too much local variation. Packaging agents, skills, and runtime setup properly is a better answer because it treats them as part of the engineering platform rather than as editor clutter that happens to be useful.
That is a more useful place for them to live.
Check out my current APM setup in my agent skills GitHub repo. I also recommend checking out the APM docs; they are super useful!