Writing your first SharePoint AI Skill should take you thirty minutes, not a week. The trick is choosing the right workflow to encode, using a prompt template that makes the agent do the hard work, and testing on three files before you share it with anyone.
This guide walks through all five steps. By the end, you will have one tested Skill saved in your site's Agent Assets library and a mental model for writing the second one faster.
If you have not read what SharePoint AI Skills are, start there. This piece assumes you know the definition and want to ship.
Before you start
You need three things.
- A SharePoint site where AI in SharePoint is available. The tenant or the site must be opted into public preview.
- Edit permission on the site. This is the bar to create a Skill.
- Three real files in a document library you can test against. One typical, one edge case, one broken input.
If your tenant has not opted in yet, the Get started with AI in SharePoint docs walk through the opt-in flow.
Step 1. Pick a Skill worth writing first
The best first Skill is a workflow your team already does by hand, on the same type of file, on the same site, at least once a week. That narrow shape forces the Skill to be specific, which is where Skills shine.
Three candidates to consider.
- Document review or validation. Mandatory fields, format checks, naming conventions. Anything where "is this file correct?" has a well-defined answer.
- Metadata and tagging. Apply controlled-vocabulary tags from body content. Enforce required columns.
- Library housekeeping. Flag redundant, outdated, or trivial files. Surface documents missing owners.
Avoid these for your first Skill.
- Anything that reaches outside the current site.
- Anything that needs to run on a schedule.
- Anything where the rules change week to week.
Those are not bad ideas. They just are not first-Skill ideas.
Step 2. Draft the prompt using the six-field template
This is the prompt I give the agent. Six fields. Every field you skip becomes a tuning round later.
Create a Skill named "<short, unique name>" for this site.
Purpose:
<One-sentence statement of what the Skill does and when.>
Trigger:
<The user prompt phrases that should run this Skill.
Give two or three variations.>
Inputs:
<What the user provides. Selected files? A library path?
A list name? A free-text parameter?>
Steps:
1. <First action. Include the rule, not just the action.>
2. <Second action.>
3. <Third action.>
...
Output:
<Exactly what the Skill returns. Chat message? File updated?
List item created? Summary format?>
Rules:
- <Any constraint. Do not rename files.
Never overwrite metadata. Flag and ask on ambiguity.>
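Here is the template filled in for a hypothetical contract-review Skill. Every name, library, rule, and naming convention below is illustrative, not taken from a real tenant.

```text
Create a Skill named "Contract Field Check" for this site.

Purpose:
Validate that contracts uploaded to the Contracts library have
every mandatory field filled in before they go to legal review.

Trigger:
"run contract field check", "check this contract",
"validate contract fields"

Inputs:
One or more selected files from the Contracts library.

Steps:
1. Confirm the file is a contract (title page contains
   "Agreement" or "Contract"); if not, stop and say so.
2. Check that Effective Date, Counterparty, and Owner appear
   in the document body.
3. Compare the file name against the team's convention for
   contract files.

Output:
A chat summary per file: pass or fail, plus a list of missing
fields and naming issues.

Rules:
- Do not rename files.
- Never overwrite metadata.
- Flag and ask on ambiguity.
```

Notice that the Rules block repeats constraints the Steps already imply. That redundancy is deliberate: it is cheap insurance against the agent taking a shortcut.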
Open the AI in SharePoint chat panel on your site. Paste the filled-in template. Ask the agent to create the Skill.
Rules is the single highest-value field. Most teams skip it and pay for it later. Before you save, ask yourself "what would a careless person get wrong here?" and put the answer in Rules.
Step 3. Review the Markdown the agent produces
The agent drafts the Skill as a Markdown file and shows it in chat. Do not save it yet.
Read the Markdown with one question in mind: "would this tell a team member who has never seen my original prompt what the Skill does and what good output looks like?" If the answer is yes, save. If the answer is no, ask the agent to adjust specific sections.
Common edits to make on the first review.
- Tighten the Trigger phrases so they are distinct from any other Skill on the site.
- Add default values for every input that can have a default. "If no template is provided, use the latest version in the Templates library."
- Add a sanity-check step at the end. "Confirm in chat how many files were processed and how many were flagged."
When the Markdown reads right, confirm save.
Step 4. Test on three files
The Skill saves to /Agent Assets/Skills/<skill-name>/SKILL.md in your site. You can review the file directly if you want to see what the agent wrote.
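The exact layout of the saved file is up to the agent, but a SKILL.md drafted from the six-field template tends to mirror those fields. A hypothetical sketch, with made-up names throughout:

```markdown
# Contract Field Check

## Purpose
Validate mandatory fields in contracts before legal review.

## Trigger
- "run contract field check"
- "check this contract"

## Steps
1. Confirm the file is a contract; stop and say so if not.
2. Check for Effective Date, Counterparty, and Owner.
3. Verify the file name against the naming convention.

## Output
Per-file pass/fail summary in chat.

## Rules
- Do not rename files. Never overwrite metadata.
- Flag and ask on ambiguity.
```

If the saved file reads noticeably looser than this, that is usually a sign a field in your original prompt was underspecified.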
Run the Skill on three files, in this order.
- A typical file. The case the Skill was written for. Output should be clean.
- An edge case. A file that almost meets the rules but not quite. Output should flag it clearly.
- A broken input. A file missing required metadata, corrupted, or outside the Skill's scope. Output should refuse politely or flag the mismatch.
If any of the three produce confusing output, go back to the Markdown and tighten the relevant section. This usually means adding a Rule or refining a Step, not starting over.
Step 5. Ship it to the team
Any user with View permission on the site can run the Skill once it is saved. There is no separate approval step during preview.
Before you tell the team about the Skill, do three things.
- Rename the Skill folder if the agent chose something awkward. The folder name matters for Skill matching.
- Add the Skill's trigger phrases to your team's runbook or site documentation so people know how to call it.
- Watch the first few real runs. Check the Skill indicator card in the chat UI to confirm your Skill loads, not a different one.
The indicator card is the most useful debugging UI during preview. If it shows a different Skill, your trigger phrases are too close to another Skill on the site.
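One low-tech way to catch overlapping triggers before they bite: copy the Trigger phrases from each Skill into a script and flag pairs that are nearly identical. A sketch using Python's standard-library difflib; the phrase lists and the 0.75 threshold are made-up examples, not values from any real site:

```python
from difflib import SequenceMatcher

# Hypothetical trigger phrases copied from two Skills' Trigger blocks.
contract_review = [
    "run contract review",
    "review this contract",
    "check contract compliance",
]
invoice_review = [
    "review this invoice",
    "check invoice fields",
]

def overlapping_pairs(a, b, threshold=0.75):
    """Return (phrase_a, phrase_b, ratio) for pairs at or above the threshold."""
    pairs = []
    for p in a:
        for q in b:
            ratio = SequenceMatcher(None, p, q).ratio()
            if ratio >= threshold:
                pairs.append((p, q, round(ratio, 2)))
    return pairs

for p, q, r in overlapping_pairs(contract_review, invoice_review):
    print(f"Too close ({r}): {p!r} vs {q!r}")
```

String similarity is a crude proxy for whatever matching the agent does internally, so treat a flagged pair as a prompt to rewrite, not a guarantee of a collision.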
What to do if the Skill does not load
Two things are usually wrong when a Skill does not load on a user's prompt.
Either the trigger phrases in the Skill are too generic, and another Skill on the site is matching ahead of yours. Tighten both sets of triggers so they do not overlap.
Or the user's prompt is too different from any of the trigger phrases you captured. Add the new phrasing to the Skill's Trigger block and save.
The fast workaround while you tune: ask the user to name the Skill explicitly in their prompt. "Run contract review on this file." That forces the agent to load the Skill by name and bypasses the trigger-matching step.
Where this fits in the wider Skill practice
Your first Skill is proof that the pattern works in your tenant. The second and third are where teams start to see what Skills actually change in their day-to-day work.
Once you have three or four Skills on a site, governance starts to matter. Naming conventions, trigger distinctness, and permissions on the Agent Assets library all become questions worth a small amount of intentional thought. The pillar on extending SharePoint AI with Skills covers that next layer.
Pick the workflow. Fill in the six fields. Test on three files. Ship it. Thirty minutes the first time, ten minutes by the fifth.