Update Your Blog with GitHub Copilot CLI

Yesterday I wrote about updating blog content with Claude Code. But what if your company uses GitHub Copilot instead? If you've got GitHub, Copilot's probably already available to you. Here's how to do the same blog maintenance work with GitHub Copilot CLI.
The problem
Blog posts from 2016 need different fixes than posts from 2023. Links rot at different rates—Microsoft docs got reorganized, Twitter links died completely, some GitHub repos moved. Screenshots age badly, especially the old Windows PowerShell blue console captures. Commands change—parameters get deprecated, best practices evolve.
You can't fix this with find-and-replace. Some links should stay historical. Some outdated information is actually correct in context. Some version numbers document what was true then. Manual review of every post was never going to happen, but AI can handle long, detailed prompts and make judgment calls about what needs updating versus what needs preserving.
Free tier
GitHub has a permanent free tier—50 premium requests per month, 2000 code completions, 50 chat messages. Each file you process counts as one premium request, so that's 50 blog posts per month, every month, forever.
Need more than 50? You can sign up for the 30-day free trial of Copilot Pro. That's 300 premium requests—enough for large blogs. After the trial, Pro is $10/month if you need it. That's half the cost of Claude Pro ($20/month).
When I present about AI, most people tell me they've already got GitHub Copilot at work. That's great news, because it means they can use CLIs like this one, or AI-integrated VS Code extensions, without signing up for anything new.
Getting started
You need two things:
- GitHub Copilot CLI
- Your blog repo with markdown files
The easiest way to get set up and keep up to date is with aitools, which brings you right to a screen that handles authentication:
```powershell
# Install aitools
Install-Module aitools

# Install GitHub Copilot CLI
Install-AITool -Tool GitHubCopilot
```
Already have it installed? Keep it updated:
```powershell
Update-AITool -Tool GitHubCopilot
```
Don't want to use aitools? Install manually:
```powershell
# Install GitHub CLI from cli.github.com, then:
gh extension install github/gh-copilot
gh auth login
```
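Either way, you can sanity-check the setup with the GitHub CLI itself, no premium requests required:

```powershell
# Confirm you're authenticated and the Copilot extension is installed
gh auth status
gh extension list
```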
Model limitations
GitHub Copilot CLI currently supports these models: claude-sonnet-4.5, claude-sonnet-4, claude-haiku-4.5, and gpt-5. All CLI requests count as premium requests—there's no free tier option here. (GPT-4o is available in Copilot Chat, but not the CLI. Hopefully that changes.)
If you try `copilot --model gpt-4o` (a free model), you'll get an error about invalid model choices.
For blog maintenance, claude-sonnet-4.5 is what I'd pick. GPT-5 made mistakes that Sonnet caught. Haiku 4.5 is honestly good enough for this work, and because it bills at a 0.33x multiplier, the free tier's 50 premium requests stretch to roughly 150 files instead of 50.
But if you're on the 30-day Pro trial, go Sonnet all the way. You've got 300 premium requests to burn through, and for a project where you're reviewing every change anyway, might as well use the best.
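For a one-off run, picking the model looks something like this. This is a sketch: it reuses the --model flag from above and assumes the same -p prompt flag and @ file syntax that the batch example later in this post uses, with a hypothetical file path:

```powershell
# Free tier: Haiku's 0.33x multiplier stretches 50 requests to ~150 files
copilot --model claude-haiku-4.5 -p "Check the links in @content/post/example-post.md"

# Pro trial: 300 requests to spend, so use the strongest model
copilot --model claude-sonnet-4.5 -p "Check the links in @content/post/example-post.md"
```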
Now you need a prompt document with your maintenance rules.
Write your prompt
The instructions matter more than the model. I learned this processing all the dbatools posts—specificity wins.
Save this as `prompts/blog-maintenance.md`:

```markdown
# Blog Maintenance Instructions

## What to fix

1. **Broken links**: Test all links. Update Microsoft docs URLs to current paths. Remove dead Twitter/X links.

2. **Outdated screenshots**: If you see PowerShell console screenshots (old blue Windows PowerShell), extract commands and output as text.

3. **Code examples**: Check if commands still work. Update deprecated parameter names. Use splatting for 4+ parameters if it improves readability.

4. **Social media embeds**: Remove Twitter/X embeds. Replace with brief paraphrased statements where relevant.

## What to preserve

1. **Historical content**: Leave version numbers and download counts alone - they document what was true then.

2. **Author voice**: Preserve the original writing style.

3. **Working code**: Don't "modernize" code that still works.

## Judgment calls

When in doubt, preserve the original. Only change what's actually broken or misleading.
```
My actual dbatools prompt is 300+ lines covering edge cases. Start simple and refine as you go.
The nuance matters. I had to tell it: "Update 'coming soon' references if those features shipped, but leave version numbers alone—those are historical facts." Without that, it'd "fix" things that weren't wrong.
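In the prompt file, that rule looks something like this (wording approximate):

```markdown
## Version references

1. Update "coming soon" references if those features have since shipped.
2. Leave version numbers alone - those are historical facts, not errors.
```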
The actual implementation
Once you've got your prompt dialed in, you need to run it against your files. GitHub Copilot CLI can do this, but the @file syntax gets tedious when you're batch processing dozens of posts.
I use aitools because it handles the context management automatically. It starts fresh with every piped file, so quality stays super high. Previous files don't pollute the current file's context.
Plus, I got tired of looking up different flags for gh copilot, claude, gemini, etc. With aitools, I have consistent commands across all AI CLIs—GitHub Copilot, Claude Code, Gemini, Aider, whatever—plus proper error handling, predictable output, and PowerShell pipeline support.
```powershell
# Set GitHub Copilot as default
Set-AIToolDefault -Tool GitHubCopilot

# Process your posts
Get-ChildItem content/post/*.md |
    Invoke-AITool -Prompt .\prompts\blog-maintenance.md
```
Don't have aitools? You can call Copilot CLI directly, but you need to follow specific patterns for how it loads file context:
```powershell
$prompt = Get-Content .\prompts\blog-maintenance.md -Raw

Get-ChildItem content/post/*.md | ForEach-Object {
    Write-Host "Processing $($_.Name)..." -ForegroundColor Cyan

    # Copilot needs the file reference FIRST with @ syntax
    $message = "@$($_.FullName)`n`n$prompt"

    # Pass message via -p parameter (not piped via stdin)
    gh copilot suggest --editor -p $message
}
```
The @ syntax tells Copilot to load that file into context before reading your instructions. File reference first, then your prompt. That order matters.
The CLI shows you proposed changes before making them. Review each one.
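That review step pairs well with plain git, nothing Copilot-specific: run the batch on a branch so every change lands as a reviewable diff. A sketch (the file path at the end is hypothetical):

```powershell
# Work on a throwaway branch so AI edits are easy to inspect and revert
git checkout -b blog-maintenance

# ...run the pipeline above, then:
git diff                                # inspect the proposed changes file by file
git add -p                              # stage only the hunks you agree with
git checkout -- content/post/oops.md    # discard a file it got wrong (hypothetical path)
```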
What it found
I tested this on a batch of posts. Here's what it fixed:
Link rot everywhere. Microsoft reorganized their docs multiple times since 2016. Twitter links that now go nowhere. GitHub paths that changed. It tested every link and updated or removed them.
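If you want a cheap pre-scan before spending premium requests on a post, plain PowerShell can flag dead links first. A rough sketch; some servers reject HEAD requests, so treat warnings as leads rather than verdicts:

```powershell
# Collect unique URLs across all posts
$urls = Get-ChildItem content/post/*.md |
    Select-String -Pattern 'https?://[^\s\)"]+' -AllMatches |
    ForEach-Object { $_.Matches.Value } |
    Sort-Object -Unique

# Flag anything that no longer resolves
foreach ($url in $urls) {
    try {
        $null = Invoke-WebRequest -Uri $url -Method Head -TimeoutSec 10 -ErrorAction Stop
    } catch {
        Write-Warning "Possibly dead: $url"
    }
}
```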
Those embarrassing blue screenshots. You know the ones—classic Windows PowerShell console captures from 2016. It extracted the commands and output as text. Not as sophisticated as Claude's extraction, but it worked.
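To gauge how much of that conversion work is ahead, you can count the image embeds first. A sketch that assumes standard markdown image syntax:

```powershell
# Count image embeds per post to find the screenshot-heavy ones
Get-ChildItem content/post/*.md |
    Select-String -Pattern '!\[[^\]]*\]\([^)]+\.(png|jpg|jpeg|gif)' |
    Group-Object Path |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```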
Dead social embeds. Old tweet embeds got converted to brief paraphrased statements. Where it could find people's Bluesky profiles, it linked those instead.
Code that aged out. Parameter names that changed. Commands that needed splatting. It updated these, but only when appropriate—it didn't mechanically "modernize" everything.
Smart preservation. This is where it got interesting. It left version numbers alone in historical announcement posts. Those weren't wrong—they documented what was true at that moment. But for posts about ongoing practices, it checked the current repo and updated to match reality.
The judgment call problem
Even with pages of instructions, it made judgment calls I wouldn't have made. When I used Claude Code on dbatools.io, it found drastically different websites for people mentioned in old posts—Michael Lombardi's site changed completely, Warren Frame's too. It found the new sites and updated the links.
That's work I'd never do manually. Not because it's hard, but because it's tedious and requires checking every single person mentioned across all those posts.
Copilot didn't catch as many of these automatically—it needed more explicit instructions per file type. When I told it "check if author websites changed," it did. But Claude noticed patterns I hadn't thought to look for. Still, for 50 files on the free tier, giving more specific prompts is completely manageable.
What this actually means
This isn't just blog maintenance. It's a whole category of work that never gets done because it's tedious but requires judgment.
Documentation that drifts out of date. Example code in READMEs that might not work with current versions. Internal wikis with broken links and outdated processes. The maintenance work you know you should do but never have time for.
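The pipeline from earlier generalizes with nothing more than a different prompt file. Here `docs-maintenance.md` is a hypothetical prompt you'd write for that kind of content:

```powershell
# Same approach, different target: sweep every README in a repo
Get-ChildItem -Recurse -Filter README.md |
    Invoke-AITool -Prompt .\prompts\docs-maintenance.md
```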
How many projects have docs that are 80% correct but just old enough that you can't fully trust them? How many internal knowledge bases are slowly rotting because nobody has bandwidth to fix everything?
Now you can actually fix it. And you can do it within the free tier if your blog isn't massive.
Try it yourself
Pick three old posts. Write instructions for what links to fix, what code to update, and what historical content to keep.
Run Copilot on those three files. See what happens. Refine your instructions based on what it got wrong. Then scale up.
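With the aitools pipeline from earlier, that trial run is a one-liner. Sorting by LastWriteTime is a rough proxy for post age, so adjust if your file dates don't match publish dates:

```powershell
# Trial run: just the three oldest posts
Get-ChildItem content/post/*.md |
    Sort-Object LastWriteTime |
    Select-Object -First 3 |
    Invoke-AITool -Prompt .\prompts\blog-maintenance.md
```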
I maintained blogs manually for years and always fell behind. Now I pipe markdown through an AI CLI and review diffs. That's sustainable in a way manual maintenance never was.
Want to see how this worked with Claude Code? Check out my posts on rebuilding dbatools.io and the full blog maintenance workflow.