PowerShell Output Redirection with AI CLIs

TL;DR: On Windows, many AI CLI tools (Aider, Claude Code, Gemini, etc.) work unreliably with PowerShell's standard output capture. The solution is to write their output to temp files instead of capturing it to variables, or to add a -Raw flag for when you need the tool to actually commit changes instead of just previewing them or hanging forever.
The Unix tooling problem on Windows
Most AI tools seem to be written with Unix-native runtimes like Python and Node.js. If you're building PowerShell modules that interact with AI CLIs like Claude Code or GitHub Copilot CLI, you're dealing with Node.js wrappers or Python processes, all running on Windows where nobody really designed them to work. Sometimes it works, and then it doesn't.
I ran into this with Jupyter notebooks while working on my demo for PSConf.eu. Commands that worked fine from my terminal would break when called from my demo notebook. Then I wanted to make my output pretty with a custom object, so I attempted to capture the CLI's output to a variable. Pretty much every tool either hung completely or exited after a third of the time it needed to finish. Then the AI model would get stuck, repeatedly asking "what would you like me to do with this file?" instead of just editing the file. The models somehow detected this state (no idea why) and fell back to asking permission instead of actually writing changes. Variable assignment became unreliable, too, with $result = gemini "fix this code" working maybe every fifth time.
What went wrong
If you look at Invoke-AITool, you'll see all the workarounds I had to implement. Three different code paths for handling output. Explicit UTF-8 encoding set in multiple places. Tool-specific handling hardcoded in. Regex filters for warnings that shouldn't be there. This wasn't how I hoped the command's code would look, but this is what actually worked.
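Roughly, those workarounds look like this. This is a simplified sketch; the warning pattern and the sample output below are illustrative, not the actual code from Invoke-AITool:

```powershell
# Explicit UTF-8 before the subprocess starts writing
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
$env:PYTHONIOENCODING = 'utf-8'   # Python-based tools

# Pretend this came back from one of the CLIs (illustrative sample output)
$toolOutput = @"
(node:4321) DeprecationWarning: something internal to the wrapper
Edited file: script.ps1
"@

# Regex filter for warning noise that shouldn't be in the result
$cleanOutput = ($toolOutput -split "`r?`n") | Where-Object {
    $_ -notmatch '^\s*\(node:\d+\)'
}
$cleanOutput
```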
Claude says the problem is PowerShell's object pipeline colliding with Unix-style byte streams, which will never get fixed in PowerShell 5.1. When PowerShell captures output to a variable, it's doing buffering and encoding transformations that the subprocess doesn't know about. The Node wrapper doesn't know when PowerShell is "done" reading. Encoding shifts between UTF-8 and UTF-16. Exit codes don't propagate correctly. stderr and stdout mix unpredictably.
Some AI tools detect output redirection and switch to some kind of preview mode. They'll show you what they would change without actually modifying files.
File-based output capture
For probably the first time in decades of coding in PowerShell, I HAD to write the output to temp files in order to assign it to a variable (instead of sending it straight to the console):
```powershell
$tempOutputFile = [System.IO.Path]::GetTempFileName()
& $toolDef.Command @arguments *>&1 | Out-File -FilePath $tempOutputFile -Encoding utf8
$capturedOutput = Get-Content -Path $tempOutputFile -Raw -Encoding utf8
Remove-Item -Path $tempOutputFile -Force
```
Writing to disk forces synchronous I/O. The tool writes, the file closes, PowerShell reads. No buffering issues, no encoding problems, no weird timing issues. This has since worked in Jupyter notebooks and the console with no issues.
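If you reuse that pattern, wrapping it in a helper with try/finally keeps the temp file from leaking when the tool blows up. A minimal sketch of my own (the function name and output shape are mine, not what aitools ships):

```powershell
function Invoke-CliWithCapture {
    param(
        [Parameter(Mandatory)][string]$Command,
        [string[]]$ArgumentList = @()
    )

    $tempOutputFile = [System.IO.Path]::GetTempFileName()
    try {
        # All streams go to the temp file so the subprocess can finish cleanly
        & $Command @ArgumentList *>&1 | Out-File -FilePath $tempOutputFile -Encoding utf8
        [PSCustomObject]@{
            Output   = Get-Content -Path $tempOutputFile -Raw -Encoding utf8
            ExitCode = $LASTEXITCODE
        }
    } finally {
        Remove-Item -Path $tempOutputFile -Force -ErrorAction SilentlyContinue
    }
}

# Invoke-CliWithCapture -Command gemini -ArgumentList 'fix this code'
```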
For encoding, I had to be explicit everywhere. I set [Console]::OutputEncoding to UTF-8 before invoking the tool, used $env:PYTHONIOENCODING = 'utf-8' for Python tools, and specified -Encoding utf8 on all file operations.
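Collected in one place, those settings look roughly like this:

```powershell
# Explicit UTF-8 everywhere, set before invoking the tool
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8   # what PowerShell reads back from the tool
$env:PYTHONIOENCODING     = 'utf-8'                        # what Python-based CLIs write out

# ...and -Encoding utf8 on every Out-File and Get-Content call, as in the block above
```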
Using -Raw
A shortcut for all of this is to add a -Raw flag and have your command send everything straight to the terminal without processing. It's actually how I migrated 80% of the Pester tests from v4 to v5 in dbatools using AI. I wrote a quick command to get the job done and only started running into issues when I wanted to make the output pretty and generalized for my new module, aitools.
| Without -Raw | With -Raw |
|---|---|
| Output captured: $result = (& cmd 2>&1) \| Out-String | Direct execution: & cmd |
| Tool detects output redirection | Tool thinks it's in a real terminal |
| May skip file writes or hang | Works and commits file changes normally |
| Buffered output | Immediate output |
| Returns structured PSCustomObject | Returns CLI output as-is |
The -Raw flag bypasses all the capture procedures and lets the tool run normally. Output goes straight to your console and file changes actually get written. Use this for Jupyter notebooks or interactive scenarios where the structured output breaks the command.
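Inside a wrapper command, -Raw can be as simple as a switch that branches before any capture happens. A sketch under my own parameter names (reusing the Invoke-CliWithCapture helper sketched earlier), not aitools' actual implementation:

```powershell
function Invoke-AICli {
    param(
        [Parameter(Mandatory)][string]$Command,
        [string[]]$ArgumentList = @(),
        [switch]$Raw
    )

    if ($Raw) {
        # No redirection at all: the tool sees a real console and commits its edits
        & $Command @ArgumentList
        return
    }

    # Otherwise fall back to the temp-file capture shown earlier
    Invoke-CliWithCapture -Command $Command -ArgumentList $ArgumentList
}
```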
Why this matters for module authors
If you're writing PowerShell modules that wrap AI tools, these issues will bite you eventually. The pipeline wasn't built for byte streams from Unix tools, and there's no clean fix. Seems like we all just have to work around it, especially because Windows PowerShell will never be updated again (sad face).
File-based output isn't elegant but at least it works reliably.
You can check out how I handled it in aitools if you're dealing with similar problems. The code isn't as clean as I'd like, but it gets the job done on Windows without asking users to switch to WSL.