finetuna 2.0
finetuna, the PowerShell module for fine-tuning OpenAI models, has been updated to version 2.0. This release now depends on PSOpenAI, supports Azure OpenAI services, and provides a demo notebook along with some sample data.
If you're new to finetuna, it simplifies fine-tuning OpenAI models from PowerShell: it handles file uploads and job management, and lets you chat with your trained models. The demo notebook is a good place to start.
Get finetuna 2.0 from the PowerShell Gallery:
Install-Module -Name finetuna
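If you'd rather poke around in the console before opening the notebook, the basic flow looks something like this. Treat it as a sketch: the command names (Send-TuneFile, Start-TuneJob, Wait-TuneJob, Invoke-TuneChat) and their parameters are my assumptions about the module's surface, so run Get-Command -Module finetuna and lean on the demo notebook for the canonical sequence.

# Rough sketch only; command and parameter names below are assumptions,
# so check Get-Command -Module finetuna and Get-Help before running.

# Point the module at your OpenAI account
Set-TuneProvider -ApiKey $env:OPENAI_API_KEY

# Upload a JSONL training file and start a fine-tuning job
$file = Send-TuneFile -FilePath .\training-data.jsonl
$job  = Start-TuneJob -FileId $file.id

# Wait for training to finish, then chat with the resulting model
$job | Wait-TuneJob
Invoke-TuneChat -Message "Write a one-liner that finds empty folders"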
Intro to fine-tuning
If you're not into data science, here's the short version: fine-tuning customizes large language models like OpenAI's GPT for specific tasks. Large language models (LLMs) like GPT-4o are initially trained on incredible amounts of text. While this gives them broad knowledge, they're not experts in specific domains like PowerShell. They might confuse PowerShell syntax with other languages or miss PowerShell-specific nuances. OpenAI says fine-tuning can also provide:
- Higher quality results than prompting
- Ability to train on more examples than can fit in a prompt
- Token savings due to shorter prompts
- Lower latency requests
To use LLMs more effectively, we can include instructions and sometimes several examples in a prompt. Fine-tuning improves on this by training on many more examples than can fit in the prompt, letting you achieve better results across a wide range of tasks. Once a model has been fine-tuned, you won't need to provide as many examples in the prompt, which saves tokens and enables lower-latency requests.
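For context, OpenAI's fine-tuning API expects training data as a JSONL file where each line is a complete chat example with system, user, and assistant messages. Here's a minimal PowerShell sketch that writes two placeholder examples to a file; the file name and the example content are just illustrations.

# Each line is one JSON object in OpenAI's chat fine-tuning format:
# a messages array containing system, user, and assistant turns.
$examples = @(
    '{"messages": [{"role": "system", "content": "You are a PowerShell expert."}, {"role": "user", "content": "List files recursively."}, {"role": "assistant", "content": "Get-ChildItem -Path . -Recurse -File"}]}'
    '{"messages": [{"role": "system", "content": "You are a PowerShell expert."}, {"role": "user", "content": "Stop the print spooler."}, {"role": "assistant", "content": "Stop-Service -Name Spooler"}]}'
)

# Write one JSON object per line (JSONL) to the training file
$examples | Set-Content -Path .\training-data.jsonl -Encoding utf8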
NOTE
Fine-tuning can get pricey, especially on Azure ($1.70/hr plus training costs), so check the pricing details before you start.
If you're wondering how this differs from the Assistants API: assistants (using file uploads or vector stores) provide temporary context. They access external data during interactions, which offers flexibility for context-specific responses. This approach is useful for tasks like searching through documentation or handling frequently changing information.
Both approaches reduce the need for detailed prompting. Fine-tuning is suited for consistent PowerShell tasks, while file upload assistants excel at incorporating fresh, variable data. Choose based on your specific development needs and workflow.
To better understand the differences between these approaches, consider the following comparison table:
| Aspect | Fine-Tuning | Assistants with File Uploads / Vector Stores |
|---|---|---|
| Memory Type | Long-Term | Short-Term |
| Data Integration | Integrated into the model | Provided as external context during interactions |
| Training | Requires additional training phase | No additional training, just context provision |
| Use Case | Domain-specific, consistent output | Dynamic, context-aware, flexible output |
| Prompt Engineering | Reduced need | Still needs context but less detailed prompts |
| Example | Custom PowerShell code generator | Customer support accessing knowledge base articles |
This table sums up the trade-offs between fine-tuning and assistants with file uploads or vector stores. Use it to decide which approach fits your PowerShell development needs (it'll probably be a vector store, but this was a fun project nevertheless).
What else is new in finetuna 2.0
Demo notebook: A new Jupyter notebook (demo.ipynb) shows off finetuna's main features. Run it with:
Start-TuneDemo
Azure OpenAI support: Finetuna now works with Azure OpenAI services. Set it up like this:
$splat = @{
    Provider   = "Azure"
    ApiKey     = "your-azure-api-key"
    ApiBase    = "https://your-azure-endpoint.openai.azure.com/"
    Deployment = "your-deployment-name"
}
Set-TuneProvider @splat
New commands: Compare-Embedding, Get-Embedding, and Request-TuneFileReview. A quick sketch of the embedding pair follows below.
Better config management: Use Set-TuneProvider to save your OpenAI settings and Clear-TuneProvider to wipe them.
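If you want to get a feel for the new embedding commands, the idea is to turn text into vectors and then measure how similar those vectors are. The sketch below is speculative: the parameter names (-Text on Get-Embedding, -Embedding and -ReferenceEmbedding on Compare-Embedding) are assumptions on my part, so check Get-Help Get-Embedding and Get-Help Compare-Embedding for the actual syntax.

# Speculative sketch; parameter names here are assumptions, not documented syntax
# Turn two strings into embedding vectors
$first  = Get-Embedding -Text "Get-ChildItem lists files and folders"
$second = Get-Embedding -Text "ls shows the contents of a directory"

# Compare the vectors; higher similarity means the texts are semantically closer
Compare-Embedding -Embedding $first -ReferenceEmbedding $second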
You can find more info at the finetuna GitHub repo.