Hey yall! I invited one of the most active dbatools developers, Cláudio Silva, to do a guest post about his motivation for joining the team and why he built this particular command. So without further ado… here’s Cláudio :D
Hi everyone! My name is Cláudio Silva. I'm a Microsoft SQL Server DBA and have worked with SQL Server since version 2000. Before becoming a full-time DBA, I worked as a SQL developer for 8 years. I'm also a PowerShell enthusiast!
I was invited by none other than PowerShell MVP Chrissy LeMaire (thank you, Chrissy!) to share how I came up with the idea of developing this function in PowerShell and why I'm giving it away to the community.
I will start with the second part. When I first discovered the dbatools project, I was pleasantly surprised by how useful it is. Remember when SQL Server 2005 extended support ended last April? Well, like many other organizations, we will have to migrate, so I decided to analyse dbatools, test it, and get to know its limitations.
When I saw that it is an open source project, and that Chrissy was soon coming to Portugal to present about migrations at TugaIT, I thought I would share my opinions with her and become part of the team. There is so much fun work we can do to help the community that has been helping me over the last few years. This is my opportunity to give back some love! And here I am.
Now, on to what really matters: why did I create a PowerShell function (Expand-SqlTLogResponsibly) to help SQL Server database transaction logs grow responsibly?
A little background
Transaction logs are one of the most important parts of every database, and they have a performance side that should not be neglected.
The transaction log is composed of one or more VLFs (Virtual Log Files). Having a transaction log with too many VLFs can hurt your database performance in many ways:
- Transaction log backups may slow down
- Database recovery can take too much time (I have personally seen recovery take hours after a database shutdown just because the transaction log had more than 1,000 VLFs)
- It can also affect insert/update/delete performance.
Ok, but what are Virtual Log Files and how are they created?
When we tell SQL Server to grow the transaction log file (or auto-growth kicks in), SQL Server divides that amount of space into several "logical files" (VLFs) of the same size. The number of VLFs varies with the amount of space being added.
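As a rough sketch, assuming the pre-2014 algorithm described in Paul Randal's post, the number of VLFs created per growth event follows a simple rule:

```powershell
# Sketch of the pre-SQL Server 2014 VLF allocation rule:
# growths up to 64MB add 4 VLFs, up to 1GB add 8, and anything larger adds 16.
function Get-VlfCountForGrowth {
    param([int]$GrowthMB)

    if ($GrowthMB -le 64)       { 4 }
    elseif ($GrowthMB -le 1024) { 8 }
    else                        { 16 }
}

# A single 50GB (51200MB) growth would create 16 VLFs of roughly 3200MB each
Get-VlfCountForGrowth -GrowthMB 51200   # 16
```

So the bigger each individual growth, the bigger (and fewer) the VLFs you end up with.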
Read more about it in the following post from Paul Randal (b), and also note that this algorithm changed in SQL Server 2014 (Important change to VLF creation algorithm in SQL Server 2014).
How can I see how many VLFs I have right now?
You can use the command DBCC LOGINFO ('<database name>') to see how many VLFs your transaction log has. Each row returned is one VLF.
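For example (the database name here is just a placeholder):

```sql
-- Returns one row per VLF; the row count is your VLF total
DBCC LOGINFO ('MyDatabase');
```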
What do we have to do to fix this?
We have to perform 2 major steps:
- Shrink the transaction log size
- Grow the file manually to a specific size
- For the first step, we need to take the database RECOVERY MODEL into account: if the database is using the FULL recovery model, we will need to back up the transaction log, and only after that can we use the DBCC command "DBCC SHRINKFILE ('<logfile_name>', TRUNCATEONLY)". After that, check your transaction log size and/or number of VLFs to see whether you have to repeat this step (BACKUP/SHRINKFILE)
- For the second step, you need to use:
ALTER DATABASE <databasename> MODIFY FILE (NAME = '<transactionloglogicalfilename>', SIZE = <newtotalsize>)
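Put together, the two steps might look like this (the database, file, and path names are placeholders of my own):

```sql
-- Step 1 (FULL recovery model): back up the log, then shrink it.
-- Repeat this pair until the log is small enough / the VLF count drops.
BACKUP LOG [MyDatabase] TO DISK = N'C:\Backups\MyDatabase_log.trn';
DBCC SHRINKFILE (N'MyDatabase_log', TRUNCATEONLY);

-- Step 2: grow the log back to the desired size (8GB in this example)
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8GB);
```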
I see. But for the second step, if I want to grow the file by 50GB (because that was the size I saw before starting step 1), should I do it all at once? Or should I execute the command several times with smaller sizes until I get to 50GB?
This is a very good question. If you have too few (and therefore very large) VLFs in a 50GB log file, when does a VLF become available for reuse? And if we ever want to shrink the file again (in order to re-grow it), we can only do so by removing VLFs that are marked for reuse.
If we grow the file by 50GB in one go (prior to SQL Server 2014), we will get 16 VLFs of roughly 3200MB each, which means a VLF can't be reused until all of its 3200MB have been filled (this may not be bad if you never need to shrink again). That is why our function suggests growing the file in 8GB chunks (each chunk creates 16 VLFs of roughly 512MB) until the desired size is reached (for target sizes of 16GB or higher). The sequence would be 8 – 16 – 24 – 32 – 40 – 48 – 50: we have to execute the command seven times to reach the value we want.
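That growth schedule can be sketched in a few lines of PowerShell (chunk and target sizes in GB):

```powershell
# Build the 8GB growth schedule for a 50GB target (sizes in GB)
$targetGB = 50
$chunkGB  = 8

$schedule = @()
$current  = 0
while ($current + $chunkGB -lt $targetGB) {
    $current  += $chunkGB
    $schedule += $current
}
$schedule += $targetGB          # the final growth lands exactly on the target

$schedule -join ' - '           # 8 - 16 - 24 - 32 - 40 - 48 - 50
```

Seven entries, hence seven executions of the ALTER DATABASE command.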
Now that I am more familiar with the concepts, why did you convert these steps into a PowerShell function?
The quick answer: to automate all these steps! Call the function, then sit back and watch it do all the work for you!
Really, just type the instance, the database, and your desired target size. We will figure out the best chunk size for your needs. You don't need to choose the chunk size, or even write and call the ALTER DATABASE command 7 times (picking up the last example).
NOTE: If you want, you can specify the -IncrementSizeMB parameter to set the desired chunk size.
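A call might look something like this. -IncrementSizeMB is from the note above; the instance and database names, and the other parameter names, are my own illustration, so please check the Expand-SqlTLogResponsibly dbatools page for the exact syntax:

```powershell
# Hypothetical call: grow MyDatabase's log to 50GB (51200MB);
# parameter names are illustrative - see the dbatools page for the real ones.
Expand-SqlTLogResponsibly -SqlServer sql2016 -Databases MyDatabase -TargetLogSizeMB 51200

# Optionally force 4GB chunks instead of the suggested chunk size
Expand-SqlTLogResponsibly -SqlServer sql2016 -Databases MyDatabase -TargetLogSizeMB 51200 -IncrementSizeMB 4096
```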
Does this include the transaction log backups?
YES! Jason Squires has contributed code for the backup/shrinkfile part! This means that if you do your backups to the file system, you can provide the path and the function will do ALL that work for you!
With a single command, we can get up to 5 log backups plus X ALTER DATABASE commands executed without any further action needed.
Please visit the Expand-SqlTLogResponsibly dbatools page to view examples and give it a try!
If you have any doubts or want to share your thoughts, drop me a line in the comments or send me a DM on Twitter.
Thanks for reading!