Repost from dbatools.io: Recently, I presented to the Philadelphia PowerShell User Group about Contributing to Open Source PowerShell projects using GitHub Desktop. I chose this topic because a number of people have asked for step-by-step instructions on how to work with GitHub.
I love the command line, but with Git, I think it’s important to get the concepts down first, and the Desktop version makes it a lot easier to contribute to projects like dbatools, dbareports and even PowerShell itself.
In the presentation, I pretty much follow along with the tutorial on our wiki called Your First Pull Request. But! I also included some information about working with PowerShell, PowerShell modules and PowerShell-centric GitHub repositories that was not in the tutorial, so there’s still some good information to gain even if you already know how to make a Pull Request.
So if you’ve wanted to get involved in the project but were a bit hesitant because you aren’t comfortable with git, check out this informative and fun video and Get-Contributing ;)
After finding that it suited my requirements, I finally decided on a solid VPN – F-Secure Freedome – as recommended by a friend in security. Then I needed to share my connection with my Roku, but Freedome is an application that has to be run on a phone or computer and routers aren’t supported.
The commands I used there were deprecated, so I moved on and found this awesome comment on StackExchange to get me started. This comment provides a majority of the code below. But it was only mostly working. This post, Share your VPN with Mac OS X El Capitan, felt like it got me closer.
And finally, this post suggested that I set up two rules instead of one and voila! Success.
Set up the infrastructure
Set this part up however you need; I did this wired-to-wired and it worked, too.
So I connected my Mac to my primary wireless network haxx, then plugged the Mac’s Ethernet port into the upstream port of my second wireless router, otherhaxx. I set up otherhaxx the way I would any other wireless router and connected my Roku wirelessly to otherhaxx.
Note that some of these IPs won’t appear until later. Namely, Internet Sharing creates the 192 subnet once it starts, and the Roku won’t get its IP until everything is set up. Also, 18.104.22.168 was my first broadband IP ever back in 1997 :D
Now back to the Mac. I did some ifconfigs and found that my wireless connection was en1 and Freedome created an interface named utun0. Mac Internet Sharing created a network with the subnet 192.168.2.x.
I then set Freedome to connect on startup.
Ensure you add a single trailing empty line
My code formatter won’t let me add them, but in the original post, the author said a single trailing empty line in each file is required.
This is where I spent most of my time testing and retesting different configs. This right here is the key to sharing your VPN connection with Internet Sharing. I tried doing it without explicitly listing the subnet but was unable to get it to work.
nat on en1 from 192.168.0.0/16 to any -> (en1)
nat on utun0 from 192.168.0.0/16 to any -> (utun0)
This file is cool because it’s universal; you really only need to edit nat-rules when you need to make modifications. Oh, also, one of my Macs refused to acknowledge that net.inet.ip.fw.enable existed and it still works anyway. I don’t think that particular entry matters.
# flushes all pfctl rules
pfctl -F all
# starts pfctl and loads the rules from the nat-rules file
pfctl -f /private/etc/nat-rules -e
I ended up editing /private/etc/nat-rules and executing /usr/local/nat-pf.sh more times than I can count. Mess with that until it works. Ensure you’re using the appropriate device names by examining the output of ifconfig.
Hey yall! I invited one of the most active dbatools developers, Cláudio Silva, to do a guest post about his motivation for joining the team and why he built this particular command. So without further ado… here’s Cláudio :D
Hi everyone, my name is Cláudio Silva. I’m a Microsoft SQL Server DBA and have worked with SQL Server since version 2000. Before becoming a full-time DBA, I worked as a SQL developer for 8 years. I’m also a PowerShell enthusiast!
I was invited by none other than PowerShell MVP Chrissy LeMaire (thank you, Chrissy!) to share why I came up with the idea of developing this function in PowerShell and why I’m giving it away to the community.
I will start with the second part. When I first discovered the dbatools project, I was pleasantly surprised to find such a useful project. Remember when SQL Server 2005 mainstream support ended last April? Well, like many other organizations, we will have to migrate, so I decided to analyze and test dbatools and get to know its limitations.
When I saw that it is an open source project, and that Chrissy was soon coming to Portugal to present about migrations at TugaIT, I knew I wanted to share my opinions with her and be part of the team. There is so much fun work we can do to help the community that has been helping me over the last few years. This is my opportunity to give back some love! And here I am.
Now, on to what really matters: why did I create a PowerShell function (Expand-SqlTLogResponsibly) to help a SQL Server database transaction log grow responsibly?
A little background
The transaction log is one of the most important parts of every database, and it has a performance side that should not be neglected.
The transaction log is composed of one or more Virtual Log Files (VLFs). Having a transaction log with too many VLFs can hurt your database performance in many ways:
Transaction log backups may slow down
Database recovery can take too much time (I have personally seen recovery take hours after a database shutdown, just because the transaction log had more than 1000 VLFs)
Insert/update/delete performance can also be affected.
Ok, but what is a Virtual Log File and how is it created?
When we tell SQL Server to grow the transaction log file (or auto-growth kicks in), SQL Server divides that amount of new space into several “logical files” (VLFs) of equal size. The number of VLFs created varies with the quantity of space being added.
Read more about it in the following link from Paul Randal (b), and note that this algorithm changed in SQL Server 2014 (Important change to VLF creation algorithm in SQL Server 2014).
How can I see how many VLFs I have right now?
You can use the command DBCC LOGINFO (‘<database name>’) to see how many VLFs your transaction log has. Each row returned is one VLF.
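If you’d rather check from PowerShell, a quick sketch is to run DBCC LOGINFO through Invoke-Sqlcmd and count the rows it returns. The instance and database names below are placeholders, and this assumes the SqlServer module is installed:

```powershell
# Count VLFs by running DBCC LOGINFO and counting the rows returned.
# "sql2012" and "mydb" are placeholder names.
Import-Module SqlServer
$vlfs = @(Invoke-Sqlcmd -ServerInstance "sql2012" -Database "mydb" -Query "DBCC LOGINFO")
"VLF count: $($vlfs.Count)"
```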
What do we have to do to fix this?
We have to do 2 major steps:
Shrink the transaction log size
Grow the file manually to a specific size
We need to take the database RECOVERY MODEL into account, because if the database is using the FULL recovery model, we will need to back up the transaction log, and only after that can we use the DBCC command “DBCC SHRINKFILE (‘<logfile_name>’, TRUNCATEONLY)”. Afterwards, check your transaction log size and/or number of VLFs to know whether you have to repeat this step (BACKUP/SHRINKFILE).
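To make that loop concrete, here is a rough, hypothetical PowerShell sketch of the backup-then-shrink cycle. Every name in it (instance, database, log file, backup path, VLF threshold) is a placeholder you would adapt to your environment:

```powershell
# Hypothetical sketch: repeat BACKUP LOG + DBCC SHRINKFILE until the VLF count is low enough.
# All names and paths below are placeholders.
$instance = "sql2012"
$database = "mydb"
$logName  = "mydb_log"
do {
    # FULL recovery model: the log must be backed up before it can be shrunk
    Invoke-Sqlcmd -ServerInstance $instance -Query "BACKUP LOG [$database] TO DISK = N'D:\backup\$database.trn'"
    Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query "DBCC SHRINKFILE ('$logName', TRUNCATEONLY)"
    # re-check how many VLFs remain
    $vlfCount = @(Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query "DBCC LOGINFO").Count
} while ($vlfCount -gt 50)   # pick a threshold that suits you
```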
I see, but for the second step, if I want to grow the file by 50GB (because that was the size I saw before starting step 1), should I do it all at once? Or should I execute the growth several times in smaller increments until I reach 50GB?
This is a very good question. If you have just a few huge VLFs covering a 50GB log file, when does a VLF become available to be reused? Remember that if we ever want to shrink the file (in order to regrow it), we can only do so by removing VLFs that are marked for reuse.
If we grow the file by 50GB at once (prior to SQL Server 2014), we will get 16 VLFs of roughly 3200MB each, which means we can’t reuse a VLF until all 3200MB of it have been used (this may not be bad if you never need to shrink again). That is why our function suggests growing the file in 8GB chunks (each chunk creates 16 VLFs of roughly 512MB) until the desired size is reached, for target sizes of 16GB or higher. The growth steps would be 8 - 16 - 24 - 32 - 40 - 48 - 50, so we have to execute the command seven times to reach the value we want.
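The arithmetic above is easy to sketch. Assuming the pre-2014 behavior (16 VLFs per growth of this size) and 8GB chunks, the growth targets for a 50GB log work out like this:

```powershell
# Compute the 8GB growth steps toward a 50GB target log size.
$targetGB = 50
$chunkGB  = 8
$steps = @()
$size = 0
while ($size + $chunkGB -lt $targetGB) {
    $size += $chunkGB
    $steps += $size   # each 8GB growth creates 16 VLFs of ~512MB (pre-2014 algorithm)
}
$steps += $targetGB   # final growth up to the exact target
$steps -join ' - '    # 8 - 16 - 24 - 32 - 40 - 48 - 50 (seven ALTER DATABASE executions)
```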
Now that we are more familiar with the concepts: why did I turn these steps into a PowerShell function?
The quick answer: to automate all these steps! Call the function, then sit back and watch it do all the work for you!
Really, just type the instance, database and your desired target size. The function will figure out the best chunk size for your needs. You won’t need to choose the chunk size, or even write and call the ALTER DATABASE command 7 times (picking up the last example).
NOTE: If you want, you can specify the -IncrementSizeMB parameter to set the desired chunk size.
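A call might look like the line below. Apart from -IncrementSizeMB, the parameter names here are my guesses, so check Get-Help Expand-SqlTLogResponsibly for the real ones:

```powershell
# Hypothetical usage sketch: parameter names other than -IncrementSizeMB are assumptions.
# Grow mydb's log on sql2012 to 50GB (51200MB), letting the function pick the chunk size.
Expand-SqlTLogResponsibly -SqlServer sql2012 -Database mydb -TargetLogSizeMB 51200
```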
Does this include the backups of the transaction log?
YES! Jason Squires has contributed code for the backup/shrinkfile part! This means that if you do your backups to the file system, you can specify the path and the function will do ALL that work for you!
With a single command, we can get up to 5 log backups plus X ALTER DATABASE commands executed without any further action needed.
Have you seen the SQL Server Tools team’s post about the PowerShell module for SQL Server? There are TWENTY FIVE new cmdlets! TWENTY FIVE! That’s a 54% increase over the previous 46 cmdlets. It looks like the SQL Server community is well on its way to becoming proper citizens in the PowerShell world, and we’ll reach Exchange’s 700+ in no time ;)
It’s now so much easier to work with SQL Server Agent jobs. But before we dig in, here are the new cmdlets dedicated to SQL Server Agent.
Get-SqlAgent: Returns a SQL Agent (JobServer) object that is present in the target instance of SQL Server.
Get-SqlAgentJob: Returns a SQL Agent Job object for each job that is present in the target instance of SQL Agent.
Get-SqlAgentJobHistory: Returns the JobHistory present in the target instance of SQL Agent.
Get-SqlAgentJobSchedule: Returns a JobSchedule object for each schedule that is present in the target instance of SQL Agent Job.
Get-SqlAgentJobStep: Returns a SQL JobStep object for each step that is present in the target instance of SQL Agent Job.
Get-SqlAgentSchedule: Returns a SQL JobSchedule object for each schedule that is present in the target instance of SQL Agent.
One of the best things about these cmdlets is that they make failed jobs a whole lot easier to find. Note that SQL Server Management Studio 2016 does not have to be installed on the server, only on your workstation. That’s because the cmdlets are built on top of SMO, which is available in all versions of SQL Server since 2000.
As an aside, you may be wondering what a cmdlet is, and what’s the difference between a cmdlet and what you and I write. A cmdlet is a formally written PowerShell command that’s written in C#. I’ve never written one in my life. Instead, I write Functions. They’re both essentially the same, but a cmdlet is fancier on the back end.
Let’s see some code!
Let’s start by specifying our SQL Server name dynamically so that you can easily copy and paste from this blog. In my case, $sqlserver = “sql2012”
Now you can pipe the Agent server to the next command, but you can also explicitly specify the SQL Server in the cmdlet itself. You can also specify the -Since parameter to get events that have occurred since a specific date. Valid values include Midnight, Yesterday, LastWeek and LastMonth. But you don’t have to remember this; you can just tab through the values after typing -Since.
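Putting that together, a sketch of finding failed jobs might look like this. The pipeline shape follows what’s described above; the RunStatus filter is my assumption (0 corresponds to Failed among SQL Agent’s completion codes):

```powershell
# Sketch: list job history entries since midnight, keeping only failures.
$sqlserver = "sql2012"
Get-SqlAgent -ServerInstance $sqlserver |
    Get-SqlAgentJobHistory -Since Midnight |
    Where-Object { $_.RunStatus -eq 0 }   # assumption: 0 = Failed
```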
I am still so floored by the additions and enhancements made to the module formerly known as SQLPS, now known as SqlServer. Not only do the new additions bring the total number of cmdlets from 46 to 71, but they align with what we voted for on the Trello board. That’s a 54% increase in the number of cmdlets! We’ll be up to 700 like Lync and them in no time ;)
The Tools Team also added Always Encrypted which received no votes on the Trello board, but we knew this was coming because Ken Van Hyning said “Currently we prioritize via a couple of signals. Primarily, there is our Connect and CSS channels where we get input from customers. We also have some business directives around new advancements in SQL Server, like Always Encrypted. This isn’t really the way we need to run this though.”
What’s particularly amazing is that our input actively guided the module as it was being developed! It’s really incredible to have such a direct impact on a tool that will help us with our jobs every day.
What else is new?
In addition to the new Agent, Error Log and Always Encrypted cmdlets, Invoke-SqlCmd now supports outputting to .NET DataTables/Sets/Rows. The SQL Server team also added a ConnectionString parameter for those of you who would like the full flexibility of a ConnectionString.
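As a hedged sketch of both new Invoke-Sqlcmd features together, assuming an instance reachable by connection string (the server name is a placeholder):

```powershell
# Sketch: use -ConnectionString plus -OutputAs to get results back as a .NET DataTable.
$connString = "Server=sql2012;Database=master;Integrated Security=True"
$tables = Invoke-Sqlcmd -ConnectionString $connString -Query "SELECT name FROM sys.databases" -OutputAs DataTables
$tables[0].Rows.Count   # number of databases returned
```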
The SQL PowerShell provider also now properly supports the WhatIf and Confirm parameters.
What about scripts that Import-Module SQLPS?
The SQL Server Tools team really put a lot of thought into this, and SQLPS will remain for now; it just won’t get updated. In talking to the SQL Server PowerShell engineer, Matteo Taveggia, about it, he said
If you are on a clean machine (or even if you had a machine with the June Refresh SSMS) and you install the upcoming version of SSMS, you will indeed get both the new SqlServer module (new location) and the old SQLPS (same old location).
I did this intentionally to smooth the transition for existing users and avoid breaking them unexpectedly. Basically, their scripts (which have Import-Module SqlPs) will continue to work – they just are not going to get the new stuff / updates. Then, when they learn about the new cmdlets and fixes, they can start migrating at their own pace.
What about the community module SQLPSX?
SQLPSX is a community module that was built to fill the gap of commands that aren’t available in SQLPS, now SqlServer. SQLPSX had a module inside called SqlServer, which presents a naming conflict. This conflict can be resolved, however, by using the -Prefix parameter.
Then the commands available within the SQLPSX module will be prefixed with an x, for instance Get-xSqlTable.
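In other words, something like this (hedged; double-check the exact module and function names against your installed version of SQLPSX):

```powershell
# Load the community module with a prefix so its commands don't clash
# with Microsoft's SqlServer module.
Import-Module SQLPSX -Prefix x
# SQLPSX's Get-SqlTable is now invoked as Get-xSqlTable, avoiding the name conflict.
```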
Join us on Trello and vote vote vote vote
If you’ve already joined the SQL PowerShell Community Board on Trello, get over there and vote for the next set of cmdlets. If you haven’t joined, head on over and vote. You can vote without being a member of the board, but if you’d like to join, comment here or on our comment card. While you’re at it, join the #powershell channel on the SQL Server community’s Slack.
It appears that Microsoft also really benefits from code attachments. If you have any code to contribute that would help with development, please do attach it. They make cmdlets (formalized PowerShell commands), but .ps1 files totally work.
Any guesses as to what cmdlets are next? Hint: it’s up to us to decide with our votes!
My first homelab ever started as a single Packard Bell 486 which ran SuSE Linux 5.2. I used it with my @Home connection to host web, DNS and mail services. Over time, I moved leftover computers to a data center and covered most of my colo fees with the proceeds from Google AdSense, which I added to my website RealCajunRecipes.com back in 2004.
I had a whole mixed domain (Windows and Linux) in my quarter rack located in downtown Los Angeles. I’d moved a few times and always brought my homelab (which now consisted of HP DL360s) with me which meant tons of downtime.
After moving my servers from LA to San Jose to Austin, I opted for hosting my websites on a VPS, and kept my homelab… at home. And with that, gone were the days of impressing ladies by showing them the Internet.
Nowadays, I have super lightweight, quiet, and fast Shuttles. Here’s what my lab looks like from SUPA FAST to decently fast.
That’s virtualized, too! These tests were performed on my SH170R6 box with ESX and a Win10 VM with regular ol attached disks. I’m so impressed by not only M.2 but ESX as well.
For any of you following some of the work I do with SQL Server and PowerShell import performance, I was disappointed to see that the M.2 only increased the imports by about 20,000 rows per second. That suggests that a majority of the tuning I’ve got left to do is in Windows, SQL Server and on the network.
I love client-side networking like VPNs and stuff, but I’m not a fan of networking in general, so I keep it simple. I use Apple AirPort Extremes: one in the living room that connects to the Internet and one in the storage closet that connects to the one in the living room wirelessly. It’s *so* fast, even through 1 Belgian concrete wall and 2 large rooms. Speedtest.net just said that I got 106.52 Mbps down, 10.49 Mbps up. Through a wall and over 2 rooms of wireless!
The communication between all of my VMs stays local, though, which is cool. I also have an older Cisco router, but networking ain’t my bag, so it’s waiting for the day that I’m interested. When I need anything more than the basics, I use a WRT54GL with dd-wrt.
Why not just use Azure?
I like to test my scripts against a variety of versions/editions and I don’t like spinning VMs up and down all the time. As for the cost; some people spend their money on golf, Polish pottery and gaming rigs. I spend mine on servers, Belgian beer and travel ;)
As you can see, I also have an old MacBook Pro with a 256GB SSD, 4TB HDD and 8GB RAM in the mix. It’s for photos and videos, however. And someone gave me an old silver Shuttle from like 2002, but I haven’t had the time to set it up yet.
So what runs on this?
I started using VMware products around 1999 and continue to use it to this day for virtualization. Currently, I’m running vSphere 6. I get a super cheap license through my school, Regis University, though you can also get it for $200/year if you become a member of VMUG.
Thanks to my MSDN account, I also have a whole mix of SQL Servers, some DCs, Remote Desktop Gateway, Lync, SharePoint, Exchange, FreeNAS, docker, some rando test servers, and then a few different workstations like Windows 7 with PowerShell 3 and SQL Server 2008 R2 SMO to ensure dbatools works on older systems.
The rest of my equipment is Apple (big ol Thunderbolt display, Mac Mini, MacBook Pro), which I use to RDP to my Windows and Linux machines.
Anyway, that’s my homelab. It’s lightweight which is necessary because I broke my shoulders back in 2007 and can’t carry heavy things. It’s also energy efficient, which is necessary because I live in Europe where the energy prices are crazy.
This month’s T-SQL Tuesday, hosted by Michael Swart (blog | twitter), is all about SQL Server 2016.
sp_help_revlogin is now old enough to drive a car
By now in your career, you may have needed to migrate a few databases and logins. The process isn’t as bad as it could be; after all, Microsoft published sp_help_revlogin over 15 years ago to help migrate logins with their passwords and SIDs. Unfortunately, there has been no notable progress since this stored procedure was first released.
Why is this an issue? Well, by using sp_help_revlogin, you are required to do the following, as demonstrated in the painfully boring 41 second video below:
Find the stored procedure somewhere on your hard drive/the Internet
Install sp_help_revlogin and sp_hexadecimal on the source server
Execute it on the source server
Copy the resulting SQL output
Paste it into the query pane on the destination server
Now you’ve migrated the logins with their passwords, SIDs, and a few default properties. But you don’t have the logins’ server roles, server permission sets, database roles or database permission sets. So now you gotta find and use someone’s modified version of sp_help_revlogin, but you’re still left with manually executing the procedure against your source and destination servers.
Oh, and don’t forget different versions of SQL Server use different hashing algorithms, so you’ll need to use one of the many different versions of sp_help_revlogin if you have a mixed environment.
Let’s hope you only have one or two SQL Servers to migrate and not hundreds.
Introducing Copy-SqlLogin and Export-SqlLogin
Copy-SqlLogin and Export-SqlLogin are the PowerShell answers to performing SQL Server login migrations. These commands are not part of Microsoft’s official SQL Server module, SQLPS, but they do help illustrate how PowerShell can make our lives easier.
Want to export all of your logins to a file, complete with SID, hashed password, roles, permission sets, securables and all that? Just install the dbatools module then execute
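A hedged example of what that execution looks like (the parameter names and paths here are my best recollection, so check Get-Help Export-SqlLogin for the real ones):

```powershell
# Hedged sketch (placeholder names): export every login, with SIDs, hashed passwords,
# roles and permission sets, to a .sql file you can run on the destination server.
Export-SqlLogin -SqlServer sql2012 -FileName C:\temp\sql2012-logins.sql
```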
Copy-SqlLogin works on SQL Server versions 2000-2016. It takes care of the hashing algorithm differences. It works on clusters, named instances, and all editions from Express to Enterprise, and it copies both Windows and SQL Logins. It copies the default database, default language, password policy settings, securables, permission sets (server & db), and roles (server & db). And it does so in just a few seconds.
If you’d like a walk-through, check out the video below where I go a bit more into depth about Copy-SqlLogin, create a login, migrate the login, then log into SQL Server as the newly migrated account.
Need to sync your login permissions for your availability groups? Sync-SqlLoginPermissions was created by request, just for that. Unlike Copy-SqlLogin, Sync-SqlLoginPermissions doesn’t add new logins. It just syncs the server & db permission sets, as well as the server & db roles and job ownership.
Need to copy jobs for your availability groups, too? Copy-SqlJob has you covered.
In 2016, DBAs will begin to fully adopt PowerShell
The year 2016 is an especially exciting time for the SQL and PowerShell community.
It’s no secret that prior to SQL Server 2016, the SQL Server team didn’t invest much in PowerShell. SQLPS was slow, rude, and didn’t conform to standard PowerShell practices. The SQL Server community was given a module with fewer than 50 commands, which left us needing to create solutions of our own. Meanwhile, the Lync, Exchange and SharePoint communities were bestowed with over 700 commands each.
If you’ve been a fan of PowerShell, I hope the SQL Server team’s recent dedication to PowerShell renews your excitement. If you’re a PowerShell holdout, I understand. I adopted PowerShell for SharePoint long before I did PowerShell for SQL Server. I thought it was slow (have you seen the performance of Start PowerShell in SSMS prior to 2016?!) and hard to learn.
But now I see its power and it will only get more awesome now that PowerShell has a dedicated resource within the SQL Server team.