Immediately Output Runspace Results to the Pipeline

In my previous post, I presented the template I use anytime I need to add multithreading to my scripts.

Recently, I had a request to add multi-threading to Read-DbaBackupHeader. This was the first runspace where I had to output the results back to the host; usually I just needed commands to run in the background, like with bulk inserts to SQL Server.

So two interesting things came out of this particular change. First, I made use of the time that was spent waiting for runspaces to finish.

Now, I’m basically doing this
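
Something along these lines (a rough sketch, not my exact code; it assumes $pool, $scriptblock and $files are already set up as in my template, and that each queued item carries a Processed flag):

    foreach ($file in $files) {
        # BLOCK 3/4 from the template: create the runspace, queue it, start it
        $runspace = [PowerShell]::Create()
        $null = $runspace.AddScript($scriptblock)
        $null = $runspace.AddArgument($file)
        $runspace.RunspacePool = $pool
        $runspaces += [PSCustomObject]@{ Pipe = $runspace; Status = $runspace.BeginInvoke(); Processed = $false }

        # use the time we'd otherwise spend waiting to flush anything that's already done
        foreach ($done in ($runspaces | Where-Object { $_.Status.IsCompleted -and -not $_.Processed })) {
            $done.Pipe.EndInvoke($done.Status)
            $done.Pipe.Dispose()
            $done.Processed = $true
        }
    }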

Ultimately, though, this is what I had to do to get the output right to screen.
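
In rough terms (again a sketch rather than the exact code), the trick is to EndInvoke() each runspace the moment it completes instead of collecting everything into a $results variable at the very end:

    while ($runspaces | Where-Object { -not $_.Processed }) {
        foreach ($done in ($runspaces | Where-Object { $_.Status.IsCompleted -and -not $_.Processed })) {
            $done.Pipe.EndInvoke($done.Status)   # output goes straight to the pipeline, right now
            $done.Pipe.Dispose()
            $done.Processed = $true
        }
        Start-Sleep -Milliseconds 50
    }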

Now, I came upon an issue with the $restore sometimes not returning anything. I’m not sure why; perhaps it has something to do with pooling. SQL Server didn’t return an error reading the backup header, it just returned nothing. To handle this issue, I basically restarted the runspace if the result came back as null. The code is here if you’d like to see how I handled it.
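
The gist of the workaround looks something like this (a hedged sketch with my own variable names, not a copy of the linked code):

    $result = $done.Pipe.EndInvoke($done.Status)
    if ($null -eq $result -or $result.Count -eq 0) {
        # nothing came back: restart this runspace and check it again on the next pass
        $done.Status = $done.Pipe.BeginInvoke()
    }
    else {
        $result
        $done.Pipe.Dispose()
        $done.Processed = $true
    }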

Copy/Pastable Code

I generally like to provide code that actually does something useful but in this case, it just complicated things. So this outputs text — but it does it directly to the pipeline without waiting until all runspaces have finished.
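
Here's a minimal, self-contained sketch of that idea (the sleep times and strings are made up; the pattern is what matters):

    $pool = [RunspaceFactory]::CreateRunspacePool(1, 5)
    $pool.Open()
    $runspaces = @()

    $scriptblock = {
        Param ([int]$number)
        Start-Sleep -Seconds (Get-Random -Minimum 1 -Maximum 5)
        "Runspace $number finished at $(Get-Date -Format HH:mm:ss)"
    }

    foreach ($number in 1..10) {
        $runspace = [PowerShell]::Create()
        $null = $runspace.AddScript($scriptblock)
        $null = $runspace.AddArgument($number)
        $runspace.RunspacePool = $pool
        $runspaces += [PSCustomObject]@{
            Pipe      = $runspace
            Status    = $runspace.BeginInvoke()
            Processed = $false
        }
    }

    # output each result the moment its runspace completes
    while ($runspaces | Where-Object { -not $_.Processed }) {
        foreach ($done in ($runspaces | Where-Object { $_.Status.IsCompleted -and -not $_.Processed })) {
            $done.Pipe.EndInvoke($done.Status)
            $done.Pipe.Dispose()
            $done.Processed = $true
        }
        Start-Sleep -Milliseconds 100
    }

    $pool.Close()
    $pool.Dispose()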

If you’d like to see this with a couple more comments, check out the gist.

Hmm, I wonder if this runspace will be easier for others to understand. Looks like I’ll be linking this blog post in my previous one. Hope this helped!

Posted in PowerShell, SQL Server

Runspaces Simplified

Last year, I was looking into multi-threading in PowerShell and, with the help of Dr. Tobias Weltner, Boe Prox and Martin Pugh ultimately decided on runspaces.

Then, I presented at psconf.eu about using runspaces to speed up SQL Server/CSV imports. Runspaces took me from about 90,000 rows per second to 230,000 rows per second on average.


Runspaces can be intimidating. I had heard about them, took a look at the code and was like “Ah, that looks complex. I’ll learn that later.” Because of this, I repeatedly went over the bulk insert code to familiarize the audience with the functionality that I was eventually going to multi-thread.

All of the code and the PowerPoint seen in the video can be downloaded in my directory on PSConfEU’s GitHub Repository.

Runspace Template

Ultimately, when I want to add a runspace to a command, I download code.zip and copy/paste/edit 6-runspace-concept.ps1. When I add multi-threading to my scripts, I just copy and paste the code below, then modify. Understanding everything that’s going on isn’t immediately necessary.
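
Here's a condensed sketch of that template (the scriptblock body and the $connstring/$datatable/$batchsize values are just placeholders for your real workload; the block numbers match the step-by-step below):

    # BLOCK 1: Create and open a runspace pool (min 1, max = number of processors + 1)
    $pool = [RunspaceFactory]::CreateRunspacePool(1, [int]$env:NUMBER_OF_PROCESSORS + 1)
    $pool.ApartmentState = "MTA"
    $pool.Open()
    $runspaces = @()

    # placeholder values so the sketch runs as-is; replace with your real data
    $connstring = "Data Source=localhost;Integrated Security=true"
    $datatable  = New-Object System.Data.DataTable
    $batchsize  = 50000

    # BLOCK 2: Create the reusable scriptblock. This is the workhorse; think of it as a function.
    $scriptblock = {
        Param ($connstring, $datatable, $batchsize)
        # do the real work here (for example, a SqlBulkCopy of $datatable)
        "Processed $($datatable.Rows.Count) rows with batch size $batchsize"
    }

    # BLOCK 3: Create the runspace and add it to the runspace pool
    $runspace = [PowerShell]::Create()
    $null = $runspace.AddScript($scriptblock)
    $null = $runspace.AddArgument($connstring)
    $null = $runspace.AddArgument($datatable)
    $null = $runspace.AddArgument($batchsize)
    $runspace.RunspacePool = $pool

    # BLOCK 4: Add the runspace to the runspaces collection and start it
    $runspaces += [PSCustomObject]@{ Pipe = $runspace; Status = $runspace.BeginInvoke() }

    # BLOCK 5: Wait for the runspaces to finish
    while ($runspaces.Status.IsCompleted -contains $false) {
        Start-Sleep -Milliseconds 100
    }

    # BLOCK 6: Clean up
    $results = foreach ($rs in $runspaces) {
        $rs.Pipe.EndInvoke($rs.Status)
        $rs.Pipe.Dispose()
    }
    $pool.Close()
    $pool.Dispose()

    # BLOCK 7: Look at $results for errors or any other output
    $results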

So that’s basically it. Adding multithreading honestly just requires a bunch of copy/pasting. I generally modify Steps 2 and 3.

Step-by-Step

  • BLOCK 1: Create and open a runspace pool. You don’t have to, but it increases performance so I always just leave it in. CreateRunspacePool() accepts min and max runspaces.

    I've found that 1 and the number of processors + 1 work well (thanks to Steffan for informing me why 5 was always my quad processor's sweet spot). Then I play with MTA and STA and see which one works better.

    I also create a runspaces array to keep track of the runspaces. Unfortunately, you can’t do like $pool.runspaces to get the collection, so you have to make your own.

  • BLOCK 2: Create reusable scriptblock. This is the workhorse of the runspace. Think of it as a function.

  • BLOCK 3: Create the runspace and add to runspace pool.

    If you write a lot of PowerShell functions, it should be apparent what it's doing. Basically, AddScript($scriptblock) is the function name, then AddArgument($connstring), AddArgument($datatable) and AddArgument($batchsize) are the parameters.

    Note that you may find yourself passing a lot of parameters because the runspace will be mostly unaware of the variables that exist outside of the $scriptblock.

  • BLOCK 4: Add runspace to runspaces collection and start it

  • BLOCK 5: Wait for runspaces to finish

  • BLOCK 6: Clean up

  • BLOCK 7: Look at $results to see any errors or any other return

Up next

Now, there are a couple of runspace-related commands in v5, but there's no New-Runspace, so this code still applies to v5. I generally code for v3, so this script won't be changing much in the near future.

In the next blog post, I'll detail a slightly different runspace that immediately outputs the results of the runspace to the pipeline.

Edit: Actually, I just finished that blog post and decided to paste the code here. It’s good for repetition and shows another simplified runspace, this time without comments. If you’d like more info on this runspace, check out the post.

Posted in PowerShell

Contributing to dbatools & dbareports using GitHub Desktop

Repost from dbatools.io: Recently, I presented to the Philadelphia PowerShell User Group about Contributing to Open Source PowerShell projects using GitHub Desktop. I chose this topic because a number of people have asked for step-by-step instructions on how to work with GitHub.

I love the command line, but with Git I think it's important to get the concepts down first, and the Desktop version makes it a lot easier to contribute to projects like dbatools, dbareports and even PowerShell itself.

In the presentation, I pretty much follow along with the tutorial on our wiki called Your First Pull Request. But! I also included some information about working with PowerShell, PowerShell modules and PowerShell-centric GitHub repositories that was not in the tutorial, so there’s still some good information to gain even if you already know how to make a Pull Request.

So if you’ve wanted to get involved in the project but were a bit hesitant because you aren’t comfortable with git, check out this informative and fun video and Get-Contributing ;)

Posted in PowerShell, SQL Server

Share VPN with OS X Sierra Internet Sharing

After finding that it suited my requirements, I finally decided on a solid VPN – F-Secure Freedome – as recommended by a friend in security. Then I needed to share my connection with my Roku, but Freedome is an application that has to run on a phone or computer; routers aren't supported.

Once I confirmed that sharing a Freedome encrypted connection is supported and within the TOS, I began the 6-hour journey into sharing my VPN connection with my Roku.

Try, Try Again

First, I tried simply connecting to Freedome then turning on my Mac’s Internet Sharing. I found that as soon as I successfully connected to Freedome, my Internet Sharing would stop working.

So I revisited my blog post that I wrote a few years ago, How to Setup NAT on Lion and Mountain Lion, because I figured I needed to do some command-line magic.

The commands I used there were deprecated, so I moved on and found this awesome comment on StackExchange to get me started. This comment provides a majority of the code below. But it was only mostly working. This post, Share your VPN with Mac OS X El Capitan, felt like it got me closer.

And finally, this post suggested that I setup two rules instead of one and voila! Success.

Setup the infrastructure

Set this part up however you need; I did this wired-to-wired and it worked, too.

So I connected my Mac to my primary wireless network haxx, then plugged the Mac's Ethernet port into the upstream port of my second wireless router, otherhaxx. I set up otherhaxx the way I would any other wireless router and connected my Roku wirelessly to it.

[Diagram: the network layout described above]

Note that some of these IPs won't appear until later. Namely, Internet Sharing creates the 192 subnet once it starts and the Roku won't get its IP until everything is set up. Also, 24.0.175.222 was my first broadband IP ever back in 1997 :D

Now back to the Mac. I did some ifconfigs and found that my wireless connection was en1 and Freedome created an interface named utun0. Mac Internet Sharing created a network with the subnet 192.168.2.x.

I then set Freedome to connect on startup.

Ensure you add a single trailing empty line

My code formatter won’t let me add them, but in the original post, the author said a single trailing empty line in each file is required.

Create /private/etc/nat-rules

This is where I spent most of my time testing and retesting different configs. This right here is the key to sharing your VPN connection with Internet Sharing. I tried doing it without explicitly listing the subnet but was unable to get it to work.
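
The two rules end up looking something like this (a sketch: en1 is the wireless interface, utun0 is the Freedome tunnel and 192.168.2.0/24 is the subnet Internet Sharing creates, so adjust all three to match your own ifconfig output and remember the trailing empty line):

    nat on en1 from 192.168.2.0/24 to any -> (en1)
    nat on utun0 from 192.168.2.0/24 to any -> (utun0)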

Create /usr/local/nat-pf.sh

This file is cool because it’s universal; you really only need to edit nat-rules when you need to make modifications. Oh, also, one of my Macs refused to acknowledge that net.inet.ip.fw.enable existed and it still works anyway. I don’t think that particular entry matters.
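
Something along these lines (a sketch: enable IP forwarding, then flush pf and reload it with the NAT rules file):

    #!/bin/sh
    sysctl -w net.inet.ip.forwarding=1
    sysctl -w net.inet.ip.fw.enable=1    # some Macs don't have this entry; it works without it
    pfctl -d                             # stop pf so the reload is clean
    pfctl -F all                         # flush any existing rules
    pfctl -f /private/etc/nat-rules -e   # load the NAT rules and re-enable pf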

Create /Library/LaunchDaemons/org.user.natpf.plist

No idea if this works, but it seems to. I should reboot and test ;)
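
For reference, a minimal plist along these lines should do it (a sketch; the label is arbitrary as long as it matches the filename):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>org.user.natpf</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/nat-pf.sh</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
    </dict>
    </plist>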

Clean up and launch the daemon

Now that the files have been created, ensure they’ve got the proper owners and permissions.
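
Something like this (a sketch; the paths match the files created above):

    sudo chown root:wheel /private/etc/nat-rules /usr/local/nat-pf.sh /Library/LaunchDaemons/org.user.natpf.plist
    sudo chmod 755 /usr/local/nat-pf.sh
    sudo chmod 644 /Library/LaunchDaemons/org.user.natpf.plist
    sudo launchctl load -w /Library/LaunchDaemons/org.user.natpf.plist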

Didn’t work?

I ended up editing /private/etc/nat-rules and executing /usr/local/nat-pf.sh more times than I can count. Mess with that until it works. Ensure you're using the appropriate device names by examining the output of ifconfig.

Posted in Networking, OS X & iPhone

Guest blogger Cláudio Silva: Why Expand-SqlTLogResponsibly?

Hey yall! I invited one of the most active dbatools developers, Cláudio Silva, to do a guest post about his motivation for joining the team and why he built this particular command. So without further ado… here’s Cláudio :D


Hi everyone, my name is Cláudio Silva. I'm a MS SQL Server DBA and have worked with SQL Server since version 2000. Previously, I worked as a SQL developer for 8 years before becoming a full-time DBA. I'm also a PowerShell enthusiast!

I was invited by none other than PowerShell MVP Chrissy LeMaire (thank you, Chrissy!) to share why I came up with the idea of developing this function in PowerShell and why I gave it away to the community.

I will start with the second part. When I first discovered the dbatools project, I was pleasantly surprised by how useful it is. Remember when SQL Server 2005 extended support ended last April? Well, like many other organizations, we have to migrate, so I decided to analyse dbatools, test it and get to know its limitations.

When I saw that it is an open source project and that Chrissy was soon coming to Portugal to present about migrations at TugaIT, I thought I would share my opinions with her and become part of the team. There is so much fun work we can do to help the community that has been helping me over the last years. This is my opportunity to give back some love! And here I am.

Now, on to what really matters: why did I create a PowerShell function (Expand-SqlTLogResponsibly) to help a SQL Server database transaction log grow responsibly?

A little background

The transaction log is one of the most important parts of every database, and it has a performance side that should not be neglected.

The transaction log is composed of one or more VLFs (Virtual Log Files). Having a transaction log with too many VLFs can hurt your database performance in many ways.

Examples:

  • Transaction log backups may slow down
  • Database recovery can take too much time (I have personally seen recovery take hours after a database shutdown just because the transaction log had more than 1000 VLFs)
  • It can also affect insert/update/delete performance

Ok, but what is a Virtual Log File and how are they created?

When we tell SQL Server to grow the transaction log file (or auto-growth kicks in), SQL Server divides that amount of space into several "logical files" (VLFs) of the same size. The number of VLFs created depends on the amount of space being added.
Read more about it in the following link from Paul Randal (b), and note that this algorithm changed in SQL Server 2014 (Important change to VLF creation algorithm in SQL Server 2014).

How can I see how many VLFs I have right now?

You can use the command DBCC LOGINFO ('<database name>') to see how many VLFs your transaction log has. Each row returned is one VLF.
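
For example (MyDatabase is just a placeholder name):

    DBCC LOGINFO ('MyDatabase');   -- each row returned is one VLF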

What do we have to do to fix this?

We have to do 2 major steps:

  1. Shrink the transaction log size
  2. Grow the file manually to a specific size

Step-by-step

  1. We need to take the database RECOVERY MODEL into account: if the database is using the FULL recovery model, we first need to back up the transaction log, and only after that can we run "DBCC SHRINKFILE ('<logfile_name>', TRUNCATEONLY)". Then check your transaction log size and/or number of VLFs to see whether you have to repeat this step (BACKUP/SHRINKFILE).
  2. Then grow the file back with:

ALTER DATABASE <databasename> MODIFY FILE (NAME = '<transactionloglogicalfilename>', SIZE = <newtotalsize>)

I see, but for the second step, if I want to grow the file by 50GB (because that was the size I saw before starting step 1), should I do it all at once? Or should I execute the command several times with smaller sizes until I get to 50GB?

This is a very good question. If you have only a few huge VLFs in a 50GB log file, when does a VLF become available to be re-used? And if we ever want to shrink the file again (in order to regrow it properly), we can only do so by removing VLFs that are marked for reuse.

If we grow the file by 50GB at once (prior to SQL Server 2014) we will get 16 VLFs of roughly 3200MB each, which means that a VLF can't be reused until the whole 3200MB has been filled (this might not be bad if you never need to shrink again). That is why our function suggests growing the file in 8GB chunks (each growth creates 16 VLFs of roughly 512MB) until the desired size is reached (for target sizes of 16GB or higher). In this example, that would be 8, then 16 – 24 – 32 – 40 – 48 and finally 50: we have to execute the command seven times to reach the value we want.
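
To make that concrete, the manual version of the 8GB-chunk approach would be something like this (hypothetical database and logical file names):

    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 8GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 16GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 24GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 32GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 40GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 48GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_log', SIZE = 50GB);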

Now that I am more familiar with the concepts, why did you convert these steps into a PowerShell function?

The quick answer is to automate all these steps! Call the function, then sit back and watch it do all the work for you!

Really, just type the instance, database and your desired target size. We will figure out the best chunk size for your needs. You do not need to choose the chunk size, or even write and run the ALTER DATABASE command 7 times (picking up the last example).

NOTE: If you want, you can specify the -IncrementSizeMB parameter to set the desired chunk size.
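
A hedged usage sketch (only -IncrementSizeMB is confirmed above; check Get-Help Expand-SqlTLogResponsibly for the exact names of the other parameters):

    # let the function pick the chunk size
    Expand-SqlTLogResponsibly -SqlServer sql2012 -Databases MyDb -TargetLogSizeMB 51200

    # or force 8GB chunks yourself
    Expand-SqlTLogResponsibly -SqlServer sql2012 -Databases MyDb -TargetLogSizeMB 51200 -IncrementSizeMB 8192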

Does this include the transaction log backups?

YES! Jason Squires has contributed code for the backup/shrinkfile part, which means that if you do your backups to the file system, you can supply the path and the function will do ALL that work for you!

With a single command we can get up to 5 log backups plus X ALTER DATABASE commands executed without any further action needed.

Please visit the Expand-SqlTLogResponsibly dbatools page to view examples and give it a try!

If you have any doubts or want to share your thoughts, drop me a line in the comments or send me a DM on Twitter.

Thanks for reading!


Posted in PowerShell, SQL Server

New PowerShell Commands from the SQL Server Team!

Have you seen the SQL Server Tools team’s post about the PowerShell module for SQL Server? There are TWENTY FIVE new cmdlets! TWENTY FIVE! That’s a 54% increase over the previous 46 cmdlets. It looks like the SQL Server community is well on its way to becoming proper citizens in the PowerShell world, and we’ll reach Exchange’s 700+ in no time ;)

Included in the 25 new cmdlets is a whole lot more support for SQL Server Agent. This batch was chosen because of community upvoting on the Trello boards. We literally asked for this and got it. If you haven’t participated in the design of the cmdlets yet, head over now and start upvoting.

SQL Server Agent

It’s now so much easier to work with SQL Server Agent jobs. But before we dig in, here are the five new cmdlets dedicated to SQL Server Agent.

Get-SqlAgent: Returns a SQL Agent (JobServer) object that is present in the target instance of SQL Server.
Get-SqlAgentJob: Returns a SQL Agent Job object for each job that is present in the target instance of SQL Agent.
Get-SqlAgentJobHistory: Returns the JobHistory present in the target instance of SQL Agent.
Get-SqlAgentSchedule: Returns a JobSchedule object for each schedule that is present in the target instance of SQL Agent.
Get-SqlAgentJobStep: Returns a SQL JobStep object for each step that is present in the target instance of SQL Agent Job.

One of the best things about these cmdlets is that they make failed jobs a whole lot easier to find. Note that SQL Server Management Studio 2016 does not have to be installed on the server, only on your workstation. That’s because the cmdlets are built on top of SMO, which is available for all versions of SQL Server since 2000.

As an aside, you may be wondering what a cmdlet is, and what’s the difference between a cmdlet and what you and I write. A cmdlet is a formally written PowerShell command that’s written in C#. I’ve never written one in my life. Instead, I write Functions. They’re both essentially the same, but a cmdlet is fancier on the back end.

Let’s see some code!

Let’s start by specifying our SQL Server name dynamically so that you can easily copy and paste from this blog. In my case, $sqlserver = “sql2012”
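
Something like this (a hedged sketch; check Get-Help if the parameter names differ on your version):

    $sqlserver = "sql2012"
    Get-SqlAgent -ServerInstance $sqlserver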

Now you can pipe the Agent server to the next command, but you can also explicitly specify the SQL Server in the cmdlet itself. You can also specify the -Since parameter to get events that have occurred since a specific date. Valid values include Midnight, Yesterday, LastWeek and LastMonth. But you don’t have to remember these; you can just tab through the values after typing -Since.
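
For example (again a sketch):

    Get-SqlAgent -ServerInstance $sqlserver | Get-SqlAgentJobHistory -Since Yesterday
    Get-SqlAgentJobHistory -ServerInstance $sqlserver -Since LastWeek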

Want to know all the jobs associated with a specific schedule? Well, you can do this to get the schedules.
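
Something like:

    Get-SqlAgentSchedule -ServerInstance $sqlserver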

What if you have duplicate names? Let’s pick one with Out-GridView and pass that to Get-SqlAgentJob.
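
A hedged sketch of that approach:

    $schedule = Get-SqlAgentSchedule -ServerInstance $sqlserver | Out-GridView -PassThru
    Get-SqlAgentJob -ServerInstance $sqlserver |
        Where-Object { $_.JobSchedules.Name -contains $schedule.Name }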

And finally, want to find the failed step within a failed job? Here’s one way you can do it.
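
One hedged way, leaning on the SMO properties that the job objects expose (LastRunOutcome and JobSteps):

    Get-SqlAgentJob -ServerInstance $sqlserver |
        Where-Object { $_.LastRunOutcome -eq 'Failed' } |
        ForEach-Object { $_.JobSteps | Where-Object { $_.LastRunOutcome -eq 'Failed' } }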

In the future, we’ll have corresponding Set cmdlets that can address any issues that are found. Until then, we have these super useful cmdlets that make it way easier to work with jobs in PowerShell.

Great work, team! Want to see more? Aaron Nelson and Rob Sewell both wrote lengthy articles about the new cmdlets. I wrote up a lil something about them, too.

Posted in PowerShell, SQL Server

The PowerShell Module Formerly Known as SQLPS

I am still so floored by the additions and enhancements made to the module formerly known as SQLPS, now known as SqlServer. Not only do the new additions bring the total number of cmdlets from 46 to 71, but they align with what we voted for on the Trello board. That’s a 54% increase in the number of cmdlets! We’ll be up to 700 like Lync and them in no time ;)

So what’s new?

As you may know by now, the SQL Server Tools team introduced a ton of new cmdlets for SQL Server Agent and SQL Server logs, as well as fixing a few things with Invoke-SqlCmd and the provider.

The Tools Team also added Always Encrypted which received no votes on the Trello board, but we knew this was coming because Ken Van Hyning said “Currently we prioritize via a couple of signals. Primarily, there is our Connect and CSS channels where we get input from customers. We also have some business directives around new advancements in SQL Server, like Always Encrypted. This isn’t really the way we need to run this though.”

What’s particularly amazing is that our input actively guided the module as it was being developed! It’s really incredible to have such a direct impact on a tool that will help us with our jobs every day.

What else is new?

In addition to the new Agent, Error Log and Always Encrypted cmdlets, Invoke-SqlCmd now supports outputting to .NET DataTables/Sets/Rows. The SQL Server team also added a ConnectionString parameter for those of you who would like the full flexibility of a ConnectionString.
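
A couple of hedged examples of the new bits:

    # results as a .NET DataTable collection instead of DataRows
    Invoke-Sqlcmd -ServerInstance sql2012 -Query "SELECT name FROM sys.databases" -OutputAs DataTables

    # a full connection string instead of individual parameters
    Invoke-Sqlcmd -ConnectionString "Data Source=sql2012;Integrated Security=True" -Query "SELECT @@VERSION"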

The SQL PowerShell provider also now properly supports the WhatIf and Confirm parameters.

What about scripts that Import-Module SQLPS?

The SQL Server Tools team really put a lot of thought into this, and SQLPS will remain for now; it just won’t get updated. In talking to SQL Server PowerShell engineer Matteo Taveggia about it, he said:

If you are on a clean machine (or even if you had machine with the June Refresh SSMS) and you install the upcoming version of SSMS, you will indeed get both the new SqlServer module (new location) and the old SQLPS (same old location).

I did this intentionally to smooth the transition for existing users and avoid breaking them unexpectedly. Basically, their scripts (which have Import-Module SqlPs) will continue to work – they just are not going to get the new stuff / updates. Then, when they learn about the new cmdlets and fixes, they can start migrating at their own pace.

Brilliant!

What about the community module SQLPSX?

SQLPSX is a community module that was built to fill the gap of commands that aren’t available in SQLPS, now SqlServer. SQLPSX had a module inside called SqlServer, which presents a naming conflict. This conflict can be resolved, however, by using the -Prefix parameter.
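
Something along these lines (assuming the module name on your system is SQLPSX):

    Import-Module SQLPSX -Prefix x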

Then the commands available within the SQLPSX module will be prefixed with an x, for instance Get-xSqlTable.

Join us on Trello and vote vote vote vote

If you’ve already joined the SQL PowerShell Community Board on Trello, get over there and vote for the next set of cmdlets. If you haven’t joined, head on over and vote. You can vote without being a member of the board, but if you’d like to join, comment here or on our comment card. While you’re at it, join the #powershell channel on the SQL Server community’s Slack.


It appears that Microsoft also really benefits from code attachments, so if you have any to contribute, please do attach them; it helps with development. They make cmdlets (formalized PowerShell commands), but a .ps1 totally works.

Any guesses as to what cmdlets are next? Hint: it’s up to us to decide with our votes!

Posted in PowerShell, SQL Server

My homelab <3

My first homelab ever started as a single Packard Bell 486 which ran SuSE Linux 5.2. I used it with my @Home connection to host web, DNS and mail services. Over time, I’d move leftover computers to a data center and covered most of my colo fees with the proceeds from Google Adsense, which I added to my website RealCajunRecipes.com back in 2004.

I had a whole mixed domain (Windows and Linux) in my quarter rack located in downtown Los Angeles. I’d moved a few times and always brought my homelab (which now consisted of HP DL360s) with me which meant tons of downtime.


After moving my servers from LA to San Jose to Austin, I opted for hosting my websites on a VPS, and kept my homelab… at home. And with that, gone were the days of impressing ladies by showing them the Internet.

[Photo caption: hackers get crazy pull. jk thats my friend and i love my wife]

My homelab

Nowadays, I have super lightweight, quiet, and fast Shuttles. Here’s what my lab looks like from SUPA FAST to decently fast.

  • Shuttle SH170R6: i7-6700K 4.0GHz, 64GB DDR4-2400. Disks: Samsung 950 PRO 512GB M.2, Samsung 850 EVO 500GB, Seagate 4TB SSHD
  • Shuttle SH87R6: i7-4790K 4.0GHz, 32GB DDR3-1600. Disks: Samsung 850 EVO 500GB, 1TB Velociraptor
  • Shuttle SH67H3: i7-3770 3.4GHz, 32GB DDR3-1333. Disks: Samsung 100GB SSD, 1TB Velociraptor, Seagate 4TB SSHD

Storage and performance

Yes, it’s true, I have no redundancy. Sucks but I love local storage, don’t have enough room for more disks and my janky backups work well enough.

I had a WD Green 1TB in there for a few days and did some tests; here’s how the benchmarks fared. In order: Western Digital 1TB Green Drive, Samsung 850 EVO 500GB, and Samsung 950 PRO 512GB M.2.

[Benchmark screenshots, in order: WD Green 1TB, Samsung 850 EVO 500GB, Samsung 950 PRO 512GB M.2]

That’s virtualized, too! These tests were performed on my SH170R6 box with ESX and a Win10 VM with regular ol attached disks. I’m so impressed by not only M.2 but ESX as well.

For any of you following some of the work I do with SQL Server and PowerShell import performance, I was disappointed to see that the M.2 only increased the imports by about 20,000 rows per second. That suggests that a majority of the tuning I’ve got left to do is in Windows, SQL Server and on the network.

Network

I love client side networking like VPNs and stuff, but I’m not a fan of networking in general so I keep it simple. I use Apple Airport Extremes – one in the living room that connects to the Internet and one in the storage closet that connects to the one in the living room via wireless. It’s *so* fast, even through one Belgian concrete wall and 2 large rooms. Speedtest.net just said that I got 106.52 Mbps down, 10.49 Mbps up. Through a wall and over 2 rooms of wireless!

The communication between all of my VMs stays local, though, which is cool. I also have an older Cisco router but networking ain’t my bag so it’s waiting for the day that I’m interested. When I need anything more than the basics, I use a WRT54GL with dd-wrt.

Why not just use Azure?

I like to test my scripts against a variety of versions/editions and I don’t like spinning VMs up and down all the time. As for the cost; some people spend their money on golf, Polish pottery and gaming rigs. I spend mine on servers, Belgian beer and travel ;)

[Photo: the homelab]

As you can see, I also have an old Macbook Pro with a 256GB SSD, 4TB HDD and 8GB RAM in the mix. It’s for photos and videos, however. And someone gave me an old silver Shuttle from like 2002, but I haven’t had the time to set it up yet.

So what runs on this?

I started using VMware products around 1999 and continue to use them to this day for virtualization. Currently, I’m running vSphere 6. I get a super cheap license through my school, Regis University, though you can also get it for $200/year if you become a member of VMUG.

[Screenshot: the vCenter inventory]

Thanks to my MSDN account, I also have a whole mix of SQL Servers, some DCs, Remote Desktop Gateway, Lync, SharePoint, Exchange, FreeNAS, docker, some rando test servers, and then a few different workstations like Windows 7 with PowerShell 3 and SQL Server 2008 R2 SMO to ensure dbatools works on older systems.

The rest of my equipment is Apple (big ol Thunderbolt display, Mac Mini, Macbook Pro), which I use to RDP to my Windows and Linux machines.

Anyway, that’s my homelab. It’s lightweight which is necessary because I broke my shoulders back in 2007 and can’t carry heavy things. It’s also energy efficient, which is necessary because I live in Europe where the energy prices are crazy.

Hope you enjoyed!

Posted in Networking