Icon Color Replacement Fun with PowerShell

I like to keep interfaces simple and tend to use icons in my GUI designs. Recently, when making a PowerShell GUI to alert myself to alarm status changes in vCenter, I decided to reuse the same icon over and over for different statuses, but I made them meaningful by dynamically changing the color.

The Goal

My ultimate goal was to have 3 icons that were colored white, yellow and red.


How I did it

The first thing I did was search Iconfinder for a free icon, and decided on this one:


Using paint.net, I determined that the base color of Iconfinder’s server icon was #444444. What’s really cool about Iconfinder is that they also offer the base64 code. As discussed in my earlier post, base64 can easily be used for icons and images in WPF forms.

So let’s take a look at what the code below does to accomplish this task:

  1. Loads the WPF assemblies and sets the base64 icon variable
  2. Creates a BitmapImage object to enable streaming of the base64 image
  3. Creates a ColorMap. The ColorMap is simple and just contains .OldColor and .NewColor.
  4. Passes the ColorMap to an ImageAttributes object, which performs a SetRemapTable()
  5. Draws a new image with Graphics.DrawImage, applying these new ImageAttributes
  6. Saves the icon to a variable for later use within the script ($iconwhite, $bmpwhite)
  7. Saves and opens the new icon so that you can confirm it worked
  8. Repeats the process for each color in the new-color array
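The steps above can be sketched roughly as follows. This is a minimal sketch, not my exact script: the base64 string and output path are placeholders, and only the white variant is shown.

```powershell
Add-Type -AssemblyName System.Drawing

# Steps 1-2: decode the base64 icon into a bitmap via a memory stream
$base64 = '<your icon base64 here>'   # placeholder
$stream = New-Object IO.MemoryStream (,[Convert]::FromBase64String($base64))
$source = [System.Drawing.Bitmap]::FromStream($stream)

# Step 3: build a ColorMap - OldColor is the icon's base color, NewColor the target
$colormap = New-Object System.Drawing.Imaging.ColorMap
$colormap.OldColor = [System.Drawing.ColorTranslator]::FromHtml('#444444')
$colormap.NewColor = [System.Drawing.Color]::White

# Step 4: hand the map to an ImageAttributes object via SetRemapTable()
$attributes = New-Object System.Drawing.Imaging.ImageAttributes
$attributes.SetRemapTable($colormap)

# Step 5: draw the source image onto a fresh bitmap using those attributes
$bmpwhite = New-Object System.Drawing.Bitmap $source.Width, $source.Height
$graphics = [System.Drawing.Graphics]::FromImage($bmpwhite)
$rect = New-Object System.Drawing.Rectangle 0, 0, $source.Width, $source.Height
$graphics.DrawImage($source, $rect, 0, 0, $source.Width, $source.Height,
    [System.Drawing.GraphicsUnit]::Pixel, $attributes)
$graphics.Dispose()

# Steps 6-7: save the recolored icon and open it to confirm it worked
$bmpwhite.Save("$env:TEMP\icon-white.png")
Invoke-Item "$env:TEMP\icon-white.png"
```

Repeat the ColorMap/DrawImage portion in a loop over your new-color array to produce the yellow and red variants.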

This is high performance, and the total conversion time is less than 5 ms, which makes this technique efficient enough to use in all of your PowerShell GUI apps, if you’re so inclined :)

Posted in PowerShell, WPF

Display WPF Popups in Lower Right Corner of Screen using PowerShell

If you find yourself limited by NotifyIcon’s BalloonTip functionality, you can use WPF Popups to display information instead. You could also use a WPF form, but Popups were specifically meant to be chromeless, so no border-type attributes have to be set on the Popup itself to make it popup-like in appearance.


Considering the nature of popups, placing them in the lower right should be easy, right? Well, Microsoft’s guide to popup placement was confusing to me, and StackExchange’s top-rated responses weren’t much more straightforward.

So after about twenty minutes of trial and error, I figured out that creating a popup in the lower right corner of the screen requires three things. First, the Popup’s Placement property must be set to AbsolutePoint. Then, the screen’s working area has to be determined and its values used to set the Popup’s VerticalOffset and HorizontalOffset. Together, these offsets work much like WinForms’ Location property.

The following code will create a popup in the lower right hand corner. Note that it will not create a notifyicon. That will be covered in my next post.
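Here’s a condensed sketch of the technique. This is not the exact code from my script: the XAML styling, sizes, and text are placeholders.

```powershell
Add-Type -AssemblyName PresentationFramework, System.Windows.Forms

# An invisible WPF window hosts the Popup; only the Popup is visible.
[xml]$xaml = @"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        WindowStyle="None" ShowInTaskbar="False" AllowsTransparency="True"
        Background="Transparent" Width="0" Height="0">
    <Popup Name="popup" Placement="AbsolutePoint" StaysOpen="False"
           Width="300" Height="100">
        <Border Background="#2D2D30">
            <TextBlock Foreground="White" Margin="10"
                       Text="Hello from the lower right corner" />
        </Border>
    </Popup>
</Window>
"@

$reader = New-Object System.Xml.XmlNodeReader $xaml
$form   = [Windows.Markup.XamlReader]::Load($reader)
$popup  = $form.FindName('popup')

# The screen's working area (which excludes the taskbar) supplies the offsets.
$workarea = [System.Windows.Forms.Screen]::PrimaryScreen.WorkingArea
$popup.HorizontalOffset = $workarea.Right  - $popup.Width
$popup.VerticalOffset   = $workarea.Bottom - $popup.Height
$popup.IsOpen = $true

$form.ShowDialog() | Out-Null
```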

And here is what it looks like:


This code will work for any sized popup. Also, this form’s Window and Popup objects were stripped down to only the required properties. If you remove any of them, it won’t work as demonstrated.

I’m not a GUI pro, but my understanding is that an invisible form hosts the popup: you can’t see the form, and only the popup is visible. While StaysOpen="False" usually allows a popup to close on LostFocus, it doesn’t work that way in this form. If StaysOpen is left out, however, the popup will never respond to the $form’s MouseDoubleClick event; you’ll just hear a ding when clicking the popup to close it. I’m assuming that’s Windows’ way of saying the action is forbidden.

Posted in PowerShell, WPF

Use base64 for Images in WPF-based PowerShell Forms

I’m currently building a notification module to let me know when VMware vCenter alerts go off in my home lab. I plan to share it, and wanted to use a non-standard icon, but didn’t want to require a separate download. I knew base64 would be the answer, and ultimately, some C# code from StackExchange helped me figure out what needed to be done.

Here’s what it looks like in my application. Note the image being used in both the popup and the notifyicon itself:


Below is the simplified code I used to display the icon within my application popup. The code contains comments that explain each step. It’s pretty straightforward: a quick conversion, then setting the image source to the converted stream.
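In outline, the conversion looks like this. This is a minimal sketch; the base64 string is a placeholder, and the control you assign the result to depends on your own form.

```powershell
Add-Type -AssemblyName PresentationCore

$base64 = '<your icon base64 here>'  # placeholder

# Decode the base64 string and stream it into a BitmapImage
$bitmap = New-Object System.Windows.Media.Imaging.BitmapImage
$bitmap.BeginInit()
$bitmap.StreamSource = New-Object IO.MemoryStream (,[Convert]::FromBase64String($base64))
$bitmap.EndInit()
$bitmap.Freeze()

# $bitmap can now be assigned as the Source of an Image control in your WPF form
```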

Working with PowerShell and WPF makes me realize that *this* is what I always wanted coding to be. Simplified yet powerful!

Posted in PowerShell, WPF

Import-CsvToSql: Super Fast CSV to SQL Server Import PowerShell Module

A while back, I wrote about High-Performance Techniques for Importing CSV to SQL Server using PowerShell. Earlier today, I released a PowerShell module, CsvSqlImport.psm1, which puts that post into practice.

Using this module, it’s possible to import over 90,000 records a second in optimized environments. In addition to being speedy, it’s also straightforward to use and can be easily scheduled for ongoing imports.

Turbo charged

Check out how quick 1,000,000 rows can be imported when the -Turbo switch is used:


Why isn’t Turbo used by default? Well, there’s very little error management (error handling slows things down), so Turbo may error out where the other methods (default and -Safe) would not.

Why not just use the Import/Export Wizard?

Because it stopped being easy for me a while back (and it’s purely GUI, it requires additional software, it can’t import multiple files at once, and so on).

Remember when importing CSVs into SQL Server using dtswizard didn’t require any conversions or fanciness? You’d just select your file and your delimiter, it would create the table for you using varchar(255) across the board, and plink! You’d almost always get the “Successfully copied 1 table(s) from Flat File to Microsoft SQL Server.”


Then, the Import Wizard became more powerful but also more complicated, and quick and dirty imports suddenly took more work. For instance, whether I allow the Wizard to create a new table, or try to map to an existing one, my imports in SQL Server 2005 and above nearly always go like this:


Noooo! What is all this mapping? I just want it to work; I can mess with my datatypes later. *Next*


“Well, damn. I should just write a PowerShell function to do this for me.” And this is basically how Import-CsvToSql came about.


Eventually, I’ll add support for a GUI, too. I created one, but got too caught up in best practices, and decided to put it off until I have time to learn more about WPF (which I love, btw).

Like many PowerShell functions, Import-CsvToSql supports the -First parameter. As I mentioned earlier, it can even import multiple files at once, so long as they’re similarly formatted. It also allows you to query the CSV file for subsets of data (think select address, cost from csv where State = ‘Louisiana’), and it supports all the options within SqlBulkCopy, like Table Lock, Check Constraints, Fire Triggers, Keep Identity, and Keep Nulls.
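As an illustration, usage might look something like this. These are hypothetical invocations: apart from -First, -Turbo, and -Safe mentioned above, the parameter names, paths, and server names are my placeholders, not the module’s documented interface.

```powershell
Import-Module .\CsvSqlImport.psm1

# Basic import of a single file (hypothetical parameter names)
Import-CsvToSql -Csv C:\temp\housing.csv -SqlServer sql2014 -Database tempdb -Table housing

# Preview with just the first 1000 rows, using the -First parameter
Import-CsvToSql -Csv C:\temp\housing.csv -SqlServer sql2014 -Database tempdb -Table housing -First 1000

# Import only a subset by querying the CSV file itself
Import-CsvToSql -Csv C:\temp\housing.csv -SqlServer sql2014 -Database tempdb -Table housing `
    -Query "select address, cost from csv where State = 'Louisiana'"
```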

Most of my imports have been pretty basic, but I did add in features that I thought others would appreciate. If you have any more you’d like me to add, let me know. Oh, also, I did my best to comment the hell out of the code, so if you’re wondering how it works, it should be easy to figure out.

Give the script a try and let me know how you like it. You only need PowerShell 3.0 and above. No additional software is required. If you run into any issues, send a portion of your CSV file to me at [email protected], and I’ll take a look to see what’s going on.

Posted in PowerShell, SQL Server

Reset-SqlSaPassword: Easily regain sa/sysadmin access to your SQL Servers

It’s obvious that I love PowerShell and SQL Server. Together, they allow SQL Server DBAs to solve just about any problem. Most of my projects have come from questions posted in Reddit’s SQL Server subreddit, including my most recent project, Reset-SqlSaPassword, which allows you to regain sysadmin access to your SQL Servers, most times in less than 20 seconds.

The “sa” in this module’s name refers to sysadmin access, not the actual SQL Server login “sa”. Using this module, you can easily regain access to local or remote SQL Servers by resetting the sa password, adding the sysadmin role to an existing login, or adding a new login (SQL or Windows) and granting it sysadmin privileges.


“That seems totally insecure”, you may think. But this script will not work unless you have Windows administrator access on the server or workstation running the SQL Server instance. I’m actually running the script as a Domain Admin in the demo video, though it will also work with local admin privileges.

Initially, when I started the project, I was modifying the Startup Parameters using Microsoft.SqlServer.SqlWmiManagement, but I found a huge bug that meant my script would only work using Microsoft.SqlServer.SMO with a specific Version=. If at all possible, I’d like my scripts to work across all environments, and this was a show stopper. Then I considered modifying the service itself (failed miserably) and even the registry. Finally, I realized I could do it all safely from the command line, which meant the script would have no lasting impact, since no permanent startup parameters are modified.

If you need to regain access to your SQL Server, just download Reset-SqlSaPassword from Microsoft Script Center and execute. It works on every Windows platform I tested, even Win XP (laugh, but sadly, some orgs still use it) and Windows Server 2016 Tech Preview. It also works on SQL Server 2005-2016, both clustered and stand-alone instances. It relies heavily on WMI calls and .NET, and does not require SMO or any admin tools. If you’re having access issues caused by unopened ports/strict firewall access, just run the script locally.

If you have any issues, please let me know. You can hit me up on email ([email protected]) or on Twitter (@cl). Or in this post’s comments.

Posted in PowerShell, Security, SQL Server

Getting Total Number of Rows Copied in SqlBulkCopy Using PowerShell

Getting the total number of rows inserted by SqlBulkCopy should be easier than it is, but I believe the easiest (and fastest) way is to use reflection and some C# code within your PowerShell script. Thanks to user601543 at Stack Overflow, I was able to translate his code for use within one of my own scripts.


The Code

Here’s an approximation of how I did it. First, I added the code using Add-Type. Then I called it using [System.Data.SqlClient.SqlBulkCopyExtension]::RowsCopiedCount($bulkcopy)
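For reference, the Add-Type portion looks roughly like this, adapted from user601543’s C# answer: a small extension class that reads SqlBulkCopy’s private _rowsCopied field via reflection.

```powershell
$source = @"
using System.Data.SqlClient;
using System.Reflection;

namespace System.Data.SqlClient
{
    public static class SqlBulkCopyExtension
    {
        const string _rowsCopiedFieldName = "_rowsCopied";
        static FieldInfo _rowsCopiedField = null;

        // Reads SqlBulkCopy's private running row counter via reflection
        public static int RowsCopiedCount(this SqlBulkCopy bulkCopy)
        {
            if (_rowsCopiedField == null)
                _rowsCopiedField = typeof(SqlBulkCopy).GetField(_rowsCopiedFieldName,
                    BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
            return (int)_rowsCopiedField.GetValue(bulkCopy);
        }
    }
}
"@
Add-Type -ReferencedAssemblies System.Data -TypeDefinition $source

# After (or during) the bulk copy, with $bulkcopy holding your SqlBulkCopy object:
# [System.Data.SqlClient.SqlBulkCopyExtension]::RowsCopiedCount($bulkcopy)
```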

As the user noted: “This count does not take into consideration the number of rows actually inserted when Ignore Duplicates is set to ON.”

Posted in PowerShell, SQL Server

Simplifying SQL Server Migrations using PowerShell

Recently, I did a webcast for the PowerShell Virtual Chapter of PASS on Simplifying SQL Server Migrations using PowerShell. If you’re planning for a SQL Server migration, I wrote a bunch of scripts that can help, and put them all on github.

Check out the sqlmigration repository, and the accompanying PowerPoint Slides.

If you use the scripts and run into any issues, or have any feature requests, let me know. I can be contacted on Twitter at @cl or by email.

Posted in PowerShell, SQL Server

Sudden High CPU Utilization and Query Performance Decrease | When Query Plans Go Bad

Each morning when I get into the office, I check performance graphs in SolarWinds Database Performance Analyzer (DPA, formerly Confio Ignite). Monday mornings are usually the most important, because I run full system maintenance on a specific monitored database on Sunday, and Monday is the first day of heavy utilization by users since that maintenance completed. The maintenance matters because updating statistics will often cause SQL Server to generate new execution plans. Sometimes the new plan is worse than the old one, and you see changes in system performance.

The great thing about SolarWinds DPA is that I have a graphical representation of all of my data and I can drill down into timeframes.  I can also look backward in time to compare system performance.  Since I look at the graphs so frequently, I am familiar with which queries generate the most wait time, and what I should expect from my CPU utilization.  One morning the graphs showed an extreme spike in wait time for a single stored procedure called GetIDXIntegratedTaskList.  I also quickly realized that the CPU on the database was running at double the usual (23% instead of 10%).

Below are two graphs from DPA.  The graph on the left shows my system performance a few days prior.  The graph on the right shows system performance the morning of the incident.  Focus on the 8AM hour, and the scale of the graphs.  What I deem to be “normal” system performance on the left shows an overall wait period of about 14,000 seconds at the 8AM hour.  But on the right, the wait is over 50,000 seconds.  You will also notice the vast majority of the wait time is coming from that pretty blue block.  Also important to note is that this pretty blue block (which represents a query) is not present in the normal graph on the left, even though there is another color block that is very similar (trust me, it’s not the same query).

[Two DPA graphs: normal_chart_dpa (left) and abnormal_chart_dpa (right)]


A mouse-over of the block shows the name and the specific part of the query that is responsible for the wait.  I am also able to drill down into the hour and see what kind of wait times the top queries are experiencing.  In this case, it turned out to be completely due to CPU/Memory consumption.  You can also see that this query is using much more CPU than the other queries.


Looking back in time, I also wanted to verify my theory that this procedure probably started acting up after maintenance.  This was confirmed by the graph from Sunday, which shows the pretty blue blocks popping up right after the update statistics jobs completed during the 9AM hour.


DPA is not necessarily saying that the query is waiting on the CPU; it is also showing that the query is utilizing CPU time.  So what I know at this point:

  • This query/procedure is not normally on my graphs as a top resource utilizer
  • The CPU is running hot
  • This query/procedure is using a lot of CPU, more than the other individual queries
  • The issue began after maintenance completed, which indicates a possible bad execution plan


What if I don’t use DPA?

Up until now, I’ve relied heavily on SolarWinds DPA to help me track down the query, but it’s not the only way to do it.  If I did not have access to DPA, I probably would have run a duration trace to find anything over 1000 ms or 1500 ms and adjusted from there.  I did actually run a duration trace against the database looking for exec dbo.GetIDXIntegratedTaskList% to see if there were any patterns to be identified.  Perhaps someone was executing the query with bad parameters, or a web server had gone haywire and was flooding the database with requests.  In this case, I did not see anything unusual about the queries, other than the fact that the execution time was averaging 1.5 seconds and the matches were rolling in at a high rate.  A more general duration trace would have also captured a lot of this data and alerted me to an issue with the procedure.

In addition to duration traces, you can also use Activity Monitor to find Recent Expensive Queries and sort by Executions/minute or CPU.  This isn’t always a sure-fire way to identify the issue, but it’s a good idea to at least try.

The last way I would attempt to find the query is by querying DMVs for query and procedure execution statistics.  The problem here is that this information is cumulative since the system started.  You can reset wait statistics, but you would have to clear out all of the procedure cache in order to reset the stats on the queries and procedures.  In this case, the issue would have magically disappeared, but you would not know what the problem was if you were to blindly clear the cache.

Luckily, there is a creation_time column in sys.dm_exec_query_stats and a cached_time field in sys.dm_exec_procedure_stats.  You could attempt to sort by these columns and filter down to cached plans that were created around the time the issue started.  If you are able to identify which plans are new and also have a very high total_worker_time, you can narrow down potential candidates for high CPU utilization.

I actually used the sys.dm_exec_procedure_stats to confirm my other findings and I will cover some of that here now.


At this point, I am pretty sure that I know what the problem is.  The procedure got a new execution plan after updating statistics and it’s bad.  But just for the sake of it, I still want to get more information from the procedure stats DMV.

There are only three basic SQL statements I will need to use at this point, since I know the procedure I want to address.  The first statement will query the sys.dm_exec_procedure_stats DMV for all the information on the stored procedure.  For the sake of space, I’ve limited the rows in the screenshot to what I think is most important.
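A sketch of that first statement might look like this. The column selection is my guess at “most important”; the procedure name comes from earlier in the post.

```sql
-- All cached entries for the problem procedure, busiest first
SELECT  OBJECT_NAME(object_id, database_id) AS proc_name,
        plan_handle,
        cached_time,
        last_execution_time,
        execution_count,
        total_worker_time
FROM    sys.dm_exec_procedure_stats
WHERE   OBJECT_NAME(object_id, database_id) = 'GetIDXIntegratedTaskList'
ORDER BY total_worker_time DESC;
```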


We can see that there are two cached plans for this stored procedure.  It’s quickly apparent that the first one in the list has been executed the most and has the most total_worker_time.  So, I will grab the plan_handle for the first entry and then query the sys.dm_exec_query_stats DMV for more information on the queries within that plan.
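The follow-up query is along these lines. The plan handle value below is a placeholder; substitute the one returned by the previous query.

```sql
DECLARE @plan_handle varbinary(64) = 0x06000500A1B2C3D4; -- placeholder value

-- Statements cached under that plan, busiest first
SELECT  statement_start_offset,
        statement_end_offset,
        plan_generation_num,
        creation_time,
        execution_count,
        total_worker_time
FROM    sys.dm_exec_query_stats
WHERE   plan_handle = @plan_handle
ORDER BY total_worker_time DESC;
```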



Now we can see that there are a number of queries being executed within this stored procedure.  The statement offset start and end positions are specified.  If you really wanted to look at the statement, you could use these offsets to help isolate that specific portion of the stored procedure.  You also see the plan_generation_num.  If this number is high, it indicates that the procedure or statement is being recompiled.  The important data here is the creation_time, execution_count and the total_worker_time.

As we see, there are 8 SQL plans for this 1 procedure.  The second row has the highest total_worker_time, meaning it is using more of the CPU than the rest of the procedure.  We also see that the creation_time for this plan is different than the rest of the procedure.  All the other plans in this procedure have been using the same execution plan since 04-11-2015.  Our issue didn’t start until the 19th of April.  We see that this coincides with the start of our problem and it’s using the most CPU out of the group.

From here you could check out the execution plan in depth, if desired, but I already know that it used to work better than it is working now.  The decision I made was to clear the plan cache for this stored procedure and hope that the system would choose a better plan.  To do that, simply grab the same plan handle and run:

(I originally tried to clear out only the SQL plan, but it purges the entire plan handle)
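In other words, something along these lines, with a placeholder handle shown:

```sql
-- Evicts this procedure's entire cached plan; a single statement within
-- the plan cannot be evicted on its own.
DBCC FREEPROCCACHE (0x06000500A1B2C3D4); -- substitute the actual plan_handle
```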

As soon as I cleared the procedure out of cache, the CPU utilization on my server dropped by 50% and my duration trace stopped displaying matches, since the query began to execute much more quickly.


I was also able to grab another screenshot from DPA showing the total wait time per minute.  All of the CPU waits from this procedure disappeared once the new plan was compiled and cached.  I admit that there was a good chance this would work, but it was not guaranteed.  The system could have chosen the same plan again, or maybe even a worse one.  But it all worked out for me… and for the users who were reporting slowness.


Posted in SQL Server