URL Shortening/Redirects using GitHub Pages and Jekyll

Recently, I moved many of my blog and shortlink domains to GitHub Pages. With that, the security of my sites increased dramatically, but I lost some functionality, including search and the 301 (Moved Permanently) redirects I'd set up in .htaccess, which I use extensively for shortlinks.

Because GitHub Pages don't support .htaccess or 301 redirects, I ended up having to use the supported method, jekyll-redirect-from, which generates HTML redirects. The jekyll-redirect-from repo's main page had a tutorial, but it wasn't geared towards GitHub Pages or my specific use case, so I had to look elsewhere.

Most of the tutorials I found assumed some basic understanding of Jekyll, but I had absolutely none; it's like a black box to me. It took a solid six hours of research and testing before I could get things to work, and it wasn't until I found this comment on Stack Overflow that I understood the whole process.

The biggest thing I was missing is that each shortlink needs its own markdown file. I had assumed every shortlink could be taken care of within the _config.yml file.

Here are the basics of what you'll need, based on the real-world example of dbachecks.io:

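First up, the _config.yml. This is a minimal sketch, assuming the redirect plugin is all the site needs (the real repo may configure more):

    # _config.yml
    # jekyll-redirect-from is on the GitHub Pages plugin whitelist
    plugins:
      - jekyll-redirect-from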

Then come the actual shortlinks.
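Each one is a tiny markdown file containing nothing but front matter. Here's a sketch with a hypothetical file name (download.md) and target URL, not the repo's actual values:

    ---
    permalink: /download
    redirect_to: https://github.com/sqlcollaborative/dbachecks/releases
    ---

With a file like that in place, dbachecks.io/download serves an HTML page that immediately forwards visitors to the target.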

Redirecting the root was a bit of a challenge, but index.md holds the key: it's permalink: /index.html within the index.md file. I tried permalink: / like a lot of other repos on GitHub use, but it didn't work until I changed it to permalink: /index.html.
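So the root of the site boils down to an index.md along these lines (the target URL below is a placeholder, not the repo's actual one):

    ---
    permalink: /index.html
    redirect_to: https://github.com/sqlcollaborative/dbachecks
    ---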

The whole dbachecksweb GitHub Pages repo is one big redirector site, so if you've still got questions, take a look around the repo; it's a good place to learn.

Posted in GitHub

Migrating my WordPress sites to GitHub Pages

After getting hacked one too many times, I’ve decided to move all of my blogs and URL-shortening sites from a Virtual Private Server to GitHub Pages.

GitHub Pages are pretty awesome and I'm not using them to their fullest ability, but so far, it's been a great replacement for WordPress. While I procrastinate on a total transition to Jekyll and markdown, I'm writing posts on a local backup of my WordPress blog, then performing an export to GitHub each time.

I've been on Pages for a little less than a week and it's already been a huge relief to no longer worry about my blog going offline because my VPS got hacked while I wasn't on top of updating WordPress or PHP.

Here are some other pros to moving to GitHub Pages:

  • Super fast since everything is static
  • Built-in CDN since GitHub takes care of all that
  • Free
  • Writing markdown files is a fun way to blog
  • No more Gutenberg editor!

There are some downsides, of course.

  • Forms can't POST to the local site (that's forbidden), so no built-in search or commenting
  • Haven't figured out how users can still subscribe via Jetpack
  • Coordinating new posts by guest authors will be a challenge
  • WordPress plugins are really useful and I’ll have to learn a bunch of new stuff to replace what they offer
  • Unlike with WordPress and Jetpack, stats aren't built into static hosting

Even with these cons, I regret not migrating to static pages years ago, but no other platform appealed to me until now. I like the GitHub workflow and Pages feel pretty familiar. I've actually used them before – shoutout to the dbatools team for the intro! We use Pages for docs.dbatools.io and I was basically forced to understand it to keep supporting the docs.

Here’s how I did it

It's been a process, so I wanted to distill it into a post. This post assumes a small amount of familiarity with GitHub, so I'll be skipping over the basics.

Enable GitHub Pages

To begin, create a fresh new repo and initialize it with a README.md if you want.

Then, enable GitHub Pages in repository Settings.


If you want to use a custom domain name, it’s a straightforward, three-step process. I’ll show you how to do it after you export your blog and commit it to your repository.

Exporter Plugin

In order to begin my migration, I had to find the right WordPress plugin. The plugin had to:

  • actually work (tall order, it seems!)
  • look exactly the same for now
  • export only what was needed to prevent bloat

It took a bit, but I finally found WP2Static. It's actually not available in the WordPress plugin gallery, so I downloaded the plugin from statichtmloutput.com and uploaded it to my site.

At first, I tried to commit straight to GitHub but I guess I had too many files. I found the most luck just downloading the zip and committing manually.


The plugin also said not to use the URL of your current blog, but I did anyway. I considered a subdomain name change but decided against it. When I need to compose and export nowadays, I just modify my HOSTS file.
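If you're wondering what that looks like, it's a single line pointing my domain back at the old WordPress server, so only my machine sees the old site (the IP below is a made-up example):

    # C:\Windows\System32\drivers\etc\hosts (or /etc/hosts)
    203.0.113.10    blog.netnerds.net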

My site is pretty big and each export takes quite a while; I'd say 30 minutes or so. It's a big time investment, but recovering from constant hacks is an even bigger, more stressful timesuck.

I tried messing with the Advanced options that can speed up the export, but even with a dedicated VPS and plenty of resources, I found it'd mess up if I went too high. In the end, I just kept the defaults.

Once my export was done, I committed all the exported html files to my newly created GitHub Pages-enabled repo!


Awesome, time to update my DNS settings!

Before exporting

Oh, turns out I should have done a couple other things before updating DNS.

Google Analytics

My site already had Google Analytics, but I wanted to highlight it for others who rely solely on WordPress stats. You can add Google Analytics to your site using a WordPress plugin, a Cloudflare App, or by manually adding it to your theme like I did.
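If you go the manual route, it's the standard gtag.js snippet pasted into your theme's header; here's a sketch with a placeholder tracking ID:

    <!-- Google Analytics -->
    <script async src="https://www.googletagmanager.com/gtag/js?id=UA-XXXXXXX-1"></script>
    <script>
      window.dataLayer = window.dataLayer || [];
      function gtag(){dataLayer.push(arguments);}
      gtag('js', new Date());
      gtag('config', 'UA-XXXXXXX-1');
    </script>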

Once I migrate fully to GitHub Pages and use markdown and Jekyll for my blog, Google Analytics is easily supported there, too. It looks like it's just a matter of adding a file to the _includes directory.

So before your “final” exports, make sure you have your Analytics set up and ready for the exporter to write to file.

Migrate comments to Disqus

After my initial export using WP2Static, I posted a test comment, which failed because it attempted to POST to my own domain, which is unsupported/forbidden on GitHub Pages. This means I've gotta set up support for comments prior to my final migration.

Considering I can't POST to my own site, I realized I'd have to find a plugin and platform that POSTs somewhere else and includes that destination in the exported HTML code. Fortunately, it was pretty easy using Disqus.

I tried a couple of plugins and ended up going with the official Disqus for WordPress. I had to create an account, create an API key, then export/import the old comments, and all around, I'm happy with the result. It actually works!


There’s me commenting on my GitHub Pages site 🤩

After Exporting

Now that I've got comments and analytics working, it's time to tell the Internet that my blog has moved. I use Cloudflare extensively for this. Cloudflare is a free service that I use for all of my domains. It offers free SSL, a beautiful DNS management interface and some cool Page Rules that replace a lot of functionality I'd usually pull off with .htaccess.

Cloudflare also supports geo-replication of my media, which means my site is fast no matter where my host is based. This matters less now that my blog is on GitHub Pages and not a VPS in Texas, but I was happy I had it when I did. Especially since it’s free.

Add your domain name to the repo

First, let’s add support for our custom domain name to GitHub. Do this by adding a file named CNAME (capitalization is important) to the root of your repo that contains the name of your domain.
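The file contains nothing but the bare domain on a single line. Mine looks like this:

    blog.netnerds.net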

Next, go to the Settings of your repo, and in the GitHub Pages section, add your domain name.


SSL

If you use Cloudflare, SSL is taken care of by default. If you don't, you'll need to add an extra DNS entry to give letsencrypt.org authority to create SSL certs for you. Visit GitHub's troubleshooting page for more information.

DNS

Next, add a CNAME in DNS for your domain. The entry will be yourgithubusername.github.io. Mine points at my GitHub username, potatoqualitee.
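In zone-file terms, the record looks something like this; Cloudflare's interface just asks for the name and the target:

    ; point the blog subdomain at the GitHub Pages host
    blog    CNAME    potatoqualitee.github.io.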


Once that’s saved, I quit my browser to clear my DNS cache, and boom, it worked! I was pretty amazed that it looked exactly like my old site too. No messed up CSS or anything 🚀

Search

While the site looked exactly the same, the search didn’t work, which I expected.

To my knowledge, Jekyll doesn't support query strings, so search is impossible. Because of this, I used a Cloudflare Page Rule to redirect all searches to Google.
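The rule itself is simple. Here's its general shape, though the exact Google query string below is my reconstruction rather than a copy from my dashboard:

    If the URL matches:   blog.netnerds.net/?s=*
    Then the setting is:  Forwarding URL (302)
    Destination:          https://www.google.com/search?q=site%3Ablog.netnerds.net+$1

Cloudflare captures the * wildcard and substitutes it for $1 in the destination URL.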


You can try it now by searching the site or by clicking the following link: https://blog.netnerds.net/?s=powershell


Redirects / shortlinks

If you're really into shortlinks like I am, those are also supported on GitHub Pages. It doesn't support 301 Permanent Redirects, but it does support HTML redirects, which are good enough for me.
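An HTML redirect is just a stub page with a meta refresh tag. A simplified sketch of what a generated redirect page looks like, with a placeholder target URL:

    <!DOCTYPE html>
    <html>
    <head>
      <!-- a zero-second refresh sends the browser straight to the target -->
      <meta http-equiv="refresh" content="0; url=https://example.com/target">
      <link rel="canonical" href="https://example.com/target">
    </head>
    <body>
      <a href="https://example.com/target">Click here if you are not redirected.</a>
    </body>
    </html>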

I wrote a blog post on how to accomplish this, titled URL Shortening/Redirects using GitHub Pages and Jekyll, if you’re looking to do the same.

Now, all that’s left is to figure out the subscribers situation and I’ll be set until I figure out how to create a Jekyll theme and migrate entirely to markdown!

Posted in GitHub

My Windows Terminal Retro Theme

After reading a number of Windows Terminal posts by Thomas Maurer and seeing Windows Terminal Preview available in my Windows App Store, I finally decided to dive in again.

I was sooo excited when Microsoft first made the announcement but was disappointed when I found out I’d have to run a specific version of Windows and compile the app myself. Whaat? No way, too much work. Now, it’s more widely available, so I decided to jump in and try it out. I love it and even miss Windows Terminal when I develop PowerShell on my Mac.

So here’s the Theme I’m contributing, which is based off of my favorite VS Code Theme, 1984 Unbolded. I call it Retrowave.

    {
        // this theme was created and commented for PowerShell
        // based off of the 1984 Unbolded VS Code theme
        "name" : "retrowave",
        // entire background
        "background" : "#070825",
        // default text
        "foreground" : "#46BDFF",
        // quoted values
        "cyan" : "#df81fc",
        // commands
        "brightYellow" : "#ffffff",
        // parameters
        "brightBlack" : "#FF16B0",
        // tokens like if, true, false
        "brightGreen" : "#fcee54",
        // comments
        "green" : "#929292",
        // errors
        "brightRed" : "#f85353",
        // attributes like ValueFromPipeline or ::Whatever
        "brightWhite" : "#ffffff",

        // other or unknown
        "blue" : "#46BDFF",
        "brightBlue" : "#46BDFF",
        "brightCyan" : "#ff901f",
        "brightPurple" : "#FF92DF",
        "purple" : "#FF92DF",
        "red" : "#FF16B0",
        "white" : "#FFFFFF",
        "black" : "#181A1F",
        "yellow" : "#fcee54"
    }

This theme looks like this:

So pretty! 😍

You may notice that I added comments noting which color impacts which part of PowerShell formatting. I did this in case you'd like to recreate your own favorite VS Code theme, like Cold Snack.

Where does this block of code belong within my profile? Check out my whole profile on GitHub.
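In short, the scheme goes in the "schemes" array of Windows Terminal's settings file, and a profile opts in by referencing its name. A trimmed sketch, with illustrative profile details:

    {
        "profiles": [
            {
                "name": "PowerShell",
                // point the profile at the scheme by its "name" value
                "colorScheme": "retrowave"
            }
        ],
        "schemes": [
            // the whole retrowave block from above goes here
            { "name": "retrowave", "background": "#070825", "foreground": "#46BDFF" }
        ]
    }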

Edit: Here’s a pic of this theme + htop that my buddy Joe sent me!

Gorgeocity of 10.

Oh, and if you’d like to use the PowerShell avatar in the lower-right, here it is.

Posted in PowerShell, Windows

SQL Server Agent with CmdExec job runs PowerShell infinitely

Just ran into this issue and solved it by using powershell -ExecutionPolicy bypass C:\path\to\script.ps1. It seems there was an issue with the signed module, so I just set PowerShell to skip the signature check.
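For reference, here's the shape of the full CmdExec job step command, with a hypothetical script path (-NoProfile is optional, but it keeps the job from loading profile scripts):

    powershell.exe -ExecutionPolicy Bypass -NoProfile -File "C:\path\to\script.ps1"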

Posted in PowerShell

Windows Services Recovery for SQL Server

Recently, I ran into an issue after applying a few security updates and subsequent reboots: a number of the SQL services did not start successfully. After scripting the services' recovery options (sketched below), I've had success with SQL services starting as expected after reboot. It's basically a built-in service-start retry. You can read more on Microsoft's docs page or, for a visual guide, check out Ibrahim Soliman's blog.
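Here's a minimal sketch of that kind of script, assuming a default instance (so the services are named MSSQLSERVER and SQLSERVERAGENT) and three retries spaced a minute apart:

    :: retry the service start three times, 60 seconds apart, and reset
    :: the failure counter after a day; sc.exe requires the space after each =
    sc.exe failure MSSQLSERVER reset= 86400 actions= restart/60000/restart/60000/restart/60000
    sc.exe failure SQLSERVERAGENT reset= 86400 actions= restart/60000/restart/60000/restart/60000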

In this case, I used a script instead of the GUI because the GUI is limited: in the script above, I can specify exact times per retry, whereas in the GUI I cannot.

Also, I can't believe I went years without noticing this tab until a friend pointed it out. It's so useful 😁

Posted in PowerShell, SQL Server, Windows

My setup after a year of livestreaming

Back in January and February of 2019, I wrote a couple of posts that highlighted my journey into livestreaming on Twitch, from understanding the different services/platforms to the lighting I used. I then followed up in June with a post about what I retired and what I was continuing to use.

Since then, I actually started streaming about Cajun cooking for my website, RealCajunRecipes.com. I bought a couple cool things for that setup, including an awesome Madonna mic, which is called that because she made it famous in her Express Yourself video. I also bought an over-the-top Logitech BRIO – Ultra HD Webcam and some in-ear/mostly hidden Xiaomi Redmi AirDots ear buds to hear my guests who dialed into the stream over Skype.

This is what the Madonna mic looks like – note that this is the built-in potato quality Facetime webcam, not the BRIO.

It delivers some of the best sound because it’s good quality and the mic remains near your mouth the whole time. You’d be surprised just how close you have to get to even a high-end mic for the sound to be great.

I will say, it felt SO GOOD knowing how to do this. I applied everything I learned from my programming stream to my cooking stream. In the first video, I just used the built-in platform interface; for subsequent ones, I used good ol' OBS.

Retired hardware

Before I go into what I’m currently using, let’s look at what I’ve retired over the past year.

As mentioned previously, I retired the green screen because it added too much prep time and formality to my stream so I found myself streaming less often.

The cats retired the string lights for me by eating them, but in truth, I should have skipped right to the Philips Hue good stuff. My Bose headband broke (wtf) but I AM IN LOVE with what I got as a replacement. I keep the Elgato Key Light around and don't regret it, but my setup changed and I'm just not using it right now.

Now in use

Here are the things I do still use, broken down into essential and very nice to have.

Essential

Mic

The Yeti Mic is now back in action. I just wasn’t happy with the sound of my other devices. They were either inconvenient, sounded “tinny” or had other issues. The Yeti is easy to manage and the sound is great – much better than the mics built into any of the wireless headphones I wear.

Webcam

For years, I've used the FaceTime camera and thought it was adequate, until I hooked up the Logitech BRIO that I bought for my cooking livestreams. The lighting is SO much better.

Stream Deck

Still loving my Stream Deck, even if I only use a few combo keys.

I actually have different profiles for my coding and cooking livestreams. Coding has stayed mostly the same, and the cooking profile lets me start and stop my stream, as well as seamlessly switch between shots of me cooking and close-ups of the pot!

Mouse and Keyboard

When it comes to my keyboard and mouse, I’ve always been a fan of Microsoft hardware and that’s never changed. I love these ergo versions that keep my shoulder working just a bit better.

Very nice to have

There are cheaper alternatives to the following or they may not even be needed, but I sure do enjoy having them!

Headphones

The Sony Noise Cancelling Headphones WH1000XM are the absolute best wireless headphones I've ever used, and I can turn off the noise cancelling if I want! That's not only safer when you're walking around a city, but some people like me are sensitive to the pressure that noise cancelling creates, so in a quiet room, I tend to turn it off. On a busy train, though, it's awesome!

The accompanying app is just phenomenal and allows me to set my EQ, control the direction of the sound and more.

Networking

I was battling with my Unifi recently and realized later that it was actually my Mac causing all of my issues. Now that I’ve fixed my Mac, I can appreciate just how useful the Unifi dashboards are. When I plugged in my Philips Hue hub, I was able to immediately find its IP address and label it.

This helped me immediately troubleshoot some issues that resulted from me not reading the manual 😅

Software changes

I haven’t changed too much, except that I switched my live captioning from pubnub to Stream Closed Captioner by talk2megooseman. Gooseman’s closed caption streamer is Twitch native and can be turned off and on.

Oh, I also changed my VS Code theme. I recently switched from Cold Snack to 1984 unbolded.

Music

I also forgot to cover my music setup in previous posts so I’ll go over it now. Ultimately, I wanted people on my stream to hear something different than what I’m listening to. This was due to a few reasons:

  • Copyright – I don’t want to deal with copyright muting out my archived livestreams
  • Volume stability – If I want to turn up my sound, the chat room is coming with me and it may be suddenly too loud or too soft or they may no longer be able to hear me
  • Mood – I like chill background vibes on my stream but may be listening to harder stuff

The way I get around this is by setting up a dedicated audio feed for my stream using Soundflower and Rogue Amoeba's SoundSource.

So I load up YouTube in a Safari window. I normally use Chrome, not Safari, so Safari is basically a dedicated browser for stream music. Then I redirect Safari's audio to Soundflower using SoundSource.

Then I create an Audio Output Capture in OBS and point it to Soundflower’s 2ch.

I was tempted to get Rogue Amoeba’s Loopback in place of Soundflower, but it’s just so expensive considering Soundflower works perfectly.

Channel alerts

When it comes to other sounds, to ensure I can hear people following and celebrate appropriately, I also keep a Stream labels window open while streaming. It alerts me, whereas Twitch's native interface does not.

Celebrating new followers is an essential part of every stream, so this capability is especially important.

I stopped livestreaming for a while because I was super burned out. I blame that absence for the fact that I’m still not good with emotes or Twitch culture in general, but I’m workin on it! I’m hoping to start livestreaming each Thursday at 7:30PM or 8:00PM Brussels time. Here’s hoping my brain stays on track 🔷

Posted in Livestreaming

Remove the new ad in PowerShell 5.1 on Windows 10

I really abhor the new ad in the PowerShell 5.1 console and it seems there’s no hope of Microsoft making it go away.

After a long, involved Twitter conversation with the community and the PowerShell team that confirmed it’s impossible for the advertisement (?!) to be easily removed, it looks like the only solution is to bypass it. Przemysław Kłys has a great suggestion to emulate the old prompt that totally works!

First, update the PowerShell shortcut in your taskbar (you have one right? 😁) to use -nologo.
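The shortcut's Target field ends up looking like this:

    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoLogo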

Then add the following to your profile (notepad $profile):

Clear-Host
Write-Host "Windows PowerShell"
Write-Host "Copyright (C) Microsoft Corporation. All rights reserved."
Write-Host

The result ultimately looks like the original prompt. Hell yeah.

If you’re wondering about my prompt, you can find it at dbatools.io/prompt.

Posted in PowerShell

Using robocopy to move SQL Server files

When performing file migrations, PowerShell's Copy-Item is not ideal. I've since forgotten the original reasons I, along with the rest of the Internet, concluded this, but this quote from reddit covers it pretty well.

The Copy-Item and Move-Item cmdlets are general purpose commands. They have the ability to handle many different types of structure.

Robocopy, on the other hand, is highly optimized for copy/move/delete on the filesystem. Only on the filesystem.

That specialization is why it is called Robust Copy.

There are a number of options available in robocopy, and for SQL Server's purposes, you can probably get away with just /MIR /COPYALL /DCOPY:DAT. I ended up using a few more, as suggested on Server Fault:

robocopy C:\oldmount\Data C:\Mount\Data /MIR /COPYALL /DCOPY:DAT /Z /J /SL /MT:"$([int]$env:NUMBER_OF_PROCESSORS+1)" /R:1 /W:10 /LOG+:C:\temp\robocopy-log.txt /TEE /XD "Recycler" "Recycled" '$Recycle.bin' "System Volume Information" /XF "pagefile.sys" "swapfile.sys" "hiberfil.sys"
robocopy C:\oldmount\Logs C:\Mount\Logs /MIR /COPYALL /DCOPY:DAT /Z /J /SL /MT:"$([int]$env:NUMBER_OF_PROCESSORS+1)" /R:1 /W:10 /LOG+:C:\temp\robocopy-log.txt /TEE /XD "Recycler" "Recycled" '$Recycle.bin' "System Volume Information" /XF "pagefile.sys" "swapfile.sys" "hiberfil.sys"

Here are descriptions of each parameter, taken from the Server Fault post.

  • /MIR – Mirror source to destination, and delete files and directories on the destination, if they are no longer present on the source
  • /COPYALL – Copy all file info: data, attributes, and timestamps, NTFS Security ACLs, Owner info, Auditing info (not all included by default)
  • /DCOPY:DAT – Copy all directory info – data, attributes, timestamps (original creation timestamp is not copied by default; normally this changes to the date that it was copied by Robocopy)
  • /Z – Use restartable mode
  • /J – Copy using unbuffered I/O (faster copy of large multi-gig files)
  • /SL – Copy symbolic links rather than the target
  • /MT – Use maximum CPU threads (better use of 10 Gb Ethernet and many CPU cores)
  • /R:1 – If file access error, retry 1 time
  • /W:10 – If file access error, wait 10 seconds before retry
  • /LOG+ – Log the output to text file, append if log file already exists
  • /TEE – Print results to screen and to log file
  • /XD – Exclude directories, and everything within them. Names with spaces in them need to be enclosed in quotes: “Recycler” “Recycled” ‘$Recycle.bin’ “System Volume Information”
  • /XF – Exclude files: virtual memory and hibernation files if they happen to be present on the source: “pagefile.sys” “swapfile.sys” “hiberfil.sys”

I changed a couple things from the original post, like the value for maximum CPU threads, and the quotes around $Recycle.bin because I ran this from PowerShell and I needed the value to be literal. I also removed a couple options (like preserving audit history) that I didn’t have permissions to run.

One huge, huge warning about this command: /MIR mirrors directories, which can result in deleted files in the destination. In the past, I've accidentally destroyed data in a lab, so make super duper sure your source and destination are right, and make sure you actually want the directories mirrored and not just appended. I am actually unsure of which parameters would just append 🤔
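That said, the docs describe /MIR as equivalent to /E (copy subdirectories, including empty ones) plus /PURGE (delete destination files that no longer exist in the source), so swapping /MIR for /E should copy without deleting anything. A hedged sketch of the additive variant:

    robocopy C:\oldmount\Data C:\Mount\Data /E /COPYALL /DCOPY:DAT /R:1 /W:10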

Hope that helped! I’m glad to finally get around to documenting this.

Posted in PowerShell, SQL Server