Things I wish I hadn’t done while writing PowerShell stuff for work

About once a year I work from home like a scumbag for a week to get all my big code re-writes done. Throughout the year I write fixes and add new scripts and functions as needed; during this week I clean everything up, document things properly, add comments, and get everything actually working. That week is now upon me, and there are a bunch of things I wish I didn’t keep having to re-write, or update, or rip out, or whatever.

Writing absolute paths in my scripts.
They’re all run under AD user accounts, from an AD integrated server, working on AD stuff for the most part, using AD integrated DNS, so why aren’t I leveraging DFS? As servers are upgraded; folders, roles, and programs are migrated to different servers; or other hard path changes are otherwise required, I have to go into each of my scripts and update those paths. Pointing them instead at a DFS namespace makes much more sense. As DFS structures are generally permanent, there should just be a private namespace, appropriately secured, for the scripts to peek in.
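As a sketch of what that looks like in practice (the server and namespace names below are made up for illustration, not our actual paths):

```powershell
# Brittle: hard-coded server name breaks when the file role moves
$TemplateDir = "\\FILESERVER03\IT\Scripts\Templates"

# Better: a DFS namespace path; the namespace target can be repointed
# to a new server without touching any of the scripts that use it
$TemplateDir = "\\corp.example.com\scripts\Templates"
```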

Not loading common functions, or other variables from a singular set location/module/file.
This is for the same reason that I keep kicking myself for the absolute path habit. After migrating all our mail roles from one Exchange server to another, I had to go into 30 different scripts and replace the mail server in each of them because I’d used the server’s actual DNS name, not a CNAME for it. Sure, I could just use the CNAME in each of those scripts and keep things pointed at the alias, but I’d still have to swap those out in the future if that alias ever changed. All our common admin functions and aliases are loaded from a central module, so why aren’t I doing the same thing with the common variables and functions in our other scripts?
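A minimal sketch of the idea: keep the shared variables in one file on the scripts share and dot-source it at the top of every script. The file name, paths, and values here are hypothetical examples, not our real ones:

```powershell
# CommonVariables.ps1 -- lives once on the scripts share
$MailServer = "mail.corp.example.com"   # CNAME, never the host's real name
$SmtpFrom   = "noreply@corp.example.com"
$HRShare    = "\\corp.example.com\hr\onboarding"
```

Then each script loads it from the one known location, so a server migration means editing one file instead of thirty:

```powershell
# At the top of each script
. "\\corp.example.com\scripts\CommonVariables.ps1"

Send-MailMessage -SmtpServer $MailServer -From $SmtpFrom `
    -To "helpdesk@corp.example.com" -Subject "Nightly report complete"
```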

Keeping the code so simple that it becomes complicated.
Since I’m the only one at the organization who speaks PowerShell, and our MSP’s staff don’t seem to be as good at reading/writing it as they are at everything else, I’ve stayed away from using anything more advanced than a switch in most of the code that I write. Oftentimes I’ll break something up into multiple pieces so that people really understand what’s going on at a glance. So instead of writing the one line, kind of complex but not complicated:

# Get first name from Excel file and convert to capitalized name
$FirstName = (Get-Culture).TextInfo.ToTitleCase($ExcelSheet.Range("C5").Text.Trim().ToLower())

I end up with something like:

# Get first name from Excel sheet
$RawFirstName = $ExcelSheet.Range("C5").Text
# Trim whitespace
$TrimmedFirstName = $RawFirstName.Trim()
# Convert to Lowercase in case it was all capitalized
$LowerFirstName = $TrimmedFirstName.ToLower()
# Get the TextInfo object used to capitalize names properly
$ToCapital = (Get-Culture).TextInfo
# Capitalize first name
$FirstName = $ToCapital.ToTitleCase($LowerFirstName)

While none of those lines is inherently complex or complicated, tracking what’s going on there requires reading and remembering far more steps, none of which is useful on its own in the overall context.

Since the first option relies on built-in methods, they’re not likely to go away any time soon, and people won’t have to take much time to figure out where it’s pulling the Excel data from in case the form is ever updated.

Not using code signing from the beginning.
If you’ve got an AD integrated CA then signing all the code that’s going to run in your organization is a super simple process and it significantly increases security. If you don’t have an AD integrated CA, you should spend the time to implement one. Once your CA is installed, creating code signing certificates is incredibly easy. You issue one for each of the accounts that will sign the code you’re deploying, then deploy those certificates via GPO to whatever computers are going to be running your code. Signing the code itself is a matter of selecting your cert and running the Set-AuthenticodeSignature cmdlet. Wrap this into a little function, add it to your PowerShell $Profile, and it’s easy to update your code and push it to wherever you’re running it from, such as the aforementioned DFS location. As a lazy admin, I sign and push with just the file name from my base PowerShell window, without either CD’ing into my source dir or having to type the entire path.

function SignAndPush {
        param ($File)
        # Grab the first code signing cert from the current user's store
        $CodeSigningCert = (dir Cert:\CurrentUser\My -CodeSigningCert)[0]
        # Sign the local copy, then push it out to the scripts share
        Set-AuthenticodeSignature $env:LocalScripts\$File -Certificate $CodeSigningCert
        copy $env:LocalScripts\$File $env:RemoteScripts
}

Not setting up my PowerShell environment sooner.
After moving jobs, and having this saved in my drafts folder forever, I started doing all of these things from the start, and the biggest improvements I’ve made have come from configuring my environment. I’ve got a set of common functions that are referenced by basically everything, from log rotation scripts, AD change monitoring, and user management to general sysadmin’ing. I’ve got another set of functions, aliases, and handy little wrappers that PowerShell imports from the DFS scripts share, and then my local $Profile has a bunch of stuff set for just me. Given that one of the most popular things I’ve written is about setting path variables via GPO, I’ve got no idea why it took me so long to apply the same principles within my scripting environment.

When I launch a PowerShell window, it imports the infrastructure scripts, layers on a bunch of variables built from the ones set in the infrastructure script, automatically verifies that none of the scripts in the network share are unsigned, sets a bunch of PowerShell preferences, and adds a handful of personalized aliases. It’s the same thing I’ve been doing in my .bashrc files; it’s just not something I’d bothered with in PowerShell before, and I don’t know why.
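A trimmed-down sketch of what a $Profile like that might contain (the module name and share paths are illustrative assumptions, not my real ones):

```powershell
# $Profile sketch: wire up the shared environment on every launch

# Import the shared infrastructure functions and variables
Import-Module "\\corp.example.com\scripts\Infrastructure.psm1"

# Flag any scripts on the share with missing or invalid signatures
Get-ChildItem "\\corp.example.com\scripts" -Filter *.ps1 -Recurse |
    Get-AuthenticodeSignature |
    Where-Object Status -ne 'Valid' |
    ForEach-Object { Write-Warning "Unsigned or tampered: $($_.Path)" }

# Preferences and personal aliases, just for me
$ErrorActionPreference = 'Stop'
Set-Alias ll Get-ChildItem
```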