
What I learned from PowerShell Toolmaking part 1: Gathering Metrics

Published on June 28, 2018 by Patrick Lavallée

Let’s talk about code quality

Continuous integration and deployment of typical applications has been a hot topic for several years now, and a plethora of examples exist to help you out. But what about applying the same discipline to ensure the same level of quality for PowerShell scripts?

We have all been down that road with PowerShell. It starts out as a script with several helpful functions automating tedious and repetitive tasks. Along the way, the script gets bigger and is split into several logical units. Eventually, the units start being source controlled. They get organised into a module which can be deployed as a toolbox for helping your organisation’s sysadmins/DevOps. Sounds familiar? It sure does to me!

The above scenario is all fun and games until maintenance is required and a new version of the toolbox needs to be rolled out. How can it be proven that the tools are still working properly? That the source code is still compliant with the organisation’s standards and the business rules? With cold hard automated metrics of course, which can then be used to build dashboards (I love dashboards)!

This article is the first in a two-part series covering metrics gathering via tests and code analysis, and how to fully automate the process using VSTS.

For the sake of this article, I’ve created a simple module. While we won’t dive deep into the actual code, let’s at least look at the anatomy of the module. I am not reinventing the wheel here, it’s just for visual support.

Module Anatomy

  1. The sources containing our logic and the module definition
  2. The test projects with some test data depending on the functionality under test
  3. Utility scripts for VSTS which we will cover in the next blogpost of this series

Also, we will leverage two of my favorite PowerShell modules available on the PowerShell Gallery: Pester as the test engine and PSScriptAnalyzer for code analysis.

Fire up PowerShell as an Administrator and run the following commands:

Install-Module -Name Pester -Scope CurrentUser -Force -AllowClobber
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser -Force -AllowClobber

Metric #1 – Failing tests

Writing tests is all about proving that your system is not failing, and in my honest opinion, tests are the lowest level of documentation of a system. A colleague and good friend of mine once shared these words of wisdom: “Code going into production without any tests is already legacy code”. Run the following command from the root of the module.

Invoke-Pester -OutputFormat NUnitXml -OutputFile ".\Test-Pester.XML"

The result of the execution is the following failing tests (as intended). Logging each test run to an output file ensures the problems resurface in reports.

Failing Tests
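For context, a Pester test file is simply a script ending in `.Tests.ps1`, which Invoke-Pester discovers automatically. Here is a minimal, hypothetical sketch using Pester 4 syntax; the function and file names are invented for illustration and are not part of the sample module. In a real module, the function would live under Sources\ and be dot-sourced from the test file.

```powershell
# Hypothetical function under test -- in a real module this would live in
# Sources\Greetings.ps1 and be dot-sourced from the test file.
function Get-Greeting {
    param([string]$Name)
    if ([string]::IsNullOrWhiteSpace($Name)) { throw 'A name is required.' }
    "Hello, $Name!"
}

# Tests\Greetings.Tests.ps1 -- one Describe block per unit under test
Describe 'Get-Greeting' {
    It 'greets the given name' {
        Get-Greeting -Name 'Patrick' | Should -Be 'Hello, Patrick!'
    }

    It 'throws when no name is supplied' {
        { Get-Greeting -Name '' } | Should -Throw
    }
}
```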

Metric #2 – Code coverage

Code coverage is gathered as a percentage and is a good indicator of how thoroughly the system is tested: high coverage means a lower chance of the module hiding any defects. It can be considered a safety measure more than anything. Since it comes off the shelf with Pester, we might as well take advantage of it!

By adding extra parameters to the Invoke-Pester cmdlet, we can gather two distinct metrics in a single command. Very nice, but a bit cumbersome if your module’s file count grows tenfold!

Invoke-Pester `
  -OutputFormat NUnitXml `
  -OutputFile ".\Test-Pester.XML" `
  -CodeCoverage @('.\Sources\Credentials.ps1', '.\Sources\Deployments.ps1', '.\Sources\Multilingual.ps1') `
  -CodeCoverageOutputFile ".\Coverage-Pester.xml"
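One way around the growing file list is to enumerate the sources at run time instead of hard-coding each path. This is a sketch rather than part of the original module:

```powershell
# Collect every script under Sources\ so newly added files are
# automatically included in the coverage run.
$sources = Get-ChildItem -Path '.\Sources' -Filter '*.ps1' -Recurse |
    Select-Object -ExpandProperty FullName

Invoke-Pester `
    -OutputFormat NUnitXml `
    -OutputFile '.\Test-Pester.XML' `
    -CodeCoverage $sources `
    -CodeCoverageOutputFile '.\Coverage-Pester.xml'
```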

The following screenshot is the result of the above command and demonstrates the code coverage metric, along with some insights regarding missed commands. Those commands are not covered by the test run and are a good indicator of where to start looking to increase the coverage of your codebase. Usually, my floor value hovers around 50%, but that is strictly up to you. The higher the better.

Code Coverage

Metric #3 – Code analysis

Standard code across a solution is a must; it greatly increases your organisation’s productivity when it comes to maintenance. While some people hate having a tool tell them that what they did is wrong, I always take the same approach: don’t kill the messenger. These tools exist to enforce a certain code quality and standard. If a rule doesn’t make sense for your developers or is too time-consuming, you can always turn it off.
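For instance, rules can be switched off per run with PSScriptAnalyzer’s -ExcludeRule parameter. The rule names below are real PSScriptAnalyzer rules, chosen purely as examples:

```powershell
# Analyze the sources while waiving rules the team has agreed to ignore
Invoke-ScriptAnalyzer -Path .\Sources\ `
    -ExcludeRule PSAvoidUsingWriteHost, PSUseShouldProcessForStateChangingFunctions
```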

By executing the following command:

Invoke-ScriptAnalyzer -Path .\Sources\

We get this output:

Code Analysis Output

Technological osmosis

By combining both of our tools, we can automate the whole metric-gathering process. The following code is a simple implementation of embedding PSScriptAnalyzer within a Pester test. As shown in the final assertion, any violated rule will generate a failing test run.

Describe ': Given the PowerShell Scripts Analyzer' {
    BeforeAll {
        Push-Location $PSScriptRoot
        $scriptsToAnalyze = Resolve-Path "..\Sources\"
    }

    AfterAll {
        Pop-Location
    }
       
    Context '-> When analyzing with the standard rules' {
        It "Should NOT violate any rules" {
            # Act
            $violations = Invoke-ScriptAnalyzer -Path $scriptsToAnalyze

            # Assert
            $violations | Should -BeNullOrEmpty
        }
    }
}
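If a bare failure isn’t informative enough, Pester 4’s -Because parameter can surface the offending rules directly in the failure message. A hedged variant of the test above (the path and summary format are illustrative):

```powershell
Describe ': Given the PowerShell Scripts Analyzer' {
    Context '-> When analyzing with the standard rules' {
        It 'Should NOT violate any rules' {
            $violations = Invoke-ScriptAnalyzer -Path "$PSScriptRoot\..\Sources\"

            # Build a readable summary of every violation for the failure message
            $report = ($violations |
                ForEach-Object { "$($_.RuleName) in $($_.ScriptName):$($_.Line)" }) -join '; '

            $violations.Count | Should -Be 0 -Because $report
        }
    }
}
```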

Conclusion

Gathering metrics is essential when it comes to certifying code quality. These metrics can then be fed to a CI pipeline, producing helpful dashboards (yeah!) to inform a build administrator when things go south with your codebase.

Please stay tuned for the upcoming article, where we will integrate the mechanisms covered here into a proper Continuous Integration build machine 🙂
