Tag Archives: Pester

Unit Testing with Pester : storing complex Mock objects in a JSON file

When unit testing with Pester, mocking is pretty much unavoidable, especially for code related to infrastructure, configuration, or deployment.
We don’t want our unit tests to touch files, a database, the registry, not to mention the internet, do we ?

With Pester’s Mock function, we can isolate our code from this outside (hostile ?) world by faking commands and making them return whatever we want, even a custom object. Unfortunately, creating custom objects from within the Mock is not ideal when dealing with complex Mock objects with nested properties.

Let’s see an example to understand why.

We need to unit test a simple function (Get-ProcessModule) that lists all modules (DLLs) loaded in the process(es) specified by name :

Function Get-ProcessModule {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$True)]
        [string]$Name        
    )
    $Processes = Get-Process -Name $Name

    If ( $Processes ) {
        Foreach ( $Process in $Processes ) {
            $LoadedModules = $Process.Modules

            Foreach ( $LoadedModule in $LoadedModules ) {
                $CustomProps = @{'Name'= $LoadedModule.ModuleName
                                 'Version'= $LoadedModule.ProductVersion
                                 'PreRelease' = $LoadedModule.FileVersionInfo.IsPreRelease
                                }
                $CustomObj = New-Object -TypeName psobject -Property $CustomProps
                $CustomObj
            }
        }
    }
}

 
Nothing fancy here, but notice that we are looking at a property named IsPreRelease which is nested in the FileVersionInfo property which itself is nested within the Modules property of our Process objects.

When unit testing this function, we don’t know which process(es) are running or not, and which DLLs they have loaded. And we don’t want to start new processes just for the sake of testing. So, we will need to Mock Get-Process and return fake process objects with the properties we need, including the IsPreRelease nested property.

It would look like this :
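Here is a sketch (the Describe/Context/It names and the assertion are assumptions, but the nested Mock object matches the structure that Get-ProcessModule expects) :

Describe 'Get-ProcessModule' {

    Context '1 process with matching name' {

        Mock Get-Process {
            [PSCustomObject]@{
                Modules = @(
                    [PSCustomObject]@{
                        ModuleName      = 'Module1FromProcess1'
                        ProductVersion  = '1.0.0.1'
                        FileVersionInfo = [PSCustomObject]@{ IsPreRelease = $False }
                    }
                )
            }
        }
        It 'Returns the version of the loaded module' {
            (Get-ProcessModule -Name 'Any').Version | Should Be '1.0.0.1'
        }
    }
}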

 
While this does work, I’m not a big fan of cluttering the test file with 10 lines of code for every single Mock. Imagine if we had a dozen (or more) different Mock objects to create : this would add up pretty quickly and make the test file quite difficult to follow.

I think we should strive to keep our test files as concise and readable as possible, if we want to realize the benefits of Pester’s DSL : a fairly minimalist syntax that reads almost like a specification. Granted, it’s no Gherkin, but this might be coming.

Also, because these Mock objects are just fake objects with fake property values, they should be considered more as test data than code. So, applying the “separation of concerns” principle, we should probably separate this data from the testing logic and store it in a distinct file.

Being a PowerShell kind of guy, my first choice was to use a standard PowerShell data file (.psd1). Let’s see how this works out :

@{
    Process1 =  [PSCustomObject]@{ 
        Modules = @( @{
            ModuleName = 'Module1FromProcess1';
            ProductVersion = '1.0.0.1';
            FileVersionInfo = @{
                IsPreRelease = $False
            }
        } );
    }
}

 
We have to specify the type PSCustomObject, otherwise it would be a hashtable when imported from the file back into PowerShell. Unfortunately, Import-PowerShellDataFile doesn’t like that :
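For reference, here is what the failing call looks like (a sketch, assuming the data above is saved as MockObjects.psd1; the exact error text is omitted) :

C:\> Import-PowerShellDataFile -Path .\TestData\MockObjects.psd1
# Throws an error : the [PSCustomObject] cast is not allowed in this language mode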

 
This is because, to safely import data, the cmdlet Import-PowerShellDataFile works in RestrictedLanguage mode. And in this mode, casting to a PSCustomObject (or to any type, for that matter) is forbidden.

We could use Invoke-Expression instead of Import-PowerShellDataFile, but we’ve been told that Invoke-Expression is evil, so we should probably look for another option.

I heard that JSON is a nice and lightweight format to store data, so let’s try to use it to store our Mock objects. Here is the solution I came up with to represent Mock objects as JSON :

{
    "Get-Process": [
        {
            "1ProcessWithMatchingName": {
                "Modules": {
                    "ModuleName": "Module1FromProcess1",
                    "ProductVersion": "1.0.0.1",
                    "FileVersionInfo": {
                        "IsPreRelease": false
                    }
                }
            }
        },
        {
            "2ProcessesWithMatchingName": [
                {
                    "Modules": {
                        "ModuleName": "Module1FromProcess1",
                        "ProductVersion": "1.0.0.1",
                        "FileVersionInfo": {
                            "IsPreRelease": false
                        }
                    }
                },
                {
                    "Modules": {
                        "ModuleName": "Module1FromProcess2",
                        "ProductVersion": "2.0.0.1",
                        "FileVersionInfo": {
                            "IsPreRelease": true
                        }
                    }
                }
            ]
        }
    ]
}

 
NOTE : For “true” and “false” to be treated as proper boolean values, they have to be all lower case.

The data is organized hierarchically, as follows :

  1. The top level is the name of the mocked command (Get-Process in this case)
  2. The next level describes each scenario (or test case)
  3. The inner level is the actual object(s) that we want the mocked command to return, in this specific scenario.

As we can see above, the second scenario (labelled “2ProcessesWithMatchingName”) returns an array of 2 objects. We could make it return 3, or more, if we wanted to. We could also have multiple modules in some of our fake processes, but for illustration purposes, the above is enough.

We can import this data back into PowerShell with ConvertFrom-Json and explore the objects it contains, and their properties using what I call “dot-browsing” :

C:\> $JsonMockData = Get-Content .\TestData\MockObjects.json -Raw
C:\> $Mocks = ConvertFrom-Json $JsonMockData
C:\> $2ndTestCase = $Mocks.'Get-Process'.'2ProcessesWithMatchingName'
C:\> $2ndTestCase.Modules

ModuleName          ProductVersion FileVersionInfo
----------          -------------- ---------------
Module1FromProcess1 1.0.0.1        @{IsPreRelease=False}
Module1FromProcess2 2.0.0.1        @{IsPreRelease=True}


C:\> $2ndTestCase.Modules.FileVersionInfo.IsPreRelease
False
True
   

 
Now, let’s see how we can use this in our tests :
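Here is a sketch of what the tests could look like (the Describe/Context/It names and assertions are assumptions) :

$JsonMockData = Get-Content -Path .\TestData\MockObjects.json -Raw
$Mocks = ConvertFrom-Json $JsonMockData

Describe 'Get-ProcessModule' {

    Context '1 process with matching name' {

        $ContextMock = $Mocks.'Get-Process'.'1ProcessWithMatchingName'
        Mock Get-Process { $ContextMock }

        It 'Returns the name of the loaded module' {
            (Get-ProcessModule -Name 'Any').Name | Should Be $ContextMock.Modules.ModuleName
        }
        It 'Returns the version of the loaded module' {
            (Get-ProcessModule -Name 'Any').Version | Should Be $ContextMock.Modules.ProductVersion
        }
    }
    Context '2 processes with matching name' {

        # Filtering out the $Null items that ConvertFrom-Json tends to add to arrays
        $ContextMock = $Mocks.'Get-Process'.'2ProcessesWithMatchingName' | Where-Object { $_ }
        Mock Get-Process { $ContextMock }

        It 'Returns one object per loaded module' {
            (Get-ProcessModule -Name 'Any').Count | Should Be 2
        }
    }
}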

 
Within each Context block, we get the Mock for a specific scenario that we have defined in our JSON data and store it into the ContextMock variable. Then, to define our Mock, we just specify that its return value is our ContextMock variable.

We can even use the ContextMock variable to get the expected values for the Should assertions, like in the first 2 tests above.

You might be wondering why the hell I would filter $ContextMock with Where-Object { $_ } in the second Context block. Well, this is because importing arrays from JSON to PowerShell has a tendency to add $Null items in the resulting array.

In this case, $ContextMock contained 3 objects : the 2 fake process objects, as expected, and a $Null element. Why ? I have no idea, but I was able to get rid of it with the Where-Object statement above.

As we can see, it makes the tests cleaner and allows us to define Mocks in an expressive way, so overall, I think this is a nice solution to manage complex Mock data.

That said, unit testing is still a relatively new topic in the PowerShell community, and I haven’t heard or read anything on best practices around test data. So I’m curious, how do you guys handle Mock objects and more generally, test data ? Do you have any tips or techniques ?

Making PSScriptAnalyzer a first-class citizen in a PowerShell CI pipeline

As you already know if you have read this or this, I’m a big fan of PSScriptAnalyzer to maintain a certain standard of coding style and quality. Where this is especially powerful is inside a continuous integration pipeline because this allows us to enforce that coding standard.

In our CI pipeline, we can easily make the build fail if our code violates one or more PSScriptAnalyzer rule(s). That’s great, but the main point of continuous integration is to give quick feedback to developers about their code change(s). Continuous integration is about catching problems early to fix them early. So, Green/Red or Pass/Fail is OK, but providing meaningful information about a problem to help remediate it is better. And pretty darn important.

So now, the question is :

How can we make our CI tool publish PSScriptAnalyzer results with the information we need to remediate any violation ?

All CI tools have ways to publish test results to make them highly visible, to drill down into a test failure, and even do some reporting on these test results. Since we are talking about a PowerShell pipeline, we are most likely already using Pester to test our PowerShell code. Pester can spit out results in the same XML format as NUnit and these NUnit XML files can be consumed and published by most CI tools.

It makes a lot of sense to leverage this Pester integration as a universal CI glue and run our PSScriptAnalyzer checks as Pester tests. Let’s look at possible ways to do that.

One Pester test checking if the PSScriptAnalyzer result is null :

This is probably the simplest way to invoke PSScriptAnalyzer from Pester :

Describe 'PSScriptAnalyzer analysis' {
    
    $ScriptAnalyzerResults = Invoke-ScriptAnalyzer -Path ".\ExampleScript.ps1" -Severity Warning
    
    It 'Should not return any violation' {
        $ScriptAnalyzerResults | Should BeNullOrEmpty
    }
}
  

 
Here, we are checking all the rules which have a “Warning” severity within one single test. Then, we rely on the fact that if PSScriptAnalyzer returns something, it means that there was at least one violation, and if PSScriptAnalyzer returns nothing, it’s all good.

There are 2 problems here :

  • We are evaluating a whole bunch of rules in a single test, so the test name cannot tell us which rule was violated
  • As soon as there is more than one violation, the Pester message gives us useless information

How useless ? Well, let’s see :

useless-pester-stacktrace
 
The Pester failure message gives us the object type of the PSScriptAnalyzer results, instead of their contents. This does not provide what we need to locate and remediate the problem, like the name of the file which violated the rule and the line number in that file where the violation is located.

One Pester test per PSScriptAnalyzer rule :

This is a pretty typical (and better) way of running PSScriptAnalyzer checks via Pester.

Describe 'PSScriptAnalyzer analysis' {
    
    $ScriptAnalyzerRules = Get-ScriptAnalyzerRule -Name "PSAvoid*"

    Foreach ( $Rule in $ScriptAnalyzerRules ) {

        It "Should not return any violation for the rule : $($Rule.RuleName)" {
            Invoke-ScriptAnalyzer -Path ".\ExampleScript.ps1" -IncludeRule $Rule.RuleName |
            Should BeNullOrEmpty
        }
    }
}
  

 
In this case, the first step is to get a list of the rules that we want to evaluate. Here, I changed the list of rules to : all rules which have a name starting with “PSAvoid”.
This is just to show that we can filter the rules by name, as well as by severity.

Then, we loop through this list of rules and have a Pester test evaluating each rule, one by one. As we can see below, the output is much more useful :

psscriptanalyzer-by-rule
 
This is definitely better, but we still encounter the same issue as before because there was more than one violation for that “PSAvoidUsingWMICmdlet” rule. So we still don’t get the file name and the line number.

We could use a nested loop : for each rule, we would loop through each file and evaluate that rule against each file one-by-one. That would be more granular and reduce the risk of this particular issue. But if a single file violated the same rule more than once, we would still have the same problem.
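For illustration, such a nested loop could look like this (a sketch; the file list and paths are assumptions) :

Describe 'PSScriptAnalyzer analysis' {

    $ScriptAnalyzerRules = Get-ScriptAnalyzerRule -Name "PSAvoid*"
    $ScriptFiles = Get-ChildItem -Path ".\" -Filter "*.ps1" -Recurse

    Foreach ( $Rule in $ScriptAnalyzerRules ) {
        Foreach ( $File in $ScriptFiles ) {

            It "$($File.Name) should not violate the rule : $($Rule.RuleName)" {
                Invoke-ScriptAnalyzer -Path $File.FullName -IncludeRule $Rule.RuleName |
                Should BeNullOrEmpty
            }
        }
    }
}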

So, I decided to take a different direction to address this problem : taking the output from PSScriptAnalyzer and converting it to a test result file, using the same XML schema as Pester and NUnit.

Converting PSScriptAnalyzer output to a test result file :

For that purpose, I wrote a function named Export-NUnitXml, which is available on GitHub in this module.

Here are the high-level steps of what Export-NUnitXml does :

  • Take the output of PSScriptAnalyzer as its input (zero or more objects of the type [Microsoft.Windows.PowerShell.ScriptAnalyzer.Generic.DiagnosticRecord])
  • Create an XML document containing a “test-case” node for each input object.
  • Write this XML document to the file specified via the Path parameter.

Here is an example of how we can use this within a build script (in Appveyor.com as the CI tool, in this case) :

$ScriptAnalyzerRules = Get-ScriptAnalyzerRule -Severity Warning
$ScriptAnalyzerResult = Invoke-ScriptAnalyzer -Path ".\CustomPSScriptAnalyzerRules\ExampleScript.ps1" -IncludeRule $ScriptAnalyzerRules
If ( $ScriptAnalyzerResult ) {

    # Displaying the violations in the build console output
    $ScriptAnalyzerResultString = $ScriptAnalyzerResult | Out-String
    Write-Warning $ScriptAnalyzerResultString
}
Import-Module ".\Export-NUnitXml\Export-NUnitXml.psm1" -Force
Export-NUnitXml -ScriptAnalyzerResult $ScriptAnalyzerResult -Path ".\ScriptAnalyzerResult.xml"

# Uploading the test result file to the current Appveyor job
(New-Object 'System.Net.WebClient').UploadFile("https://ci.appveyor.com/api/testresults/nunit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\ScriptAnalyzerResult.xml))

If ( $ScriptAnalyzerResult ) {        
    # Failing the build
    Throw "Build failed because there were one or more PSScriptAnalyzer violations. See test results for more information."
}
   

 
And here is the result in Appveyor :

appveyor-overview
 
Just by reading the name of the test case, we get the essential information : the rule name, the file name and even the line number. Pretty nice, huh ?

Also, we can expand any failed test (by clicking on it) to get additional information. For example, the last 2 tests are expanded below :

appveyor-test-details
 
The “Stacktrace” section provides additional details, like the rule severity and the actual offending code. Another nice touch is that the “Message” section gives us the rule message, which normally provides an actionable recommendation to remediate the problem.

But, what if PSScriptAnalyzer returns nothing ?
Export-NUnitXml does handle this scenario gracefully because its ScriptAnalyzerResult parameter accepts $Null.
In this case, the test result file will contain only one test case and this test passes.

Let’s test this :

Import-Module -Name 'PsScriptAnalyzer' -Force
Import-Module ".\Export-NUnitXml\Export-NUnitXml.psm1" -Force
Export-NUnitXml -ScriptAnalyzerResult $Null -Path ".\ScriptAnalyzerResult.xml"

(New-Object 'System.Net.WebClient').UploadFile("https://ci.appveyor.com/api/testresults/nunit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\ScriptAnalyzerResult.xml))
  

 
Here is what it looks like in Appveyor:

appveyor-passed-psscriptanalyzer-tests
 
There’s nothing more beautiful than a green test…

So now, as developers, we not only have quick feedback on our adherence to coding standards, but we also get actionable guidance on how to improve.
And remember, this NUnit XML format is widely supported in the CI/CD world, so even though I only showed Appveyor, this would work similarly in TeamCity, Microsoft VSTS, and others…

Using Pester to validate deployment readiness for a large number of machines

Recently, I had to roll out an upgrade of our software for a customer. The upgrade failed for about 80 client machines (out of around 400). There was a lot of head-scratching and quite a few “It was working in the Test environment !”. Because we couldn’t afford much more downtime for end-users, I had to develop an ugly workaround to allow machines to upgrade. But even so, this upgrade rollout went well over the planned maintenance window.

In short, it was a pain. And you know what ?
Pains are great learning opportunities and powerful incentives to take action. The first lesson was that there was a multitude of different causes which boiled down to misconfiguration, due to inconsistently managed client machines.

The second lesson was that we needed some kind of tool to validate the upgrade (or deployment) readiness of a bunch of machines, to prevent this kind of mess in the future. This tool would allow us to check whether all the machines meet the prerequisites for a new deployment or upgrade before rolling it out. It should also provide a nice, visual report so that non-technical stakeholders can see :

  • The overall number and percentage of machines not ready
  • Which machines are ready
  • Which ones are not ready
  • Which prerequisites (and prerequisite categories) are met
  • Which prerequisites (and prerequisite categories) are not met

The report should also allow technical stakeholders to drill down to see for a specific machine which prerequisite(s) were not met and why.

Knowing that Pester can be used to validate the operation of a system, I figured I could build a tool leveraging Pester tests to validate prerequisites. So I wrote the DeploymentReadinessChecker PowerShell module and made it available on the PowerShell Gallery. Yes, anyone can use it because it is designed as BYOPS (Bring Your Own Pester Script).

Regarding the HTML report, I didn’t reinvent the wheel, this is using a great utility named ReportUnit.

Basic usage :

First, we need :

  • PowerShell 4.0 (or later).
  • The Pester module should be installed on the machine from which we run DeploymentReadinessChecker.
  • A Pester script containing tests for the prerequisites we want to validate and optionally, “Describe” blocks to group tests in “prerequisite categories”.
  • If the above Pester script (validation script) takes parameters, the values for the parameters we need to pass to it.
  • A list of computer names for the machines we want to check the prerequisites against.
  • Credentials to connect to all the target machines.

Now that we have everything we need, let’s get to it.

The module comes with an example validation script : Example.Tests.ps1 and that is what we are going to use here. For your own deployments or upgrades, you will need a validation script containing tests for your own prerequisites : hardware prerequisites, OS requirements, runtime or other software dependencies, network connectivity prerequisites… whatever you need.

Here are a few examples from the first 2 “Describe” blocks of Example.Tests.ps1 :

Describe 'Hardware prerequisites' -Tag 'Hardware' {
    
    It 'Has at least 4096 MB of total RAM' {

        Invoke-Command -Session $RemoteSession {
        ((Get-CimInstance -ClassName Win32_PhysicalMemory).Capacity | Measure-Object -Sum).Sum / 1MB } |
        Should Not BeLessThan 4096
    }
}
Describe 'Networking prerequisites' -Tag 'Networking' {

    It 'Can ping the Management server by name' {

        Invoke-Command -Session $RemoteSession { param($ManagementServerName)
        Test-Connection -ComputerName $ManagementServerName -Quiet } -ArgumentList $ManagementServerName |
        Should Be $True
    }
    It 'Can ping the Deployment server by name' {

        Invoke-Command -Session $RemoteSession { param($DeploymentServerName)
        Test-Connection -ComputerName $DeploymentServerName -Quiet } -ArgumentList $DeploymentServerName |
        Should Be $True
    }
    It 'Has connectivity to the Management server on TCP port 80' {

        Invoke-Command -Session $RemoteSession { param($ManagementServerName)
        (Test-NetConnection -ComputerName $ManagementServerName -CommonTCPPort HTTP).TcpTestSucceeded } -ArgumentList $ManagementServerName |
        Should Be $True
    }
    It 'Has the firewall profile set to "Domain" or "Private"' {

        Invoke-Command -Session $RemoteSession {
        $FirewallProfile = (Get-NetConnectionProfile)[0].NetworkCategory.ToString();
        $FirewallProfile -eq 'Domain' -or $FirewallProfile -eq 'Private' } |
        Should Be $True
    }
}

As we can see, it is the validation script’s responsibility to handle the remoting to the target machines.
The validation script should be located in $Module_Folder\ReadinessValidationScript\, for example : C:\Program Files\WindowsPowerShell\Modules\DeploymentReadinessChecker\1.0.0\ReadinessValidationScript\Example.Tests.ps1.

Also, its extension should be “.Tests.ps1” because that’s what Invoke-Pester looks for.

There is no support for multiple validation scripts, so before adding your own validation script in there, rename Example.Tests.ps1 by changing its extension to something other than “.Tests.ps1”. This is to ensure that the example script is ignored by Invoke-Pester.
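For example, something like this would do (a sketch, using the module path mentioned above) :

C:\> Rename-Item -Path "C:\Program Files\WindowsPowerShell\Modules\DeploymentReadinessChecker\1.0.0\ReadinessValidationScript\Example.Tests.ps1" -NewName 'Example.Tests.ps1.bak'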

UPDATE :
I added support for multiple validation scripts being present in $Module_Folder\ReadinessValidationScript\.
Test-DeploymentReadiness can only invoke one validation script at a time, but if there is more than one validation script present, a dynamic parameter named ValidationScript is made available (mandatory, even) to specify the name of the validation script.

It is highly recommended to group related tests into distinct and meaningful “Describe” blocks because, as we’ll see later on, some items in the report are displayed on a per-Describe block basis.

Optionally, “Describe” blocks can have tags and the tool can use these tags to include or exclude some tests, just like Invoke-Pester does.

The module contains a single cmdlet : Test-DeploymentReadiness.

Our computer names list can be fed to the -ComputerName parameter at the command line, from a file, or via pipeline input. For example, for a single computer, this could look like :

C:\> Test-DeploymentReadiness -ComputerName Computer1 -Credential $Cred -OutputPath $env:USERPROFILE\Desktop\Readiness\ |
Invoke-Item

Here is the console output :

Simple example with Invoke-Item

So we get the normal output from Invoke-Pester for each target machine specified via the -ComputerName parameter and a little bit more text at the end. All of this is just written to the console (using Write-Host) but it outputs a single object to the pipeline : a FileInfo object for the Index.html of the report. That way, if we want instant gratification, we can directly open the report in our default browser by piping the output of Test-DeploymentReadiness to Invoke-Item, as seen above.

Of course, it generates a bunch of files as well. These are generated in the current directory by default, or in the directory specified via the -OutputPath parameter. Invoke-Pester generates one test result file (.xml) per target machine and ReportUnit.exe generates one HTML report per target machine, plus the overall report Index.html. To view the report, we only need to open the Index.html because it has links to the machine-specific files if we want to drill down to the per-machine reports.

Filtering the tests using tags :

As said earlier, all the Pester tests representing the prerequisites should be in a single validation script, so we can potentially end up with a script containing a very large number of tests. To make this more modular and flexible, we can group tests related to the same topic, purpose, or component into distinct “Describe” blocks and give these “Describe” blocks some tags.

Then, Test-DeploymentReadiness can include only the tests contained in the “Describe” blocks which have the tag(s) specified via the -Tag parameter. Let’s see what it looks like :

Simple example with tag
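As a sketch (reusing the 'Networking' tag from the example validation script), such a call might look like this :

C:\> Test-DeploymentReadiness -ComputerName Computer1 -Credential $Cred -OutputPath $env:USERPROFILE\Desktop\Readiness\ -Tag 'Networking'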

Similarly, we can exclude the tests contained in the “Describe” blocks which have the tag(s) specified via the -ExcludeTag parameter.

Passing parameters to the validation script :

It is more than likely that the Pester-based validation script takes parameters, especially since it remotes into the target machines, so it may need a -ComputerName and a -Credential parameter. If your validation script has parameter names matching “ComputerName” or “Credential“, then Test-DeploymentReadiness does a bit of work for you.

If the validation script has a ComputerName parameter, Test-DeploymentReadiness passes one computer at a time to its ComputerName parameter, via the Script parameter of Invoke-Pester.

If the validation script has a Credential parameter, Test-DeploymentReadiness passes the value of its own Credential parameter to the validation script, via the Script parameter of Invoke-Pester.
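Under the hood, I assume this translates into an Invoke-Pester call along these lines (a sketch of the idea, not the module's actual code; the variable names are made up) :

$ValidationScript = Join-Path $ModuleFolder 'ReadinessValidationScript\Example.Tests.ps1'
Invoke-Pester -Script @{
    Path       = $ValidationScript
    Parameters = @{ ComputerName = $Computer; Credential = $Credential }
} -OutputFile (Join-Path $OutputPath "$Computer.xml") -OutputFormat NUnitXml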

Cool, but what about any other parameters ?
That’s where the -TestParameters parameter comes in. The parameter names and values can be passed as a hashtable to the -TestParameters parameter of Test-DeploymentReadiness. Then, Test-DeploymentReadiness passes these into the Script parameter of Invoke-Pester, when calling the validation script.

The example validation script Example.Tests.ps1 takes quite a few parameters, among them DeploymentServerName and ManagementServerName. We can pass values to these 2 parameters, like so :

C:\> $TestParameters = @{ DeploymentServerName = 'DeplServer1'
                          ManagementServerName = 'Mgmtserver1'
                        }
C:\>
C:\> 'Computer1','Computer2','Computer3','Computer4','Computer5' |
Test-DeploymentReadiness -Credential $Cred -OutputPath $env:USERPROFILE\Desktop\Readiness\ -TestParameters $TestParameters
   

 

The Reports :

As mentioned earlier, we only need to open the generated Index.html and this opens the overview report. After running the above command, here is what this looks like :

Overview Report

“Fixture summary” gives us the number of ready machines and not-so-ready machines, whereas the “Pass percentage” gives us the percentage of machines which are ready.

We can see that Computer4 is the only machine which failed more than 1 prerequisite. We can see what’s going on with it in more detail by clicking on the link named “Computer4” :

Computer4 Report

We can clearly see 4 distinct prerequisite categories, which correspond to the “Describe” blocks in our validation script. Here, “Fixture summary” tells us which prerequisite categories contained at least one failed prerequisite. In this case, there were 2.

Let’s check which Networking prerequisite(s) were not met by clicking on “Networking prerequisites” :

Network Prerequisites Details

So now, we have a good idea of what the issue is (the actual usefulness of the Pester error message will depend on how the test was written).

Pretty neat, huh ? I can see this saving me hours and hours of work, and considerably reducing the maintenance windows in future deployments and upgrades.

If this tool doesn’t exactly fit your needs, or if you think of an enhancement, the code is on GitHub : feel free to submit an issue, or even better, to fork it and improve it.

A Boilerplate for Unit testing DSC resources with Pester

Unit testing PowerShell code is slowly but surely becoming mainstream. Pester, the awesome PowerShell testing framework is playing a big part in that trend.
But why the hell would you write more PowerShell code to test your PowerShell code ? Because :

  • It can give you a better understanding of your code, its design, its assumptions and its behaviour.
     
  • When you make changes and the unit tests pass, you can be pretty confident that you didn’t break anything.
    This makes changes less painful and scary and this is a very important notion in DevOps : removing fear and friction to make changes painless, easy, fast and even … boring.
     
  • It helps write more robust, less buggy code.
     
  • Given the direction the PowerShell community is taking and the way the DevOps movement is permeating the IT industry, this is becoming a valuable skill.
     
  • There is an initial learning curve and it takes time, effort and discipline, but if you do it often enough, it can quickly become second nature.
     

To help reduce this time and effort, I wanted to build a Pester script template which could be reused for unit testing any DSC resource. After all, DSC resources have a number of specific requirements and best practices, for example : Get-TargetResource should return a hashtable, and Test-TargetResource should return a boolean… So we can write tests for all these requirements, and these tests can be readily reused for any other (non class-based) DSC resource.

Without further ado, here is the full script (which is also available on GitHub) and then we’ll elaborate on the main bits and pieces :

$Global:DSCResourceName = 'My_DSCResource'  #<----- Just change this

Import-Module "$($PSScriptRoot)\..\..\DSCResources\$($Global:DSCResourceName)\$($Global:DSCResourceName).psm1" -Force

# Helper function to list the names of mandatory parameters of *-TargetResource functions
Function Get-MandatoryParameter {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$CommandName
    )
    $GetCommandData = Get-Command "$($Global:DSCResourceName)\$CommandName"
    $MandatoryParameters = $GetCommandData.Parameters.Values | Where-Object { $_.Attributes.Mandatory -eq $True }
    return $MandatoryParameters.Name
}

# Getting the names of mandatory parameters for each *-TargetResource function
$GetMandatoryParameter = Get-MandatoryParameter -CommandName "Get-TargetResource"
$TestMandatoryParameter = Get-MandatoryParameter -CommandName "Test-TargetResource"
$SetMandatoryParameter = Get-MandatoryParameter -CommandName "Set-TargetResource"

# Splatting parameters values for Get, Test and Set-TargetResource functions
$GetParams = @{
    
}
$TestParams = @{
    
}
$SetParams = @{
    
}

Describe "$($Global:DSCResourceName)\Get-TargetResource" {
    
    $GetReturn = & "$($Global:DSCResourceName)\Get-TargetResource" @GetParams

    It "Should return a hashtable" {
        $GetReturn | Should BeOfType System.Collections.Hashtable
    }
    Foreach ($MandatoryParameter in $GetMandatoryParameter) {
        
        It "Should return a hashtable with key named $MandatoryParameter" {
            $GetReturn.ContainsKey($MandatoryParameter) | Should Be $True
        }
    }
}

Describe "$($Global:DSCResourceName)\Test-TargetResource" {
    
    $TestReturn = & "$($Global:DSCResourceName)\Test-TargetResource" @TestParams

    It "Should have the same mandatory parameters as Get-TargetResource" {
        # Does not check for $True or $False but uses the output of Compare-Object.
        # That way, if this test fails Pester will show us the actual difference(s).
        (Compare-Object $GetMandatoryParameter $TestMandatoryParameter).InputObject | Should Be $Null
    }
    It "Should return a boolean" {
        $TestReturn | Should BeOfType System.Boolean
    }
}

Describe "$($Global:DSCResourceName)\Set-TargetResource" {
    
    $SetReturn = & "$($Global:DSCResourceName)\Set-TargetResource" @SetParams

    It "Should have the same mandatory parameters as Test-TargetResource" {
        (Compare-Object $TestMandatoryParameter $SetMandatoryParameter).InputObject | Should Be $Null
    }
    It "Should not return anything" {
        $SetReturn | Should Be $Null
    }
}

 
That’s a lot of information so let’s break it down into more digestible chunks :

$Global:DSCResourceName = 'My_DSCResource'  #<----- Just change this

 
The “My_DSCResource” string is the only part of the entire script which needs to be changed from one DSC resource to another. All the rest can be reused for any DSC resource.

Import-Module "$($PSScriptRoot)\..\..\DSCResources\$($Global:DSCResourceName)\$($Global:DSCResourceName).psm1" -Force

The relative path to the module containing the DSC resource is derived from a standard folder structure, with a “Tests” folder at the root of the module and a “Unit” subfolder, containing the resulting unit tests script, for example :

O:\> tree /F "C:\Git\FolderPath\DscModules\DnsRegistration"
Folder PATH listing for volume OS

│   DnsRegistration.psd1
│
├───DSCResources
│   └───DnsRegistration
│       │   DnsRegistration.psm1
│       │   DnsRegistration.schema.mof
│       │
│       └───ResourceDesignerScripts
│               GenerateDnsRegistrationSchema.ps1
│
└───Tests
    └───Unit
            DnsRegistration.Tests.ps1

 
We load the module because we’ll need to use the 3 functions it contains : Get-TargetResource, Set-TargetResource and Test-TargetResource.

By the way, note that this script is divided into 3 Describe blocks, following a more or less established convention in unit testing with Pester : one Describe block per tested function. The “Force” parameter of Import-Module is there to make sure that, even if the module was already loaded, we get the latest version of the module.

Function Get-MandatoryParameter {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$CommandName
    )
    $GetCommandData = Get-Command "$($Global:DSCResourceName)\$CommandName"
    $MandatoryParameters = $GetCommandData.Parameters.Values | Where-Object { $_.Attributes.Mandatory -eq $True }
    return $MandatoryParameters.Name
}

 
This is a helper function used to get the mandatory parameter names for the *-TargetResource functions. If you use more than a few helper functions in your unit tests, then you should probably gather them in a separate script or module.
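For example, a hypothetical TestHelpers.ps1 sitting next to the test script could be dot-sourced at the top of the file (a sketch) :

# Dot-sourcing a (hypothetical) helper script containing Get-MandatoryParameter and any other helpers
. "$PSScriptRoot\TestHelpers.ps1"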

# Splatting parameters values for Get, Test and Set-TargetResource functions
$GetParams = @{
     
}
$TestParams = @{
     
}
$SetParams = @{
     
}

 
These are placeholders to be completed with the parameters and values for Get-TargetResource, Test-TargetResource and Set-TargetResource, respectively. Splatting makes them more readable, especially for resources that have many parameters. We might use the same parameters and parameter values for all 3 functions; in that case, we can consolidate these 3 hashtables into a single one.
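For instance, for a hypothetical resource with Ensure and Path parameters, the consolidated version could look like this (the parameter names and values are assumptions) :

# If Get, Test and Set-TargetResource all take the same parameters,
# a single hashtable can be splatted to all 3 functions
$ResourceParams = @{
    Ensure = 'Present'
    Path   = 'C:\Example'
}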

$GetReturn = & "$($Global:DSCResourceName)\Get-TargetResource" @GetParams

 
Specifying the resource name along with the function name allows us to unambiguously call the Get-TargetResource function from the DSC resource we are currently testing, and not the one from another resource.

It "Should return a hashtable" {
        $GetReturn | Should BeOfType System.Collections.Hashtable
    }

 
The first actual test ! This is validating that Get-TargetResource returns an object of the type [hashtable]. The “BeOfType” operator is designed specifically for verifying the type of an object, so it’s a great fit.

Foreach ($MandatoryParameter in $GetMandatoryParameter) {
        
        It "Should return a hashtable with key named $MandatoryParameter" {
            $GetReturn.ContainsKey($MandatoryParameter) | Should Be $True
        }
    }

 
An article from the PowerShell Team says this :

The Get-TargetResource returns the status of the modeled entities in a hash table format. This hash table must contain all properties, including the Read properties (along with their values) that are defined in the resource schema.

I’m not sure this is a hard requirement because it is not enforced, and Get-TargetResource is not automatically called by the DSC engine. So this may not be ideal, but we get the names of the mandatory parameters of Get-TargetResource and we check that the hashtable it returns has a key matching each of these parameters. Maybe we could check against all parameters, not just the mandatory ones ?
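Checking against all parameters (minus the common ones) could look something like this (a sketch, not part of the boilerplate) :

$CommonParameters = [System.Management.Automation.PSCmdlet]::CommonParameters
$AllGetParameters = (Get-Command "$($Global:DSCResourceName)\Get-TargetResource").Parameters.Keys |
Where-Object { $_ -notin $CommonParameters }

Foreach ($ParameterName in $AllGetParameters) {

    It "Should return a hashtable with key named $ParameterName" {
        $GetReturn.ContainsKey($ParameterName) | Should Be $True
    }
}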

Now, let’s turn our attention to Test-TargetResource :

    $TestReturn = & "$($Global:DSCResourceName)\Test-TargetResource" @TestParams

    It "Should have the same mandatory parameters as Get-TargetResource" {
        (Compare-Object $GetMandatoryParameter $TestMandatoryParameter).InputObject | Should Be $Null
    }

 
This test is validating that the mandatory parameters of Test-TargetResource are the same as for Get-TargetResource. There is a PSScriptAnalyzer rule for that, with an “Error” severity, so we can safely assume that this is a widely accepted and important best practice :

GetSetTest Parameters
 
Reading the name of this “It” block, we could assume that it is checking against $True or $False. But here, we use Compare-Object and validate that there is no difference between the 2 lists of mandatory parameters. This is to make the message we get in case the test fails more useful : it will tell us the offending parameter name(s).

    It "Should return a boolean" {
        $TestReturn | Should BeOfType System.Boolean
    }

 
The function Test-TargetResource should always return a boolean. This is a well known requirement and this is also explicitly specified in the templates generated by xDSCResourceDesigner, so there is no excuse for not knowing/following this rule.

Now, it is time to test Set-TargetResource :

    It "Should have the same mandatory parameters as Test-TargetResource" {
        (Compare-Object $TestMandatoryParameter $SetMandatoryParameter).InputObject | Should Be $Null
    }

 
The same as before, but this time we validate that the mandatory parameters of the currently tested function (Set-TargetResource) are the same as for Test-TargetResource.

    It "Should not return anything" {
        $SetReturn | Should Be $Null
    }

 
Set-TargetResource should not return anything. Again, you don’t have to take my word for it, PSScriptAnalyzer is our source of truth :

Set should not return anything
 
That’s it for the script. But a boilerplate is more useful when it is readily available as a snippet in your IDE of choice, so I also converted this boilerplate into a Visual Studio Code snippet : it is the first snippet in the custom snippet file I made available here.

The path of the Visual Studio Code PowerShell snippet file is : %APPDATA%\Code\User\snippets\PowerShell.json.
Or, for those of us using the PowerShell extension, we can modify the following file : %USERPROFILE%\.vscode\extensions\ms-vscode.PowerShell-0.6.1\snippets\PowerShell.json.

Obviously, this set of tests is pretty basic and doesn’t cover the code written specifically for a given resource, but it’s a pretty good starting point. It allows us to write basic unit tests for our DSC resources in just a few minutes, so now, there’s no excuse for not doing it.