Tag Archives: DSC

Merging data from 2 PowerShell DSC configuration data files

As you probably already know, when writing a DSC configuration, separating the environmental data from the configuration logic is a best practice. So all the environment-specific data gets stored in separate (typically .psd1) files. If you work with PowerShell DSC at medium-to-large scale, you (hopefully) have separate configuration data files for each customer and each environment.

Something like this, for example :


C:\TEST                                                    
│   Common_ConfigDataServer.psd1                           
│                                                          
├───Customer A                                             
│   ├───Production                                         
│   │       ConfigDataServer.psd1                          
│   │                                                      
│   ├───Staging                                            
│   │       ConfigDataServer.psd1                          
│   │                                                      
│   └───Test                                               
│           ConfigDataServer.psd1                          
│                                                          
├───Customer B                                             
│   ├───Production                                         
│   │       ConfigDataServer.psd1                          
│   │                                                      
│   ├───Staging                                            
│   │       ConfigDataServer.psd1                          
│   │                                                      
│   └───Test                                               
│           ConfigDataServer.psd1                          
│                                                          
└───Customer C                                             
    ├───Production                                         
    │       ConfigDataServer.psd1                          
    │                                                      
    ├───Staging                                            
    │       ConfigDataServer.psd1                          
    │                                                      
    └───Test                                               
            ConfigDataServer.psd1                          
   

 

Now, imagine we add settings to a DSC configuration which take their values from additional entries in the configuration data files. Updating every configuration data file every time we extend the DSC configuration would get very inefficient as the number of customers or environments grows.

A solution for that is to have a common configuration data file which contains the common settings and their default values, and which is shared across all customers/environments (Common_ConfigDataServer.psd1 in the example above). Then, we have a config data file for each environment, which contains only the data that is specific to a given customer or environment.

Finally, we merge the configuration data from the 2 files (the common one and the environment-specific one) before passing the result to the ConfigurationData parameter of the DSC configuration. In this scenario, we need to ensure that the more specific data takes precedence over the common data. This means :

  • Data which is present in the environment-specific file and absent from the common file gets added to the merged configuration data
  • Data which is absent in the environment-specific file and present in the common file is preserved in the merged configuration data
  • Data which is present in both files gets the value from the environment-specific file in the merged configuration data
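To make these precedence rules concrete, here is a minimal sketch of merging two flat hashtables this way (this is only an illustration of the rules above, not the actual merge function used later in this post, which also handles the nested AllNodes array) :

# Minimal sketch : values from the override win, values present only in the base are preserved
$Base     = @{ TimeZone = 'Pacific Standard Time'; ServicesEndpoint = 'http://localhost/Services/' }
$Override = @{ TimeZone = 'GMT Standard Time'; LocalAdministrators = 'MyLocalUser' }

$Merged = $Base.Clone()
Foreach ( $Key in $Override.Keys ) {
    # Adds keys missing from the base and overwrites keys present in both
    $Merged[$Key] = $Override[$Key]
}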

Let’s look at how to do this.
In the example we are going to work with, the content of the common configuration data file (Common_ConfigDataServer.psd1) is :

@{ 
    # Node specific data 
    AllNodes = @(
       @{ 
            NodeName = '*'
            PSDscAllowPlainTextPassword = $True
            ServicesEndpoint = 'http://localhost/Services/'
            TimeZone = 'Pacific Standard Time'
       }
    );
}
   

 
And we are going to merge/override it with the file for Customer A’s Test environment, which contains this :

@{ 
    # Node specific data 
    AllNodes = @( 
       @{ 
            NodeName = '*'
            TimeZone = 'GMT Standard Time'
            LocalAdministrators = 'MyLocalUser'
       },
       @{
            NodeName = 'Server1'
            Role = 'Primary'
       },
       @{
            NodeName = 'Server2'
            Role = 'Secondary'
       }
    );
}
   

 
As we can see, the environment-specific data contains :

  • Additional node entries : Server1 and Server2
  • An additional setting in an existing node : “LocalAdministrators” in the “*” node entry
  • A different value for an existing setting in an existing node : TimeZone in the “*” node entry (because this specific customer is located in Ireland)

To take care of the merging, we are going to use a function I wrote, named Merge-DscConfigData. The module containing this function is available on GitHub and on the PowerShell Gallery.

NOTE : This function uses Invoke-Expression to convert the content of the configuration data files into PowerShell objects. This is to keep this function compatible with PowerShell 4.0, but be aware that using Invoke-Expression has security implications. If you can get away with being compatible only with PowerShell 5.0 and later, then you should use the much safer Import-PowerShellDataFile instead.
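For reference, here is what the two ways of loading a .psd1 data file look like (the file path is illustrative) :

# PowerShell 5.0 and later : the data file is parsed, not executed
$ConfigData = Import-PowerShellDataFile -Path 'C:\TEST\Common_ConfigDataServer.psd1'

# PowerShell 4.0 compatible, but the content of the file is executed as PowerShell code
$ConfigData = Invoke-Expression -Command (Get-Content -Path 'C:\TEST\Common_ConfigDataServer.psd1' -Raw)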

This function takes the path of the common configuration data file via its BaseConfigFilePath parameter and the environment-specific data file via its OverrideConfigFilePath parameter.
It outputs the merged data as a hashtable that can be directly consumed by a DSC configuration.

Here is what it looks like :
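A call against the two files above would look something like this (the paths are illustrative, matching the folder layout at the beginning of this post) :

$MergedData = Merge-DscConfigData -BaseConfigFilePath 'C:\TEST\Common_ConfigDataServer.psd1' `
    -OverrideConfigFilePath 'C:\TEST\Customer A\Test\ConfigDataServer.psd1' -Verbose

$MergedData.GetType().Name    # Hashtable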


 
The function’s verbose output gives a pretty good idea of how it works.
Also, we can see that the output object is a hashtable. More accurately, it is a hashtable containing an array of nested hashtables (one per node entry). This is exactly what the ConfigurationData parameter of any DSC configuration expects.

Now, let’s verify we can use this output object in a DSC configuration and that running the configuration results in the expected MOF files.
For testing purposes, we are going to use the following DSC configuration :

Configuration ProvisionServers
{

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xTimeZone

     Node $AllNodes.NodeName
     {
        Registry ServicesEndpoint
        {
            Key = 'HKLM:\SOFTWARE\MyApp\Server\Config'
            ValueName = 'ServicesEndpoint'
            ValueData = $Node.ServicesEndpoint
            ValueType = 'String'
            Ensure = 'Present'
        }
        xTimeZone TimeZone
        {
            IsSingleInstance = 'Yes'
            TimeZone = $Node.TimeZone
        }
        If ( $Node.LocalAdministrators ) {
            Group LocalAdminUsers
            {
                GroupName = 'Administrators'
                MembersToInclude = $Node.LocalAdministrators
                Ensure = 'Present'
            }
        }
     }

    Node $AllNodes.Where{$_.Role -eq 'Primary'}.NodeName
    {
        File FolderForPrimaryServer
        {
            DestinationPath = 'C:\MyApp_Data'
            Ensure = 'Present'
            Type = 'Directory'
        }
    }
}
   

 
Then, we just invoke our configuration named ProvisionServers, passing our merged data to its ConfigurationData parameter, like so :
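Assuming the merged data from the earlier call is stored in $MergedData, the invocation is along these lines (the output path is illustrative) :

ProvisionServers -ConfigurationData $MergedData -OutputPath 'C:\TEST\MOFs'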


 
Now, let’s check the configuration documents which have been generated from this DSC configuration and data. Here is the content of Server1.mof :

/*
@TargetNode='Server1'
@GeneratedBy=mbuisson
@GenerationDate=01/06/2017 13:52:43
*/

instance of MSFT_RegistryResource as $MSFT_RegistryResource1ref
{
ResourceID = "[Registry]ServicesEndpoint";
 ValueName = "ServicesEndpoint";
 Key = "HKLM:\\SOFTWARE\\MyApp\\Server\\Config";
 Ensure = "Present";
 SourceInfo = "::9::9::Registry";
 ValueType = "String";
 ModuleName = "PSDesiredStateConfiguration";
 ValueData = {
    "http://localhost/Services/"
};

ModuleVersion = "1.0";

 ConfigurationName = "ProvisionServers";

};
instance of xTimeZone as $xTimeZone1ref
{
ResourceID = "[xTimeZone]TimeZone";
 SourceInfo = "::17::9::xTimeZone";
 TimeZone = "GMT Standard Time";
 IsSingleInstance = "Yes";
 ModuleName = "xTimeZone";
 ModuleVersion = "1.3.0.0";

 ConfigurationName = "ProvisionServers";

};
instance of MSFT_GroupResource as $MSFT_GroupResource1ref
{
ResourceID = "[Group]LocalAdminUsers";
 MembersToInclude = {
    "MyLocalUser"
};
 Ensure = "Present";
 SourceInfo = "::23::13::Group";
 GroupName = "Administrators";
 ModuleName = "PSDesiredStateConfiguration";

ModuleVersion = "1.0";

 ConfigurationName = "ProvisionServers";

};
instance of MSFT_FileDirectoryConfiguration as $MSFT_FileDirectoryConfiguration1ref
{
ResourceID = "[File]FolderForPrimaryServer";
 Type = "Directory";
 Ensure = "Present";
 DestinationPath = "C:\\MyApp_Data";
 ModuleName = "PSDesiredStateConfiguration";
 SourceInfo = "::34::9::File";

ModuleVersion = "1.0";

 ConfigurationName = "ProvisionServers";

};
instance of OMI_ConfigurationDocument


                    {
                        Version="2.0.0";
                        MinimumCompatibleVersion = "1.0.0";
                        CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
                        Author="mbuisson";
                        GenerationDate="01/06/2017 13:52:43";
                        Name="ProvisionServers";
                    };
   

 
First, the sole fact that we got a file named Server1.mof tells us one thing : the node entry with the NodeName “Server1” was indeed in the merged config data.

Also, we can see that the value of the setting ServicesEndpoint from the common data file was preserved and properly injected in the Registry resource entry of the DSC configuration.

Then, we see that the time zone value is “GMT Standard Time”, so this was overridden by the environment-specific data, as expected. The setting “LocalAdministrators” was not present in the common data file but it got added and its value is properly reflected in the Group resource entry.

Finally, the resource entry named “FolderForPrimaryServer” was processed, which means the “Role” setting had the value “Primary”. This is the expected value for Server1.

Now, we can verify the configuration document which has been generated for Server2 :

/*
@TargetNode='Server2'
@GeneratedBy=mbuisson
@GenerationDate=01/06/2017 13:52:43
*/

instance of MSFT_RegistryResource as $MSFT_RegistryResource1ref
{
ResourceID = "[Registry]ServicesEndpoint";
 ValueName = "ServicesEndpoint";
 Key = "HKLM:\\SOFTWARE\\MyApp\\Server\\Config";
 Ensure = "Present";
 SourceInfo = "::9::9::Registry";
 ValueType = "String";
 ModuleName = "PSDesiredStateConfiguration";
 ValueData = {
    "http://localhost/Services/"
};

ModuleVersion = "1.0";

 ConfigurationName = "ProvisionServers";

};
instance of xTimeZone as $xTimeZone1ref
{
ResourceID = "[xTimeZone]TimeZone";
 SourceInfo = "::17::9::xTimeZone";
 TimeZone = "GMT Standard Time";
 IsSingleInstance = "Yes";
 ModuleName = "xTimeZone";
 ModuleVersion = "1.3.0.0";

 ConfigurationName = "ProvisionServers";

};
instance of MSFT_GroupResource as $MSFT_GroupResource1ref
{
ResourceID = "[Group]LocalAdminUsers";
 MembersToInclude = {
    "MyLocalUser"
};
 Ensure = "Present";
 SourceInfo = "::23::13::Group";
 GroupName = "Administrators";
 ModuleName = "PSDesiredStateConfiguration";

ModuleVersion = "1.0";

 ConfigurationName = "ProvisionServers";

};
instance of OMI_ConfigurationDocument


                    {
                        Version="2.0.0";
                        MinimumCompatibleVersion = "1.0.0";
                        CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
                        Author="mbuisson";
                        GenerationDate="01/06/2017 13:52:43";                        
                        Name="ProvisionServers";
                    };
   

 
Here too, the value of the ServicesEndpoint setting from the common data file was preserved, and the time zone value was overridden to “GMT Standard Time”. The setting “LocalAdministrators” was also added, because it applies to all nodes in the environment-specific data file.

More interestingly, unlike the MOF file for Server1, the one for Server2 doesn’t have the resource entry named “FolderForPrimaryServer“. This tells us that in the merged configuration data, the Role value for Server2 was not “Primary”. This is expected because the value for this setting was “Secondary” in the environment-specific data file.

That’s all there is to using the Merge-DscConfigData function.

I am aware that some configuration management tools can make overriding configuration data easier, for example, attributes defined at a Chef cookbook level can be overridden at different levels. But for those of us using PowerShell DSC in production, this is a working alternative.

A Boilerplate for Unit testing DSC resources with Pester

Unit testing PowerShell code is slowly but surely becoming mainstream. Pester, the awesome PowerShell testing framework is playing a big part in that trend.
But why the hell would you write more PowerShell code to test your PowerShell code ? Because :

  • It can give you a better understanding of your code, its design, its assumptions and its behaviour.
     
  • When you make changes and the unit tests pass, you can be pretty confident that you didn’t break anything.
    This makes changes less painful and scary and this is a very important notion in DevOps : removing fear and friction to make changes painless, easy, fast and even … boring.
     
  • It helps write more robust, less buggy code.
     
  • Given the direction the PowerShell community is taking and the way the DevOps movement is permeating the IT industry, this is becoming a valuable skill.
     
  • There is an initial learning curve and it takes time, effort and discipline, but if you do it often enough, it can quickly become second nature.
     

To help reduce this time and effort, I wanted to build a Pester script template which could be reused for unit testing any DSC resource. After all, DSC resources have a number of specific requirements and best practices, for example : Get-TargetResource should return a hashtable, or Test-TargetResource should return a boolean… So we can write tests for all these requirements and these tests can be readily reused for any other DSC resource (non class-based).

Without further ado, here is the full script (which is also available on GitHub) and then we’ll elaborate on the main bits and pieces :

$Global:DSCResourceName = 'My_DSCResource'  #<----- Just change this

Import-Module "$($PSScriptRoot)\..\..\DSCResources\$($Global:DSCResourceName)\$($Global:DSCResourceName).psm1" -Force

# Helper function to list the names of mandatory parameters of *-TargetResource functions
Function Get-MandatoryParameter {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$CommandName
    )
    $GetCommandData = Get-Command "$($Global:DSCResourceName)\$CommandName"
    $MandatoryParameters = $GetCommandData.Parameters.Values | Where-Object { $_.Attributes.Mandatory -eq $True }
    return $MandatoryParameters.Name
}

# Getting the names of mandatory parameters for each *-TargetResource function
$GetMandatoryParameter = Get-MandatoryParameter -CommandName "Get-TargetResource"
$TestMandatoryParameter = Get-MandatoryParameter -CommandName "Test-TargetResource"
$SetMandatoryParameter = Get-MandatoryParameter -CommandName "Set-TargetResource"

# Splatting parameters values for Get, Test and Set-TargetResource functions
$GetParams = @{
    
}
$TestParams = @{
    
}
$SetParams = @{
    
}

Describe "$($Global:DSCResourceName)\Get-TargetResource" {
    
    $GetReturn = & "$($Global:DSCResourceName)\Get-TargetResource" @GetParams

    It "Should return a hashtable" {
        $GetReturn | Should BeOfType System.Collections.Hashtable
    }
    Foreach ($MandatoryParameter in $GetMandatoryParameter) {
        
        It "Should return a hashtable with key named $MandatoryParameter" {
            $GetReturn.ContainsKey($MandatoryParameter) | Should Be $True
        }
    }
}

Describe "$($Global:DSCResourceName)\Test-TargetResource" {
    
    $TestReturn = & "$($Global:DSCResourceName)\Test-TargetResource" @TestParams

    It "Should have the same mandatory parameters as Get-TargetResource" {
        # Does not check for $True or $False but uses the output of Compare-Object.
        # That way, if this test fails Pester will show us the actual difference(s).
        (Compare-Object $GetMandatoryParameter $TestMandatoryParameter).InputObject | Should Be $Null
    }
    It "Should return a boolean" {
        $TestReturn | Should BeOfType System.Boolean
    }
}

Describe "$($Global:DSCResourceName)\Set-TargetResource" {
    
    $SetReturn = & "$($Global:DSCResourceName)\Set-TargetResource" @SetParams

    It "Should have the same mandatory parameters as Test-TargetResource" {
        (Compare-Object $TestMandatoryParameter $SetMandatoryParameter).InputObject | Should Be $Null
    }
    It "Should not return anything" {
        $SetReturn | Should Be $Null
    }
}

 
That’s a lot of information so let’s break it down into more digestible chunks :

$Global:DSCResourceName = 'My_DSCResource'  #<----- Just change this

 
The “My_DSCResource” string is the only part of the entire script which needs to be changed from one DSC resource to another. All the rest can be reused for any DSC resource.

Import-Module "$($PSScriptRoot)\..\..\DSCResources\$($Global:DSCResourceName)\$($Global:DSCResourceName).psm1" -Force

The relative path to the module containing the DSC resource is derived from a standard folder structure, with a “Tests” folder at the root of the module and a “Unit” subfolder, containing the resulting unit tests script, for example :

O:\> tree /F "C:\Git\FolderPath\DscModules\DnsRegistration"
Folder PATH listing for volume OS

│   DnsRegistration.psd1
│
├───DSCResources
│   └───DnsRegistration
│       │   DnsRegistration.psm1
│       │   DnsRegistration.schema.mof
│       │
│       └───ResourceDesignerScripts
│               GenerateDnsRegistrationSchema.ps1
│
└───Tests
    └───Unit
            DnsRegistration.Tests.ps1

 
We load the module because we’ll need to use the 3 functions it contains : Get-TargetResource, Set-TargetResource and Test-TargetResource.

By the way, note that this script is divided into 3 Describe blocks : this is a more or less established convention in unit testing with Pester : one Describe block per tested function. The “Force” parameter of Import-Module is to make sure that, even if the module was already loaded, we get the latest version of the module.

Function Get-MandatoryParameter {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$CommandName
    )
    $GetCommandData = Get-Command "$($Global:DSCResourceName)\$CommandName"
    $MandatoryParameters = $GetCommandData.Parameters.Values | Where-Object { $_.Attributes.Mandatory -eq $True }
    return $MandatoryParameters.Name
}

 
This is a helper function used to get the mandatory parameter names for the *-TargetResource functions. If you use more than a few helper functions in your unit tests, then you should probably gather them in a separate script or module.

# Splatting parameters values for Get, Test and Set-TargetResource functions
$GetParams = @{
     
}
$TestParams = @{
     
}
$SetParams = @{
     
}

 
These are placeholders to be completed with the parameters and values for Get-TargetResource, Test-TargetResource and Set-TargetResource, respectively. Splatting makes them more readable, especially for resources that have many parameters. We might use the same parameters and parameter values for all 3 functions, in that case, we can consolidate these 3 hashtables into a single one.
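For example, for a hypothetical resource managing a DNS record which takes a HostName and a ZoneName parameter, the placeholders could be filled in like this (the parameter names are made up for illustration) :

# Hypothetical parameter values ; adjust to the parameters of the resource under test
$GetParams = @{
    HostName = 'Server1'
    ZoneName = 'mat.lab'
}
$TestParams = $GetParams
$SetParams  = $GetParams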

$GetReturn = & "$($Global:DSCResourceName)\Get-TargetResource" @GetParams

 
Qualifying the function name with the resource name allows us to unambiguously call the Get-TargetResource function from the DSC resource we are currently testing, and not the one from another resource.

It "Should return a hashtable" {
        $GetReturn | Should BeOfType System.Collections.Hashtable
    }

 
The first actual test ! This is validating that Get-TargetResource returns an object of the type [hashtable]. The “BeOfType” operator is designed specifically for verifying the type of an object, so it’s a great fit.

Foreach ($MandatoryParameter in $GetMandatoryParameter) {
        
        It "Should return a hashtable with key named $MandatoryParameter" {
            $GetReturn.ContainsKey($MandatoryParameter) | Should Be $True
        }
    }

 
An article from the PowerShell Team says this :

The Get-TargetResource returns the status of the modeled entities in a hash table format. This hash table must contain all properties, including the Read properties (along with their values) that are defined in the resource schema.

I’m not sure this is a hard requirement because this is not enforced, and Get-TargetResource is not automatically called by the DSC engine. So this may not be ideal, but here we get the names of the mandatory parameters of Get-TargetResource and we check that the hashtable it returns has a key matching each of these parameters. Maybe we could check against all parameters, not just the mandatory ones ?
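If we did want to check against all parameters rather than only the mandatory ones, a variation along these lines could work (excluding the common parameters) :

# Get every parameter name of Get-TargetResource, minus the common parameters
$AllGetParameters = (Get-Command "$($Global:DSCResourceName)\Get-TargetResource").Parameters.Keys |
    Where-Object { $_ -notin [System.Management.Automation.PSCmdlet]::CommonParameters }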

Now, let’s turn our attention to Test-TargetResource :

    $TestReturn = & "$($Global:DSCResourceName)\Test-TargetResource" @TestParams

    It "Should have the same mandatory parameters as Get-TargetResource" {
        (Compare-Object $GetMandatoryParameter $TestMandatoryParameter).InputObject | Should Be $Null
    }

 
This test is validating that the mandatory parameters of Test-TargetResource are the same as for Get-TargetResource. There is a PSScriptAnalyzer rule for that, with an “Error” severity, so we can safely assume that this is a widely accepted and important best practice :

GetSetTest Parameters
 
Reading the name of this “It” block, we could assume that it is checking against $True or $False. But here, we use Compare-Object and validate that there is no difference between the 2 lists of mandatory parameters. This is to make the message we get in case the test fails more useful : it will tell us the offending parameter name(s).

    It "Should return a boolean" {
        $TestReturn | Should BeOfType System.Boolean
    }

 
The function Test-TargetResource should always return a boolean. This is a well known requirement and this is also explicitly specified in the templates generated by xDSCResourceDesigner, so there is no excuse for not knowing/following this rule.

Now, it is time to test Set-TargetResource :

    It "Should have the same mandatory parameters as Test-TargetResource" {
        (Compare-Object $TestMandatoryParameter $SetMandatoryParameter).InputObject | Should Be $Null
    }

 
The same as before, but this time we validate that the mandatory parameters of the currently tested function (Set-TargetResource) are the same as for Test-TargetResource.

    It "Should not return anything" {
        $SetReturn | Should Be $Null
    }

 
Set-TargetResource should not return anything. Again, you don’t have to take my word for it, PSScriptAnalyzer is our source of truth :

Set should not return anything
 
That’s it for the script. But a boilerplate is more useful when it is readily available as a snippet in your IDE of choice. So I also converted this boilerplate into a Visual Studio Code snippet ; it is the first snippet in the custom snippet file I made available here.

The path of the Visual Studio Code PowerShell snippet file is : %APPDATA%\Code\User\snippets\PowerShell.json.
Or, for those of us using the PowerShell extension, we can modify the following file : %USERPROFILE%\.vscode\extensions\ms-vscode.PowerShell-0.6.1\snippets\PowerShell.json.

Obviously, this set of tests is pretty basic and doesn’t cover the code written specifically for a given resource, but it’s a pretty good starting point. It allows us to write basic unit tests for our DSC resources in just a few minutes, so now, there’s no excuse for not doing it.

Adding ConfigurationData dynamically from a DSC configuration

When writing a DSC configuration, separating the environmental data from the DSC configuration is a best practice : it allows us to reuse the same configuration logic for different environments, for example the Dev, QA and Production environments. This generally means that the environment data is stored in separate .psd1 files. This is explained in this documentation page.

However, these configuration data files are relatively static, so if the environment changes frequently these files might end up containing outdated information. A solution is to keep the static environment data in the configuration data files and then add the more dynamic data on the fly from the DSC configuration itself.

A good example of this use case is a web application, where the configuration is identical for all web servers but these servers are treated not as pets but as cattle : we create and kill them on a daily basis. Because they are cattle, we don’t call them by their name, in fact we don’t even know their name. So the configuration data file doesn’t contain any node names :

@{
    # Node specific data
    AllNodes = @(

       # All the Web Servers have following information 
       @{
            NodeName           = '*'
            WebsiteName        = 'ClickFire'
            SourcePath         = '\\DevBox\SiteContents\'
            DestinationPath    = 'C:\inetpub\wwwroot\ClickFire_Content'
            DefaultWebSitePath = 'C:\inetpub\wwwroot\ClickFire_Content'
       }
    );
    NonNodeData = ''
}

 
By the way, the web application used for illustration purposes is an internal HR app, codenamed “Project ClickFire”.

Let’s assume the above configuration data is all the information we need to configure our nodes. That’s great, but we still need some node names, otherwise no MOF file will be generated when we run the configuration. So we’ll need to query some kind of database to get the names of the web servers for this application, Active Directory for example. This is easy to do, especially if these servers are all in the same OU and/or there is a naming convention for them :

C:\> $DynamicNodeNames = Get-ADComputer -SearchBase "OU=Project ClickFire,OU=Servers,DC=Mat,DC=lab" -Filter {Name -Like "Web*"} |
Select-Object -ExpandProperty Name

C:\> $DynamicNodeNames

Web083
Web084
Web086
  

 
Now that we have the node names, we need to add a hashtable for each node into the “AllNodes” section of our configuration data. To do that, we first need to import the data from the configuration data file and store it in a variable for further manipulation. A new cmdlet introduced in PowerShell 5.0 makes this very simple : Import-PowerShellDataFile :

C:\> $EnvironmentData = Import-PowerShellDataFile -Path "C:\Lab\EnvironmentData\Project_ClickFire.psd1"
C:\> $EnvironmentData

Name                           Value
----                           -----
AllNodes                       {System.Collections.Hashtable}
NonNodeData


C:\> $EnvironmentData.AllNodes

Name                           Value
----                           -----
DefaultWebSitePath             C:\inetpub\wwwroot\ClickFire_Content
NodeName                       *
WebsiteName                    ClickFire
DestinationPath                C:\inetpub\wwwroot\ClickFire_Content
SourcePath                     \\DevBox\SiteContents\
  

 
Now, we have our configuration data available to us as a PowerShell object (a hashtable). The “AllNodes” section inside of it is an array of hashtables, because each node entry within “AllNodes” is a hashtable :

C:\> $EnvironmentData.AllNodes.GetType()

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     Object[]                                 System.Array


C:\> $EnvironmentData.AllNodes | Get-Member | Select-Object TypeName -Unique

TypeName
--------
System.Collections.Hashtable
  

 
So now, what we need to do is to inject a new node entry for each node returned by our Active Directory query into the “AllNodes” section :

C:\> Foreach ( $DynamicNodeName in $DynamicNodeNames ) {
     $EnvironmentData.AllNodes += @{NodeName = $DynamicNodeName; Role = "WebServer"}
 }
  

 
For each node name, we add a new hashtable into “AllNodes”. These hashtables are pretty simple in this case : they just give our nodes a name and a role (in case we need to differentiate them from other server types, like database servers for example).

The result of this updated configuration data is equivalent to :

@{
    # Node specific data
    AllNodes = @(

       # All the Web Servers have following information 
       @{
            NodeName           = '*'
            WebsiteName        = 'ClickFire'
            SourcePath         = '\\DevBox\SiteContents\'
            DestinationPath    = 'C:\inetpub\wwwroot\ClickFire_Content'
            DefaultWebSitePath = 'C:\inetpub\wwwroot\ClickFire_Content'
       }
       @{
            NodeName           = 'Web083'
            Role               = 'WebServer'
       }
       @{
            NodeName           = 'Web084'
            Role               = 'WebServer'
       }
       @{
            NodeName           = 'Web086'
            Role               = 'WebServer'
       }
    );
    NonNodeData = ''
}

 
So that’s it for the node data, but what if we need to add non-node data ?
It is very similar to the node data because the “NonNodeData” section of the configuration data is also a hashtable.

Let’s say we want to add a piece of XML data, that may be used for the web.config file of our web servers, to the “NonNodeData” section of the configuration data. We could do that in the configuration data file, right ?

@{
    # Node specific data
    AllNodes = @(

       # All the Web Servers have following information 
       @{
            NodeName           = '*'
            WebsiteName        = 'ClickFire'
            SourcePath         = '\\DevBox\SiteContents\'
            DestinationPath    = 'C:\inetpub\wwwroot\ClickFire_Content'
            DefaultWebSitePath = 'C:\inetpub\wwwroot\ClickFire_Content'
       }
    );
    NonNodeData =
    @{
        DynamicConfig = [Xml](Get-Content -Path C:\Lab\SiteContents\web.config)
    }
}

Nope :

SafeGetValueErrorNew
 
This is because to safely import data from a file, the cmdlet Import-PowerShellDataFile kinda works in RestrictedLanguage mode. This means that executing cmdlets, or functions, or any type of command is not allowed in a data file. Even the XML type and a bunch of other things are not allowed in this mode. For more information : about_Language_Modes.

It does make sense : data files should contain data, not code.

OK, so we’ll do that from the DSC configuration script, then :

C:\> $DynamicConfig = [Xml](Get-Content -Path "\\DevBox\SiteContents\web.config")
C:\> $DynamicConfig

xml                            configuration
---                            -------------
version="1.0" encoding="UTF-8" configuration


C:\> $EnvironmentData.NonNodeData = @{DynamicConfig = $DynamicConfig}
C:\>
C:\> $EnvironmentData.NonNodeData.DynamicConfig.configuration


configSections      : configSections
managementOdata     : managementOdata
appSettings         : appSettings
system.web          : system.web
system.serviceModel : system.serviceModel
system.webServer    : system.webServer
runtime             : runtime
  

 
With this technique, we can put whatever we want in “NonNodeData”, even XML data, as long as it is wrapped in a hashtable. The last command shows that we can easily access this dynamic config data because it is stored as a tidy [Xml] PowerShell object.

Please note that the Active Directory query, the import of the configuration data and the manipulation of this data are all done in the same script as the DSC configuration but outside of the DSC configuration itself. That way, this modified configuration data can be passed to the DSC configuration as the value of its -ConfigurationData parameter.

Putting it all together, here is what the whole DSC configuration script looks like :

configuration Project_ClickFire
{
    Import-DscResource -Module PSDesiredStateConfiguration
    Import-DscResource -Module xWebAdministration
    
    Node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName
    {
        WindowsFeature IIS
        {
            Ensure          = "Present"
            Name            = "Web-Server"
        }
        File SiteContent
        {
            Ensure          = "Present"
            SourcePath      = $Node.SourcePath
            DestinationPath = $Node.DestinationPath
            Recurse         = $True
            Type            = "Directory"
            DependsOn       = "[WindowsFeature]IIS"
        }        
        xWebsite Project_ClickFire_WebSite
        {
            Ensure          = "Present"
            Name            = $Node.WebsiteName
            State           = "Started"
            PhysicalPath    = $Node.DestinationPath
            DependsOn       = "[File]SiteContent"
        }
    }
}

# Adding dynamic Node data
$EnvironmentData = Import-PowerShellDataFile -Path "$PSScriptRoot\..\EnvironmentData\Project_ClickFire.psd1"
$DynamicNodeNames = (Get-ADComputer -SearchBase "OU=Project ClickFire,OU=Servers,DC=Mat,DC=lab" -Filter {Name -Like "Web*"}).Name

Foreach ( $DynamicNodeName in $DynamicNodeNames ) {
    $EnvironmentData.AllNodes += @{NodeName = $DynamicNodeName; Role = "WebServer"}
}

# Adding dynamic non-Node data
$DynamicConfig = [Xml](Get-Content -Path "\\DevBox\SiteContents\web.config")
$EnvironmentData.NonNodeData = @{DynamicConfig = $DynamicConfig}

Project_ClickFire -ConfigurationData $EnvironmentData -OutputPath "C:\Lab\DSCConfigs\Project_ClickFire"
  

 
Running this script indeed generates a MOF file for each of our nodes, containing the same settings :

C:\> & C:\Lab\DSCConfigs\Project_ClickFire_Config.ps1

    Directory: C:\Lab\DSCConfigs\Project_ClickFire


Mode                LastWriteTime         Length Name                                       
----                -------------         ------ ----                                       
-a----         6/6/2016   1:37 PM           3986 Web083.mof                                 
-a----         6/6/2016   1:37 PM           3986 Web084.mof                                 
-a----         6/6/2016   1:37 PM           3986 Web086.mof        
  

 
Hopefully, this helps treat web servers really as cattle and gives its full meaning to the expression “server farm“.

Documentation as Code : Exporting the contents of DSC MOF files to Excel

One of the greatest benefits of PowerShell DSC (and other Configuration Management tools/platforms) is the declarative syntax (as opposed to imperative scripting). Sure, a DSC configuration can contain some logic, using loops and conditional statements, but we don’t need to care about handling errors or checking if something is already present. All this (and the large majority of the logic) is handled within the resource, so we just need to describe the end result, the “Desired State”.

So all the settings and information that a configuration is made of are stored in a very simple (and pretty much human-readable) syntax, like :

Node $AllNodes.NodeName
    {
        cWindowsErrorReporting Disabled
        {
            State = "Disabled"
        }
    }

 
This allows us to use this “code” (for lack of a better word) as documentation in a way that wouldn’t be possible or practical with imperative code. For this purpose, we could use DSC configurations, or DSC configuration data files if all the configuration data is stored separately. But the best files for that would probably be the MOF files for 2 reasons :

  • Even if some settings are in different files, we can be sure that all the settings for a given node are in a single MOF file (the exception being partial configurations)
  • Even if the DSC configuration contains complex logic, there is no need to understand or parse this logic to get the end result. All this was done for us when the MOF file was generated

Now, imagine you have all your MOF files stored in a directory structure like this :

PS C:\> tree C:\DSCConfigs /F
Folder PATH listing for volume OS
C:\DSCCONFIGS
├───Customer A
│   ├───Dev
│   │       Server1.mof
│   │       Server2.mof
│   │
│   ├───Prod
│   │       Server1.mof
│   │       Server2.mof
│   │
│   └───QA
│           Server1.mof
│           Server2.mof
│
├───Customer B
│   ├───Dev
│   │       Server1.mof
│   │       Server2.mof
│   │
│   ├───Prod
│   │       Server1.mof
│   │       Server2.mof
│   │
│   └───QA
│           Server1.mof
│           Server2.mof
│
└───Customer C
    ├───Dev
    │       Server1.mof
    │       Server2.mof
    │
    ├───Prod
    │       Server1.mof
    │       Server2.mof
    │
    └───QA
            Server1.mof
            Server2.mof

You most likely have much more than 2 servers per environment, so there can easily be a large number of MOF files.
Then, imagine your boss tells you : “I need all the configuration settings, for all customers, all environments and all servers in an Excel spreadsheet to sort and group the data easily and to find out the differences between Dev and QA, and between QA and Prod”.

If you are like me, you may not quite understand bosses’ uncanny obsession with Excel but this definitely sounds like something useful and an interesting challenge. So, let’s do it.

We’ll divide this in 3 broad steps :

  • Converting the contents of MOF files to PowerShell objects
  • Exporting the resulting PowerShell objects to a CSV file
  • Processing the data using PowerShell and/or Excel

Converting the contents of MOF files to PowerShell objects

This is by far the most tricky part.
Fortunately, I wrote a function called ConvertFrom-DscMof which does exactly that. We won’t go into much detail about how it works, but you can have a look at the code here.

Basically, it parses one or more MOF files and it outputs an object for each resource instance contained in the MOF file(s). All the properties of a given resource instance become properties of the corresponding object, plus a few properties related to the MOF file.
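For example, a single MOF file can be piped into the function and the resulting objects inspected (the property names are the ones used in the Export-Csv example further down) :

# Parse one MOF file and display a few properties of each resource instance it contains
Get-Item 'C:\DSCConfigs\Customer A\Dev\Server1.mof' | ConvertFrom-DscMof |
    Select-Object 'Target Node','Resource ID','DSC Resource Module'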

Here is an example with a very simple MOF file :

ConvertFrom-DscMofExample
 
And here is an example with all the properties of a single resource instance :

ConvertFrom-DscMofSingle
 

Exporting the resulting PowerShell objects to a CSV file

Now that we can get DSC configuration information in the form of PowerShell objects, it is very easy to export all this information as CSV. But there’s a catch : different resources have different parameters, for example the Registry resource has the ValueName and ValueData parameters and the xTimeZone resource has a TimeZone parameter.

So the resulting resource instances objects will have ValueName and ValueData properties if they are an instance of the Registry resource and a TimeZone property if they are an instance of the xTimeZone resource. Even for a given resource, some parameters are optional and they will end up in the properties of the resulting PowerShell object only if they were explicitly specified in the configuration.

The problem is that Export-Csv doesn’t intelligently handle objects with different properties : it just creates the columns from the properties of the first object in the collection and applies them to all objects, even those which have different properties.

But, we can rely on the “ResourceID” property of each resource instance because it uniquely identifies the resource instance. Also, it contains the name we gave to the resource block in the DSC configuration, which should be a nice meaningful name, right ?
Yeah, this is where “Documentation as code” meets “self-documenting code” : they are both important and very much complementary. To get an idea of what the values of ResourceID look like, refer back to the first screenshot.

Below, we can see how to export only the properties we need, and only the properties that we know will be present for all resource instances :


Get-ChildItem C:\MOFs\ -File -Filter "*.mof" -Recurse |
ConvertFrom-DscMof |
Select-Object -Property "MOF file Path","MOF Generation Date","Target Node","Resource ID","DSC Configuration Info","DSC Resource Module" |
Export-Csv -Path 'C:\DSCConfig Data\AllDSCConfigs.csv' -NoTypeInformation

 

Processing the data using PowerShell and/or Excel

The resulting CSV file can be readily opened and processed by Excel (or equivalent applications) :

CSVFileInExcel
 
Now, we have all the power of Excel at our fingertips, we can sort, filter, group all this data however we want.

Now, here is a very typical scenario : the Dev guys have tested their new build and it worked smoothly in their environment. However, the QA guys say that the same build is failing miserably in their environment. The first question which should come to mind is : “What is the difference between the Dev and QA environments ?”

If all the configuration of these environments is done with PowerShell DSC, the ConvertFrom-DscMof function can be a great help to answer that very question :

C:\> $CustomerCDev = Get-ChildItem -File -Filter '*.mof' -Recurse 'C:\MOFs\Customer C\Dev\' |
ConvertFrom-DscMof
C:\> $CustomerCQA = Get-ChildItem -File -Filter '*.mof' -Recurse 'C:\MOFs\Customer C\QA\' |
ConvertFrom-DscMof
C:\> Compare-Object -ReferenceObject $CustomerCDev -DifferenceObject $CustomerCQA -Property 'Target Node','Resource ID'

Target Node Resource ID                    SideIndicator
----------- -----------                    -------------
Server1     [xRemoteFile]RabbitMQInstaller <=
Server1     [Package]RabbitMQ              <=

 
Oops, we forgot to install RabbitMQ on Server1 ! No wonder it’s not working in QA.
But now, there is hope. We, forgetful and naturally flawed human beings, can rely on this documentation automation to tell us how things really are.

So, as we have seen, Infrastructure-as-code (PowerShell DSC in this case) can be a nice stepping-stone for infrastructure documentation.
What is the number 1 problem for any infrastructure/configuration documentation ?
Keeping it up-to-date. This approach can help generate the documentation dynamically, which means it can be kept up-to-date pretty easily without any human intervention.

Managing large numbers of registry settings with PowerShell DSC

Recently, I had to manage the configuration of the remote control settings of client machines with PowerShell DSC. These settings are located in the following registry key : HKLM:\SYSTEM\CurrentControlSet\Services\HidIr\Remotes, and they look like this :

RemoteRegistrySettings

Yes, this is 19 registry values for every single remote control model.

Here is what a resource entry in a DSC configuration would look like, using the built-in Registry resource :

Registry IRRemotes
{
        Ensure = "Present"
        Key = "HKLM:\SYSTEM\CurrentControlSet\Services\HidIr\Remotes\745a17a0-74d3-11d0-b6fe-00a0c90f57da"
        ValueName = "CodeMatchMask"
        ValueData = "4294905600"
        ValueType = "Dword"
}

 
This is for a single registry value.
So, we take this, we multiply it by 19 values and then, we multiply it by 6 remote control models and the result is : 684 lines of code.
This is going to be a pain to write and a nightmare to maintain.

So, when the line count of a DSC configuration jumps like this, we should take a step back and ask ourselves questions like these :

  • What is the impact on the readability and the maintainability of the DSC configuration (or more generally, what kind of technical debt this could create) ? And remember, DSC configurations are supposed to be more or less human-readable.
  • If we use (or plan to use) DSC configurations as “Documentation as code”, do we really need these details in our documentation ?
  • Is the business value provided/enabled by this code greater than the cost and time to write, read, test and maintain it ? Of course, these are going to be estimations, but we could even make up a metric, like the ratio of business value per line of code (€/line). Then, we could decide that if this metric is less than a certain number, we don’t do it (or we need to do it another way).
  • Is there another way to achieve the same result ?

Once I answered all of these questions, I thought : “There has to be a better way”.

I couldn’t find any, so I wrote a custom DSC resource which is better suited to handling large numbers of registry settings (especially registry keys with many subkeys and values).
The name of both the module and the resource is cRegFile.

How does it work ?

Basically, it uses :

  • .reg files to contain all the settings in a managed registry key
  • reg.exe to import and export .reg files
  • Get-FileHash to compare the contents of .reg files
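Put together, the comparison logic presumably boils down to something like this (a rough sketch, not the actual module code) :

# Export the current state of the key, then compare file hashes with the reference .reg file
$CurrentExport = Join-Path -Path $env:TEMP -ChildPath 'CurrentKey.reg'
reg.exe export "HKLM\SYSTEM\CurrentControlSet\Services\HidIr\Remotes" $CurrentExport /y | Out-Null
$InDesiredState = (Get-FileHash -Path $CurrentExport).Hash -eq (Get-FileHash -Path 'C:\Deploy\RemotesKey.reg').Hash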

For the nitty-gritty, you can have a look at the code. As usual, the module is on GitHub :
https://github.com/MathieuBuisson/Powershell-Administration/tree/master/cRegFile

The .reg file specified in a DSC configuration using this resource represents the desired state for a registry key.
So, it contains the managed registry key, with all its subkeys and values, recursively.

This reference .reg file first needs to be generated.
To do that, we get a reference machine, make sure its registry key has all the settings we want, with all the values we want.
Then we export the registry key, from regedit >> Right-click >> Export, or with a “reg.exe export” command. Either way, the content and the format of the .reg file are the same.
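For example, on the reference machine, the export command could look like this (the destination path is just an example) :

reg.exe export "HKLM\SYSTEM\CurrentControlSet\Services\HidIr\Remotes" "C:\Temp\RemotesKey.reg" /y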

The cRegFile resource is pretty simple to use, as we can see looking at its syntax :

C:\> Get-DscResource -Name cRegFile -Syntax

cRegFile [String] #ResourceName
{
     Key = [string]
    [DependsOn = [string[]]]
    [PsDscRunAsCredential = [PSCredential]]
    [RegFilePath = [string]]
}

 
Now, going back to our remote control settings, let’s configure all the registry values for all the remote control models that we want to support.
To do that, we add the following to our DSC configuration :

        File RemotesRegFile
        {
            DestinationPath = $($Node.RegFileFolder) + "RemotesKey.reg"
            SourcePath = "\\DevBox\Share\RemotesKey.reg"
            Ensure = "Present"
            Type = "File"
            Credential = $Credential
            Checksum = "SHA-1"
            Force = $true
            MatchSource = $true
        }
        cRegFile SupportedRemoteControls
        {
            key = "HKLM:\SYSTEM\CurrentControlSet\Services\HidIr\Remotes"
            RegFilePath = $($Node.RegFileFolder) + "RemotesKey.reg"
            DependsOn = "[File]RemotesRegFile"
        }

 
In case you are wondering what $Node.RegFileFolder is, this is a way to avoid hard-coding the path in the configuration and to get its value from the configuration data.

Also, notice the file resource entry. This is because the reg.exe import command doesn’t support remote files, so we first need to copy the .reg file to the target node, to be able to use it with the cRegFile resource.

Because the File resource needs to do its work before the cRegFile resource, we add a DependsOn property to our cRegFile resource entry to set the order in which things happen.

As we can see, this is much cleaner than 684 lines. So, whenever there are more than a few registry values to manage within the same key, this resource makes the DSC configurations much shorter than with the built-in Registry resource.
Also, it probably runs faster (though I didn’t do any measured comparisons).

OK, the old-school reg.exe is not pure PowerShell, but the PowerShell story regarding the registry is not ideal (still using PSDrives, seriously ?). Reg.exe is fast, easy to use, battle-tested and reliable.
More interestingly, it is surprisingly close to the philosophy of DSC : the desired state is defined in a “declarative” text file, and the “Make it so” command (reg.exe import) is idempotent.

I encourage you to grab it here and use it.

UPDATE : the module is now available in the PowerShell Gallery, so it can be installed right from a PowerShell console with Install-Module.

Orchestrating the update of an IIS server farm with PowerShell DSC

PowerShell Desired State Configuration (DSC) makes it easy to apply a configuration to a bunch of servers. But what if the servers are already in production, if the update requires a service restart and we need to make this happen with no service disruption ? That’s a different story. So I want to share the problems, the considerations and the solutions I had along the way to this goal.

As an example, the environment we are going to work on is an IIS Server farm, which is a Microsoft NLB cluster with 2 nodes. Our mission, should we choose to accept it, is to perform a major update of the site contents on both web servers, with zero downtime, with PowerShell DSC.

So, here are the main points we are going to cover in this article :

  • How to stop/start the application pool of our website when (and only when) a new configuration is applied
  • How to apply the configuration on WebServer2 after the configuration is properly applied on WebServer1, using a cross-node dependency.

Stop and Start the AppPool only when a new configuration is applied :

 
Our “major” website update is actually replacing a single file (Index.html) in the default IIS site content directory (C:\Inetpub\Wwwroot). I keep the IIS part simple so that we can focus on what really matters : the PowerShell DSC part.

So, we just need to copy the new version of the file, which is stored on a file share accessible via “\\DevBox\SiteContents\”, to the appropriate directory on the web servers, overwriting the old version of the file. The built-in File resource can do this easy-peasy.

Regarding the web application pool, we can stop it easily using the resource xWebAppPool, which is part of the module “xWebAdministration”. Our configuration would look like this :

Configuration UpdateWebSite
{
    Import-DscResource -ModuleName "PSDesiredStateConfiguration"
    Import-DscResource -ModuleName "xWebAdministration"

    File Index.html
    {
        SourcePath = "\\DevBox\SiteContents\Index.html"
        DestinationPath = "C:\inetpub\wwwroot\Index.html"
        Checksum = "SHA-1"
        Force = $True        
        Ensure = "Present"

    }

    xWebAppPool StartDefaultAppPool
    {
        Name = "DefaultAppPool"
        Ensure = "Present"
        State = "Stopped"
        DependsOn = "[File]Index.html"
    }
}

There are 2 problems with this configuration. The first one is that a configuration defines the state that we want (Desired State) for the AppPool (Stopped, here). What we really want is : stop the application pool, apply the new configuration, and then bring the AppPool back up. In a DSC configuration, there can be only one state (property-value pair) per resource.

So what do we do ?
Start the AppPool manually when the configuration is applied ? That would defeat the purpose of this thing called “automation“. And, even if we do that, the Local Configuration Manager (LCM) would set it back to the desired state, meaning, it would stop it again if the ConfigurationMode is “ApplyAndAutoCorrect”.

The second problem is that we need to stop the application pool if, and only if, the website content has to be changed. In other words, the state of the AppPool needs to be changed in the xWebAppPool resource only if the Set-TargetResource function of the File resource had to be executed.

Similar issues were explained here, and there was no solution.

The only solution to these 2 problems, to my knowledge, is to write a custom resource. This allows us to add a Stop-WebAppPool at the beginning of the Set-TargetResource function and a Start-WebAppPool when the file operation is done.

So we can copy the File resource and just add Stop-WebAppPool and Start-WebAppPool in the code, because PowerShell DSC resources are open source, right ?
No. Unfortunately, the File resource is the only built-in resource which is not part of the PSDesiredStateConfiguration module. It doesn’t come from a PowerShell module but from : “C:\Windows\System32\DscCoreConfProv.dll“, according to this StackOverflow answer.

So I wrote a custom resource called “cWebSiteContent“, which takes care of everything we need, the file operation(s) and the AppPool operation(s). This article is not about writing a custom DSC resource (this alone would take several articles) but if you want to have a look at it, here it is.
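Conceptually, the Set-TargetResource function of such a resource wraps the file operation between a stop and a start of the application pool, along these lines (a bare-bones sketch with a reduced set of parameters, not the actual cWebSiteContent code) :

# Bare-bones sketch : stop the AppPool, update the content, start the AppPool again
Function Set-TargetResource
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory)][string]$SourcePath,
        [Parameter(Mandatory)][string]$DestinationPath,
        [Parameter(Mandatory)][string]$WebAppPool
    )
    Import-Module -Name WebAdministration

    Stop-WebAppPool -Name $WebAppPool
    Copy-Item -Path $SourcePath -Destination $DestinationPath -Force
    Start-WebAppPool -Name $WebAppPool
}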

So, the new configuration, which leverages our new custom resource “cWebSiteContent” looks like this :

$DevEnvironment = @{
    AllNodes = 
    @(
        @{
            NodeName                   = "*"
            PsDscAllowPlainTextPassword= $True
            Role                       = "WebServer"
            SourcePath                 = "\\DevBox\SiteContents\Index.html"
            DestinationPath            = "C:\inetpub\wwwroot\Index.html"
            Checksum                   = 'SHA256'
            Force                      = $True
            WebAppPool                 = "DefaultAppPool"
        }
        @{
            NodeName = "WebServer1"
        }
        @{
            NodeName = "WebServer2"
        }
    )
}

Configuration UpdateWebSite
{
    param(
        [parameter(mandatory)]
        [ValidateNotNullOrEmpty()]
        [PsCredential]$Credential
    )
    Import-DscResource -ModuleName "PSDesiredStateConfiguration"
    Import-DscResource -ModuleName "cWebSiteContent"

    Node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName
    {
        cWebSiteContent www.mat.lab
        {
            SourcePath = $Node.SourcePath
            DestinationPath = $Node.DestinationPath
            Checksum = $Node.Checksum
            Force = $Node.Force
            WebAppPool = $Node.WebAppPool
        }
    }
}
UpdateWebSite -ConfigurationData $DevEnvironment -OutputPath "C:\DSCConfigs\UpdateWebSite" -Credential (Get-Credential)

 

Notice here that the configuration data is separated from the configuration logic. All the information which is environment-specific is contained in a hash table and stored in the variable $DevEnvironment. Then, we feed this data to the configuration by giving the value $DevEnvironment to the ConfigurationData parameter when calling the configuration (last line).

Separation of environmental data from the configuration logic is a best practice : it allows us to easily use the same configuration logic for different environments, for example a test environment and a production environment, or customer A and customer B.

This is all very well, but we still have one problem : this configuration doesn’t control the order of operations. So, when the new configuration is applied, it could stop the application pool on WebServer1 before or after WebServer2, or worse, at the same time. This could result in downtime for the end-users, and we don’t want that.

If we are in a Push model, we could manually push the configuration to WebServer1 and, when this is done, push the configuration to WebServer2. But this is ugly, manual, and it would prevent us from achieving “Continuous Deployment“.

Setting the order of operations using a cross-node dependency

 
Unlike scripts, the order in which the different resources in a configuration are executed is not top-to-bottom. It’s normally random. And even if you notice an execution order which might not be totally random, don’t rely on any kind of pattern or order because the order is not guaranteed. The usual way to make one resource run after another resource has been verified to be in the desired state is the “DependsOn” property.

But, in our example, we want the resource cWebSiteContent on one node (WebServer2) to run after the same resource has been verified or configured to the desired state on another node (WebServer1). For that, we need to use another mechanism called “cross-node dependency” (also called “cross-computer synchronization“). This is implemented as 3 special resources : WaitForAll, WaitForAny, WaitForSome :


PS C:\> Get-DscResource -Name "WaitFor*" -Syntax
WaitForAll [String] #ResourceName
{
    NodeName = [string[]]
    ResourceName = [string]
    [DependsOn = [string[]]]
    [PsDscRunAsCredential = [PSCredential]]
    [RetryCount = [UInt32]]
    [RetryIntervalSec = [UInt64]]
    [ThrottleLimit = [UInt32]]
}

WaitForAny [String] #ResourceName
{
    NodeName = [string[]]
    ResourceName = [string]
    [DependsOn = [string[]]]
    [PsDscRunAsCredential = [PSCredential]]
    [RetryCount = [UInt32]]
    [RetryIntervalSec = [UInt64]]
    [ThrottleLimit = [UInt32]]
}

WaitForSome [String] #ResourceName
{
    NodeCount = [UInt32]
    NodeName = [string[]]
    ResourceName = [string]
    [DependsOn = [string[]]]
    [PsDscRunAsCredential = [PSCredential]]
    [RetryCount = [UInt32]]
    [RetryIntervalSec = [UInt64]]
    [ThrottleLimit = [UInt32]]
}

 
We are going to use WaitForAll here but, because WebServer2 is going to wait for only 1 other node, WaitForAny would work the same in our case. More information : This MSDN documentation page.

Here is the new configuration :

$DevEnvironment = @{
    AllNodes = 
    @(
        @{
            NodeName                   = "*"
            PsDscAllowPlainTextPassword= $True
            Role                       = "WebServer"
            SourcePath                 = "\\DevBox\SiteContents\Index.html"
            DestinationPath            = "C:\inetpub\wwwroot\Index.html"
            Checksum                   = 'SHA256'
            Force                      = $True
            WebAppPool                 = "DefaultAppPool"
        }
        @{
            NodeName = "WebServer1"
        }
        @{
            NodeName = "WebServer2"
        }
    )
}

Configuration UpdateWebSite
{
    param(
        [parameter(mandatory)]
        [ValidateNotNullOrEmpty()]
        [PsCredential]$Credential
    )
    Import-DscResource -ModuleName "PSDesiredStateConfiguration"
    Import-DscResource -ModuleName "cWebSiteContent"

    Node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName
    {
        cWebSiteContent www.mat.lab
        {
            SourcePath = $Node.SourcePath
            DestinationPath = $Node.DestinationPath
            Checksum = $Node.Checksum
            Force = $Node.Force
            WebAppPool = $Node.WebAppPool
        }
    }
    Node WebServer2
    {
        WaitForAll WaitForWebServer1
        {
            NodeName = "WebServer1"
            ResourceName = "[cWebSiteContent]www.mat.lab"
            RetryIntervalSec = 4
            RetryCount = 5
            PsDscRunAsCredential = $Credential
        }
    }
}

 

This dependency is applied only to WebServer2, which is why it is defined within an additional “Node” entry which is explicitly specific to WebServer2 (Node WebServer2 { ... }).

Within the WaitForAll resource, the NodeName property is the list of the nodes we want to wait for. We have only 1 in our case (WebServer1). The ResourceName property is the name of the resource on that node we want to wait for, in the same format as for a DependsOn. The RetryCount property is important : if it is not specified, its default value is 1. This means the LCM will check if the “Depended-on” node/resource is in desired state only once, and if it is not, it will declare it a failure.

Cross-node dependencies are a major use case for PsDscRunAsCredential. The LCM runs under the Local System Account. This being a local account, it has no permissions on other machines. But, the LCM on the “Dependent” node needs to be able to query the LCM on the “Depended-on” node. To make this happen smoothly, we can use PsDscRunAsCredential within our WaitForAll resource, as we did above.

$Credential is a parameter of our configuration, so we are going to specify the credentials when calling the configuration.

Let’s do it :

PS C:\> UpdateWebSite -ConfigurationData $DevEnvironment -OutputPath "C:\DSCConfigs\UpdateWebSite" -Credential (Get-Credential)

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
WARNING: It is not recommended to use domain credential for node 'WebServer2'.
In order to suppress the warning, you can add a property named 'PSDscAllowDomainUser' with a value of $true to your DSC configuration data for node 'WebServer2'.


    Directory: C:\DSCConfigs\UpdateWebSite


Mode                LastWriteTime         Length Name                                                
----                -------------         ------ ----                                                
-a----       29/02/2016     14:03           2134 WebServer1.mof                                      
-a----       29/02/2016     14:03           3252 WebServer2.mof                                      

 
This generates a configuration document (MOF file) for each node.
Before pushing these configuration documents to the nodes, let’s have a look at our current website :

website original version
 
Pretty, isn’t it ? 🙂

Now, let’s push the configuration to our production Web servers to finally add our wonderful update to our wonderful website :

Start-DscConfiguration
 
There is a lot of information in there (thanks to the Verbose parameter). It looks like it skipped the Set, and the Verbose messages I put in the Set-TargetResource function of the cWebSiteContent resource don’t appear here. I have no clue why, but whatever…

The relevant part for the cross-node dependency is the fact that we see that things happened for WebServer1 first, and then for WebServer2. Also, notice towards the end the message : “Remote resource '[cWebSiteContent]www.mat.lab' is ready“. This is our “Depended-on” resource which is detected as being in the desired state, and this is the green light to proceed to WebServer2.

Now, let’s check our website has the update :

website new version
 
So again, this is a simple, maybe even simplistic example, but hopefully it helps to understand the pieces which need to be put together and how powerful cross-node dependencies can be to add a bit of orchestration around DSC.