Thursday, March 12, 2020

PowerShell 7, VS Code, and the PowerShell 7 ISE Extension

Introduction and Background

Welcome to this post, a part of this #PS7NOW blog series - I hope you are enjoying them.  Before getting to my topic, I assume that you know what PowerShell 7 is and are familiar with Windows PowerShell. 

The Windows PowerShell ISE, a tool I used extensively ever since the early beta versions of Windows PowerShell V2, does NOT feature in PowerShell 7.  You might say it's pining for the fjords (although it still ships in Windows and is likely to be supported for years and years). But however you describe it, it's feature complete and is not going to support PowerShell 7.  Replacing the ISE is a new product: Visual Studio Code, or VS Code.

VS Code is a lightweight, cross-platform (ie Linux, Mac, and Windows), open-source code editing tool. That might sound like no big deal, but if you are an ISE user working in Windows PowerShell, VS Code is in your future. For more details on VS Code, see

VS Code

I must confess, the early versions of VS Code I used were under-whelming, and I much preferred the ISE. And given that PowerShell Core was also in the early days, sticking with the ISE and Windows PowerShell for day to day activities made sense.

But for me, the release of PowerShell 7 and the incredible velocity behind both the PowerShell 7 and VS Code products have changed my view. I now use VS Code for my day to day work. I still use the ISE, but only to install PowerShell 7 and VS Code.

Some features I like in VS Code include:
  • Workspaces - this is a set of related folders/files which makes it easy to keep things together even when they are spread out in filestore.
  • Side by Side edit windows - makes comparing files, and leveraging code from one into another so much simpler. 
  • Built-in spell check - yes it's an extension, but typos in comments are less likely. 
  • The extensibility and customizability - you really can have it your way.
  • PS Script analyzer is built in - so I get hints about poor code as I type.
And, and, and...

For more details on VS Code, see:

VS Code Extensions

VS Code was built to be extended. An extension adds functionality, such as spell-checking or markdown checking. I originally authored this blog post using Markdown, with the Markdown All In One Extension. If I am to author in Markdown, VS Code is my go-to tool. 

I am working on a book and use Github Gists. To assist in managing my Gists, I also use the GistPad Extension.  It makes handling Gists so much easier. The integration between VS Code and GitHub, via the extension, is really useful.

To customise the colour scheme of VS Code, you can find many extensions providing additional themes. And as a hint, some of these themes are better than others! For details on the available extensions (and there are a lot), see

And of course, a great extension anyone using the ISE is going to want to get is the PowerShell ISE extension, which makes VS Code feel more like the ISE, at least colour-wise.

Installing VS Code

To install VS Code, well - there's a PowerShell script for that too. Naturally! It is called Install-VSCode and you can download it from the PowerShell Gallery. When you run it, the script downloads and installs the latest version of VS Code and gives you flexibility over exactly what to install.

You can find any number of cute one-liners, but here's a more workmanlike, step-by-step, and hopefully clearer installation snippet:

# Get and save Install-VSCode installation script
#    Assumes C:\Foo exists
Set-Location -Path C:\Foo
Save-Script -Name Install-VSCode -Path C:\Foo

# Create a list of extensions to add when installing
$Extensions = 'Streetsidesoftware.code-spell-checker'

# Now install VS Code
$InstallHT = @{
  BuildEdition         = 'Stable-System'
  AdditionalExtensions = $Extensions
  LaunchWhenDone       = $true
}
.\Install-VSCode.ps1 @InstallHT

The install script can install different editions (Stable-System, Stable-User, Insider-System, Insider-User). Insider builds provide more recent features but may be less well tested and less reliable. I use Stable-System and have not had any issues whatsoever (aside from getting PSReadLine to behave - but that is a rant for another day).

When you run this snippet, for example in the Windows PowerShell console or the ISE, you may see some warning messages as VS Code adds the extensions. You can ignore these warnings. FWIW, these warning messages seem to have gone away with the latest builds of VS Code, so you may not see them today.

This snippet takes around 30-40 seconds and rewards you, in the end, with VS Code open and ready for use.

You may have noticed that the snippet did not explicitly mention the PowerShell extension. The good news is that the script installs this extension by default.
It sure seems like a good idea to me! However, the ISE theme is not used by default - but there are scripts to fix that too.

Here are two screenshots of VS Code (with the ISE theme) and the Windows PowerShell ISE.

For more details on setting up VS Code, see:

Managing VS Code Extensions

You can manage and configure VS Code extensions inside VS Code or externally. In early versions of VS Code, you had to hand-configure a JSON file to change settings, but today, there's a GUI for that. And once you install VS Code, you can manage extensions (from PowerShell) like this:

#    Set the root path for extensions
code --extensions-dir <dir>

#    List the installed extensions
code --list-extensions

#    Uninstall an extension
code --uninstall-extension <extension-id>

VS Code PowerShell Extension

As I mentioned earlier, one extension most ISE users are going to want is the PowerShell extension. It adds great language support and great features including: 
  • PowerShell Syntax highlighting
  • Tab completion
  • Code snippets
  • IntelliSense for cmdlets, parameters, and more
  • The rule-based analysis provided by PowerShell Script Analyzer
  • Definition tracking and a "Go to definition" for cmdlets and variables
  • Find references of commands and variables
  • Document and Workspace symbol discovery
  • Run the selected section of PowerShell code using F8
  • Launch online help for the symbol under the cursor using Ctrl + F1
  • Local script debugging and basic interactive console support
  • A colour scheme that looks familiar.
In my experience, VS Code is just different enough from the ISE to make those first few hours a tad painful.  But quickly, very quickly, VS Code begins to make the ISE look quite dated. I love having PS Script Analyzer run as I am entering code - it helps me to write better code. And the side-by-side editing has made my book-writing task a lot simpler. 

For more details on the extension, see

Configuring the PowerShell Extension

You can update and configure extensions from within VS Code itself. In early versions of VS Code, any configuration had to be done by hand-editing a JSON file. Later versions added a configuration GUI meaning you can do most configuration simply using the GUI.
But you can also directly edit the **settings.json** file to update the configuration.

The VS Code user settings file is contained in the file:

My current settings.json file looks like this:

  "workbench.colorTheme": "PowerShell ISE",
  "window.zoomLevel": 1,
  "editor.fontFamily": "'Cascadia Code',Consolas,'Courier New'",
  "editor.tabCompletion": "on",
  "workbench.editor.highlightModifiedTabs": true,
  "powershell.codeFormatting.useCorrectCasing": true,
  "files.autoSave": "onWindowChange",
  "files.defaultLanguage": "powershell"

A neat feature of VS Code: if you update this file and save it, VS Code applies the new configuration automatically.

In the earlier snippet, you installed VS Code. Once that completes, you could do this to configure VS Code further:

# Download Cascadia Code font from GitHub
$CascadiaFont    = 'Cascadia.ttf'    # font name
$CascadiaRelURL  = ''
$CascadiaRelease = Invoke-WebRequest -Uri $CascadiaRelURL # Get all of them
$CascadiaPath    = "" + ($CascadiaRelease.Links.href | 
                      Where-Object { $_ -match "($CascadiaFont)" } | 
                        Select-Object -First 1)
$CascadiaFile    = "C:\Foo\$CascadiaFont"

# Download Cascadia Code font file
Invoke-WebRequest -Uri $CascadiaPath -OutFile $CascadiaFile

# Install Cascadia Code using the Shell.Application COM object
$FontShellApp       = New-Object -Com Shell.Application
$FontShellNamespace = $FontShellApp.Namespace(0x14)  # 0x14 is the Fonts folder
$FontShellNamespace.CopyHere($CascadiaFile, 0x10)

# Create a shortcut to VS Code
$SourceFileLocation  = "$env:ProgramFiles\Microsoft VS Code\Code.exe"
$ShortcutLocation    = 'C:\Foo\vscode.lnk'
$WScriptShell        = New-Object -ComObject WScript.Shell
$Shortcut            = $WScriptShell.CreateShortcut($ShortcutLocation)
$Shortcut.TargetPath = $SourceFileLocation
# Save the shortcut
$Shortcut.Save()

# Create a shortcut to PowerShell 7
$SourceFileLocation  = "$env:ProgramFiles\PowerShell\7-Preview\pwsh.exe"
$ShortcutLocation    = 'C:\Foo\pwsh.lnk'
$WScriptShell        = New-Object -ComObject WScript.Shell
$Shortcut            = $WScriptShell.CreateShortcut($ShortcutLocation)
$Shortcut.TargetPath = $SourceFileLocation
# Save the shortcut
$Shortcut.Save()

# Create and import a StartLayout XML file
$XML = @'
'@
$XML | Out-File -FilePath C:\Foo\Layout.xml
Import-StartLayout -LayoutPath C:\Foo\Layout.xml -MountPath C:\

# Update local user settings for VS Code
#    This step in particular needs to be run in PowerShell 7!
$JSON = @'
{
  "editor.fontFamily": "'Cascadia Code',Consolas,'Courier New'",
  "editor.tabCompletion": "on",
  "files.autoSave": "onWindowChange",
  "files.defaultLanguage": "powershell",
  "powershell.codeFormatting.useCorrectCasing": true,
  "window.zoomLevel": 1,
  "workbench.editor.highlightModifiedTabs": true,
  "workbench.colorTheme": "PowerShell ISE"
}
'@
$JHT  = ConvertFrom-Json -InputObject $JSON -AsHashtable
$PWSH = "C:\\Program Files\\PowerShell\\7\\pwsh.exe"
$JHT += @{
  "" = "$PWSH"
}
$Path     = $Env:APPDATA
$CP       = '\Code\User\settings.json'
$Settings = Join-Path -Path $Path -ChildPath $CP
$JHT |
  ConvertTo-Json |
    Out-File -FilePath $Settings

This snippet downloads and installs a new font, Cascadia Code, and creates two new shortcuts for your toolbar. It also updates the settings.json file with some useful settings.


PowerShell 7 has shipped. If you are a Windows PowerShell user, and particularly a fan of the ISE, VS Code is a tool to take on board. To assist you, the PowerShell extension makes VS Code easier to adopt. And the extensions available take VS Code to the next level.

TL;DR: PowerShell 7 with VS Code rocks.
The PowerShell Extension to VS Code just rocks more!

What are you waiting for?

Tuesday, March 10, 2020

PowerShell 7 Chain and Ternary Operators

Introduction and Background

Welcome to this post, part of PowerShell 7's #PSBlogWeek! I hope you are enjoying the many posts.

As I started to think about this topic, an old Grateful Dead song kept running through my mind: "Operator, can you help me? Help me if you please..." For a live version, listen to

So here is some information about a couple of the great new features in PowerShell 7, in particular, the Pipeline Chain Operators and the Ternary Operators.

In the days of Windows PowerShell, extending the PowerShell language was done by the Microsoft Windows PowerShell team. With the move to open source, more developers can, and have, made it possible to do a lot more in PowerShell 7. PowerShell's language was modelled on C# - Jeffrey Snover has often said that PowerShell is on a glide slope to C#.

With PowerShell 7 come two new operator sets: the Pipeline Chain operators and the Ternary operators.

The Pipeline Chain operators (|| and &&) enable conditional execution of commands depending on whether the previous command succeeded or failed. You use the Ternary operators (? and :) as a shorthand way of implementing if/else-type statements. These are popular among C# developers and Bash users and have long been requested in PowerShell.

These operators add new functionality to PowerShell 7. They are nice when used carefully, but can reduce the clarity of production code. Let's look at them in more detail. 

Pipeline Chain Operators

The Pipeline Chain operators enable conditional execution of commands depending on whether a previous command succeeded or failed. There are two pipeline chain operators: && and ||. These operators were added in PowerShell 7 Preview 5.  Prior to PowerShell 7, you could have used If/Else to do the same thing.

These operators come originally from POSIX - POSIX shells call them AND-OR lists. The idea is that, depending on whether a command is successful, you can do different things.
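As a minimal sketch of the semantics (the missing-folder path here is just a placeholder):

```powershell
# && runs the right-hand pipeline only when the left-hand one succeeds
Get-Item -Path $PSHOME && Write-Output 'PowerShell home found'

# || runs the right-hand pipeline only when the left-hand one fails
$Result = (Get-Item -Path 'C:\NoSuchFolder' -ErrorAction SilentlyContinue || Write-Output 'Folder not found')
$Result    # Folder not found
```

Note that the pipe operator binds more tightly than the chain operators, so `a && b | c` means `a && (b | c)`.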

What is it used for?

If a pipeline is successful, this operator allows you to run some other pipeline. But if the first pipeline is unsuccessful, you can run a different pipeline. 

For example:

# Create an SSH key pair - if successful copy 
# the public key to clipboard
ssh-keygen -t rsa -b 2048 && Get-Content -Raw ~\.ssh\ | clip

If the keys are generated successfully using ssh-keygen, then the command reads the public key with Get-Content and copies it to the clipboard. Without these operators, you would have used if/else and/or try/catch - the chain operators make things a bit shorter.

Ternary Operators

The ternary operator evaluates a Boolean expression and returns the result of one of the two expressions, depending on whether the Boolean expression evaluates to true or false.
This sounds more complex than it is (see the example below!). These operators were added in PowerShell 7 Preview 4. 
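Here's a minimal sketch of the syntax - the Boolean expression before the ? selects which of the two values is returned:

```powershell
$Number = 42

# <condition> ? <value-if-true> : <value-if-false>
$Parity = ($Number % 2 -eq 0) ? 'even' : 'odd'
$Parity    # even

# The equivalent if/else is rather longer:
if ($Number % 2 -eq 0) { $Parity = 'even' } else { $Parity = 'odd' }
```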

What Is It Used For?
You typically use this operator to create a string based on the value of a Boolean variable or expression. For example, you could create a string that displays whether an AD user account is enabled, based on the user's Enabled property, or display whether a user is running PowerShell on a Mac. Like this:

# Is the user enabled?
# Create 2 strings
$UEMsg1         = "This user IS enabled in AD"
$UEMsg2         = "This user IS NOT enabled in AD"
# Get details ($UserName holds the user's name)
$UserEnabled    = (Get-ADUser -Identity $UserName).Enabled
# Set the enabled/disabled string
$UserEnabledStr = $UserEnabled ? $UEMsg1 : $UEMsg2
# What this shows for an enabled user:
This user IS enabled in AD
# Another example
$IsMacOS ? 'Yes' : 'No'

You Can but Should You?

I like these new operators but am not likely to use them in code I write, except maybe to demonstrate them. I really do not, yet, see a great use case, except at the console. As an example, look at the chain operator example above. That snippet executes a command and, if successful, copies the public key to the clipboard. 

Personally, I'd have written it more like this:

# Create an SSH key pair - if successful, copy
# the public key to clipboard
try {
  ssh-keygen -t rsa -b 2048
}
catch {
  # handle terminating error - left as an exercise for the reader
}
# then
Get-Content -Raw ~\.ssh\ | clip

If I were running this from the console, I'd just run ssh-keygen. If it ran OK, I'd then type Get-Content and pipe the output to the clipboard. Typing longer lines of code is almost certain to introduce typos, especially given my lousy typing. I find doing things step by step is easiest - both to write and to understand months later when the code needs modification.

These operators have the potential to reduce the clarity of production code. Unless you know them, their meaning is not easy to discern. Operators like -Contains, -Eq, and -Match are at least named so as to give some clue to their use. By contrast, the '?' character is also an alias for Where-Object, and the ':' appears in PSDrive names, which makes these operators overloaded and can diminish the readability of production code. I am sure mileage varies - and I would love to hear comments as well as see more great use cases.


TL;DR: Great new operators that bring requested C# Features to PowerShell - Just use them wisely.

Monday, March 09, 2020

Deploying and Managing Active Directory with PowerShell 7

I am in the process of writing a book on PowerShell 7, and one chapter is devoted to deploying and managing Active Directory. When I began looking at doing the book, it was early days for PowerShell Core, and coverage was kind of poor. Early on, the AD modules did not seem particularly usable from within what was then PowerShell Core 6.x. But having completed the chapter using PowerShell 7 RC3 and RTM, I am pleased to find that AD deployment and management work well with PowerShell 7 (and VS Code).


PowerShell 7 is based on .NET Core 3.1, whereas Windows PowerShell is based on the full .NET Framework. This means that some modules, particularly those installed below the System32 folder, do not work natively in PowerShell 7.

The PowerShell team's compatibility solution for older modules is to leverage remoting. When you attempt to load a non-compatible module, Import-Module creates a remoting session into a Windows PowerShell endpoint. Then, using implicit remoting, it imports functions into the calling session. Thus you can use the commands in those modules as if they were directly supported.

The remoting session created uses the same Process transport that it uses to create background jobs. So it's relatively efficient and doesn't require WinRM. If you load multiple modules via compatibility, PowerShell just creates one session. And if you want to enter the session and look around that too is easy to do.
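As a sketch (on Windows, and assuming the ServerManager module is installed), you can force a module through the compatibility solution and inspect the session it creates:

```powershell
# Import a Windows PowerShell-only module via the compatibility solution
Import-Module -Name ServerManager -UseWindowsPowerShell

# PowerShell creates a single process-based session for all such modules
Get-PSSession -Name WinPSCompatSession

# You can also enter that session and look around
Enter-PSSession -Session (Get-PSSession -Name WinPSCompatSession)
```

The -UseWindowsPowerShell switch forces the compatibility behaviour even for modules that would otherwise load natively.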

What DOESN'T Work?

This compatibility solution works very well but is not a universal get out of jail card. A very small number of modules will never work either directly or via the compatibility solution. Because the solution depends on remoting, some modules do not work due to object serialization that occurs when implicit remoting is invoked. The only solution is for the relevant product team to redevelop their modules (and for at least one module, Update Services, this would require a complete re-engineering of the module). At the time of writing, there are but three modules that simply do not and will not work in PowerShell 7.

The Update Services module, which you would use to manage WSUS, does not work. With this module, you use object methods to perform administrative functions (unlike almost all other modules that deliver commands or cmdlets rather than offering object Methods). The scripting model used by Update Services is reminiscent of COM programming where you instantiate an object and use its methods. A redesign to use command/cmdlets would be a great solution.

When you use the compatibility solution with this module, the methods are stripped off, so you can't really do anything with the objects. The Update Services module also uses SOAP to communicate with the WSUS server, and SOAP is not supported in .NET Core. For that reason, without a complete redesign of the module (either to not use methods or to port the module to .NET Core and eliminate SOAP), you must manage WSUS using Windows PowerShell.

In the early Preview versions of the compatibility feature, had you tried to use some modules, you received hard-to-understand error messages (and, of course, those object methods you needed were missing). The error messages were not actionable and of no value.

The user experience was rather poor, even if you understood the issue. To avoid a bad user experience with this otherwise useful solution, some modules are blocked from being imported. If you try to import UpdateServices, Import-Module raises an error. This is a much better user experience, given that a few modules simply do not work in PowerShell 7.

Import-Module blocks the modules listed in the powershell.config.json file in PowerShell's home folder. This file also holds the list of Experimental Features you have enabled. In my daily build folder, the file looks like this:

{
  "ExperimentalFeatures": [
    "PSCommandNotFoundSuggestion",
    "PSCultureInvariantReplaceOperator",
    "PSImplicitRemotingBatching",
    "PSNullConditionalOperators",
    "Microsoft.PowerShell.Utility.PSManageBreakpointsInRunspace",
    "PSDesiredStateConfiguration.InvokeDscResource"
  ],
  "WindowsPowerShellCompatibilityModuleDenyList": [
    "PSScheduledJob",
    "BestPractices",
    "UpdateServices"
  ],
  "Microsoft.PowerShell:ExecutionPolicy": "RemoteSigned"
}
As you can see, there are only three modules that are simply not going to work in PowerShell 7. For me, this is pretty good going and, at least for Windows, gives IT pros little reason not to move forward.

For those modules that DO work via the compatibility solution, there is one other minor issue you might trip over: the display XML used to format objects returned by a command. This display XML is not present in your PowerShell session by default, which means default output may not be as nice as you might like - for example, when viewing the Windows feature objects returned by Get-WindowsFeature. Fortunately, there is a very simple workaround: just load the display XML manually.

So What About AD?

In terms of Active Directory, there are three modules that you need to use:
  • Server Manager module - this module enables you to add the AD DS feature to a server (which adds the other AD modules).
  • AD Deployment module - this module enables you to create new DCs (effectively do the job of DCPromo).
  • Active Directory module - this module allows you to create, update, and delete objects in the AD database, such as adding users, updating groups, etc.
The Server Manager module is supported by the compatibility solution and all the key commands work as they should. When you load this module, Import-Module generates a message warning you that the module is being imported via the compatibility solution. Once the module is imported, you can add, get, and remove Windows features.

One small issue with this module is that the display XML that makes the output from Get-WindowsFeature look so nice is not present by default in your PowerShell session. If this matters to you, you can deal with it by explicitly importing the display XML like this:
Update-FormatData -PrependPath C:\Windows\System32\WindowsPowerShell\v1.0\modules\servermanager\feature.format.ps1xml
The AD Deployment module is also supported via the compatibility solution and was fully functional. I tested the following scenarios:
  • Create a Forest Root DC
  • Create a replica DC in the first domain
  • Create DC(s) in a child Domain
  • Create an additional Forest and implement a cross-forest trust.
  • Create, update and remove OU, User, Computer, and group objects and manage group membership as well as other admin tasks (eg change password). 
The only relatively minor problem with all of this is that there are no commands to set up the cross-forest trust. To set up the trust, you use the .NET objects directly, and these work in .NET Core. But of course, that wasn't possible using the module in Windows PowerShell either!
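For illustration, here is a sketch of setting up the trust with those .NET classes - the remote forest name and the credentials are hypothetical placeholders:

```powershell
# Build a directory context for the remote (to-be-trusted) forest
$Context = New-Object -TypeName System.DirectoryServices.ActiveDirectory.DirectoryContext -ArgumentList 'Forest', 'Kapoho.com', 'Administrator', 'Pa$$w0rd'

# Get the remote forest and the local forest
$RemoteForest = [System.DirectoryServices.ActiveDirectory.Forest]::GetForest($Context)
$LocalForest  = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

# Create a two-way cross-forest trust
$LocalForest.CreateTrustRelationship($RemoteForest, [System.DirectoryServices.ActiveDirectory.TrustDirection]::Bidirectional)
```

This requires connectivity and admin rights in both forests, so treat it as a starting point rather than a finished script.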

The Active Directory module was one of the first modules to be ported and seems to work well. I have not tested every scenario, but adding/modifying/removing users/groups/computers, managing OU contents, and the like all work just as you would expect.

You can see the PowerShell 7 scripts that I developed on GitHub. Note that these scripts are still being developed, so they may change before the book is published.


PowerShell 7, both now with RC2 and when it is fully released, supports deploying and managing AD forests and domains. There is a minor issue with display XML and Get-WindowsFeature, which has an easy workaround. The key point is that I was able to deploy multiple forests and manage the objects inside AD just as well as with Windows PowerShell.

If you are a Windows IT Pro and use Windows PowerShell to manage Windows services and applications, you really should try PowerShell 7. It's easy to download and use, and you can run it side by side with Windows PowerShell. That enables you to enjoy the new features where you can but fall back to Windows PowerShell when you need to.

What are you waiting for?

Sunday, March 08, 2020

Remoting With PowerShell 7

With version 2 of Windows PowerShell came the PowerShell Remoting feature. Remoting worked but was a bit flaky; in version 3, it was vastly improved. Remoting is built on the PowerShell Remoting Protocol (PSRP). You can read about PSRP at

In Windows PowerShell, all remoting was done using WS-Man, implemented by the WinRM service. With PowerShell 7, you can also perform remoting over SSH. This article covers traditional remoting via WinRM and looks at what's new in PowerShell 7.

PowerShell Remoting

Here's a simplistic picture of the WS-Man based remoting stack in PowerShell:

At the bottom is HTTP/HTTPS. Remoting uses HTTP and HTTPS to carry objects between remoting clients and remoting targets. Although remoting uses HTTP, the higher levels in this stack encrypt traffic. You can use HTTPS as a transport, which provides mutual authentication via certificates - useful in some scenarios (eg a DMZ workgroup). 

Simple Object Access Protocol (SOAP) is used to carry the data - the objects exchanged between a remoting client and target. SOAP uses XML to hold the objects transferred between the client and target. All transferred data (objects) are first serialised into XML and deserialised at the other end. The side effect of this is that methods are stripped off. 

WS-MAN controls the end-to-end communications for remoting and is implemented by the WinRM service. When a remoting client establishes a connection to a remote machine, it connects to a specific endpoint. The default remoting endpoint name is held in the variable $PSSessionConfigurationName.

Microsoft adapted the WS-MAN service to work with Windows via WSMV (the WS-Management Protocol Extensions for Windows Vista). You can read about this layer of the remoting stack at:

At the top of the stack is the PowerShell Remoting Protocol. With PSRP, remoting clients establish a session with a remoting target and use that session to send structured pipelines to the target and receive the results of those pipelines. Remoting establishes a session which holds state information in what PowerShell terms a runspace. For more information on PSRP, see:

Remoting is a complex subject and this overview necessarily omits much of the lower-level details. 

Using Remoting

You can use remoting in three main ways:
  • Use Enter-PSSession to enter a telnet-like remote session. Commands you type are executed on the remoting target and you see the (serialized) results.
  • Use Invoke-Command to run a script block or a script file on the remoting target.
  • Use New-PSSession to create a new remoting session on the remoting target and then use the other two mechanisms to run commands on the target.
If you use Enter-PSSession, PowerShell, in effect, runs a copy of PowerShell on the target host into which you type commands and see the results. State in the remote runspace is maintained until you exit the remoting session. With Invoke-Command, PowerShell tears down the remote runspace when the script block or script file completes.

If you create a remote session using New-PSSession, you can use Invoke-Command and Enter-PSSession and PowerShell maintains state. You can also specify how long PowerShell should keep the remote runspace active.
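A sketch of that pattern (SRV1 is a hypothetical remoting target):

```powershell
# Create a session on the remoting target
$Session = New-PSSession -ComputerName SRV1

# State set in one call...
Invoke-Command -Session $Session -ScriptBlock { $Answer = 6 * 7 }

# ...is still there in the next
Invoke-Command -Session $Session -ScriptBlock { $Answer }    # 42

# You can also hop into the same session interactively
Enter-PSSession -Session $Session

# Clean up when done
Remove-PSSession -Session $Session
```

To control how long the remote runspace stays alive, pass a session option such as New-PSSessionOption -IdleTimeout (in milliseconds) to New-PSSession.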

Remoting in PowerShell 7

PowerShell 7 implements the same remoting but there are three gotchas:

By default, the installation process does NOT create PowerShell 7 remoting endpoints. So after installing PowerShell 7, running Get-PSSessionConfiguration shows NO endpoints: 

You can enable the PowerShell 7 endpoints by using Enable-PSRemoting. After enabling remoting, you see two PowerShell 7 endpoints, like this.

Notice that the endpoint name no longer contains Microsoft.
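A sketch of enabling and then targeting the new endpoints (run elevated; the exact endpoint names vary slightly with the installed version):

```powershell
# Create the PowerShell 7 remoting endpoints
Enable-PSRemoting -Force

# View them - the names look like PowerShell.7 and PowerShell.7.0.0
Get-PSSessionConfiguration | Format-Table -Property Name, PSVersion

# Target a PowerShell 7 endpoint explicitly
Invoke-Command -ComputerName localhost -ConfigurationName 'PowerShell.7' -ScriptBlock { $PSVersionTable.PSVersion }
```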

In PowerShell 7, Get-PSSessionConfiguration does not show Windows PowerShell endpoints. Even though those endpoints exist, you can't see them from PowerShell 7.

If you use either Invoke-Command or Enter-PSSession, you can specify a specific endpoint to use. If you specify no endpoint, then PowerShell remoting uses the value of $PSSessionConfigurationName. So by default, this is the behaviour on the local machine:

Once you create the PowerShell endpoints - you can use them like this:

In most cases, remoting is going to work just as it did with Windows PowerShell - but with these differences. 

I am hoping that the inability to see Windows PowerShell endpoints is a bug that can be fixed in 7.1. We'll see.

Sunday, February 23, 2020

Fun and Games With Windows Insiders Builds

For a very long time, I've quite enjoyed testing early builds of things. Getting onto the DOS 5 beta was cool, as was NT 3.1 B1. More recently, the Windows Insiders builds have been interesting. Most have been pretty simple - a bit of new functionality here, a bit there. Occasionally, stuff just doesn't work - in one build WSL was broken, in another Bluetooth was borked. Such is life - and if the feedback generated is helpful that's great.

Recently, I had some more serious issues. Build after build would simply not stick. After something like 25 failed attempts, there were two suggestions. The first was to take Hyper-V off which seemed like Voodoo (not that that is a bad thing!). The second was to do kernel debugging to help the team work out why the failures were occurring.

So I began looking at kernel debugging across the network. The directions at seemed to be pretty simple. And assuming you can read, they looked easy enough for me to try. 

Initially, I had issues using KDNet and got some curious error messages. KDNet said that my NIC was supported, but not plugged in - weird. I used Google to see if I could find anything about the error message. After getting windbg set up on the debug host, KDNet on the target gave me further registry read errors. I took the opportunity to change the network cable (although the old one worked fine in another host). But I was stuck with networking not working. 

Using KDNet to set up kernel debugging makes changes to the boot configuration database (you can see them with BCDEdit). And that means after rebooting, the NIC appears to be gone. Looking at Device Manager, I saw a weird (to me) error message: "This Device Has Been Reserved for Use by the Windows Kernel Debugger for the Duration of This Boot Session." I was about to just flatten the box when one last bit of search engine magic pointed me to this post:

But even so, my system was just not working and I decided that flattening it and starting over would be the fastest way to move on. But given how long it was going to take - why not try to take Hyper-V off and see if it made a difference. So after backing up 250GB of VMs, I took Hyper-V off and rebooted.

With Hyper-V removed, I was back to having just one NIC, and after configuring the IP address, it worked. Then I used WU to download the latest Insiders build - and that installed OK! Adding Hyper-V and importing the VMs went flawlessly and I'm back in business. On the latest Insider's build and with a working Hyper-V farm (and no need to reinstall all those apps...)

Thanks to some great folks: Eddie Leonard, Jason Howard, and Murray Wall were all helpful. Murray was the one who suggested removing Hyper-V (which did not seem logical). Jason and Eddie got me out of a mess of my own making. It is reassuring to know that the Insiders folks are able to help!

Kernel Debugging across the network is easy if you follow directions properly. IMHO, the instructions could be improved a bit for non-dev types and to make it even clearer what to do on the host vs what to do on the target. And for sure a "how to undo what kdnet did" section would be useful! 

It is disappointing that this episode did not capture any useful output for the Insiders team.  But it's good to be back and working again.  

Friday, January 17, 2020

PowerShell 7 - Release Candidate 2 has shipped

So there I was at home, totally relaxed after a lovely yoga class, when Twitter - well, actually Steve Lee via Twitter - informed the world that RC2 had shipped. My immediate reaction was, well, as expected: download and install it NOW!

The Install-PowerShell script from GitHub installs PowerShell on Linux, Mac, and Windows. You can use it to download released versions as well as preview versions such as Release Candidate 2. I saved it locally and run it more or less daily to get the latest daily build. 
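For reference, here's a sketch of how I use it, based on the one-liner documented in the PowerShell repo (the aka.ms URL and the -Daily/-UseMSI parameters come from that documentation):

```powershell
# One shot: download the installer script and run it, asking for the MSI package
Invoke-Expression "& { $(Invoke-RestMethod 'https://aka.ms/install-powershell.ps1') } -UseMSI"

# Or save the script locally and run it whenever you want the latest daily build
Invoke-RestMethod 'https://aka.ms/install-powershell.ps1' -OutFile Install-PowerShell.ps1
.\Install-PowerShell.ps1 -Daily
```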

So what IS a Release Candidate? In theory, an RC is software that should be the final version except for some remaining bugs. The idea of an RC is to test it and find those bugs before the final release. Again, in theory, an RC build should be feature complete - but sometimes 'fixes' do rather look like new features. For the most part, though, RC2 is what PowerShell 7 is going to be, minus a few bugs. 

And why does this RC matter? Well - for almost all IT pros (at least Windows IT pros), Windows PowerShell has been a core tool for a decade. For many, if not most, of you, PowerShell 7 is an upgraded replacement tool. It is now fully supported in production and is really the future of PowerShell. EVERY IT pro that presently uses Windows PowerShell should take a close look. Speaking personally, I now use PowerShell 7 for just about everything.

If you are an IT pro and you use Windows PowerShell now, take that look. With so many neat new features, you are likely to be pleased, even if you can't, today, use PowerShell 7 for everything.

Wednesday, January 15, 2020

Background Jobs in PowerShell 7

The PowerShell background jobs feature allows you to run scripts or script blocks in the background. You use the Start-Job command to start a job, and Get-Job and Receive-Job to view jobs and to get the output of a job.

With PowerShell 7, you have the option of running a job using either PowerShell 7 (the default) or Windows PowerShell 5.1. You indicate this by using the -PSVersion parameter and specifying "5.1". Such jobs then run under Windows PowerShell, which can be useful if you are using PowerShell 7 but have scripts that are not yet compatible (eg scripts using WSUS). You can kick off those scripts as background jobs, have them run in Windows PowerShell, and then incorporate the results in a PowerShell 7 script.

Here's an example:

In this screenshot, you can see I am running this in today's build of the day. I run a simple job and view the job results. By default, PowerShell runs the script in the same version of PowerShell (ie today's daily build). Then I ran the same script but explicitly asked for Windows PowerShell 5.1, with the results you can see.
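In case the screenshot doesn't come through, here is a minimal sketch of the same idea (-PSVersion requires Windows, since it launches Windows PowerShell 5.1 under the covers):

```powershell
# Default: the job runs in the same PowerShell 7 engine that started it
$j1 = Start-Job -ScriptBlock { $PSVersionTable.PSVersion }
Wait-Job $j1 | Out-Null
Receive-Job $j1 -Keep

# Explicitly ask for Windows PowerShell 5.1 - the job reports a 5.1.x
# version, confirming it ran in the downlevel engine
$j2 = Start-Job -PSVersion 5.1 -ScriptBlock { $PSVersionTable.PSVersion }
Wait-Job $j2 | Out-Null
Receive-Job $j2 -Keep
```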

This is a nice feature to assist with backwards compatibility. PowerShell 7 provides great forward compatibility but it's not yet a perfect replacement for Windows PowerShell. Features like this provide a great workaround when a command is not supported natively. 

Sunday, January 12, 2020

Planet PowerShell - A New PowerShell Resource

As I am sure you know, the Interweb contains a great deal of excellent PowerShell-related content. The question is: how do you find it?

If you know what you are looking for, today's search engines are pretty awesome. I use Google and Bing a lot. In my training, I encourage 'Google Engineering' as a way to be successful with PowerShell. Search engines are outstanding, so long as you know what you are looking for.

But what if you just want to learn more?  Planet PowerShell is a community web site that aggregates blog posts from around the internet. The URL for the home page is: And it looks like this:

From this landing page, you can click on the Preview button to bring up the most recent posts, like this:

From this page, you can click on the View Original Post button to view the post.

Also, if you are on Twitter, you can follow the site's Twitter feed at @planetpshell. That feed looks like this:

This is a great resource to help you to learn more!

Tuesday, December 24, 2019

Changing Default Parameter Values - Yet Another Use for Hash Tables

In PowerShell, the hash table is an amazingly useful .NET class, with all sorts of uses. With hash tables, you can:

  • Pass a hash table and an object to Select-Object, and have the cmdlet add a calculated property to the object for later use
  • Pass a hash table to the Format-* commands and have them create and display a new property that looks just the way you want it
  • Splatting, splatting, and more splatting - what a great use of hash tables
  • Specify property values for obscure properties in Set-ADUser, Set-ADComputer, and Set-ADObject
  • And many more!
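To make a couple of those concrete, here's a short sketch (the folder path is purely an illustration):

```powershell
# A calculated property: the hash table supplies the Name and Expression
Get-Process |
  Select-Object -First 3 -Property Name,
    @{Name = 'WS(MB)'; Expression = { [math]::Round($_.WS / 1MB, 2) }}

# Splatting: build the parameters as a hash table, then pass it with @
$params = @{
  Path   = 'C:\Foo'    # hypothetical folder - use your own
  Filter = '*.ps1'
}
Get-ChildItem @params
```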
But one of my favourite uses is to change the default values for certain properties within a PowerShell session or even a script.

Here's something I am sure you have all faced. If you run a PowerShell job, you retrieve the output by using Receive-Job. By default, once you run that command, the output is removed from memory and is no longer available. Of course, you could use the -Keep parameter - if you remembered to, which I often don't. My muscle memory is not yet that well developed in this regard. So I often find myself losing output (which takes time to reproduce). I'd like to do better!

There are other commands, too, where I just prefer a different value for some parameters. I'm not saying that the developers were wrong with the defaults they chose; it's just that I prefer different values. For each of these, for example -Wrap with Format-Table, I'd like to tell PowerShell to use a different default setting for a parameter on one or more commands. 

As ever with PowerShell - you can have it your way, and of course, there's a script for that!

The basic hash table class is System.Collections.Hashtable. Developers across the Windows and, increasingly, the Linux ecosystems also create specific objects derived from the hash table class. You can read about the hash table class at: The class is broadly the same in the full .NET Framework and .NET Core.

So how does this help to set default command values? When Windows PowerShell and PowerShell start up, the engine creates a variable: $PSDefaultParameterValues. This variable is of type System.Management.Automation.DefaultParameterDictionary, which you can read about at: This object is a hash table with some improvements for parameter caching.

You add entries to this hash table to describe the command name, parameter name, and the new value you want PowerShell to use subsequently. For the key, you specify the command and parameter as "<CommandName>:<ParameterName>"; for the value, you specify the value you want for that parameter. And of course, you can use wild cards to specify the command or parameter names. You build this hash table either in a script (meaning that all commands in that script get the different default value) or stick it in your profile, as I do. So, for example, if you wanted to ensure that Receive-Job always used the -Keep parameter, you would specify:
$PSDefaultParameterValues = @{'Receive-Job:Keep' = $true}
My standard profile on my laptop sets this and other values:
$PSDefaultParameterValues = @{'*:AutoSize' = $true; '*:Wrap' = $true; 'Receive-Job:Keep' = $true}
This way, any command that has an -AutoSize parameter (eg Format-Table) always uses -AutoSize, and likewise for any command that has a -Wrap parameter.
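One refinement: assigning a literal hash table replaces whatever is already in $PSDefaultParameterValues, so if you set defaults in more than one place, index into the existing table instead. A small sketch:

```powershell
# Add (or update) individual defaults without clobbering existing entries
$PSDefaultParameterValues['Receive-Job:Keep']  = $true
$PSDefaultParameterValues['Format-*:AutoSize'] = $true   # wild cards work in the key

# Remove a default you no longer want
$PSDefaultParameterValues.Remove('Receive-Job:Keep')

# Or temporarily switch all the defaults off via the special 'Disabled' key
$PSDefaultParameterValues['Disabled'] = $true
```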

I just love hash tables - and this is another great example of where you can use one to have it your way.  

Monday, December 23, 2019

PowerShell Core/7 - Velocity

When the open-source PowerShell team shipped the first two releases of PowerShell Core 6.x, I tweeted that this was a little bit like what I remember of PowerShell V1. A product with incredible potential, a release that is a fantastic proof of concept, but with a loooooong way to go. Jeffrey Snover, in a reply tweet, agreed but made the point that PowerShell Core had a much better velocity. 

As I use PowerShell 7 RC1, I realise just how true his words were. For me, this is kind of like going from PowerShell 1 to PowerShell 5 - it's that big of an improvement in functionality, usability, and performance. 

The PowerShell team publishes a public usage dashboard; if you navigate there, you see something like this:

There are two things about this dashboard that interest me. The first is that the bulk of usage is Linux, with a small bit of Windows and a tiny bit of Mac. But even more interesting is the spike of Linux usage over weekends.

As you can see from the overall graphs, usage is growing - I would expect a big increase in Windows come the new year. We'll see!