I’ve just seen an interesting blog article that shows how to use PowerShell Jobs to do a specific task in Forefront Identity Manager (FIM). The article was neat in two ways.
First, even though I am not really an FIM guy, by looking at the script I had a pretty clear idea of what it would do. So while I am not clear on the details (just how big is a management agent, how complex is the import process, and what do I do with them anyway?), I can see what the script will do. This illustrates a point I keep making in my PowerShell classes: learning a new product’s cmdlets is mainly about learning how that product’s objects are surfaced (WMI, cmdlet, provider, etc.) and then what those objects are and what they are used for. The latter, of course, requires product knowledge outside of PowerShell.
The second neat thing about this article was how the author took a set of operations (importing several management agents into FIM) and ran each one as a separate PowerShell job. You could have written the script without jobs at all, just calling the script block directly inside the loop rather than wrapping it in Start-Job. Using jobs delivers parallelism that can dramatically reduce the time the overall task takes, as the sketch below suggests. But, as ever, it depends.
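To make the contrast concrete, here is a minimal sketch of the two approaches. This is not the original script: Import-ManagementAgent is a hypothetical stand-in for whatever call actually performs the FIM import.

```powershell
# Minimal sketch, not the original script. Import-ManagementAgent is a
# hypothetical stand-in for whatever actually performs the FIM import.
$agents     = 'HR-MA', 'AD-MA', 'SQL-MA'
$importWork = { param($name) Import-ManagementAgent -Name $name }

# Without jobs: call the script block directly inside the loop (sequential).
foreach ($agent in $agents) {
    & $importWork $agent
}

# With jobs: wrap the same script block in Start-Job, one job per agent.
# Each job is a fresh PowerShell.exe, so any module or function the work
# needs must be loaded inside the script block itself.
$jobs = foreach ($agent in $agents) {
    Start-Job -ScriptBlock $importWork -ArgumentList $agent
}
$jobs | Wait-Job | Receive-Job    # collect the output once everything finishes
Remove-Job -Job $jobs             # clean up the job objects
```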
Each time you run a PowerShell job (whether in the ISE or the console), PowerShell creates a new instance of PowerShell.exe to execute the script or script block. This means each job comes with a bit of overhead: process creation has CPU, I/O and memory costs. But if you can run multiple tasks in separate processes, then thanks to Windows’s multiprocessing, the jobs run in parallel and the overall elapsed time can be much shorter. At least in theory!
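You can see both the overhead and the pay-off with nothing more than Start-Sleep standing in for the real work (a rough sketch, not from the original article):

```powershell
# Sequential: three 5-second tasks take roughly 15 seconds of wall-clock time.
Measure-Command {
    1..3 | ForEach-Object { Start-Sleep -Seconds 5 }
}

# As jobs: each task runs in its own PowerShell.exe, so wall-clock time is
# closer to 5 seconds, plus the cost of spinning up three processes.
Measure-Command {
    $jobs = 1..3 | ForEach-Object {
        Start-Job -ScriptBlock { Start-Sleep -Seconds 5 }
    }
    $jobs | Wait-Job | Out-Null
    Remove-Job -Job $jobs
}
```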
As ever, it depends. If the task being executed uses a lot of resources, then running multiple tasks in parallel raises the total resource utilisation. Specifically, with more than one or two ‘heavy’ tasks, you can find the system is paging heavily. That paging can end up slowing execution down to the point where the benefits of parallelisation are swept away by the paging costs.
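One mitigation, and again this is my own rough sketch rather than anything from the original article, is to throttle how many jobs are allowed to run at once so the heavy tasks don’t swamp the box:

```powershell
# Cap the number of concurrent jobs; the limit of 2 is purely illustrative.
$maxConcurrent = 2
$workItems     = 1..6    # stand-ins for the real tasks

$jobs = @()
foreach ($item in $workItems) {
    # Wait for a free slot before starting the next job. Note that Get-Job
    # counts every running job in the session, not just the ones started here.
    while ((Get-Job -State Running).Count -ge $maxConcurrent) {
        Start-Sleep -Milliseconds 500
    }
    $jobs += Start-Job -ScriptBlock { param($i) Start-Sleep -Seconds 5; $i } `
                       -ArgumentList $item
}
$jobs | Wait-Job | Receive-Job
Remove-Job -Job $jobs
```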
A related issue is what I call the 3:00 wakeup test. If I had to be woken up at 3:00 in the morning to fix this script, what would I make of it? Because the jobs-based script is more complex, it might take me longer to work out what is going on (and therefore how to fix things).
So: this script illustrates a technique that can really improve the performance of certain tasks. But as ever, it depends!