I wanted to rebuild my vRealize Automation environment on 7.6, as I had been running 7.3 for quite a while. After installing the appliance along with the IaaS hosts, I noticed that running the “vSphere Initial Setup” XaaS form would fail to perform the initial configuration. I attempted running the workflow in both vRA and the embedded vRO to see if maybe something was wrong with the XaaS form, however much to my dismay that wasn’t the case. Some information on my environment and what I used:

 

- The vRA environment is a lab, so I used the default tenant
- The vCenter endpoint is named “vcenter01”
- The agent is named “Agent01”
- The login for the vCenter endpoint was the default administrator@vsphere.local
- The requester was configurationadmin from the out-of-box setup in the final installation steps

 

Running the workflow would step through about half of the built-in scripts and then yield the following logs (I stripped out the preceding entries for simplicity, else this post would be exceptionally long):

[2019-08-29 17:18:15.025] [I] ********************************************************

[2019-08-29 17:18:15.031] [I] Trigger data collection operation

[2019-08-29 17:18:15.037] [I] ********************************************************

[2019-08-29 17:18:15.105] [I] Agent entity with name vcenter01 found

[2019-08-29 17:18:15.111] [I] Wait for data collection statuses to be created.

[2019-08-29 17:18:15.116] [E] Error in (Workflow:vSphere Initial Setup / Data Collect Endpoint resources (item33)#43) TypeError: Cannot call method "getProperty" of undefined

[2019-08-29 17:18:15.152] [I] --------------------------------------------------

[2019-08-29 17:18:15.157] [E] Unable to execute step: Trigger data collection operation

[2019-08-29 17:18:15.162] [I] --------------------------------------------------

[2019-08-29 17:18:15.184] [I] Rollback creates entities

[2019-08-29 17:18:15.194] [I] 4c8ecb26-8a2c-48c3-b2d1-6bc138e1be71

[2019-08-29 17:18:17.198] [I] Fabric group with uuid 4c8ecb26-8a2c-48c3-b2d1-6bc138e1be71 deleted successfully

[2019-08-29 17:18:17.318] [E] Workflow execution stack:



***

item: 'vSphere Initial Setup/item1', state: 'failed', business state: 'Data Collect Configured Endpoint', exception: 'TypeError: Cannot call method "getProperty" of undefined (Workflow:vSphere Initial Setup / Data Collect Endpoint resources (item33)#43)'

workflow: 'vSphere Initial Setup' (daf78a25-baf7-4e8e-a12a-b7b1b0576795) 

What stood out to me was the “Agent entity with name vcenter01 found.” I noticed this was the endpoint name entered in the XaaS form initially, and likewise in the workflow when run from vRO. So my next step was to drill down into what was going on. I first wanted to identify the dependencies, and noticed the only attribute used is the vCAC:VCACHost variable, which was easy to map as it was simply vRA. I cloned the script into its own workflow and recreated the bindings. It did not hold any ins or outs either (excluding the attribute), which led to the following:

I ran this workflow on its own to see if it would yield the same error. The above error was thrown once more, and what stood out to me again was that ‘vcenter01’ was “found.” The endpoint, however, is called ‘vcenter01’, not the agent. On the next attempt, I decided to input the agent name instead, because I noticed one particular oddity within the script.

var agentEntity = getAgentByName(iaasHost, endPointName);

function getAgentByName(vcacHost, agentName){
	var modelName = 'ManagementModelEntities.svc';
	var entitySetName = 'Agent';
	var properties = {
		AgentName : agentName
	};

	var resultEntity;
	var resultEntities = vCACEntityManager.readModelEntitiesByCustomFilter(vcacHost.id, modelName, entitySetName, properties, null);

	if(resultEntities && resultEntities.length > 0){
		resultEntity = resultEntities[0];
	}

	// Note: this logs "found" whether or not anything was actually found
	System.log("Agent entity with name " + agentName + " found");

	return resultEntity;
}

The final “System.log” call caught my attention, as it simply echoes whatever the input value is. It never actually validates that the agent was found, so when resultEntities comes back empty, the function returns an undefined value. That was when I tried inputting “Agent01” as the value for “endPointName” instead. This worked: it found the agent and began the data collection process. A bit strange, so I gave it a try on the original workflow.
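For illustration, a guarded version of the lookup might look like the following. This is only a sketch: `findEntities` is a stand-in for `vCACEntityManager.readModelEntitiesByCustomFilter`, which only exists inside vRO, so this is illustrative rather than drop-in code.

```javascript
// Sketch of the lookup with real validation. findEntities stands in
// for the vRO entity-manager call and takes a properties filter.
function getAgentByName(findEntities, agentName) {
    var resultEntities = findEntities({ AgentName: agentName });

    // Only claim success when an entity actually came back.
    if (resultEntities && resultEntities.length > 0) {
        return resultEntities[0];
    }

    // Fail loudly instead of returning undefined and logging "found".
    throw new Error("No agent entity found with name " + agentName);
}
```

With a guard like this, passing the endpoint name would have thrown a clear error instead of letting a later script die on an undefined value.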

Of course, an easy solution is typically not the right one. This failed because, as I assumed, the ‘vcenter01’ endpoint name is used in previous scripts. A dilemma, really, but given that no “outs” were needed in the data collection portion, I figured I could cobble together a way to at least satisfy it. While not the most elegant way, I created a string attribute named “globalAgentName”, assigned it a static value of “Agent01”, and tied it into the visual binding region.

The script, of course, also had any mention of “endPointName” removed and substituted with the “globalAgentName” attribute to pass the agent entity name instead. This worked! The inventory collection began to function without any issues, and I was able to invoke the workflow for inventory collection at any point. The next part was to create a XaaS form for it so it could be requested from the vRA self-service portal.

 

I proceeded to clone the vSphere Initial Setup, add in the modifications made and start adding the extra functionality needed for the Agent reference. The first thing I wanted to do was convert the attribute back to just an “IN” element, that way I could integrate it within the presentation form and nest it under the vCenter server settings as a mandatory input.

Now instead of being hardcoded in, the individual can pass any name they want. 

The next step was to create the XaaS form on the design tab of vRA. Adding the workflow was a simple process to do, and vRA did a fantastic job of generating all the form elements required. The following images depict the steps I took:

 

We can see here that the new workflow is there with the additional input variable I added.

I changed the text here a little to differentiate from the original workflow.

The form automatically imports the presentation from vRO, which saved a lot of time! It also showed the new input variable I added.

Given this isn’t provisioning anything but only configuring the environment, no managed machines are being instantiated, so we’ll leave it as “No provisioning.”

Simple part: just adding it to the XaaS category.

 

Now with that all complete, I published the blueprint and added the entitlements to it. When requesting the XaaS item, I could see the token complete just fine in vRO, and the endpoint was added in vRA along with all of its inventory objects.

I’ve been working on a vRealize Automation distributed environment for some time now. During that time, I wanted to try adding a proxy agent, but I decided that prior to that I’d deploy a CA in my environment instead of using self-signed certificates across the board. To facilitate this, I used a Windows Server 2016 CA and OpenSSL to generate the PEM files I needed for the vSphere appliances. However, while automating certificate generation in my lab lately, I noticed a particular flaw when generating the openssl.cfg file used to materialize the CSR. The following code was used to perform the request:

$opensslCfg = <your_config_information>
$opensslCfg > openssl.cfg
openssl req -new -nodes -out rui.cer -keyout rui-org.key -config "C:/<path_to_config>/openssl.cfg"

Doing this would yield the following error:

unable to find 'distinguished_name' in config
problems making Certificate Request
3252:error:0E06D06A:configuration file routines:NCONF_get_string:no conf or environment variable:crypto\conf\conf_lib.c:270

After a lot of troubleshooting, I took a file that had previously worked when created manually and put it in a tool to check for differences. The text was effectively the same. I tried pasting the text into vim on a Linux box to perform the OpenSSL command and found it worked. This certainly puzzled me, and I went so far as to reinstall OpenSSL and ensure the environment variables were correctly configured. Upon further inspection, I found that the encodings of the files were different: PowerShell’s redirection saved the file as UTF-16 by default, but the original working file was marked as UTF-8. I changed the file generated by PowerShell to UTF-8 and it worked flawlessly.
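The difference is easy to see in the raw bytes. Here's a quick Node.js sketch (just an illustration of the two encodings, not part of the original fix): UTF-16LE, which Windows PowerShell's `>` redirection produces, prefixes the file with a byte order mark and stores two bytes per ASCII character, neither of which OpenSSL's config parser expects.

```javascript
// The same config text in the two encodings involved. UTF-16LE adds a
// 0xFF 0xFE BOM and interleaves a 0x00 byte after each ASCII character;
// OpenSSL's config parser expects plain single-byte text.
const line = "[req]\ndistinguished_name = req_distinguished_name\n";

const utf8 = Buffer.from(line, "utf8");
const utf16 = Buffer.concat([
    Buffer.from([0xff, 0xfe]),       // UTF-16LE byte order mark
    Buffer.from(line, "utf16le"),    // two bytes per ASCII character
]);

console.log("utf8 bytes:", utf8.length);
console.log("utf16 bytes:", utf16.length);
```

The UTF-16 file is slightly more than double the size of the UTF-8 one for the same text, which is also a quick way to spot the problem on disk.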

I used this code to perform the cfg generation instead, since [IO.File]::WriteAllLines writes UTF-8 by default:

[IO.File]::WriteAllLines($fileName, $opensslCfg)

This resulted in the following code:

$opensslCfg = <your_config_information>
[IO.File]::WriteAllLines($fileName, $opensslCfg)
openssl req -new -nodes -out rui.cer -keyout rui-org.key -config "C:/<path_to_config>/openssl.cfg"

I haven’t found many resources on leveraging the NetApp PowerShell cmdlets, and I wanted to start using this toolkit to automate some of the work I do. I noticed an interesting challenge most people hit when connecting to their filers or clusters: Windows domain authentication was required because an RPC server was unavailable. My goal was to connect to a cluster without Windows authentication, as I want to set up a NetApp controller from scratch. (This excludes joining two nodes as HA partners for an initial cluster setup.)

For reference, I’m using a FAS2520 in my lab, where I’ll be testing the majority of these NetApp scripts.

For example, the image below depicts the RPC error.

I realized this wasn’t enough and tried using the -HTTPS switch along with passing credentials to see if I could connect to the filer.

This one worked, and I noticed a nice message from NetApp indicating to use the ‘Connect-NcController’ cmdlet instead; with ‘Connect-NaController,’ a lot of functionality didn’t appear to work. After connecting via Connect-NcController, I was able to freely modify the cluster with relative ease.

Needed to put a quick patchwork script together to obtain the management IPs and serial numbers from hosts across multiple vCenters.

Add-PSSnapin VMware.VimAutomation.Core

$credentials = Get-Credential

function Get-VIHostInfo
{
    foreach($vmhost in Get-VMHost)
    {
        #Grab the management-enabled VMkernel IP for this host
        $managementIP = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
            Where-Object { $_.ManagementTrafficEnabled } |
            ForEach-Object { $_.Ip }
        $esxcli = Get-EsxCli -VMHost $vmhost
        Write-Host "$($vmhost.Name),$($esxcli.hardware.platform.get().SerialNumber),$managementIP"
    }
}

$arrVIServers = @("<vCenter1>", "<vCenter2>") #Insert vCenter IPs here

foreach($server in $arrVIServers)
{
    Connect-VIServer -Server $server -Credential $credentials
    Get-VIHostInfo
    Disconnect-VIServer -Server $server -Force -Confirm:$false
}

I’ve been working with vSphere 6.5 lately and I noticed a shortcoming that I’m sure others have seen too. There’s no current option to upload multiple files to a datastore, which leaves uploading files one at a time as the only option. Suffice it to say, that’s not something I’d like to do. I made a quick script to help facilitate uploading folders along with their structure.

#Dependency for pointing to a folder
[System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms")

#region Template script for handling VIServer login

Write-Host "Welcome to the folder uploader script."
$VIServerConnection = $false
$datastoreValue = $false
$datastore = $null

While(!$VIServerConnection)
{
    #Gather vSphere username and password
    Write-Host "Please input the vCenter login credentials:"
    $vCenterCreds = Get-Credential

    #Request for the address (GenerateForm / GenerateWarningForm are custom helper functions, not shown here)
    Write-Host "Please input the vCenter IP address:"
    $vCenterAddress = GenerateForm -formName "vCenter Address" -formContent "Input the vCenter IP address"
    $vCenterAddress = GenerateWarningForm -priorFormContent $vCenterAddress -formName "vCenter Address" -formContent "Are you sure this is the correct information?"

    $VIServer = Connect-VIServer -Server $vCenterAddress -Credential $vCenterCreds -WarningAction Continue

    if($VIServer.IsConnected)
    {
        $ViServerConnection = $true
    }else{
        cls
        Write-Host "Unable to establish a connection to the vCenter: $vCenterAddress! Please try again."
    }
}

#endregion

while(!$datastoreValue)
{
    #Grab the datastore name
    Write-Host "Please input the datastore name"
    $datastoreName = Read-Host
    $datastore = Get-Datastore $datastoreName

    if($datastore -ne $null)
    {
        $datastoreValue = $true
    }else{
        Write-Host "Error finding the datastore with the name $datastoreName"
    }
}

$folderpath = $null
$folder = New-Object System.Windows.Forms.FolderBrowserDialog
$folder.rootfolder = "MyComputer"

if($folder.ShowDialog() -eq "OK")
{
    $folderpath += $folder.SelectedPath
}

#create the psdrive
New-PSDrive -PSProvider VimDatastore -Root "\testfolder" -Location $datastore -Name ds1
New-PSDrive -PSProvider FileSystem -Root $folderpath -Name base

Copy-DatastoreItem base: ds1: -Force -Recurse

#Optional to remove the folder that was created
#Code below will match folder name
#$path = [Regex]::Match($folderpath, '\\(?:.(?!\\))+$')
#$path = "ds1:" + $path.Value
#Remove-Item -Path $path -Recurse | Where {$_.PSIsContainer }
Remove-PSDrive -Name ds1
Remove-PSDrive -Name base

I ran into a bit of a problem when attempting to leverage textboxes in WPF. Before tackling WPF applications (which is something I’m still learning to utilize), I primarily created WFA (Windows Forms) applications for tools that might make life easier. I started to hit a few dislikes with WFA, which led me to begin developing WPF applications instead.

My goal in this program was simply to have a line limit of 20; after the line count exceeded that threshold, it would drop the oldest entry. With WFA it was relatively simple with the code below:

public void AppendTextBetter(string text, ListBox logbox)
{
    // Newest entries are inserted at the top; once the cap is
    // exceeded, the oldest entry (at the bottom) is dropped.
    if (logbox.Items.Count > 20)
    {
        logbox.Items.RemoveAt(logbox.Items.Count - 1);
    }
    logbox.Items.Insert(0, text);
}

Now, with WPF I had to approach this a little differently, and searching around I couldn’t find many helpful resources on the task. However, I found a way to create the same functionality. I used a SubArray extension method from another post I found, and implemented it into the WPF program.

private void textChanged(object sender, TextChangedEventArgs e)
{
    if (textBox.LineCount > maxLines)
    {
        // Drop the oldest line and rebuild the textbox contents
        string[] arrLines = textBox.Text.Split('\n');
        arrLines = arrLines.SubArrayDeepClone(1, maxLines - 1);
        textBox.Text = "";

        foreach (string s in arrLines)
        {
            textBox.Text += s + "\n";
        }
    }
}

The extension method, with credit to the poster on this thread – SubArray Link:

public static T[] SubArrayDeepClone<T>(this T[] data, int index, int length)
{
    T[] arrCopy = new T[length];
    Array.Copy(data, index, arrCopy, 0, length);
    using (MemoryStream ms = new MemoryStream())
    {
        var bf = new BinaryFormatter();
        bf.Serialize(ms, arrCopy);
        ms.Position = 0;
        return (T[])bf.Deserialize(ms);
    }
}
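The trimming logic itself is independent of the UI toolkit. Here's the same idea sketched as a plain JavaScript function (a hypothetical `capLines` helper, not the WPF code): when the text exceeds the cap, keep only the newest lines.

```javascript
// Keep at most maxLines lines of a log blob, discarding the oldest.
// Mirrors the textbox handler above, minus the control plumbing.
function capLines(text, maxLines) {
    var lines = text.split("\n");
    if (lines.length <= maxLines) {
        return text;
    }
    // Drop everything before the newest maxLines entries.
    return lines.slice(lines.length - maxLines).join("\n");
}
```

A slice like this also sidesteps the deep-clone machinery, which matters for object arrays but is unnecessary for immutable strings.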

I ran into an interesting issue today when removing rows in a DataGridView object in C#. I wrote a method that would loop through each cell to find a matching value that was passed, and when it found a match it would remove the cell’s associated row entirely. However, the method would only remove half of the cells with the associated data. As an example, if I had 6 rows with cells holding “Pancakes” and I wanted to remove the rows with cells containing “Pancakes,” it would only remove three.

So to provide a better example, here are my crude MSPaint skills in action, depicting a simple array of rows with cells holding certain values.

[image p1]

Now, the first pass removed the first cell with no issues: it looped through starting at element 0 and found “pancake.” I noticed that removing it re-sorted the rows, decrementing their index values by 1.

[image p2]

So the next iteration would find “toaster” and skip that row. With the row skipped, the array of rows kept their shifted index values, which led to element 2, holding “pancake,” being removed. We were left with three values while the loop wanted to find element 3, which would naturally be the 4th item in the list. That was out of bounds, leaving us with “pancake,” “toaster,” “pancake”; with no error checking implemented, the program would vomit from searching for a nonexistent element.

[image p3]
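The same skipping behavior is easy to reproduce with a plain array; this sketch is only an illustration of the index shift, not the DataGridView code itself.

```javascript
// Naive removal while iterating forward: after removing element i, the
// next element shifts into slot i, but i++ steps over it.
function removeNaive(rows, value) {
    for (var i = 0; i < rows.length; i++) {
        if (rows[i] === value) {
            rows.splice(i, 1); // shifts everything after i left by one
        }
    }
    return rows;
}

// Leaves ["pancake", "toaster", "pancake"]: half the matches survive.
console.log(removeNaive(["pancake", "pancake", "toaster", "pancake", "pancake"], "pancake"));
```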

The next solution solves this issue. It starts as a do-while loop with a bool flag dictating whether a row was removed; the loop continues to run as long as a row was removed on the previous pass. Inside, a for loop iterates through each row, and subsequently each cell. If a matching cell is found, its row is removed, both for loops are broken out of, and the flag is set to true. The do-while then restarts the scan from index 0 against the rows’ new count, so each pass removes at most one row and every matching row is eventually removed.

[image p4]

To that end, I wrote this snippet to resolve the problem:

public void RemoveRow(DataGridView datagrid, string Identifier)
{
    bool rowRemoved;
    do
    {
        rowRemoved = false;
        for (int i = 0; i < datagrid.Rows.Count; i++)
        {
            for (int j = 0; j < datagrid.Rows[i].Cells.Count; j++)
            {
                if (datagrid.Rows[i].Cells[j].Value != null &&
                    datagrid.Rows[i].Cells[j].Value.ToString() == Identifier)
                {
                    datagrid.Rows.RemoveAt(i);
                    rowRemoved = true;
                    break;
                }
            }
            if (rowRemoved)
            {
                break;
            }
        }
    } while (rowRemoved);
}
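As a side note, an alternative to restarting the scan after every removal is to iterate in reverse: deleting index i then only shifts elements that have already been examined, so nothing gets skipped. A sketch with a plain array (the same idea applies to DataGridView rows):

```javascript
// Reverse iteration: removing element i only shifts the elements above
// it, all of which were already visited, so no element is skipped and
// a single pass removes every match.
function removeAllMatches(rows, value) {
    for (var i = rows.length - 1; i >= 0; i--) {
        if (rows[i] === value) {
            rows.splice(i, 1);
        }
    }
    return rows;
}
```

This turns the quadratic rescan into a single pass, though for small grids either approach is fine.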

I wanted to share a script I wrote a while back that remotely installs MSU patches. It requires PsTools to work, and I ran it using PowerShell 3.0.

This approach requires two scripts: a secondary script that installs the MSU files locally, and a main script that executes the local installer remotely. It follows this process:

-The main script is run and passed the path where the MSU files and the secondary script are located, along with a text file containing a list of computers.

The computer list should look like the following:
Computer1
CoolComputer252
ILikePancakes510

-The main script robocopies all the files to the remote computer; this includes the MSU patches and the secondary script.
-The main script subsequently runs PsExec against the secondary script, which installs all the MSU patches under the SYSTEM account.
-The secondary script then taskkills the PsExec service process to ensure the main script moves on to patching the next computer in the array.

It is also imperative that you have the proper credentials to remotely access the computer, or this will fail.

Here is the first script.

Function Install_Patches([string]$path, [string]$computerList)
{
    #One computer name per line in the text file
    $arrayOfComputers = (Get-Content $computerList) -split "`n"

    #Copy the files to each computer, then patch it
    foreach($computer in $arrayOfComputers)
    {
        $OutputLocation = "\\" + $computer + "\C$\<LOCATION>"

        #path from the first parameter
        robocopy $path $OutputLocation /e /s
        Write-Host "Transfer completed for $computer"

        #patch the computer by running the secondary script under the SYSTEM account
        psexec "\\$computer" -s cmd.exe /c "echo . | powershell.exe -executionpolicy bypass -file c:\<LOCATION>\CMDMSUInstall.ps1"
        Write-Host "Patching completed for $computer"
    }
}

Here is the second script:

cd C:\<LOCATION>
$path = "C:\<LOCATION>"
$files = Get-ChildItem $path -Recurse
$msus = $files | ? {$_.extension -eq ".msu"}

foreach($msu in $msus)
{
    $fullname = $msu.FullName
    $fullname = "`"" + $fullname + "`""
    $parameters = $fullname + " /quiet /norestart"
    $install = [System.Diagnostics.Process]::Start( "wusa",$parameters )
    $install.WaitForExit()
}
#kill itself to ensure it goes back to the next computer object
taskkill /f /im PSEXESVC.exe

Additionally, make sure you modify the location the patches will be copied to and where the local script runs from.

Quick update, needed to add a bunch of new computer objects to a security group today. Figured I’d share this.

#Use one of the two below to get the computer objects.

#$arrObj = Get-ADComputer -Filter * -SearchBase "OU=<OU>,DC=<domain>,DC=<tld>"

#$arrObj = Get-Content -Path <PathtoFile>

foreach($computer in $arrObj)
{
    Get-ADComputer $computer | ForEach-Object {
        Add-ADGroupMember -Identity 'Group Name' -Members $_.SamAccountName
    }
}

Hey guys, another little update:

I’ve been working on utilizing the vSphere SDK as of late, but I’ve had little success finding any useful guides. The VMware site http://pubs.vmware.com/vsphere-60/index.jsp has some information that’s somewhat useful when learning how the SDK works, but I still didn’t get much out of it. I googled around a bit to find bits and pieces, and a lot of people expressed the same concern I have, so I can at least relish the fact that I’m not alone in this thought. However, I want to try and figure it out to make my job a little easier and help develop some tools for the other individuals at our office. This will be a little mini-series of progressing through the vSphere SDK. I also wanted to provide some samples against a more recent build of the SDK, as a lot of the examples I’ve seen are relatively old and potentially out of date.

using System;
using System.Collections.Generic;
using VMware.Vim;

namespace VMTest
{
    class Program
    {
        // strings holding the basic data we'll need to connect
        private const string vURL = "https://iphere/sdk";
        private const string uName = "administrator@vsphere.local";
        private const string uPass = "password";
        
        static void Main()
        {
            Program p = new Program(); 
            Console.WriteLine("Starting the vSphere connection...");
            p.vSphere();
            Console.WriteLine("Program stopped!");
            Console.ReadKey();
        }
        
        public void vSphere()
        {
            //New VIServer connection
            VimClientImpl vClient = new VimClientImpl();
            //New connection to the vSphere Web Client (Over 443)
            ServiceContent sContent = vClient.Connect(vURL);
            //User credentials to utilize
            UserSession uSession = vClient.Login(uName, uPass);
            //Get the vms
            IList<EntityViewBase> vmList = vClient.FindEntityViews(typeof(VirtualMachine), null, null, null);
            //Power on the VMs
        
            foreach (VirtualMachine vm in vmList)
            {
                Console.WriteLine("Powering on VM: " + vm.Name);
                //PowerOnVM takes an optional host MoRef; passing null
                //lets vSphere decide where the VM powers on
                vm.PowerOnVM(null);
            }
        
            //Log out of the vServer
            vClient.Logout();
        }
    }
}