VMware Power CLI: Could not establish trust relationship for the SSL/TLS secure channel with authority

With newer versions of PowerCLI, Connect-VIServer fails with the message:



Connect-VIServer : 28-04-2018 11:41:42 Connect-VIServer Error: Invalid server certificate. Use
Set-PowerCLIConfiguration to set the value for the InvalidCertificateAction option to Prompt if you’d like to connect
once or to add a permanent exception for this server.
Additional Information: Could not establish trust relationship for the SSL/TLS secure channel with authority
At line:1 char:1
+ Connect-VIServer vc.ikigo.net
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [Connect-VIServer], ViSecurityNegotiationException
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_CertificateError,VMware.VimAutomation.ViCore.


This can easily be worked around by:

  • Importing the VMCA trusted root certificate
  • Using Set-PowerCLIConfiguration to ignore invalid certificates


Importing the VMCA trusted root certificate to the Windows Trusted Root store

  • Launch a browser and head to https://vcenter.FQDN (or https://vc.FQDN/certs/download.zip)
  • Click on Download trusted root CA certificates
  • Extract the ZIP file and import the certificate into the Windows Trusted Root store

Using Set-PowerCLIConfiguration to ignore invalid certificates

  • From an elevated PowerCLI session, run the command below:

Set-PowerCLIConfiguration -Confirm:$false -Scope AllUsers -InvalidCertificateAction Ignore -DefaultVIServerMode Single

Replacing vmdir certificates on vCenter 6.0

vmdir is a vCenter component that listens on ports 636 and 389 (LDAPS/LDAP).


  • Start by creating a new configuration file called vmdir.cfg with the content below (replace the fields under v3_req and req_distinguished_name with values appropriate to your environment):

[ req ]
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req
[ v3_req ]
basicConstraints = CA:false
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = DNS:psc1.domain.com, DNS:psc1, IP:x.x.x.x
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = State
localityName = City
0.organizationName = Company
organizationalUnitName = Department
commonName = psc1.domain.com


  • Using OpenSSL, create a new CSR with the above configuration:

"%VMWARE_OPENSSL_BIN%" req -new -out c:\cert\vmdir.csr -newkey rsa:2048 -keyout c:\cert\vmdir.key -config c:\cert\vmdir.cfg



If the solution user certificates are signed with a CA certificate, sign the CSR with the same issuing CA;
otherwise, sign it using the VMCA with the instructions below.

Signing the CSR with the VMCA certificate

  • Copy root.cer and privatekey.pem from C:\ProgramData\VMware\vCenterServer\data\vmca
    (appliance: /var/lib/vmware/vmca/) to c:\cert\


  • Run the command below to sign the certificate:
    "%VMWARE_OPENSSL_BIN%" x509 -req -days 3650 -in c:\cert\vmdir.csr -out c:\cert\vmdir_signed.crt -CA c:\cert\root.cer -CAkey c:\cert\privatekey.pem -extensions v3_req -CAcreateserial -extfile c:\cert\vmdir.cfg
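The CSR-and-sign flow above can be rehearsed locally with stock OpenSSL before touching a real PSC. The sketch below uses a scratch directory, psc1.domain.com / 192.0.2.10 as placeholder identity values, and a throwaway self-signed "Test-VMCA" standing in for the real root.cer and privatekey.pem:

```shell
# Work in a scratch directory; filenames mirror the steps above but paths are local.
mkdir -p /tmp/vmdir-demo && cd /tmp/vmdir-demo

# vmdir.cfg as described above (hostname and IP are placeholders)
cat > vmdir.cfg <<'EOF'
[ req ]
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req
[ v3_req ]
basicConstraints = CA:false
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = DNS:psc1.domain.com, DNS:psc1, IP:192.0.2.10
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = State
localityName = City
0.organizationName = Company
organizationalUnitName = Department
commonName = psc1.domain.com
EOF

# Throwaway stand-in for the VMCA root (root.cer / privatekey.pem on a real PSC)
openssl req -x509 -newkey rsa:2048 -nodes -keyout privatekey.pem \
  -out root.cer -days 3650 -subj "/CN=Test-VMCA"

# Create the CSR, then sign it with the same flags as above
openssl req -new -out vmdir.csr -newkey rsa:2048 -keyout vmdir.key -config vmdir.cfg
openssl x509 -req -days 3650 -in vmdir.csr -out vmdir_signed.crt \
  -CA root.cer -CAkey privatekey.pem -extensions v3_req -CAcreateserial -extfile vmdir.cfg

# Confirm the SANs from v3_req made it into the signed certificate
openssl x509 -in vmdir_signed.crt -noout -text | grep -A1 'Subject Alternative Name'
```

If the SAN line is missing, the usual culprit is forgetting -extensions v3_req -extfile on the x509 signing step, since extensions in the CSR are not copied automatically.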


  • Now we have a certificate that can be used to replace the existing vmdir certificates. To proceed with the replacement, stop all vCenter services:

service-control --stop --all

Note: On Windows, you must run this from "C:\Program Files\VMware\vCenter Server\bin"

  • Go to C:\ProgramData\VMware\vCenterServer\cfg\vmdird (appliance: /usr/lib/vmware-vmdir/share/config/)
  • Back up the original certificates vmdircert.pem and vmdirkey.pem to a temp directory
  • Rename vmdir_signed.crt to vmdircert.pem and vmdir.key to vmdirkey.pem in the above directory
  • Start all services

service-control --start --all

Note: If the services fail to start (most likely the Inventory Service), the wrong root certificate was used when signing the certificate. Restore the original files in the directory and restart the services to roll back to the previous configuration.
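The backup-and-rename steps above can be sketched as follows; scratch directories stand in for the real vmdird config path so the sequence can be tested safely:

```shell
CFG=/tmp/vmdird-demo      # stands in for .../cfg/vmdird on the real server
BACKUP=/tmp/vmdird-backup # temp directory for the originals
mkdir -p "$CFG" "$BACKUP"

# Simulate the existing certs and the newly signed files
echo old-cert > "$CFG/vmdircert.pem"
echo old-key  > "$CFG/vmdirkey.pem"
echo new-cert > "$CFG/vmdir_signed.crt"
echo new-key  > "$CFG/vmdir.key"

# 1. Back up the originals (this is what you roll back to)
cp "$CFG/vmdircert.pem" "$CFG/vmdirkey.pem" "$BACKUP/"

# 2. Drop the new pair in under the filenames vmdir expects
mv "$CFG/vmdir_signed.crt" "$CFG/vmdircert.pem"
mv "$CFG/vmdir.key"        "$CFG/vmdirkey.pem"
```

The roll-back described in the note is simply copying the two backed-up files over the renamed ones and restarting the services.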

Web Client service crashes with java.lang.OutOfMemoryError: PermGen space and java.lang.OutOfMemoryError

The vSphere Web Client refuses to start, with memory errors in the logs.

Log locations:
Windows: C:\ProgramData\VMware\vCenter\Logs\vsphere-client
Appliance: /var/log/vmware/vsphere-client



INFO | jvm 1 | 2018/04/03 15:34:25 | Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread “org.springframework.scheduling.timer.TimerFactoryBean#0”
INFO | jvm 1 | 2018/04/03 15:34:33 |
INFO | jvm 1 | 2018/04/03 15:34:33 | Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread “http-bio-9443-exec-10”
INFO | jvm 1 | 2018/04/03 15:35:12 |
INFO | jvm 1 | 2018/04/03 15:35:12 | Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread “http-bio-9443-exec-6”


[2018-04-03T15:28:45.855-04:00] [ERROR] http-bio-9090-exec-2 com.vmware.vise.util.concurrent.WorkerThread http-bio-9090-exec-2 terminated with exception: java.lang.OutOfMemoryError: PermGen space
[2018-04-03T15:28:46.773-04:00] [ERROR] http-bio-9090-exec-5 com.vmware.vise.util.concurrent.WorkerThread http-bio-9090-exec-5 terminated with exception: java.lang.OutOfMemoryError: PermGen space


Cause: insufficient Web Client heap size, insufficient PermGen space, or a configuration change.


Scenario 1: Heap size


  • Ensure there is sufficient free memory on the vCenter

free -m

  • Review and increase (double) the heap size of the Web Client:

Appliance: cloudvm-ram-size -l vsphere-client
windows: C:\Program Files\VMware\vCenter Server\visl-integration\usr\sbin\cloudvm-ram-size.bat -l

cloudvm-ram-size.bat -C XXX 

  • Start the vsphere-client service and observe whether it still crashes.

Scenario 2: PermGen

  • Take a copy of the file service-layout.mfx as service-layout.mfx.bak
    Appliance path: /etc/vmware/
    Windows path: C:\ProgramData\VMware\vCenterServer\cfg\
  • Edit service-layout.mfx with a text editor
  • Change the MaxPermMB size from 256 to 512 for the vspherewebclientsvc row (increase further depending on the number of plugins configured with the vCenter Web Client)
  • Start the vsphere-client service
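The backup-and-edit step can be scripted. The sketch below assumes the vspherewebclientsvc row carries a MaxPermMB="256" attribute (the exact layout of service-layout.mfx varies by build, so check your copy first) and works on a local stand-in file:

```shell
F=/tmp/service-layout.mfx
# Minimal stand-in for the vspherewebclientsvc row (format assumed, verify yours)
echo '<service name="vspherewebclientsvc" MaxPermMB="256" />' > "$F"

cp "$F" "$F.bak"                                 # back up before editing
sed -i 's/MaxPermMB="256"/MaxPermMB="512"/' "$F" # bump PermGen 256 -> 512 MB
grep MaxPermMB "$F"
```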

Scenario 3: The problem persists even after increasing/maxing out scenarios 1 and 2.

  • Back up the configuration file before you proceed:

    cp /usr/lib/vmware-vsphere-client/server/wrapper/bin/vsphere-client /usr/lib/vmware-vsphere-client/server/wrapper/bin/vsphereclient.bak

  • Edit the file using a text editor

vi /usr/lib/vmware-vsphere-client/server/wrapper/bin/vsphere-client

  • Look for the line RUN_AS_USER=vsphere-client and comment it out with a # (hash)
  • Start the vsphere-client service.
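"Hashing" the line just means prefixing it with # so the wrapper script ignores it; a sketch on a local copy of the file:

```shell
F=/tmp/vsphere-client                 # local stand-in for the wrapper script
echo 'RUN_AS_USER=vsphere-client' > "$F"

# Prefix the line with # so the service no longer drops to that user
sed -i 's/^RUN_AS_USER=vsphere-client/#&/' "$F"
grep RUN_AS_USER "$F"
```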

Connecting to a VMware appliance Postgres/PSQL instance from an external computer/pgAdmin

By default, the Postgres instance on vCenter, vSphere Replication, etc. is configured not to accept connections from other computers on the network. In this post, I will show you how to reconfigure it to allow connections from an external box for tools like pgAdmin.


Note: Depending on the appliance, the Postgres configuration files/paths might differ. In this post, we will search for the configuration file first and then change it.


Start by SSHing into the appliance.

Run the command below to search for the configuration file postgresql.conf:

find / -iname postgresql.conf

Take a copy of the configuration:

cp /storage/db/vpostgres/postgresql.conf /storage/db/vpostgres/postgresql.conf.backup

Edit the file

vi /storage/db/vpostgres/postgresql.conf





Look for the line that says listen_addresses = 'XXXX'.

In some cases this will be hashed out; remove the hash, and replace localhost with *.


Save the configuration file (press Esc, type :wq! and hit Enter).
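The un-hash-and-replace edit can also be done non-interactively with sed; the sketch below works on a local stand-in file with the typical commented-out default:

```shell
F=/tmp/postgresql.conf
echo "#listen_addresses = 'localhost'" > "$F"    # typical default, hashed out

cp "$F" "$F.backup"
# Remove the leading # (if present) and listen on all interfaces
sed -i "s/^#\?listen_addresses = 'localhost'/listen_addresses = '*'/" "$F"
grep listen_addresses "$F"
```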


Search for the Postgres client authentication configuration file:

find / -iname pg_hba.conf


Copy the configuration file

cp /storage/db/vpostgres/pg_hba.conf /storage/db/vpostgres/pg_hba.conf.bak


Edit the file

vi /storage/db/vpostgres/pg_hba.conf

Look for the host entries and add a line for your IP subnet (from my PuTTY session, I am on the 192.168.1.x subnet):

host    all    all    192.168.1.0/24    trust

The method is set to trust (not recommended), as I did not want to log in to the DB with a password.

Save the configuration file (press Esc, type :wq! and hit Enter).
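Appending the entry can be scripted too; the sketch below uses a local stand-in for pg_hba.conf and the same example subnet:

```shell
F=/tmp/pg_hba.conf
: > "$F"          # local stand-in for pg_hba.conf
cp "$F" "$F.bak"  # back up first

# Allow the 192.168.1.x subnet; "trust" skips password auth (not recommended)
echo 'host    all    all    192.168.1.0/24    trust' >> "$F"
grep 192.168.1.0 "$F"
```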

Restart the vmware-vpostgres service:

service vmware-vpostgres restart

For vCenter Server 6.5:

service-control --stop vmware-vpostgres && service-control --start vmware-vpostgres

Confirm the Postgres port number and that it is listening (the vSphere Replication appliance listens on a different port, so it is best to know which port you need to connect to when accessing from an external box):

netstat -anop | grep postgres

From the above, we know the port is 5432


Launch pgadmin and add a new server


Give it a connection name, switch to the Connection tab, and enter the hostname and port number.

Type in a username and a password (note that I configured pg_hba.conf above to allow connections without a password).

Save and you are good to go!

Also note that in most cases, the DB credentials are stored in configuration files like:

  • VCDB.properties

find / -iname vcdb.properties

cat /etc/vmware-vpx/vcdb.properties

  • .pgpass from the home directory

ls -ltha ~/

cat ~/.pgpass
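The .pgpass format is PostgreSQL's standard password file: one hostname:port:database:username:password entry per line. A quick sketch of pulling the credentials out (the values below are made-up examples, not real vCenter credentials):

```shell
# Fake .pgpass with one example entry
cat > /tmp/pgpass-demo <<'EOF'
localhost:5432:VCDB:vc:s3cret
EOF

# Username is field 4, password is field 5
awk -F: '{print "user=" $4, "password=" $5}' /tmp/pgpass-demo
# → user=vc password=s3cret
```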


vCenter Pre-upgrade fails

Error: Internal error occurred during execution of upgrade process.

Resolution: Send upgrade log files to the VMware technical support team for further assistance.


Upgrade logs say:

less /var/log/vmware/upgrade/bootstrap.log
2018-03-23T20:14:34.11Z ERROR transport.guestops Invalid command: “/bin/bash” –login -c ‘/opt/vmware/share/vami/vami_get_network eth0 1>/tmp/vmware-root/exec-vmware47-
stdout 2>/tmp/vmware-root/exec-vmware235-stderr’
2018-03-23T20:14:34.12Z ERROR upgrade_commands Unable to execute pre-upgrade checks on host
Traceback (most recent call last):
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/upgrade_commands.py”, line 2199, in execute
preupgradeResult = self._executePreupgradeChecks()
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/upgrade_commands.py”, line 2655, in _executePreupgradeChecks
srcIpv4Address, srcIpv4SubnetMask, srcIpv6Address, srcIpv6Prefix = retrieveNetworkingConfiguration(self.opsManager)
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/transfer_network.py”, line 1309, in retrieveNetworkingConfiguration
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/apply_networking.py”, line 188, in _retrieveNetworkIdentity
networkConfig = vamiGetNetwork(processManager, interface)
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/apply_networking.py”, line 144, in vamiGetNetwork
output = _execNetworkConfigCommand(processManager, [VAMI_GET_NETWORK_CMD, interface])
File “/usr/lib/vmware/cis_upgrade_runner/bootstrap_scripts/apply_networking.py”, line 66, in _execNetworkConfigCommand
cr = transport.executeCommand(processManager, cmd)
File “/usr/lib/vmware/cis_upgrade_runner/libs/sdk/transport/__init__.py”, line 122, in executeCommand
return processManager.pollProcess(processUid, True)
File “/usr/lib/vmware/cis_upgrade_runner/libs/sdk/proxy.py”, line 81, in __call__
ret = self.func(*args, **kwargs)
File “/usr/lib/vmware/cis_upgrade_runner/libs/sdk/transport/guestops.py”, line 1184, in pollProcess
self._checkInvalidCommandError(processInfo, stderr)
File “/usr/lib/vmware/cis_upgrade_runner/libs/sdk/transport/guestops.py”, line 1123, in _checkInvalidCommandError
raise ExecutionException(error, ErrorCode.INVALID_REQUEST)
ExecutionException: (‘Invalid command: “/bin/bash” –login -c \’/opt/vmware/share/vami/vami_get_network eth0 1>/tmp/vmware-root/exec-vmware47-stdout 2>/tmp/vmware-root/
exec-vmware235-stderr\”, 1)

2018-03-23T20:14:39.442Z ERROR __main__ ERROR: Fatal error during upgrade REQUIREMENTS. For more details take a look at: /var/log/vmware/upgrade/requirements-upgrade-runner.log


Now look at the source appliance.

VMware VirtualCenter 6.0.0 build-3339084
vCenter:~ # ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:AC:53:FD
inet addr:x.x.x.x Bcast:x.x.x.x Mask:
RX packets:45028984 errors:0 dropped:28266 overruns:0 frame:0
TX packets:16476384 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:74680502042 (71220.8 Mb) TX bytes:7187692049 (6854.7 Mb)

lo Link encap:Local Loopback
inet addr: Mask:
inet6 addr: ::1/128 Scope:Host
RX packets:147809637 errors:0 dropped:0 overruns:0 frame:0
TX packets:147809637 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:93984509789 (89630.6 Mb) TX bytes:93984509789 (89630.6 Mb)

Running /opt/vmware/share/vami/vami_get_network returns a dependency error:

vCenter:~ # /opt/vmware/share/vami/vami_get_network eth0 1 | less
/opt/vmware/share/vami/vami_get_network: error while loading shared libraries: libvami-common.so: cannot open shared object file: No such file or directory


To resolve this, add the VAMI library directory to LD_LIBRARY_PATH by running the commands below.

echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/" >> /etc/profile
echo 'export LD_LIBRARY_PATH' >> /etc/profile
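The ${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:} expansion only emits the existing value plus a colon when the variable is already set, so no stray leading colon (an empty path entry) gets written into /etc/profile. A quick local demonstration:

```shell
# Variable unset: the :+ part expands to nothing, so no leading colon
unset LD_LIBRARY_PATH
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/"
# → LD_LIBRARY_PATH=/opt/vmware/lib/vami/

# Variable set: existing value is kept and a colon separator appended
LD_LIBRARY_PATH=/usr/local/lib
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/vmware/lib/vami/"
# → LD_LIBRARY_PATH=/usr/local/lib:/opt/vmware/lib/vami/
```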


Re-run the command (after sourcing /etc/profile or logging back in) to confirm that it returns the IP details:


vCenter55:~ # /opt/vmware/share/vami/vami_get_network
interface: eth0
config_present: true
config_flags: STATICV4
config_gatewayv6: 10
hasdhcpv6: 1
Traceback (most recent call last):
File “/opt/vmware/share/vami/vami_ovf_process”, line 25, in <module>
import libxml2
File “/usr/lib64/python2.6/site-packages/libxml2.py”, line 1, in <module>
ImportError: No module named libxml2mod


The vami_ovf_process / libxml2 traceback can be ignored.

Re-run the upgrade/migration.