The cloud... such a wonderful term. When we talk about multi-cloud environments, this can mean your own private cloud environment as well as all manner of public clouds; Google, Oracle, Microsoft and Amazon all host their own. VMware Horizon can land on any of these, but can also run partly in your own datacenter somewhere. Such a hybrid environment would become hard to manage unless there is something that can glue it all together. That something is the Horizon Control Plane, which is included in the new subscription-based licenses. Follow the link to learn more about it.
When integrating your private Horizon pod into this control plane, the pod needs a way to communicate with the control plane in the cloud. This is the job of the Horizon Cloud Connector. It acts as a proxy for a number of services: it integrates with your on-premises Active Directory, reports metrics from your Horizon environment such as sessions and resource usage (through a link with your vCenter Server), and reports the health of your environment. It also applies the Horizon license to the Connection Servers automatically.
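Before deploying, it can be useful to verify that the network segment where the appliance will live can reach both the control plane and your Connection Server over HTTPS. A minimal sketch with curl, assuming cloud.horizon.vmware.com as the control plane endpoint (that is an assumption on my part, check the documentation for the endpoint of your region) and my lab's Connection Server vcs01.testing.lan:
# Outbound HTTPS to the Horizon Cloud control plane (endpoint is an assumption, verify for your region)
curl -sk -o /dev/null -w '%{http_code}\n' https://cloud.horizon.vmware.com
# HTTPS to the on-premises Connection Server
curl -sk -o /dev/null -w '%{http_code}\n' https://vcs01.testing.lan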
For a nice write-up of the basic deployment steps, including a demo video, please visit this article on the VMware EUC blog: https://blogs.vmware.com/euc/2020/11/onboarding-horizon-cloud-connector.html
My private VMware Horizon test lab consists of two Connection Servers, two Unified Access Gateways and a load balancer, all using the same public certificate for connection security. Internal access goes through the internal load balancer VIP and external access through the external load balancer VIP, both using the same FQDN that the certificate presents. I can therefore access my environment with the same FQDN, regardless of whether I am on my private network or outside of it.
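In other words, split-brain DNS: the same name resolves to a different VIP depending on where you are. A quick way to check this (the FQDN below is just a placeholder, not my real one):
# From an internal client this should return the internal load balancer VIP,
# from an external client the public VIP
nslookup horizon.example.com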
I downloaded the Cloud Connector OVF (version 1.10), deployed it and got stuck when trying to log in to the appliance through SSH. The appliance has two user accounts by default: root, obviously, whose password is set during the initial OVF deployment, and the ccadmin account, which is used to log in over SSH. As of version 1.8, root access over SSH has been disabled as a security measure. Unaware of this, I had of course locked first the one account and then the other before I knew it. Luckily, there is an article showing how to remedy this situation:
After successfully completing this, I was able to log in to the console and enable SSH with this command:
sudo /opt/vmware/bin/configure-adapter.py --sshEnable
With that done, I could use PuTTY to log on to the Cloud Connector appliance through SSH.
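For those not on Windows, the equivalent from a Linux or macOS shell looks like this (the IP address is just a placeholder for my appliance, use the address or FQDN of your own):
# Log in as ccadmin; root logins over SSH are disabled as of version 1.8
ssh ccadmin@192.168.1.50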
Next up was the precheck, which is run with this command:
sudo /opt/vmware/bin/precheck.sh vcs01.testing.lan
The precheck failed, however:
ccadmin@hcc [ ~ ]$ sudo /opt/vmware/bin/precheck.sh vcs01.testing.lan
Connection server = vcs01.testing.lan
active
active
active
active
active
active
active
active
Invoking health precheck
Component/Service Name: "Cloud Broker Client Service"
Status: "NOT_INITIALIZED"
Message: Service is not initialized.
------------------------------
Component/Service Name: "vcs01.testing.lan"
Status: "ERROR"
Message: "class java.io.IOException : HTTPS hostname wrong: should be <vcs01.testing.lan>"
Details: "class java.io.IOException : HTTPS hostname wrong: should be <vcs01.testing.lan>".
Remediation: This is caused by connection server certificates not having valid hostname or wildcard hostname in CN or SAN field. Please update the certificates. To disable verification run - /opt/vmware/bin/configure-adapter.py -acceptHostNameMismatch. Once certificates are updated, reenable verification by running /opt/vmware/bin/configure-adapter.py -rejectHostNameMismatch
------------------------------
For my environment, I use a single-domain certificate from a public CA. My internal DNS structure does not use a publicly registered domain name. The CN (Common Name) in my certificate is of course set to the outside FQDN of my environment, and I cannot use internal domain suffixes such as .lab, .lan or .local (don't use that last one!) as a SAN (Subject Alternative Name), since public CAs no longer issue certificates for internal names. Using a multi-domain certificate with my internal FQDNs as SANs is therefore also impossible. That leaves me no other option than to have the system accept the certificate and hostname mismatch.
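If you want to see for yourself which names your Connection Server certificate actually presents, a generic OpenSSL check (nothing Cloud Connector specific; the hostname is the Connection Server from my lab) does the trick:
# Show the CN of the certificate presented on port 443
echo | openssl s_client -connect vcs01.testing.lan:443 2>/dev/null | openssl x509 -noout -subject
# Show the Subject Alternative Names (the -ext option needs OpenSSL 1.1.1 or newer)
echo | openssl s_client -connect vcs01.testing.lan:443 2>/dev/null | openssl x509 -noout -ext subjectAltName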
So I changed the configuration by entering this command:
sudo /opt/vmware/bin/configure-adapter.py --acceptHostNameMismatch
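Keep in mind that this disables a verification step. Should you later install certificates on the Connection Servers that do match, the precheck remediation above already mentions how to turn the verification back on:
sudo /opt/vmware/bin/configure-adapter.py --rejectHostNameMismatch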
Running the precheck again, it now only showed that the Cloud Connector had not been configured yet; no other problems were visible:
ccadmin@hcc [ ~ ]$ sudo /opt/vmware/bin/precheck.sh vcs01.testing.lan
Connection server = vcs01.testing.lan
active
active
active
active
active
active
active
active
Invoking health precheck
Component/Service Name: "Cloud Broker Client Service"
Status: "NOT_INITIALIZED"
Message: Service is not initialized.
------------------------------
Component/Service Name: "Connection Server Monitoring Service"
Status: "ERROR"
Message: ""
Details: "".
------------------------------
Both of these errors are to be expected, as I had yet to go through the onboarding procedure in the web interface of the Cloud Connector appliance.
After finishing the three onboarding steps, my pod successfully showed up in the Horizon Control Plane. Success!
Hopefully this helps in setting up your VMware Horizon Cloud Connector.
As always, if you have any questions or remarks, don't hesitate to contact me.