vCenter Server Heartbeat 5.6 - Architecture

I started using VMware Workstation around 2002 or even earlier; my bad memory can't recall exactly. That was the first generation of virtualization. If you look at today's virtual world, we are on our way to "The Matrix"! As enterprises virtualize more and more servers, vCenter Server becomes a critical role, and we have to prepare for any contingency. vCenter Server Heartbeat (vCHB) is a nice candidate for protecting vCenter Server: it gives your infrastructure the ability to prevent downtime and outages of vCenter Server. To gear up for an implementation in a production environment, I did some testing in my LAB. The product is nice, but the documentation is not ideal. I'd like to share my experience here; this blog also draws on my project document, so please let me know if you have any ideas that could help me improve it. Thanks in advance.

vCHB is a cluster service like Microsoft Cluster Service or any other third-party cluster software. The benefit of this product is that you don't have to build the cluster on RDMs, so your ESXi maintenance operations become much easier. You can deploy vCHB in HA or DR mode; I'll focus on HA mode for now since I haven't tested DR mode yet.

Server


My original LAB infrastructure contains one vCenter Server with a remote SQL database server, with data transmitted over the LAN. So my vCHB topology is one SQL database server (I already have MSCS protecting it) and two vCenter servers: the Primary Server and the Secondary Server.

vCHB uses an Active-Passive model for HA mode: the Active Role runs the protected applications, and the Passive Role receives the changed data.

Primary Server - The original vCenter Server that I want to protect. It runs all vCenter components except when an outage happens.

Secondary Server - The other server of the pair, normally holding the Passive Role. It receives changes from the Primary Server and takes over the Active Role when an outage happens.

In my LAB, the Primary Server holds the Active Role and the Secondary Server holds the Passive Role most of the time.
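Since only the Active Role runs the protected applications, a quick way to tell which node is currently active is to check where the vCenter Server service (vpxd) is running. Here is a minimal PowerShell sketch; the hostnames are hypothetical LAB names, and the vCHB management console is of course the authoritative view:

# Minimal sketch: the node running the protected vCenter service (vpxd)
# holds the Active Role. Hostnames below are hypothetical.
$nodes = "vc-primary", "vc-secondary"
foreach ($node in $nodes) {
    # Query the service state remotely over the Management Network
    $svc = Get-Service -Name vpxd -ComputerName $node -ErrorAction SilentlyContinue
    if ($svc -and $svc.Status -eq "Running") {
        Write-Host "$node holds the Active Role (vpxd is running)"
    } else {
        Write-Host "$node looks Passive (vpxd is stopped or unreachable)"
    }
}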

Networking


vCHB uses two networks: the Public Network and the VMware Channel Network. You can use a single NIC for all networks or multiple NICs to separate them.

VMware Channel Network - vCHB monitors whether each server is alive via the VMware Channel Network and also syncs changed data over it, so it is a very important network.

Public Network - It contains two sub-networks: the Principal Network and the Management Network. The Principal Network serves the vCHB cluster; the Management Network is for day-to-day operations.

Confused? To simplify it, here is how I understand the networks:

VMware Channel Network - Can be a private IP address or any IP address outside the subnet of the Public Network. It is used for heartbeats and data transmission.

Public Network - The Principal Network is the IP address of the cluster DNS name, and the Management Network is the IP address you use for RDP. They are in the same routable subnet, but it is better to keep them under different IP address prefixes; please refer to KB 2004926.
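For example, the addressing in a LAB like mine could look like the sketch below; every address, name, and NIC label is hypothetical, just to illustrate the separation:

# Hypothetical LAB addressing - adjust to your own environment.
# VMware Channel (dedicated NIC, non-routed subnet, no default gateway):
#   Primary: 10.10.10.1   Secondary: 10.10.10.2
# Public - Principal (the IP behind the vCenter cluster DNS name):
#   vcenter.lab.local -> 192.168.1.50
# Public - Management (per-server IPs for RDP and day-to-day admin):
#   Primary: 192.168.1.51  Secondary: 192.168.1.52

# Assigning the channel address on the Primary Server (netsh runs from
# PowerShell or cmd; the NIC name "VMware Channel" is made up):
netsh interface ip set address name="VMware Channel" static 10.10.10.1 255.255.255.0

Note how the channel subnet (10.10.10.x) stays outside the Public subnet (192.168.1.x), while the Principal and Management addresses stay in a routable subnet.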

Storage


There is no special storage requirement, but you should have 2GB of free space wherever you want to install vCHB. We also need a reliable shared folder to store cluster data. I prefer to create the shared folder on a server other than the vCHB servers, since the vCHB networks are usually interrupted for a few seconds during a vCenter failover.
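As a minimal sketch of that shared folder, assuming a third Windows server and made-up path, share, and account names, it could be created like this:

# Run on a third server, not on either vCHB node. The path, share name,
# and service account (LAB\vchb-svc) are all hypothetical.
mkdir D:\vCHB-ClusterData
net share vCHBData=D:\vCHB-ClusterData /GRANT:LAB\vchb-svc,FULL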

Okay, I'll share how to install vCHB in the next post. This architecture overview is for your reference.
