High Availability Between a VM and the KeyControl Cluster
When you register a VM with KeyControl, you must provide at least one KeyControl node IP address. If that is all you do, then that is the only KeyControl node that the VM will communicate with, even if that KeyControl node is part of a cluster. If that KeyControl node becomes unreachable, then the VM's heartbeat will fail and access to the VM may be disrupted.
If you have a cluster of KeyControl nodes, you can solve this issue by configuring High Availability between the VM and all the nodes in the KeyControl cluster. To do this, HyTrust recommends associating the VM with a KeyControl Mapping, which is a list of KeyControl nodes maintained in KeyControl. If you add or remove a KeyControl node from the cluster, or if any of the KeyControl node IP addresses change, you can update the Mapping and KeyControl will automatically disseminate those updates to all associated VMs at their next heartbeat.
Tip: | You can create a KeyControl Mapping even if you have a standalone KeyControl node. That way, if you ever decide to add additional nodes to form a KeyControl cluster, you just need to add the new nodes to your existing Mapping and the changes will be disseminated to the VMs automatically. |
Alternatively, you can configure High Availability on an individual VM-by-VM basis. If you do this, however, you must manually update the list of available KeyControl nodes on every VM any time you add or remove a KeyControl node from the cluster.
High Availability Through a Global KeyControl Mapping
A KeyControl Mapping is a list of KeyControl nodes maintained in KeyControl. Each KeyControl Mapping is assigned to a specific Cloud Admin Group, and it can be associated with any number of VMs registered with that group. If you add or remove a KeyControl node from the cluster, you only need to edit the Mapping and the changes will be disseminated to the associated VMs at their next heartbeat.
For each KeyControl node in the Mapping, you can specify an externally-visible IP address or hostname. This allows you to connect the VMs with the KeyControl nodes across a firewall or in an environment such as Amazon Web Services (AWS) or Microsoft Azure, where the node is assigned an internal IP address as well as an external IP address.
You can create as many KeyControl Mappings for each Cloud Admin Group as you need. The first node in each Mapping is always the preferred KeyControl node for the associated VMs. So if you have some KeyControl nodes in the US and some in Europe, you can create one Mapping with the US nodes listed first and another Mapping with the European nodes listed first. Then you can assign the appropriate Mapping to each VM based on its location.
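The ordering described above can be modeled as simple ordered lists. This is a minimal illustrative sketch, not KeyControl data; the node addresses are hypothetical documentation addresses:

```python
# Hypothetical KeyControl node addresses (documentation ranges).
us_nodes = ["198.51.100.10", "198.51.100.11"]
eu_nodes = ["203.0.113.20", "203.0.113.21"]

# The first entry in each Mapping is the preferred node for the VMs
# assigned that Mapping, so each region lists its local nodes first.
mapping_us_preferred = us_nodes + eu_nodes   # assign to VMs in the US
mapping_eu_preferred = eu_nodes + us_nodes   # assign to VMs in Europe
```

Both Mappings contain the same nodes; only the order, and therefore the preferred node, differs.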
You cannot, however, associate the same Mapping with multiple Cloud Admin Groups. Even if you want to use the same list of KeyControl nodes for every registered VM, you must still create a unique KeyControl Mapping for each Cloud Admin Group.
Failover with a KeyControl Mapping
The order of the IP addresses in the list determines the order of precedence. The first node in a KeyControl Mapping is considered the preferred node, and all VMs will use that node as long as it is available. If the preferred node is offline when a VM heartbeats, the VM will try the other IP addresses in the Mapping, starting with the second IP address in the list and working downwards. Once the VM finds an available KeyControl node, it will use that node to complete the current heartbeat, and it will continue to use that node until the cluster returns to a healthy state. After the cluster becomes healthy, the VM will resume using the preferred node at its next heartbeat.
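The failover walk described above can be sketched as a short function. This is an illustrative model of the documented behavior, not KeyControl code; the node addresses and the reachability check are hypothetical:

```python
def pick_node(mapping, reachable):
    """Return the first reachable node in Mapping order, or None.

    mapping   -- ordered list of node addresses (first = preferred)
    reachable -- set of addresses that currently answer heartbeats
    """
    for node in mapping:
        if node in reachable:
            return node
    return None

mapping = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # preferred node first

# Preferred node available: the VM heartbeats against it.
assert pick_node(mapping, {"10.0.0.1", "10.0.0.3"}) == "10.0.0.1"
# Preferred node down: the VM works down the list to the next node.
assert pick_node(mapping, {"10.0.0.2", "10.0.0.3"}) == "10.0.0.2"
# Cluster healthy again: the VM resumes using the preferred node.
assert pick_node(mapping, {"10.0.0.1", "10.0.0.2", "10.0.0.3"}) == "10.0.0.1"
```

The VM repeats this selection at each heartbeat, which is why it returns to the preferred node automatically once the cluster is healthy.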
If you want to change the preferred KeyControl node, change the order of the nodes in the Mapping. The next time the VM heartbeats with a healthy KeyControl cluster, it will begin using the new preferred node.
Important: | If you remove the preferred KeyControl node from the cluster, KeyControl automatically gives you the option of removing the KeyControl node from the Mapping as well. Doing this by itself, however, is not enough to make the registered VMs fail over to a different node in the Mapping. You must also shut down or destroy the old node so that it is no longer reachable by the VMs. As soon as it is no longer reachable, the VMs will fail over to one of the other KeyControl nodes in the Mapping. At that point KeyControl will communicate the updated Mapping to the VMs, and they will begin to use the KeyControl node that is listed first in the updated Mapping. |
High Availability on Individual VMs
You can maintain the list of available KeyControl node IP addresses on individual VMs using the hcl updatekc command on each VM. In this case, if the list of KeyControl IP addresses changes in any way, you must manually disseminate those changes by re-running the hcl updatekc command on each VM.
This method also requires that the VM and all of your KeyControl nodes be on the same network so that they can ping each other directly, or that you have set up a Domain Name System (DNS) server with entries that map each KeyControl IP address to a single domain name. With a DNS server, if you add or remove KeyControl nodes from the cluster, the Policy Agents can continue to use the same domain name, but you must update the DNS entries on the DNS server.
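The DNS approach works because resolving a single name can return one address per KeyControl node. As a minimal sketch, the lookup below shows how one hostname expands to a list of IPv4 addresses; "localhost" stands in for a hypothetical KeyControl domain name, and in practice the DNS server would return one A record per node:

```python
import socket

def resolve_nodes(hostname, port=443):
    """Resolve a hostname to the list of IPv4 addresses behind it."""
    infos = socket.getaddrinfo(hostname, port,
                               socket.AF_INET, socket.SOCK_STREAM)
    return [info[4][0] for info in infos]

# With "localhost" as a stand-in, this returns the loopback address;
# a real KeyControl DNS entry would return every node's IP address.
print(resolve_nodes("localhost"))
```

Updating the A records on the DNS server is then the only change needed when nodes are added or removed; the Policy Agents keep using the same name.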
For details on the hcl updatekc command, see Updating KeyControl Node IP Addresses on an Individual VM. For details about setting up a DNS server, see your DNS server documentation.