Customizing a Space

This topic describes how to customize a Space (the logical cache where GigaSpaces data objects are stored) in a Kubernetes environment, by specifying multiple partitions and by providing a backup partition (the high availability option).

The topics in this section assume basic knowledge of the GigaSpaces platform. If you aren't familiar with GigaSpaces, review the contents of the general Getting Started section before performing the tasks described here.

Prerequisites

 

Deploy Multiple Partitions

Space performance in a Kubernetes environment can be improved by allocating multiple partitions for the Space.

The steps to allocate multiple partitions for a Space are shown below. This example allocates two partitions for the demo Space.

  1. Install the xap umbrella chart.

    helm install xap ./xap/ --version=16.4.0 
  2. Wait for all pods to be ready.

  3. Install the demo Space with two partitions.

    helm install demo ./xap-pu/ --set manager.name=xap,partitions=2
  4. Enter the command kubectl get pods to see the result:

    NAME                            READY   STATUS    RESTARTS   AGE
    demo-xap-pu-0                   1/1     Running   0          44s
    demo-xap-pu-1                   1/1     Running   0          23s
    xap-grafana-67d9b898b5-d5jn4    1/1     Running   0          2m27s
    xap-influxdb-0                  1/1     Running   0          2m27s
    xap-operator-55c7997cd5-rt6bt   1/1     Running   0          2m27s
    xap-xap-manager-0               1/1     Running   0          2m26s
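
Each partition is deployed as its own pod (demo-xap-pu-0 and demo-xap-pu-1 above). To confirm how many Space pods were created for the deployment, you can also list the underlying workload resources. The command below is a sketch that assumes the chart deploys the Space pods as StatefulSets, which the ordinal pod names suggest:

# Lists StatefulSets in the current namespace with their ready replica counts
kubectl get statefulsets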

Deploy with High Availability

High availability deploys a backup partition that is kept in sync with the data in the primary partition. This eliminates downtime if the primary partition goes down, because the same data is seamlessly served from the backup.

When running in high availability mode, GigaSpaces combines backup partitions with anti-affinity rules to ensure that the Space remains intact if a host or a zone (e.g. a cloud availability zone) goes down. The GigaSpaces operator uses the pod-level anti-affinity attribute to maintain high availability.
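
For reference, a Kubernetes pod anti-affinity rule of the kind the operator manages has roughly the following shape. This is a minimal sketch only; the label selector shown is hypothetical, and the actual selector, rule type and topology key are generated by the operator from the chart values:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: demo-xap-pu          # hypothetical label; the operator sets the real selector
        topologyKey: topology.kubernetes.io/zone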

When a backup partition goes down, Kubernetes restarts it on a new node according to the anti-affinity rules. If a primary partition malfunctions, the backup becomes the primary, and self-healing is carried out in the same manner.

There are multiple options for defining the topology of the anti-affinity rule. You can set this option in the operator's Helm values.yaml file, under the antiAffinity.topology key (see the sketch after this list):

  • topology.kubernetes.io/zone (default) - new backups are restarted in a different zone than the primary

  • kubernetes.io/hostname - new backups are restarted on a different host (or virtual machine) than the primary.

  • Any other topology key defined by the Kubernetes specification
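
For example, a minimal override might look like the fragment below. This is a sketch only; the key names are taken from this topic, and you should consult the chart's own values.yaml for the exact structure and defaults:

# Hypothetical values.yaml fragment
antiAffinity:
  enabled: true
  topology: topology.kubernetes.io/zone   # the default; replace with any topology key listed above

The same keys can also be passed on the command line with --set, as shown in the installation command later in this topic.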

How to set up a Cluster with Anti-affinity

Prerequisites

  • A Kubernetes cluster with multiple zones (can be cloud installation)

  • The GigaSpaces Helm charts xap and xap-pu, pulled to a local repository (referred to as TEST_REPO in this example)

Check environment settings

Verify that your cluster nodes are labeled by zone (default in most cloud environments):

kubectl get nodes --show-labels

Result:


NAME                            STATUS   ...   LABELS
ip-10-41-1-154.ec2.internal     Ready    ...   …,topology.kubernetes.io/zone=us-east-1a
ip-10-41-2-63.ec2.internal      Ready    ...   …,topology.kubernetes.io/zone=us-east-1b
ip-10-41-3-77.ec2.internal      Ready    ...   …,topology.kubernetes.io/zone=us-east-1c
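
The --show-labels output can be long; to display only the zone label as a column, you can use the standard kubectl label-columns option:

kubectl get nodes -L topology.kubernetes.io/zone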

Install xap umbrella chart

helm install xap TEST_REPO/xap/ --version=16.4.0 \
--set operator.manager.name=xap \
--set manager.metrics.enabled=false,metrics.enabled=false \
--set manager.service.lrmi.enabled=false

Run the get pods command

 kubectl get pods

Result:


NAME                            READY   STATUS   RESTARTS    AGE
xap-operator-7b54b977f6-9znfd   1/1     Running   0          67s
xap-xap-manager-0               1/1     Running   0          67s

Install a space with high availability and anti-affinity (by zone)

helm install demo TEST_REPO/xap-pu/ --version=16.4.0 \
--set manager.name=xap \
--set partitions=2,ha=true,antiAffinity.enabled=true

Get pods and node allocation

kubectl get pods -o wide

Result:


NAME           ...      IP              NODE    
demo-xap-pu-0  ...      10.41.1.147     ip-10-41-1-154.ec2.internal [zone us-east-1a]
demo-xap-pu-1  ...      10.41.2.113     ip-10-41-2-63.ec2.internal  [zone us-east-1b]
demo-xap-pu-2  ...      10.41.3.142     ip-10-41-3-77.ec2.internal  [zone us-east-1c]
demo-xap-pu-3  ...      10.41.2.97      ip-10-41-2-63.ec2.internal  [zone us-east-1b]

Note that primaries and backups are deployed in pairs and are placed in different zones:

  • demo-xap-pu-0 and demo-xap-pu-1 are primary/backup of the first partition and are in zones 1a and 1b

  • demo-xap-pu-2 and demo-xap-pu-3 are primary/backup of the second partition and are in zones 1c and 1b.
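
To confirm the anti-affinity rule that was actually applied to a Space pod, you can print the pod's affinity spec directly. This uses standard kubectl output filtering; the content of the rule itself is generated by the GigaSpaces operator:

# Print the affinity section of the first Space pod's spec
kubectl get pod demo-xap-pu-0 -o jsonpath='{.spec.affinity}'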