Detailed Configuration: Smart Cache

This topic contains detailed customization information required for the installation of Smart Cache.

The parameters provided are Helm values that can be overridden using `--set`, or by including them in a dedicated values.yaml file and passing that file with the `--values` Helm parameter.

Helm values can be applied in the following two places:

  1. Initial deployment of Smart Cache. This is the umbrella installation. Refer to Smart Cache Installation for comprehensive instructions.

  2. Deploying a Space (the logical cache that holds data objects in memory, and possibly in storage tiers) or PU (Processing Unit, the unit of packaging and deployment in the GigaSpaces Data Grid) when you change the Space deployment in order to add features such as Tiered Storage, resources and probes. Refer to Deploying a Custom Processing Unit for more information. When a parameter belongs to the Space installation, it is defined as a PU Parameter.

Setup

Feature Flags

Default values are shown in bold.

| Helm Parameter | Values | Purpose |
|---|---|---|
| metrics.enabled | true/false | Adds InfluxDB and Grafana deployments |
| manager.metrics.enabled | true/false | Manager generates metrics in InfluxDB |
| manager.securityService.enabled | true/false | Manager runs in secure mode (see security documentation) |
| manager.securityService.ingress.enabled | true/false | Adds an ingress for security service access (e.g. for the OAuth2 handshake) |
| manager.antiaffinity.enabled | true/false | Enables anti-affinity for the Smart Cache Manager, so that its pods are not scheduled on the same node |
| manager.persistence.enabled | true/false | Enables Space persistency (Tiered Storage) |
| manager.service.lrmi.enabled | true/false | Ability to access the Space with the LRMI protocol from external hosts |
| operator.keystore.enabled | true/false | Ability to inject the admission controller certificate using a volume |
| operator.autoCleanup.enabled | true/false | Cleans up PUs when the operator is uninstalled |
| operator.statefulSetExtension.enabled | true/false | Enables adding more objects to a PU (see StatefulSet Extensions) |
| dgw.enabled | true/false | Adds a data gateway for direct JDBC driver access |
| global.security.enabled | true/false | Secure mode for the entire system |
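For example, a minimal values override that enables metrics and system-wide security could look like the following sketch (flag names are taken from the table above; the combination shown is illustrative):

```yaml
# values.yaml sketch: feature flags from the table above
metrics:
  enabled: true
manager:
  metrics:
    enabled: true
global:
  security:
    enabled: true
```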

Pods/Service/Ingress (Networking Configuration, H/A)

Network Configuration

By default, all services are configured with the ClusterIP service type. The following services have to be accessible from outside the cluster:

  1. SpaceDeck - Used for running the UI

  2. XAP Security Service - For OAuth2.0 re-direction endpoint

  3. Data Gateway - When enabled, needs JDBC access (TCP connection)

The table below lists the details of the services. Nginx configuration is included for reference.

| Service Name and Port | Type | Nginx Config (if used) |
|---|---|---|
| SpaceDeck: 4200 | http | Using ingress manifest |
| xap-security-service: 9000 | http | Using ingress manifest |
| xap-dgw-service: 5432 | tcp | Using TCP mapping |

Any of the services listed above can be changed to LoadBalancer, if this is supported by the Kubernetes cluster. Alternatively, for the http services listed, an ingress can be used. For SpaceDeck and the XAP Security Service, these ingress manifests are installed by default with the Smart DIH Helm installation.

Nginx Configuration

The Nginx controller can be used as a router for incoming traffic. As stated previously, another ingress controller or a Load Balancer can be used instead.

The controller is used to satisfy the following requirements:

  • TLS termination (this can also be done outside the cluster)

  • Routing according to Kubernetes ingress declarations for HTTP traffic

  • TCP routing by setting a TCP mapping in the Nginx configuration (using a configmap). Refer to this Kubernetes guide for specific TCP configuration.
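As a sketch of such a TCP mapping, the following configmap forwards the Data Gateway port. The configmap name `tcp-services` follows the ingress-nginx convention, and the namespace and service reference are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # name conventionally consumed by ingress-nginx
  namespace: ingress        # namespace where the Nginx controller is installed
data:
  # external port: "<namespace>/<service>:<port>"
  "5432": "<dih-namespace>/xap-dgw-service:5432"
```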

Nginx Deployment Example

The example values file below installs a single-pod Nginx instance using a hostPort connection. Replace the placeholder parameters according to the legend that follows.

nginx-values.yaml

```yaml
controller:
  ingressClassByName: true
  hostPort:
    enabled: true
    ports: {}
  nodeSelector:
    eks.amazonaws.com/nodegroup: <nginx-asg>
  service:
    enabled: false
    ports: {}
    nodePorts:
      http: <node-port>
    internal:
      ports: {}
      targetPorts: {}
  admissionWebhooks:
    nodeSelector:
      eks.amazonaws.com/nodegroup: <nginx-asg>
tcp:
  11701: <dih-namespace>/iidr-kafka:11701  # Smart DIH only
  3000:  <dih-namespace>/grafana:3000
  5432:  <dih-namespace>/xap-dgw-service:5432
  9092:  <dih-namespace>/kafka:9092  # Smart DIH only
```

Legend:

  • nginx-asg: A dedicated node group for the Nginx instance

  • dih-namespace: The namespace where Smart DIH is installed

  • node-port: The host port, for example 30820

The following Helm command installs the Nginx controller using nginx-values.yaml:

```shell
helm install nginx nginx/ingress-nginx -n ingress --version=4.6.0 --values=nginx-values.yaml
```

Space High Availability (H/A) and Partitioning

The Space can be scaled along two dimensions:

  1. Partitioning - adding more pods to extend the capacity of the Space

  2. Replication - adding a backup pod per partition for h/a of the Space

The following parameters are not found in the manager, but rather in the PU custom resource.

| PU Parameter | Values | Description |
|---|---|---|
| partitions | 1 or more | Divides the data among multiple pods |
| ha | true/false | Sets an additional backup replica for each partition |
| antiAffinity.enabled | true/false | When ha is true, primary and backup pods run on different hosts |
| antiAffinity.topology | topology.kubernetes.io/zone | See note below |
  • Anti-affinity can operate in two ways:

    Deploy primary and backup partitions in separate Kubernetes zones (the default).

    Deploy these two partitions on separate servers, but not necessarily in different zones.
    For this second option, antiAffinity.topology must be set to kubernetes.io/hostname.
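The parameters above can be combined in a PU values sketch. This is illustrative only; the parameter names come from the table above, and the partition count is an arbitrary example:

```yaml
# PU values sketch: 2 partitions, each with a backup, spread across hosts
partitions: 2
ha: true
antiAffinity:
  enabled: true
  topology: kubernetes.io/hostname   # or topology.kubernetes.io/zone (the default)
```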


  • Tiered Storage

    The following configuration must be completed to enable Tiered Storage:

    1. Ensure that you have a suitable driver that can satisfy the required persistent volume; this depends on the cloud or file system. More details can be found in the Kubernetes spec.

    2. Enable the volume for Tiered Storage at the CRD level, using the parameters listed in the table below.

    3. Build an image that includes a jar that is built according to the instructions found in this Tiered Storage page.

    4. Follow the instructions for Deploying a Custom Processing Unit to load the jar.

    The following parameters should be set in order to configure the volume at the CRD level:

    | PU Parameter | Values | Description |
    |---|---|---|
    | persistence.enabled | true/false | Enables persistence at the pod level |
    | persistence.storageClassName | One of the cloud's storage classes | Storage class name (refer to Kubernetes documentation) |
    | persistence.accessMode | ReadWriteOnce/ReadWriteMany | Storage access mode (refer to Kubernetes documentation) |
    | persistence.size | Example: "1Gi" | Depends on the expected amount of your total data (not cached data) |
    | properties[0].name | Any key | Key of a property passed to the Space jar (refer to step 3 above) |
    | properties[0].value | Any value | Value of key[0] |
    | properties[n].name | Key #n | Additional property names |
    | properties[n].value | Value #n | Corresponding additional property values |
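    A Tiered Storage section of a PU values file might look like the following sketch. The storage class, size, and the property name/value passed to the Space jar are all illustrative assumptions:

    ```yaml
    # Tiered Storage sketch: persistent volume plus properties for the Space jar
    persistence:
      enabled: true
      storageClassName: gp2          # assumed cloud storage class; use your cluster's
      accessMode: ReadWriteOnce
      size: 1Gi
    properties:
      - name: tiered.storage.config  # hypothetical property key
        value: /config/tiered.yaml   # hypothetical property value
    ```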

Space Anti-Affinity

In h/a mode, the partition replicas have to be placed on separate nodes or in separate zones.

| Parameter | Values | Description |
|---|---|---|
| operator.antiAffinity.topology | topology.kubernetes.io/zone or topology.kubernetes.io/host | The topology by which Kubernetes applies the anti-affinity rule (zone or host) |

| PU Parameter | Values | Description |
|---|---|---|
| antiAffinity.enabled | true/false | Enables anti-affinity for the specific PU |

Manager Anti-Affinity

In a manager h/a scenario, 3 managers have to be set up in anti-affinity mode.

| Parameter | Values | Description |
|---|---|---|
| manager.ha | true/false | Sets up 3 managers (1 otherwise) |
| manager.affinity.enable | true/false | Applies the anti-affinity rule upon scheduling |
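As a sketch, a values override enabling manager h/a with anti-affinity (parameter names taken from the table above) could look like:

```yaml
# values.yaml sketch: 3 managers, spread by the anti-affinity rule
manager:
  ha: true
  affinity:
    enable: true
```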

Environment and Properties that can be Injected into a PU

Properties

There are three ways to set properties:

  1. As an array, by setting the properties value at the PU level.

  2. Using valueFrom to point to a specific value in a configmap (or secret):

```yaml
properties:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: special.how
```

  3. Using propertiesFrom to reference a configmap or secret:

```yaml
propertiesFrom:
  - configMapRef:
      name: special-config
```

Environment Variables

Similar to properties, there are the same three options:

  1. As an array of environment variables (as in a container), using the env parameter

  2. Using valueFrom to point to a specific value in a configmap (or secret)

  3. Using envFrom to reference a configmap or a secret
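For example, an env array mixing a literal value and a value taken from a secret might look like the following sketch (the variable names, secret name, and key are illustrative):

```yaml
# env sketch: literal value plus a value resolved from a secret
env:
  - name: JAVA_OPTS            # hypothetical variable
    value: "-Xmx2g"
  - name: DB_PASSWORD          # hypothetical variable
    valueFrom:
      secretKeyRef:
        name: db-credentials   # hypothetical secret
        key: password
```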

StatefulSet Extensions

The GigaSpaces operator creates a StatefulSet out of the values given in the PU manifest. These values are used by the operator to handle the installation and various other conditions (for example, scaling). The user might want additional settings on the StatefulSet; these settings are integrated as-is into the created StatefulSet.

| Parameter | Values | Description |
|---|---|---|
| statefulSetExtension.enabled | true/false | Enables extension of the StatefulSet |

The fields listed below are those that are allowed to be added. They can be added in the statefulSetExtension area of the chart/values file. Fields that are not listed are ignored by the operator.

| StatefulSet Field Path | Description | Merge Policy | Field-PK for Collision |
|---|---|---|---|
| spec.template.metadata.annotations | StatefulSet annotations | Merge with PU precedence | key |
| spec.template.metadata.finalizers | Pod controlled deletion | Merge with PU precedence | whole object |
| spec.template.metadata.labels | Labels of pods | Merge with PU precedence | key |
| spec.template.spec.containers | Containers running in the pod | Merge with PU precedence | name |
| spec.template.spec.initContainers | Init containers | Merge with PU precedence | name |
| spec.template.spec.securityContext | Pod security definitions | Override | - |
| spec.template.spec.volumes | Volumes for containers | Merge with PU precedence | name |
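As an illustration, a statefulSetExtension that adds a logging sidecar and its volume might look like the sketch below. The sidecar name, image, and the exact nesting of the field paths under statefulSetExtension are assumptions:

```yaml
statefulSetExtension:
  enabled: true
  spec:
    template:
      spec:
        containers:
          - name: log-shipper             # hypothetical sidecar container
            image: fluent/fluent-bit:2.2  # illustrative image and tag
            volumeMounts:
              - name: varlog
                mountPath: /var/log
        volumes:
          - name: varlog
            emptyDir: {}
```

Because containers and volumes merge by name with PU precedence, a sidecar with a unique name is appended rather than overriding the operator-generated containers.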

Probes

Liveness and readiness probes are the means by which Kubernetes and the GigaSpaces Manager check the health of a PU. For a Space, the operator sets default probes if none are provided in the PU manifest; these defaults ensure that the manager can detect a failing partition of the Space. If the user wishes to override this, for example when additional logic runs at PU start-up, custom probe URLs can be set. It is important to note that these endpoints must still reflect the Space readiness.

| PU Parameter | Values | Description |
|---|---|---|
| livenessProbe | Same as the Kubernetes spec | See the Kubernetes probes spec and the description above |
| readinessProbe | Same as the Kubernetes spec | See the Kubernetes probes spec and the description above |
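A probe override follows the standard Kubernetes probe spec, for example (the /alive and /ready paths and the port are hypothetical endpoints that must still reflect the readiness of the Space):

```yaml
# Probe override sketch: endpoints and timings are illustrative
livenessProbe:
  httpGet:
    path: /alive
    port: 8089
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8089
  periodSeconds: 5
```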

SpaceDeck Settings

Refer to the SpaceDeck pages.

Security Settings

The security server runs alongside the grid manager and is responsible for handling security-related requests coming from inside or outside the cluster. The list below covers the security server cluster configuration. Details of the end-user operations related to security can be found in the SpaceDeck Login and SpaceDeck Roles Management pages.

| Parameter | Values | Description |
|---|---|---|
| manager.securityServer.enabled | true/false | Enables security at the backend |
| manager.securityServer.ingress.enabled | true/false | Adds an ingress to enable access from the IDP (identity provider) |
| manager.securityServer.secretKeyRef.name | "root-credentials" or any secret name | Reference to a secret that holds the root credentials |
| manager.securityServer.rolesConfigMap | "roles-map" or another configmap that holds the roles | Reference to a configmap that holds the roles |

Roles and Policies Configuration

A list of roles and their corresponding policies can be found in the "roles map" configmap. In the example below, a user with the role ROLE_ADMIN is granted the policies defined for that role. Different roles, for example ROLE_MNGR, would have different policies defined.

```yaml
roles.ROLE_ADMIN: |
  SystemPrivilege MANAGE_IDENTITY_PROVIDERS
  , MonitorPrivilege MONITOR_PU
  , MonitorPrivilege MONITOR_JVM
  , GridPrivilege MANAGE_GRID
  , GridPrivilege MANAGE_PU
  , GridPrivilege PROVISION_PU
  , SpacePrivilege EXECUTE
  , SpacePrivilege ALTER
  , SpacePrivilege WRITE
  , SpacePrivilege READ
  , SpacePrivilege NOT_SET
  , SpacePrivilege TAKE
  , SpacePrivilege CREATE
  , SystemPrivilege MANAGE_ROLES
```
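Such an entry lives in the data section of a complete ConfigMap manifest — a sketch, assuming the configmap is named roles-map (the name referenced by manager.securityServer.rolesConfigMap) and abbreviating the policy list:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: roles-map              # must match manager.securityServer.rolesConfigMap
  namespace: <dih-namespace>   # namespace where Smart Cache is installed
data:
  roles.ROLE_ADMIN: |
    SystemPrivilege MANAGE_IDENTITY_PROVIDERS
    , MonitorPrivilege MONITOR_PU
    , GridPrivilege MANAGE_GRID
```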