Detailed Configuration: Smart DIH

This topic contains detailed customization information required for the installation of Smart DIH.

The parameters provided are Helm values that can be overridden using --set, or by including them in a dedicated values.yaml file that is passed with the --values Helm parameter.

Helm values can be applied in the following two places:

  1. Initial deployment of Smart DIH. This is the umbrella installation. Refer to Smart DIH Installation for comprehensive instructions.

  2. Deploying a Space (or PU) when you change the Space deployment in order to add features such as Tiered Storage, Resources and Probes. Refer to Deploying a Custom Processing Unit for more information. When a parameter belongs to the Space installation, it is defined as a PU Parameter.
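As a sketch, the override mechanisms above look like the following. The release name dih, the chart reference gigaspaces/dih, and the file name values-override.yaml are illustrative, not product defaults:

```yaml
# values-override.yaml - a minimal override file (names are illustrative)
metrics:
  enabled: false

# Applied at install time with either of:
#   helm install dih gigaspaces/dih --values values-override.yaml
#   helm install dih gigaspaces/dih --set metrics.enabled=false
```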


Feature Flags

Default values are shown in bold.

Helm Parameter | Values | Purpose
metrics.enabled | true/false | Adds the InfluxDB and Grafana deployments
manager.metrics.enabled | true/false | The manager generates metrics in InfluxDB
manager.securityService.enabled | true/false | The manager runs in secure mode (see the security documentation)
manager.securityService.ingress.enabled | true/false | Adds an ingress for security service access (e.g. for the OAuth2 handshake)
manager.antiaffinity.enabled | true/false | Smart Cache Manager anti-affinity
manager.persistence.enabled | true/false | Enables Space persistency (Tiered Storage)
manager.service.lrmi.enabled | true/false | Ability to access the Space with the LRMI protocol from external hosts
operator.keystore.enabled | true/false | Ability to inject the admission controller certificate using a volume
operator.autoCleanup.enabled | true/false | Cleans up PUs when the operator is uninstalled
operator.statefulSetExtension.enabled | true/false | Enables adding more objects to a PU (see StatefulSet Extensions)
dgw.enabled | true/false | Adds the data gateway for direct JDBC driver access
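As a sketch, the feature flags above map onto a values.yaml such as the following. The chosen true/false values are illustrative, not defaults:

```yaml
metrics:
  enabled: true          # adds the InfluxDB and Grafana deployments
manager:
  metrics:
    enabled: true        # manager writes metrics to InfluxDB
  securityService:
    enabled: false       # run the manager without security
dgw:
  enabled: true          # data gateway for direct JDBC driver access
```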

Pods/Service/Ingress (Networking Configuration, H/A)

Network Configuration

By default, all services are configured with the ClusterIP service type. The following services have to be accessible from outside the cluster:

  1. SpaceDeck - Used for running the UI

  2. XAP Security Service - For the OAuth2.0 re-direction endpoint

  3. Data Gateway - When enabled, needs JDBC access (TCP connection)

  4. Kafka - To have the ability to connect directly to Kafka.

The table below lists the details of the services. Nginx configuration is included for reference.

Service Name and Port | Type | Nginx Config (if used)
xap-security-service: 9000 | http | Using an ingress manifest
xap-dgw-service: 5432 | tcp | Using TCP mapping
kafka: 9092 | tcp | Using TCP mapping

Any of the services listed above can be modified to LoadBalancer, if it is supported by the Kubernetes cluster. Alternatively, for the HTTP services listed, an ingress can be used. For SpaceDeck and the XAP Security Service, these ingress manifests are installed by default with the Smart DIH Helm installation.

Nginx Configuration

The Nginx controller can be used as a router for incoming traffic. As stated previously, another ingress controller or a Load Balancer can be used instead.

The controller is used for satisfying the following requirements:

TLS termination (this can be done outside as well)

Routing according to Kubernetes ingress declarations for HTTP traffic

TCP routing by setting TCP mapping in the Nginx configuration (using a configmap). Refer to this Kubernetes guide for specific TCP configuration.

Nginx Deployment Example

The example values file below shows a single-pod Nginx installation using a hostPort connection. Replace the placeholder parameters according to the list that follows.


   controller:
     ingressClassByName: true
     hostPort:
       enabled: true
       ports: {}
     nodeSelector: <nginx-asg>
     service:
       enabled: false
       nodePorts:
         http: <node-port>
       ports: {}
       targetPorts: {}
   tcp:
     11701: <dih-namespace>/iidr-kafka:11701 #Smart DIH only
     3000:  <dih-namespace>/grafana:3000
     5432:  <dih-namespace>/xap-dgw-service:5432
     9092:  <dih-namespace>/kafka:9092  #Smart DIH only


nginx-asg: A dedicated node group for an nginx instance

dih-namespace: The namespace where Smart DIH is installed

node-port: The host port, for example 30820

The following Helm command installs the Nginx controller using the nginx-values.yaml file:

helm install nginx nginx/ingress-nginx -n ingress --version=4.6.0 --values=nginx-values.yaml

Space High Availability (H/A) and Partitioning

The Space can be scaled along two dimensions:

  1. Partition - Adding more pods to extend the capacity of the Space

  2. Replication - Adding a backup pod per partition for h/a of the Space

The following parameters are not found in the manager, but rather in the PU custom resource.

PU Parameter | Values | Description
partitions | 1 or more | Divides the data among multiple pods.
h/a | true/false | Sets additional replication for each partition.
antiAffinity.enabled | true/false | When h/a is true, primary and backup pods run on different hosts.
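The two dimensions combine in a PU values file along these lines (a sketch; the file name is hypothetical, and the exact spelling of the h/a key may vary per chart version):

```yaml
# space-values.yaml - scaling a Space (sketch)
partitions: 2            # two partitions extend Space capacity
ha: true                 # "h/a" in the table above; adds a backup per partition
antiAffinity:
  enabled: true          # primary and backup pods on different hosts
```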

Tiered Storage

The following configuration must be completed to enable Tiered Storage:

  1. Ensure that you have a suitable driver that can satisfy the required persistent volume; this depends on the cloud or file system. More details can be found in the Kubernetes spec.

  2. Enable the volume for Tiered Storage at the CRD level, using the parameters listed in the table below.

  3. Build an image that includes a jar that is built according to the instructions found in this Tiered Storage page.

  4. Follow the instructions for Deploying a Custom Processing Unit to load the jar.

The following parameters should be set in order to configure the volume at the CRD level:

PU Parameter | Values | Description
persistence.enabled | true/false | Enables persistence at the pod level
persistence.storageClassName | One of the cloud's storage classes | Storage class name (refer to the Kubernetes documentation)
persistence.accessMode | ReadWriteOnce/ReadWriteMany | Storage access mode (refer to the Kubernetes documentation)
persistence.size | Example: "1Gi" | Depends on the expected amount of your total data (not cached data)
properties[0].name | Any key | Key of a property passed to the Space jar (refer to step 3 above)
properties[0].value | Any value | Value of key[0]
properties[n].name | Key #n | Additional property names
properties[n].value | Value #n | Corresponding values of additional properties
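As a sketch, a PU values file enabling Tiered Storage persistence could look like this. The storage class and the property name/value are examples, not product defaults:

```yaml
persistence:
  enabled: true
  storageClassName: gp2              # example cloud storage class
  accessMode: ReadWriteOnce
  size: "1Gi"
properties:
  - name: tiered.storage.config      # hypothetical key passed to the Space jar
    value: "/data/tiered.yaml"       # hypothetical value
```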

Space Anti-Affinity

In h/a mode, the partition replicas have to be placed on separate nodes or zones.

Parameter | Values | Description
operator.antiAffinity.topology | zone or host | The topology by which Kubernetes activates the anti-affinity rule: at the zone or the host level
PU Parameter | Values | Description
antiAffinity.enabled | true/false | Enables anti-affinity for the specific PU
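Combining the two levels, a sketch could look as follows. The host value follows the table above; exact accepted values may vary per chart version:

```yaml
# Umbrella (Helm) values - operator side
operator:
  antiAffinity:
    topology: host       # spread replicas per host (alternative: zone)

# PU values - set per deployed Space/PU:
#   antiAffinity:
#     enabled: true
```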

Manager Anti-Affinity

In a manager h/a scenario, 3 managers have to be set up for anti-affinity mode.

Parameter | Values | Description
manager.ha | true/false | Sets up 3 managers (1 otherwise)
manager.antiaffinity.enabled | true/false | Applies an anti-affinity rule upon scheduling
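As a sketch, umbrella values for a highly available manager could look like this (the anti-affinity key spelling follows the Feature Flags table above):

```yaml
manager:
  ha: true               # deploy 3 managers instead of 1
  antiaffinity:
    enabled: true        # schedule the managers on different hosts
```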

Environment and Properties that can be injected into a PU


There are three ways to set properties:

  1. As an array, by setting the properties value at the PU level.

  2. Using valueFrom to point to a specific value in a configmap (or secret):

     properties:
       - name: SPECIAL_LEVEL_KEY
         valueFrom:
           configMapKeyRef:
             name: special-config
             key: <key-name>

  3. Using propertiesFrom to reference a configmap or secret:

     propertiesFrom:
       - configMapRef:
           name: special-config
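For completeness, the first option (a plain array at the PU level) can be sketched as follows; the names and values are examples:

```yaml
properties:
  - name: SPECIAL_LEVEL_KEY
    value: very           # literal value passed to the PU
  - name: LOG_LEVEL       # hypothetical second property
    value: INFO
```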

Environment Variables

Similar to properties, there are the same three options:

  1. As an array of env variables (as in a container), using the env parameter.

  2. Using valueFrom to point to a specific value in a configmap (or secret).

  3. Using envFrom to reference a configmap or a secret.
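The three options can be sketched side by side; variable and configmap names are examples:

```yaml
env:
  - name: LOG_LEVEL            # option 1: plain array
    value: INFO
  - name: SPECIAL_LEVEL_KEY    # option 2: valueFrom
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: <key-name>
envFrom:                       # option 3: whole configmap or secret
  - configMapRef:
      name: special-config
```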

StatefulSet Extensions

The GigaSpaces operator creates a StatefulSet from the values given in the PU manifest. The operator uses these values for handling the installation and various other conditions (for example, scaling). The user might want additional settings on the StatefulSet; these settings are integrated as-is into the created StatefulSet.

Parameter Values Description
statefulSetExtension.enabled true/false Enable extension of the StatefulSet

The fields listed below are those that are allowed to be added. They can be added in the statefulSetExtension area of the chart/values file. Fields that are not listed are ignored by the operator.

StatefulSet Field Path | Description | Merge Policy | Field-PK for Collision
spec.template.metadata.annotations | StatefulSet annotations | Merge with PU precedence | key
spec.template.metadata.finalizers | Pod controlled deletion | Merge with PU precedence | whole object
spec.template.metadata.labels | Labels of pods | Merge with PU precedence | key
spec.template.spec.containers | Containers running in the pod | Merge with PU precedence | name
spec.template.spec.initContainers | Containers run at pod initialization | Merge with PU precedence | name
spec.template.spec.securityContext | Pod security definitions | Override | -
spec.template.spec.volumes | Volumes for containers | Merge with PU precedence | name
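As a sketch, assuming the extension body mirrors the StatefulSet field paths from the table, a sidecar container could be added like this; the sidecar name and image are examples:

```yaml
statefulSetExtension:
  enabled: true
  spec:
    template:
      spec:
        containers:
          - name: log-shipper              # merged with PU containers by name
            image: fluent/fluent-bit:2.2   # example sidecar image
```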


Probes

Liveness and readiness probes are the means for Kubernetes and the GigaSpaces Manager to check the health of the PU. For a Space, the operator sets default probes if they are not provided in the settings of the PU manifest. These defaults ensure that the manager has a way to detect a failing partition of the Space. If the user wishes to override the defaults, for example when additional logic occurs at start-up of a PU, custom probes can be set, and URL endpoints are supported. It is important to note that these endpoints still need to reflect the Space readiness.

PU Parameter | Values | Description
livenessProbe | Same as the Kubernetes spec | See the probes description above.
readinessProbe | Same as the Kubernetes spec | See the probes description above.
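As a sketch, overriding the readiness probe in a PU manifest follows the standard Kubernetes probe spec; the path and port here are examples, and the endpoint must still reflect Space readiness:

```yaml
readinessProbe:
  httpGet:
    path: /probes/readiness    # hypothetical endpoint
    port: 8089                 # hypothetical port
  initialDelaySeconds: 10
  periodSeconds: 5
```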

SpaceDeck Settings

Refer to the SpaceDeck pages.

Data Integration (DI) Settings

Refer to the DI pages.

Service Creator Settings

The following attributes are provided to the Service Creator Operator during the installation phase. The Service Creator Operator controls the creation of deployment, service and ingress of a low-code service.

Parameter | Values | Description
service-operator.operatorConfig.memoryLimit | 400Mi or other memory capacity | Memory limit of a newly created low-code service
service-operator.operatorConfig.cpuLimit | 1 or other CPU core number | CPU limit of a newly created low-code service
service-operator.operatorConfig.controllerClass | nginx | The ingress controller class
service-operator.operatorConfig.image | gigaspaces/mcs-query-service:1.2.3 or any other tag | The low-code service image and tag
- | {current_namespace} or any other host name | Host name (e.g. DNS) that fills the ingress rule
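As a sketch, the documented attributes map onto umbrella values such as the following; the limit values are examples, and the host-name parameter is omitted because its key is not given in the table above:

```yaml
service-operator:
  operatorConfig:
    memoryLimit: 400Mi                         # per low-code service
    cpuLimit: 1
    controllerClass: nginx
    image: gigaspaces/mcs-query-service:1.2.3
```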

Security Settings

The security server runs alongside the grid manager and is responsible for handling security-related requests coming from inside or outside the cluster. The list below covers the security server cluster configuration. Details of the end-user operations related to security can be found in the SpaceDeck Login and SpaceDeck Roles Management pages.

Parameter | Values | Description
manager.securityServer.enabled | true/false | Enables security at the backend
manager.securityServer.ingress.enabled | true/false | Adds an ingress to enable access from the IDP
- | "root-credentials" or any secret name | Reference to a secret that holds the root credentials
manager.securityServer.rolesConfigMap | "roles-map" or other configmap that has the roles | Reference to a configmap that holds the roles
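As a sketch, umbrella values enabling the security server could look like this; the root-credentials secret reference is omitted because its parameter key is not given in the table above:

```yaml
manager:
  securityServer:
    enabled: true
    ingress:
      enabled: true              # e.g. for IDP access
    rolesConfigMap: roles-map    # configmap that holds the roles
```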

Roles and Policies Configuration

A list of roles and their corresponding policies can be found in the "roles map" configmap. In the example below, a user that has the role ROLE_ADMIN is granted the policies defined for that role. Different roles, for example ROLE_MNGR, would have different policies defined.

roles.ROLE_ADMIN: |
  , MonitorPrivilege MONITOR_PU
  , MonitorPrivilege MONITOR_JVM
  , GridPrivilege MANAGE_GRID
  , GridPrivilege MANAGE_PU
  , GridPrivilege PROVISION_PU
  , SpacePrivilege EXECUTE
  , SpacePrivilege ALTER
  , SpacePrivilege WRITE
  , SpacePrivilege READ
  , SpacePrivilege NOT_SET
  , SpacePrivilege TAKE
  , SpacePrivilege CREATE
  , SystemPrivilege MANAGE_ROLES