Deploying a GigaSpaces Service in Kubernetes

A GigaSpaces service (Processing Unit) is a container that can hold any of the following:

  • Data only (a Space)
  • Function only (business logic)
  • Both data and a function

You can use the event-processing example available with the GigaSpaces software packages to see how data is fed to the function and processed in services. The example creates the following modules:

  • Processor - a service with the main task of processing unprocessed data objects. The processing of data objects is accomplished using both an event container and remoting.
  • Feeder - a service that contains two feeders, a standard Space feeder and a JMS feeder, to feed unprocessed data objects that are in turn processed by the processor module. The standard Space feeder feeds unprocessed data objects by both directly writing them to the Space and using OpenSpaces Remoting. The JMS feeder uses the JMS API to feed unprocessed data objects using a MessageConverter, which converts JMS ObjectMessages into data objects.

As a prerequisite for running this example, you must install Maven on the machine where you unpacked the GigaSpaces software package.
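As an optional sanity check before building, you can confirm that Maven is available on the PATH (any recent 3.x version should work):

```shell
# Verify that Maven is installed and on the PATH
mvn -version
```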

To build and deploy the event-processing example in Kubernetes, the following steps are required:

  1. Build the sample services from the GigaSpaces software package.
  2. Upload the pu.jar files for deployment.
  3. Deploy a Manager (Management Pod).
  4. Deploy the services that were created when you built the example to Data Pods in Kubernetes, connecting them to the Management Pod.
  5. View the processor logs to see the data processing results.

Building the GigaSpaces Service Example

The first step in deploying the sample services to Kubernetes is to build them from the examples directory. The example uses Maven as its build tool, and comes with a build script that runs Maven automatically.

Open a command window and navigate to the following folder in the GigaSpaces package:

cd <product home>/examples/data-app/event-processing/

Type the following command (for Unix environments) to build the processor and feeder services:

./ package

This build script finalizes the service structure of both the processor and the feeder, and copies the processor JAR file to /examples/data-app/event-processing/processor/target/data-processor/lib, making /examples/data-app/event-processing/processor/target/data-processor/ a ready-to-use service directory. The final result is two service JAR files, one under processor/target and the other under feeder/target.
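You can confirm the build output using the paths described above (run from the event-processing directory):

```shell
# Confirm that both service JARs were produced by the build
ls processor/target/*.jar feeder/target/*.jar
```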

Uploading the pu.jar Files

In order to deploy the services on Kubernetes, a URL must be provided. You can use an existing HTTP server (for example, a local HTTP server installed using Helm), or you can use the GigaSpaces CLI (or REST API) to upload the Processing Unit files to the Manager Pod.

Ensure that your Kubernetes environment has access to the URL that you provide.

Use one of the following options to upload the pu.jar files for deployment.

The upload stage does not provide high availability. The pu.jar files are uploaded only to the active Manager Pod, and are not replicated to other managers. High availability only takes effect after the service has been deployed.


<GS_HOME>/bin/gs pu upload
<GS_HOME>\bin\gs pu upload


Upload a pu.jar to the target.

Parameters and Options:

Item       Name        Description
Parameter  file        Path to the service file (.jar or .zip).
Option     --url-only  Return only the service URL after uploading.

Input Example:

This example uploads a service packaged in the mypu.jar file.

<GS_HOME>/bin/gs pu upload mypu.jar
<GS_HOME>\bin\gs pu upload mypu.jar
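If you want to capture just the returned URL for the later Helm deployment step, the --url-only option described above can be combined with the upload command. The installation path and option placement shown here are assumptions:

```shell
# Hypothetical: upload the service and capture only the returned URL
GS_HOME=/opt/gigaspaces            # adjust to your installation path
PU_URL=$("$GS_HOME/bin/gs" pu upload --url-only mypu.jar)
echo "$PU_URL"
```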


PUT /pus/resources


Upload a service to the target.


curl -X PUT --header 'Content-Type: multipart/form-data' \
     --header 'Accept: text/plain' 'http://localhost:8090/v2/pus/resources'

Leave this command window open so the server remains available and Kubernetes can connect to it.
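If you prefer the HTTP-server option, a minimal way to serve the built JAR files over HTTP is sketched below. This assumes Python 3 is available on the machine; the port number is arbitrary:

```shell
# Hypothetical: serve the example directory over HTTP so the resource URL can point at the JARs
cd <product home>/examples/data-app/event-processing/
python3 -m http.server 8000
```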

Deploying the GigaSpaces Components

Similar to deploying a Space cluster, it is best practice to first deploy the Management Pod (with the Manager), and then deploy the Data Pods (first the processor, then the feeder).

To deploy the GigaSpaces components:

  1. Open a new command window and navigate to the Helm chart directory (where you fetched the charts from the GigaSpaces repo).

  2. Type the following Helm command to deploy a Management Pod called testmanager:

    helm install insightedge-manager --name testmanager
  3. Type the following Helm command to deploy a Data Pod with the processor service from the location where it was built in the examples directory:

    helm install insightedge-pu --name processor --set resourceUrl=
  4. Lastly, type the following Helm command to deploy a Data Pod with the feeder service from the same directory:

    helm install insightedge-pu --name feeder --set resourceUrl=
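After running the steps above, you can verify that the releases and their pods started; the exact pod names depend on your release names:

```shell
# List the Helm releases and the pods they created
helm ls
kubectl get pods
```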

Monitoring the GigaSpaces Services

You can use the GigaSpaces Ops Manager to monitor the status and alerts of the GigaSpaces cluster and services. Alternatively, you can use one of the Kubernetes tools to view the logs for the processor Data Pod, where you can see that the sample data has been processed.
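For example, with kubectl the processor logs can be followed as shown below; the pod name is a placeholder that you can look up with kubectl get pods:

```shell
# Hypothetical: follow the processor Data Pod's log to watch the sample data being processed
kubectl logs -f <processor-pod-name>
```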

Configuring the Container Memory Allocation

The Docker container is always allocated an absolute amount of memory. If this is not defined in the Helm chart, the container uses as much memory as necessary to accommodate the data and processes it contains. You can limit the memory allocation for the contents of the Docker container (Data Pod, Manager Pod, processes, etc.) and for the heap memory.

The on-heap memory allocation can be defined as any of the following:

  • A positive absolute value for the heap memory.
  • A negative absolute value for the heap memory, calculating the heap size as ([total allocated container resources] - [X MiB]).
  • A percentage of the Docker container.
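The negative-absolute-value option can be sketched with the arithmetic below; the 512 MiB container limit and the 128 MiB reserve are hypothetical numbers, not defaults:

```shell
# heap = [total allocated container resources] - [X MiB]
CONTAINER_MIB=512   # memory limit given to the Docker container
RESERVED_MIB=128    # the "X" set aside for non-heap usage
HEAP_MIB=$((CONTAINER_MIB - RESERVED_MIB))
printf '%s\n' "-Xmx${HEAP_MIB}m"
```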

The following Helm command allocates memory for both the Docker container and the on-heap memory as absolute values:

helm install insightedge --name test --set pu.resources.limits.memory=512Mi,

The following Helm command allocates memory for the Docker container and sets aside a specific amount of memory for the container's own use. The rest of the memory is available to the Java heap.

helm install insightedge --name test --set pu.resources.limits.memory=512Mi,

You can define the maximum size of the Docker container as an absolute value, and the maximum on-heap memory allocation for the JVM running inside the Docker container as a percentage. If you use this approach, make sure you leave enough memory for the JVM.

The following Helm command sets an absolute value for the Docker container, and defines the maximum Java on-heap memory as a percentage of the container memory:

helm install insightedge --name test --set pu.resources.limits.memory=256Mi,
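The percentage arithmetic can be sketched as follows; the 256 MiB limit and the 75% heap share are hypothetical numbers:

```shell
# heap = container limit * heap percentage
CONTAINER_MIB=256
HEAP_PERCENT=75
HEAP_MIB=$((CONTAINER_MIB * HEAP_PERCENT / 100))
printf '%s\n' "-Xmx${HEAP_MIB}m"
```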

Configuring Additional Java Options

You can configure additional Java options for each Platform Manager instance by using the java.options parameter, as shown below.

helm install insightedge --name test --set java.options="-XX:+PrintGCDetails"

Overriding the GigaSpaces Service Properties

It is recommended to define the service properties in the pu.xml as placeholders (as described in the Processing Unit Deployment Properties topic), so you can override these properties using the Helm chart.

After defining the properties as placeholders, use the key1=value1;key2=value2 format to pass the override values to the Helm chart using either the --set properties=<your key-value pairs> command, or using a custom YAML file.
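As a small illustration of the format described above, the override string is just semicolon-separated pairs passed as a single value; the keys and values here are made up:

```shell
# Build the key1=value1;key2=value2 string expected by --set properties=...
PROPERTIES="host=localhost;port=8200"
printf '%s\n' "--set properties=\"${PROPERTIES}\""
```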

Configuring the MemoryXtend Properties

The Kubernetes environment supports using MemoryXtend for off-heap RAM and MemoryXtend for Disk (SSD).

MemoryXtend for Off-Heap RAM

To configure your Kubernetes-based environment, you need to make sure that the container memory allocation is sufficient to accommodate the overall RAM requirements. Additionally, you should define the memory threshold properties as placeholders in the pu.xml file. For more information about the MemoryXtend Off-Heap RAM driver, see MemoryXtend for Off-Heap RAM.

MemoryXtend for Disk (SSD)

To configure your Kubernetes-based environment to use external storage, you need to enable persistent volume storage in both the Processing Unit pu.xml and the pu Helm chart. This is described in detail in MemoryXtend for Disk (SSD/HDD).

For information about the Kubernetes persistent volume storage model, refer to the Kubernetes documentation.

The following Helm commands use a node selector to schedule the Management Pod and the Data Pods on nodes with SSD disks:

helm install insightedge --name demo --set manager.nodeSelector.enabled=true,manager.nodeSelector.selector="disktype: ssd"
helm install insightedge --name demo --set pu.nodeSelector.enabled=true,pu.nodeSelector.selector="disktype: ssd"
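For the disktype: ssd selector to match, the SSD-backed node must carry the corresponding label; the node name below is a placeholder:

```shell
# Label the SSD-backed node so that disktype: ssd selectors can schedule pods onto it
kubectl label nodes <node-name> disktype=ssd
```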