Managing the Kubernetes Application Environment

Configuring the Container Memory Allocation

The Docker container is always allocated an absolute amount of memory. If this is not defined in the Helm chart, the container uses as much memory as is necessary to accommodate the data and processes it contains. You can limit both the memory allocation for the contents of the Docker container (Data Pod, Manager Pod, processes, etc.) and the heap memory.

The on-heap memory allocation can be defined as any of the following:

  • A positive absolute value for the heap memory.
  • A negative absolute value for the heap memory, which calculates the heap size as ([total allocated container resources] - [X MiB]).
  • A percentage of the Docker container's memory allocation.

The following Helm command allocates memory for both the Docker container and the Java heap as absolute values:

helm install insightedge --name test --set pu.resources.limits.memory=512Mi,pu.java.heap=410m

The following Helm command allocates memory for the Docker container, and sets aside a specific amount of that memory for the container itself to use. The rest of the memory is available to the Java heap:

helm install insightedge --name test --set pu.resources.limits.memory=512Mi,pu.java.heap=limit-150Mi

You can define the maximum size of the Docker container as an absolute value, and the maximum on-heap memory allocation for the Java process running inside the Docker container as a percentage. If you use this approach, make sure you leave enough memory for the Java process.

The following Helm command sets an absolute value for the Docker container, and defines the maximum Java on-heap memory as a percentage of the container memory:

helm install insightedge --name test --set pu.resources.limits.memory=256Mi,pu.java.heap=75%

Configuring the Data Grid Using the Helm Chart

Default Helm Chart

The InsightEdge Helm chart has a list of supported values that can be configured. To view this list, use the following Helm command:

helm inspect insightedge

The values.yaml file is printed in the command window, and each configurable value has a short explanation above it. Indentation in this printout corresponds to a "." (dot) in the value name. For example, the high availability property for the Platform Manager is listed as follows in the file:

manager:
  ha: false

When you set this value from the command line, it looks like this: manager.ha=true
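For example, enabling high availability at install time (using the same release and chart names as the examples above) might look like this:

```shell
# Override the nested manager.ha value from the command line
helm install insightedge --name test --set manager.ha=true
```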

Customizing a Helm Chart

You can create additional values.yaml files with customized values.

The following Helm command shows how a custom YAML file can be used to override the values in the original GigaSpaces Helm chart:

helm install insightedge -f customValues.yaml --name hello
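As an illustration, a custom values file might override a few of the chart's defaults. The two values shown here (manager.ha and pu.resources.limits.memory) appear elsewhere in this topic; use helm inspect to see the full list of keys and their default nesting:

```yaml
# customValues.yaml - overrides applied on top of the chart defaults
manager:
  ha: true            # run the Platform Manager in high-availability mode
pu:
  resources:
    limits:
      memory: 512Mi   # cap the Processing Unit container at 512 MiB
```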

Overriding the Processing Unit Properties

It is recommended to define the Processing Unit properties in the pu.xml as placeholders (as described in the Processing Unit Deployment Properties topic), so you can override these properties using the Helm chart.

After defining the properties as placeholders, use the key1=value1;key2=value2 format to pass the override values to the Helm chart, using either --set <your key-value pairs> or a custom YAML file.
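As a sketch of this pattern, a pu.xml might declare placeholders that are then overridden at install time. The property names (space.name, batch.size) and the pu.properties chart key are illustrative assumptions; check your chart's value list with helm inspect:

```xml
<!-- pu.xml fragment: values declared as ${...} placeholders (names are illustrative) -->
<props>
    <prop key="space.name">${space.name}</prop>
    <prop key="batch.size">${batch.size}</prop>
</props>
```

The overrides are then passed in the key1=value1;key2=value2 format described above:

```shell
# pu.properties is an assumed chart key for this sketch
helm install insightedge --name hello --set pu.properties="space.name=mySpace;batch.size=100"
```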

Monitoring the Data Grid

While Kubernetes provides a number of ways to monitor the Pods and services, you can use the GigaSpaces administration tools to monitor the data grid (Spaces and Processing Units).

REST Manager API

You can open the GigaSpaces REST Manager API and verify that your data grid was set up properly. You can access it from the minikube on your local machine or VM.

To get the IP address of your minikube, type the minikube ip command in the command window. Then type the following URL (using the minikube IP address) in your browser to access the REST Manager API:

http://<minikube ip>:8090
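For a quick check from the command line, you can combine the minikube ip command with a REST call. The /v2/spaces endpoint is one of the REST Manager API endpoints; see the Administration Tools section for the full endpoint list:

```shell
# Query the REST Manager API for the list of deployed Spaces
curl http://$(minikube ip):8090/v2/spaces
```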

For information on how to use the REST Manager API, see the Administration Tools section of the documentation.

GigaSpaces Command Line Interface

You can use the GigaSpaces CLI to monitor and administer the data grid.

To access the CLI, click the EXEC button in the Kubernetes dashboard to open a shell into the Management Pod. Next, navigate to the /opt/gigaspaces/bin directory and start the GigaSpaces CLI (insightedge or xap).

At this point, you can use the CLI commands to monitor the data grid, making sure to set the --server option to the Manager Headless Service name.

To view a list of Spaces, type the following command:

./insightedge --server=test-space-xap-manager-hs space list

To view the Data Type statistics, type the following command:

./insightedge --server=test-space-xap-manager-hs space info --type-stats test-space

For more information about the GigaSpaces CLI and available commands, see the Administration Tools section of the documentation.

Advanced Monitoring Using Kubernetes Tools

You can monitor the status of the various Kubernetes components using the Kubernetes dashboard or kubectl, as described in the Monitoring the Kubernetes Cluster section.

The test-space-xap-manager-hs is one of the Kubernetes services. To list all of the Kubernetes services and exposed ports, type the following command:

kubectl get services

For more information on using the Kubernetes monitoring tools, refer to the Kubernetes documentation.


If the Kubernetes environment doesn't launch properly, you can investigate by checking the init container logs. An init container always runs before a GigaSpaces Pod is started. After the init container runs to completion, Kubernetes deploys the actual Pod (such as a Management Pod or Data Pod). For example, when you deploy a Space, an init container runs first to verify that the Platform Manager is available, and then the Data Pod with the Space is created.

You can access this log in the Kubernetes dashboard, or run the following kubectl command to print the init container log in the command window:

kubectl logs test-xap-space-1-0 -c check-manager-ready