In most cases, the applications that use GigaSpaces run on machines with very fast CPUs, where the amount of temporary objects created is more than the JVM garbage collector can handle with its default settings. This means careful tuning of the JVM is very important to ensure stable and flawless behavior of the application.
The following GigaSpaces processes may run on a virtual or a physical machine:
GSA (Grid Service Agent) - A process manager that spawns and manages Service Grid processes (operating system-level processes) such as the Grid Service Manager, the Grid Service Container, and the Lookup Service. The GSA is typically started with the hosting machine's startup; using the agent, you can easily bootstrap the entire cluster, and start and stop additional GSCs, GSMs, and lookup services at will. It is a very lightweight process in terms of memory and CPU usage, and does not require any tuning. You should have one per machine, or in some cases one per zone.
GSC (Grid Service Container) - The runtime environment. It provides an isolated runtime for one or more processing unit (PU) instances and exposes their state to the GSM; this is where the data grid and the deployed processing units run. This process requires tuning to address the required memory capacity. The number of GSCs should not exceed the total number of cores divided by 4. With a virtual machine setup, you should have one GSC per VM.
GSM (Grid Service Manager) - A service grid component that manages a set of Grid Service Containers (GSCs). The GSM has an API for deploying/undeploying Processing Units. When instructed to deploy a Processing Unit, it finds an appropriate, available GSC and tells that GSC to run an instance of that Processing Unit, then continuously monitors that instance to verify that it is alive and that the SLA is not breached. It is a lightweight process and does not require any tuning unless you have a very large cluster (over 100 nodes). You should have two of these per data grid.
LUS (Lookup Service) - Provides a mechanism for services to discover each other. Each service can query the lookup service for other services, and register itself in the lookup service so other services may find it. It is a lightweight process and does not require any tuning unless you have a very large cluster (over 100 nodes). You should have two of these per data grid.
GigaSpaces VM Memory size = Guest OS Memory + JVM Memory for all GSCs + JVM Memory for GSM + JVM Memory for LUS
JVM Memory for a GSC = JVM Max Heap (-Xmx value) + JVM Perm Size (-XX:MaxPermSize) + NumberOfConcurrentThreads * (-Xss) + "extra memory"
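The GSC formula above is simple arithmetic; the sketch below works it through with hypothetical figures (the heap, PermGen, thread, and "extra memory" values are placeholders, not recommendations - substitute your own measurements):

```java
public class GscMemoryEstimate {

    // JVM Memory for a GSC = -Xmx + -XX:MaxPermSize
    //   + NumberOfConcurrentThreads * -Xss + "extra memory".
    // All values are in megabytes.
    static long gscMemoryMb(long xmxMb, long maxPermSizeMb,
                            int threads, double xssMb, long extraMb) {
        return Math.round(xmxMb + maxPermSizeMb + threads * xssMb + extraMb);
    }

    public static void main(String[] args) {
        // Example only: -Xmx8g, -XX:MaxPermSize=256m,
        // 200 threads at -Xss512k, 300 MB of "extra memory".
        long totalMb = gscMemoryMb(8192, 256, 200, 0.5, 300);
        System.out.println("Estimated GSC footprint: " + totalMb + " MB");
    }
}
```

The same per-GSC total then feeds the machine-level formula: sum it over all GSCs and add the GSM, LUS, and guest OS memory.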
It may be necessary to calculate the Space object footprint. For instructions on how to do this, refer to Capacity Planning.
A Compound Index can be used with AND queries to speed up the query execution time. This approach combines multiple fields into a single index. Using a Compound Index avoids having multiple indexes on multiple fields, which in turn can reduce the index footprint.
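In GigaSpaces, a compound index is declared on the space class; the sketch below shows only the underlying idea in plain Java - one map keyed by the combined field values answers an AND query with a single lookup, instead of intersecting two single-field indexes (names here are illustrative, not the GigaSpaces API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CompoundIndexSketch {

    // One index entry per (field1, field2) pair; a single hash lookup
    // answers a "field1 = ? AND field2 = ?" query.
    static final Map<List<Object>, String> index = new HashMap<>();

    static void put(Object field1, Object field2, String objectId) {
        index.put(List.of(field1, field2), objectId);
    }

    static String queryAnd(Object field1, Object field2) {
        return index.get(List.of(field1, field2));
    }

    public static void main(String[] args) {
        put("GOLD", "NY", "order-17");
        // The AND query resolves with one lookup, and only one
        // index structure is kept instead of two.
        System.out.println(queryAnd("GOLD", "NY"));
    }
}
```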
-XX:+UseCompressedOops allows a 64-bit JVM with a heap size of up to 32GB to use 32-bit reference addresses. This can reduce the overall footprint by 20-40%.
Compressed Storage mode can be used to reduce the footprint of non-primitive fields when stored within the Space. This option compresses the data on the client side; the data stays compressed in the Space, and is decompressed when it is read back on the client. This approach may affect performance.
This option is not available for XAP.NET.
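The compress-on-write/decompress-on-read round trip that this mode implies can be illustrated with plain java.util.zip; this is illustrative only, not the GigaSpaces implementation:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class FieldCompressionSketch {

    // Compress a non-primitive field's serialized bytes on the client side.
    static byte[] compress(byte[] raw) {
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress when the field is read back on the client.
    static byte[] decompress(byte[] compressed) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] payload = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".getBytes();
        byte[] packed = compress(payload);
        // Repetitive data compresses well; the Space would store only
        // the packed bytes, trading CPU on the client for memory.
        System.out.println(payload.length + " -> " + packed.length + " bytes");
    }
}
```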
The default Space data source initial load behavior loads all Space class data into each partition, and later filters out irrelevant objects. This activity may generate a large amount of garbage to be collected. You can use a SQL MOD query to fetch only the relevant data items into each partition, which speeds up the initial load time and drastically reduces the amount of garbage generated during this process.
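The idea is that with N partitions, the 1-based partition P loads only rows satisfying MOD(routingValue, N) = P - 1, so each partition pulls its own slice from the database instead of everything. A minimal sketch (the method names are illustrative, not the GigaSpaces data source API):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class InitialLoadModSketch {

    // Build the per-partition SQL filter: partition P of N loads only
    // rows whose routing value satisfies MOD(value, N) = P - 1.
    static String initialLoadQuery(String routingColumn,
                                   int partitionCount, int partitionId) {
        return "MOD(" + routingColumn + "," + partitionCount + ") = "
                + (partitionId - 1);
    }

    // The same filter applied in-process, to show which ids partition P gets.
    static List<Long> idsForPartition(List<Long> ids,
                                      int partitionCount, int partitionId) {
        return ids.stream()
                  .filter(id -> id % partitionCount == partitionId - 1)
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> ids = LongStream.rangeClosed(1, 10).boxed()
                                   .collect(Collectors.toList());
        // Partition 1 of 4 loads only ids 4 and 8 (MOD(id,4) = 0);
        // the other partitions never see those rows at all.
        System.out.println(initialLoadQuery("id", 4, 1));
        System.out.println(idsForPartition(ids, 4, 1));
    }
}
```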
The amount of redo log data depends on the following:
Amount of in-flight activity
Primary-backup connectivity (a long disconnection means a lot of redo log data held in memory).
The redo log swaps over to the hard disk at some point, so it is recommended to place it on an SSD drive. Do not use a regular hard drive to store redo log data. The redo log data footprint is similar to the actual raw data footprint, without indexes.
This section provides examples of the JVM settings that are recommended for applications that generate a large number of temporary objects. In such situations, you cannot afford long pauses due to garbage collection activity. These settings are appropriate when you are running an IMDG (In-Memory Data Grid - a set of Space instances, typically running within their respective processing unit instances and connected to each other to form a space cluster), or when the business logic and the data grid are co-located; for example, a data grid with co-located polling/notify containers, task executors, or service remoting.
The following JVM settings are for G1 mode, and are useful for low-latency scenarios:
-server -Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:InitiatingHeapOccupancyPercent=50 -XX:+UseCompressedOops
If your JVM throws an OutOfMemoryError, the JVM process should be restarted. Add the relevant property to your JVM settings:
Sun JVM: -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError="kill -9 %p"
JRockit: -XXexitOnOutOfMemory
For information on how to configure your environment for Java 11 or higher, refer to the page about Java 11 Guidelines in the Release Notes.
This setting controls the size of the heap allocated for young generation objects (all the objects that have a short lifetime). Young generation objects are in a specific location in the heap, where the garbage collector passes frequently. All new objects are created in the young generation region (called "eden"). When an object is still alive after more than 2-3 GC cleaning cycles, it is moved to the "old generation" region; these objects are called "survivors". A recommended value for -Xmn is 33% of the -Xmx value.
In many cases, the thread stack size needs to be tuned because the default size is too high. In Java SE 6, the default thread stack size on Sparc is 512k for 32-bit VMs, and 1024k for 64-bit VMs. On x86 Solaris/Linux, the thread stack size is 320k for 32-bit VMs and 1024k for 64-bit VMs.
On Microsoft Windows OS, the default thread stack size is read from the binary (java.exe). As of Java SE 6, this value is 320k for 32-bit VMs and 1024k for 64-bit VMs. You can reduce your thread stack size by running with the -Xss option. For example:
java -server -Xss384k
In some versions of Microsoft Windows, the OS may round up thread stack sizes using very coarse granularity. If the requested size is less than the default size by 1K or more, the stack size is rounded up to the default; otherwise, the stack size is rounded up to a multiple of 1 MB. 64K is the least amount of stack space allowed per thread.
Extra memory is the memory required for NIO direct memory buffers, JIT code cache, classloaders, Socket Buffers (receive/send), JNI, and GC internal info. Direct memory buffer usage for Socket Buffer utilization on the GSC side:
com.gs.transport_protocol.lrmi.maxBufferSize X com.gs.transport_protocol.lrmi.max-threads
For example, with the default maxBufferSize (64k) and 100 threads:
64KB X 100 = 6400KB = 6.4MB
With large objects and batch operations (readMultiple, writeMultiple, Space Iterator), increasing the maxBufferSize may improve system performance.
This JVM option specifies the maximum total size of java.nio (New I/O package) direct buffer allocations. It is used with network data transfer and serialization activity.
The default value for direct memory buffers depends on the version of your JVM. Oracle HotSpot has a default equal to the maximum heap size (the -Xmx value), although some early versions may default to a specific fixed value. To control this memory area, use the -XX:MaxDirectMemorySize property. See the following example:
java -XX:MaxDirectMemorySize=2g myApp
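Direct buffers are allocated outside the Java heap, so they count against -XX:MaxDirectMemorySize rather than -Xmx; when the limit is exhausted, allocation fails with an OutOfMemoryError even if the heap itself has room. A minimal demonstration:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // One 64k off-heap buffer, the kind LRMI uses for socket I/O.
        // Its capacity is charged to the direct memory pool
        // (-XX:MaxDirectMemorySize), not to the -Xmx heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
        System.out.println("direct=" + buf.isDirect()
                + " capacity=" + buf.capacity());
    }
}
```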
It is highly recommended to use the latest JDK release when using these options.
To capture detailed information about garbage collection and how it is performing, add the following parameters to the JVM settings:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/path/to/log/directory/gc-log-file.log
Modify the path and file names appropriately. You must use a different file name for each invocation, so that multiple processes do not overwrite each other's log files. Adding %p to the log file name, for example gc-log-file_%p.log, tells the JVM to generate an individual log file per process ID (PID).
In order to provide the highest level of performance, GigaSpaces takes advantage of features in the Java language that allow effective caching in the face of memory demands. In particular, the SoftReference class is used to store data until there is a need for explicit garbage collection, at which point the data stored in soft references is collected if possible. The -XX:SoftRefLRUPolicyMSPerMB parameter lets you determine how much data is cached, by controlling how long softly reachable objects endure. The system default is 1000, which represents the amount of time (in milliseconds, per megabyte of free heap) that such objects survive past their last reference. It is recommended to set this value to 500 in active, dynamic systems:
-XX:SoftRefLRUPolicyMSPerMB=500
This means that softly reachable objects will remain alive for 500 milliseconds (per megabyte of free heap) after the last time they were referenced.
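The behavior can be seen with the SoftReference class directly. While a strong reference to the object exists, get() always returns it; once only the soft reference remains, the JVM is free to clear it after the grace period that SoftRefLRUPolicyMSPerMB controls:

```java
import java.lang.ref.SoftReference;

public class SoftRefDemo {
    public static void main(String[] args) {
        byte[] payload = new byte[1024];
        SoftReference<byte[]> cacheEntry = new SoftReference<>(payload);

        // While the object is strongly reachable, get() returns it.
        System.out.println("cached: " + (cacheEntry.get() != null));

        payload = null; // drop the strong reference
        // The entry is now only softly reachable. Under memory pressure,
        // and after the SoftRefLRUPolicyMSPerMB grace period, the GC
        // may clear it; get() would then return null.
        System.out.println("still cached: " + (cacheEntry.get() != null));
    }
}
```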