MemoryXtend for PMEM

Overview

Persistent Memory (PMEM) is a new class of memory that combines high capacity and affordability. By expanding affordable system memory capacity (to more than 3TB per CPU socket), PMEM-enabled systems let customers better optimize their workloads: larger amounts of data are moved and maintained closer to the processor, minimizing the higher latency of fetching data from system storage.

The Persistent Memory Development Kit, PMDK, is a collection of libraries that have been developed for various use cases, tuned, validated to production quality, and thoroughly documented. These libraries build on the Direct Access (DAX) feature available in both Linux and Windows, which allows applications direct load/store access to persistent memory by memory-mapping files on a persistent-memory-aware file system. The PMDK also includes a collection of tools, examples, and tutorials on persistent memory programming.

The PMEM storage driver does not currently support Windows.

PMDK is vendor-neutral; it was started by Intel, motivated by the introduction of Optane DC persistent memory. It is open source and works with any persistent memory that supports the SNIA NVM Programming Model.

GigaSpaces has developed a PMEM driver, currently in beta, that works with Intel's Optane DC PMEM array. The MemoryXtend PMEM storage driver stores Space objects in PMEM, outside the Java heap.

Intel's Optane DC is not yet available in the market, so to use the GigaSpaces PMEM driver you need to emulate PMEM on your machine. For information on how to do this, see the section on how to emulate Persistent Memory on Intel's Persistent Memory Programming project website.

If you would like early access to Optane DC hardware for evaluation purposes, contact us via the GigaSpaces website to request a demo.

Prerequisites

Hardware Mode

Intel's Optane DC PMEM array supports two modes, Memory mode and App Direct mode. The MemoryXtend PMEM driver uses App Direct mode; ensure that the PMEM array is set to App Direct mode, otherwise the driver will not work.

For more information about the different PMEM modes, see this Intel blog.

Direct Access (DAX)

In order for the MemoryXtend PMEM driver to work properly, you must mount the PMEM array using a DAX-enabled file system.

For more information about DAX, see this explanation of Direct Access for files.
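As a sketch of what a DAX-enabled mount might look like (assuming the PMEM device, real or emulated, appears as /dev/pmem0, and using ext4 as one of the DAX-capable file systems):

```shell
# Create a file system on the PMEM device (ext4 and xfs support DAX)
mkfs.ext4 /dev/pmem0

# Mount it with the dax option so memory-mapped files get direct
# load/store access, bypassing the page cache
mkdir -p /mnt/pmem0
mount -o dax /dev/pmem0 /mnt/pmem0

# Verify that the mount carries the dax option
mount | grep /mnt/pmem0
```

The /mnt/pmem0 mount point matches the file path used in the configuration examples below.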


Basic Configuration

The PMEM storage driver does not currently support persistence mode.

You can create a Space that utilizes the MemoryXtend PMEM storage driver via the pu.xml configuration file, or in code. For example, to create a Space called "mySpace" with a 20GB PMEM memory pool allocated at /mnt/pmem0/pu-local-storage-pool.txt (the path in the file name must point to a file on the DAX file system):

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:os-core="http://www.openspaces.org/schema/core"
       xmlns:blob-store="http://www.openspaces.org/schema/pmem-blob-store"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.3.xsd
       http://www.openspaces.org/schema/core http://www.openspaces.org/schema/14.2/core/openspaces-core.xsd
       http://www.openspaces.org/schema/pmem-blob-store http://www.openspaces.org/schema/14.2/pmem-blob-store/openspaces-pmem-blob-store.xsd">


    <blob-store:pmem-blob-store id="myBlobStore" memory-pool-size="20GB" file-name="/mnt/pmem0/pu-local-storage-pool.txt"/>

    <os-core:embedded-space id="space" name="mySpace">
        <!-- cache-entries-percentage controls how much data is also cached on-heap for faster access -->
        <os-core:blob-store-data-policy blob-store-handler="myBlobStore"
                                        cache-entries-percentage="20"
                                        avg-object-size-KB="10"
                                        persistent="false"/>
    </os-core:embedded-space>
</beans>
The equivalent Space can be created in code:

// Create off-heap storage driver:
BlobStoreStorageHandler blobStore = new PmemBlobStoreConfigurer()
        .setMemoryPoolSize("20GB")
        .setFileName("/mnt/pmem0/pu-local-storage-pool.txt")
        .setVerbose(false)
        .create();
// Create space with that storage driver:
String spaceName = "mySpace";
EmbeddedSpaceConfigurer spaceConfigurer = new EmbeddedSpaceConfigurer(spaceName)
        .cachePolicy(new BlobStoreDataCachePolicy()
                .setBlobStoreHandler(blobStore)
                .setPersistent(false));
GigaSpace gigaSpace = new GigaSpaceConfigurer(spaceConfigurer).gigaSpace();

The general MemoryXtend configuration options also apply. For example, you can configure MemoryXtend to cache some data on-heap for faster access.

For an example of how to configure the on-heap cache properties, see the MemoryXtend overview topic.

Defining the Memory Pool Size

In order to use PMEM storage, you must define the amount of memory to allocate, for example 20g. Use the following sizing units:

  • b - Bytes
  • k, kb - Kilobytes
  • m, mb - Megabytes
  • g, gb - Gigabytes
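To illustrate how these units map to byte counts, here is a small parser; it is illustrative only (the class and method names are hypothetical, not part of the GigaSpaces API):

```java
// Illustrative only: converts size strings such as "20g" or "512mb" to bytes,
// mirroring the unit table above. Not a GigaSpaces API.
public class SizeParser {
    public static long toBytes(String size) {
        String s = size.trim().toLowerCase();
        int i = 0;
        while (i < s.length() && Character.isDigit(s.charAt(i))) i++;
        long value = Long.parseLong(s.substring(0, i));
        String unit = s.substring(i);
        switch (unit) {
            case "b":            return value;
            case "k": case "kb": return value * 1024L;
            case "m": case "mb": return value * 1024L * 1024L;
            case "g": case "gb": return value * 1024L * 1024L * 1024L;
            default: throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }

    public static void main(String[] args) {
        System.out.println(toBytes("20g")); // 21474836480
    }
}
```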

Before any operation that requires memory allocation (write, update, and initial load), the memory manager checks how much of the allocated memory has been used. If the threshold has been breached, an OffHeapMemoryShortageException is thrown. Read, take, and clear operations are always allowed.
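The threshold behavior described above can be sketched as follows. This is a self-contained simulation of the memory manager's logic, assuming simple byte-array payloads; the class and method names are hypothetical, and only the exception name matches the real driver:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the off-heap memory manager's threshold check:
// allocating operations (write/update) are checked against the pool size,
// while read and take are always allowed.
public class MemoryPoolSketch {
    static class OffHeapMemoryShortageException extends RuntimeException {
        OffHeapMemoryShortageException(String msg) { super(msg); }
    }

    private final long poolSizeBytes;
    private long usedBytes = 0;
    private final Map<String, byte[]> store = new HashMap<>();

    MemoryPoolSketch(long poolSizeBytes) { this.poolSizeBytes = poolSizeBytes; }

    // write/update: checks used memory before allocating
    void write(String id, byte[] payload) {
        if (usedBytes + payload.length > poolSizeBytes)
            throw new OffHeapMemoryShortageException(
                "Memory pool exhausted: " + usedBytes + "/" + poolSizeBytes + " bytes used");
        byte[] old = store.put(id, payload);
        usedBytes += payload.length - (old == null ? 0 : old.length);
    }

    // read: always allowed, regardless of memory usage
    byte[] read(String id) { return store.get(id); }

    // take (remove): always allowed, and frees memory
    byte[] take(String id) {
        byte[] removed = store.remove(id);
        if (removed != null) usedBytes -= removed.length;
        return removed;
    }
}
```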

Monitoring

The amount of used memory can be tracked with the following monitoring and administration tools:

  • Metrics - The space_blobstore_off-heap_used-bytes_total metric, as described on the Metrics page.
  • Admin API - Through SpaceInstanceStatistics.getBlobStoreStatistics()
  • Web Management Console - In the Space instances view, right-click any of the columns in the table and add the Used Off-Heap column.

The data grid views the PMEM storage as off-heap storage. Therefore, the monitoring tools are the same for both the PMEM and the off-heap drivers.