SpaceDeck – Data Pipeline – Create New Pipeline

Data Pipelines allow a convenient, no-code method to pipe data from the System of Record to the GigaSpaces in-memory data grid.

A new data pipeline definition will include the definitions of the System of Record databases, tables and fields that will provide data to the pipeline. The definition also indicates the in-memory Space that will receive the pipeline data.

Additional information includes optional validation rules and automatic conversion of specified field definitions.

Display the Configuration screen

From the Data Pipeline main screen, click Create New + to begin defining your first pipeline.

 

Pipeline Configuration Screen

The Pipeline Configuration screen is displayed:

The fields outlined in red are mandatory fields.

Basic Pipeline Information

You can fill in some or all of the pipeline configuration items (shown below) from a JSON-format configuration file by clicking the Load Configuration button at the top right of the screen.
The configuration file may contain some or all of the required details. After the configuration details are loaded from the file, they can be edited if desired before saving.

  • Pipeline Name – The name assigned to the pipeline.

The pipeline name is not case sensitive and has no naming restrictions other than a maximum length of 63 characters.

  • Space Name – The name of the GigaSpaces Space object that will receive the pipeline data. This is a mandatory field.

  • Connector Setting:

    • Data Source Connection – The data source from the System of Record. This is a URL and points to a database such as Oracle. This is a mandatory field.

It is possible to create a new Data Source connection from here, if one has not already been created via the Data Sources menu. Follow the instructions in SpaceDeck – Data Sources.
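As a minimal illustration of the kind of file the Load Configuration button accepts, a configuration might look like the sketch below. The field names and structure here are hypothetical (your product version may use a different schema), and the Oracle connection URL is an example placeholder:

```json
{
  "name": "demo-pipeline",
  "spaceName": "demoSpace",
  "connector": {
    "dataSourceConnection": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"
  }
}
```

Any values present in the file pre-populate the corresponding fields on the Pipeline Configuration screen and can still be edited before saving.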

Start the Pipeline

Once the pipeline configuration has been completed, click Create Pipeline to move to the next stage of the configuration: including tables.

Select Schema and Tables for the Pipeline

Once the pipeline has been created (the Create Pipeline button turns gray), click Add / Remove tables to select which schema and tables to include in the pipeline.

This step has to be completed in order to have a fully functioning pipeline and be able to perform data queries or add services.

Initially, a pop-out containing a list of available schemas is presented. Click the schema containing the tables that you want to add.

Now you will have the option of selecting the tables that you want to import. Once selected, click Update; the number of tables selected is reflected (in our example it is 3):

Once Update is clicked, the update takes a few seconds to complete.

Editing the Tables

Editing the tables can only be performed during the initial setup and configuration process; to edit them again, the pipeline has to be deleted and configured from scratch. However, tables can be added and removed while the pipeline is running.

From the Included Tables area of the screen, click Edit.

1. Return to Data Pipelines screen

2. Go to previous Space Type entry

3. Go to next Space Type entry

4. The Indexes area contains the following fields and options:

  • Index Name – Supply an index name

  • Properties – Select the columns to be part of this index

5. The Space ID area contains the following fields and options:

  • Position – In the case of a compound key (multiple columns in the same key), the position of each column in the key needs to be identified (first column, second column, etc.). For a single column, it should always be 1 (default).

  • Primary Key - Serves as a unique identifier in a table (space type)
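To illustrate how the Position setting orders the columns of a compound key, the sketch below (hypothetical, not the SpaceDeck API; the column names are invented) assembles a key tuple from a row according to each column's assigned position:

```python
# Illustrative sketch: build a compound Space ID from columns by Position.
# COMPANY_ID and MEAL_ID are hypothetical column names.
space_id_config = [
    {"column": "COMPANY_ID", "position": 1, "primary_key": True},
    {"column": "MEAL_ID", "position": 2, "primary_key": True},
]

def compound_key(row: dict) -> tuple:
    """Order the primary-key columns by Position and read their values."""
    ordered = sorted(
        (c for c in space_id_config if c["primary_key"]),
        key=lambda c: c["position"],
    )
    return tuple(row[c["column"]] for c in ordered)

# COMPANY_ID (position 1) comes first, regardless of row ordering.
print(compound_key({"MEAL_ID": 7, "COMPANY_ID": 42}))  # (42, 7)
```

This is why, with a single-column key, Position is always 1: there is only one column to place.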

6. The Routing ID area contains the following fields and options. (Routing is the mechanism in charge of routing objects into and out of the corresponding partitions; it is based on a designated attribute inside the objects that are written to the Space, called the Routing Index.)

  • Routing ID – Select a column in order to change the routing key.

It is possible to change the name of the Space Type to be different from the table name.

Tiered storage will be part of a future release.

Add/Remove Table

It is possible to update tables, by either adding or removing them, once the pipeline has been started. As mentioned above, editing can only be performed during the initial setup and configuration process; to edit the tables again, the pipeline has to be deleted and configured from scratch.

In the Fields tab of this screen, the field (column) names are initially the names of the fields from the database table that are included in the data pipeline. These can be edited to provide different property names (column names) in the GigaSpaces object type (table), or to include or exclude fields.
Other fields in this screen will be editable in a future release.

The Fields tab contains the following fields and options:

  • Field ID – Search for a field by field ID.

  • Field (column) names – Initially, the names of the fields from the database table that are included in the data pipeline. These can be edited to provide different property names (column names) in the GigaSpaces object type (table).
    In this example, the fields are named MEAL_ID, COMPANY_ID, etc.

If you click one of the field names, the Pipeline fields section appears, which allows you to edit the field characteristics:

  • Name – Name of the field

  • Type - The type of class used for the field, e.g., BigDecimal.

  • Included Toggle - Includes or excludes a table field from the space type.
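Conceptually, these per-field settings act as a mapping from database columns to space-type properties. The sketch below is purely illustrative (not the SpaceDeck API): it shows how renames and the Included toggle project a database row onto a space-type object, using the example MEAL_ID and COMPANY_ID columns plus an invented excluded column:

```python
# Illustrative sketch: apply field renames and include/exclude toggles
# when projecting a database row onto a space-type object.
field_mapping = {
    # db column        -> (space property name, included)
    "MEAL_ID":         ("mealId", True),
    "COMPANY_ID":      ("companyId", True),
    "INTERNAL_NOTES":  ("internalNotes", False),  # toggled off: excluded
}

def project_row(row: dict) -> dict:
    """Rename included columns and drop excluded ones."""
    return {
        new_name: row[col]
        for col, (new_name, included) in field_mapping.items()
        if included and col in row
    }

row = {"MEAL_ID": 7, "COMPANY_ID": 42, "INTERNAL_NOTES": "n/a"}
print(project_row(row))  # {'mealId': 7, 'companyId': 42}
```

The excluded column simply never appears in the space type; it is not written to the grid.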

Once the setup has been completed, return to the main Data Pipelines screen and click Save.

In order to be able to edit the tables once they have been saved, they have to be removed and added again.

Start the Pipeline

Once the pipeline has been saved, you are returned to the Data Pipeline Status screen. The status will be Inactive.

In order to start the Pipeline, click Start Pipeline from the kebab menu (vertical three-dot menu) on the far right:

Alternatively, select the Pipeline name and, from the very top of the screen, click Start Pipeline from the kebab menu (vertical three-dot menu) on the far right:

 

A Parameters to apply pop-up will now be displayed:

Point in time options:

  • EARLIEST – Start the pipeline from the beginning: process whatever is available for consumption in the queue, starting from the earliest message.

  • LATEST – Start the pipeline from now: consume and process data from the current message onward, regardless of the last stored checkpoint.

  • COMMITTED – Start the pipeline from the last successfully processed message (a kind of checkpoint).
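The three options can be pictured as choosing a starting offset in the message queue. The sketch below is a simplified illustration of that choice (not the SpaceDeck or connector API; the function and its parameters are invented for the example):

```python
# Illustrative sketch: where each "point in time" option resumes consumption.
EARLIEST, LATEST, COMMITTED = "EARLIEST", "LATEST", "COMMITTED"

def start_offset(mode: str, queue_length: int, committed_offset: int) -> int:
    """Return the index of the first message to process."""
    if mode == EARLIEST:
        return 0                 # replay everything still in the queue
    if mode == LATEST:
        return queue_length      # only messages arriving from now on
    if mode == COMMITTED:
        return committed_offset  # resume after the last checkpoint
    raise ValueError(f"unknown mode: {mode}")

# Queue holds 100 messages; the checkpoint says 40 were already processed.
print(start_offset(EARLIEST, 100, 40))   # 0
print(start_offset(LATEST, 100, 40))     # 100
print(start_offset(COMMITTED, 100, 40))  # 40
```

In practice, COMMITTED avoids both reprocessing old messages (EARLIEST) and skipping unprocessed ones (LATEST).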

Once the pipeline has been started, it cannot be edited. To make changes, it is necessary to delete the pipeline (from the main Data Pipeline screen) and build it again.

After you have added the tables and saved the pipeline, press Start to start the pipeline.

The pipeline will show as Started in the Data Pipeline Status screen and Active in the Spaces screen.

Once the pipeline has been started, the Status will be displayed as Running:

Deleting a Pipeline

To delete a pipeline that is in Running status, first stop it: from the main screen, click the kebab menu (vertical three-dot menu) on the far right, then click Stop Pipeline:

Then, once the Pipeline is in Inactive status, click Delete Pipeline.

 

Alternatively, click the Pipeline name and, at the top of the next screen, click the kebab menu (vertical three-dot menu) on the far right, then click Delete Pipeline:

If you have not started the Pipeline yet (status = Inactive), you will have the option to Start or Delete the Pipeline:

If you decide to delete a pipeline, the following confirmation pop-up screen will be displayed before the action can be completed:

Once Approve has been clicked, the pipeline is deleted immediately and cannot be restored.