Documentation


About Lynexus

Lynexus is a web-based ETL and integration tool, built as a Ruby web application, for transferring and integrating data from one system to another. It serves as middleware that lets systems communicate and exchange information without extensive coding or advanced technical skills.




Core Features

Lynexus has several core features:

Lynexus supports direct database-to-database connections for MySQL and SQL Server. This means you can extract, transform, map, and load your data from one source to another without the need to code internally.

Lynexus also allows users to extract and load data from files through SFTP or FTP servers as well as AWS S3 buckets, which allows for more flexible integrations. Supported file formats include CSV, TSV, JSON, XML, and DWN (JDA).

Data can also be extracted and loaded through API requests. This allows systems that expose APIs to integrate with other systems or databases directly. Authentication and authorization are supported as security measures.

Lynexus provides a wide variety of building blocks to extract, transform, and load your data through various means such as files, API requests, and database records. For more information, go to the Process Mapping section.

Lynexus provides an internal cron-based scheduler. This allows users to run tasks automatically on a designated schedule without supervision. When the tasks are done, email alerts with log details are sent to the users.

Logs are created whenever a task completes to ensure proper documentation of the integration. All logs are recorded and kept to provide backtraces whenever the user needs them.



System Requirements

Linux
                    Minimal Specifications        Optimal Specifications
Operating System    Ubuntu 16 or higher           Ubuntu 18.04
Processor           2 GHz dual-core processor     2.4 GHz octa-core processor
System Memory       2 GB RAM                      8 GB RAM
Hard Drive          30 GB                         100 GB
Internet Access     Required                      Required
Maximum records     30,000 ~ 50,000               150,000 ~ 400,000

macOS
                    Minimal Specifications        Optimal Specifications
Operating System    macOS 10.12 Sierra (Fuji)     macOS 10.14 Mojave (Liberty)
Processor           1.5 GHz Intel Core i3         1.8 GHz Intel Core i5
System Memory       4 GB RAM                      8 GB RAM
Hard Drive          30 GB                         100 GB
Internet Access     Required                      Required
Maximum records     50,000 ~ 100,000              150,000 ~ 400,000





Getting Started

To get started, below are the concepts and sections you'll need to familiarize yourself with and configure before using Lynexus in your integrations.




Connection Servers

Lynexus cannot function without its server connections. These include database servers, SFTP and FTP services, and AWS S3 bucket connections. You need to define and configure these servers depending on the scope of your integration.

To create a new server connection, go to Servers then click the New server button. You will need the following credentials when creating a server:


Database servers
  Server clause     File and databases
  Server type       MySQL or SQL Server
  Server hostname   IP or server name of the database instance
  Server port       Port of the database
  Server username   Username credentials
  Server password   Password for the username

FTP/SFTP servers
  Server clause     File and databases
  Server type       FTP or SFTP Server
  Server hostname   IP of the FTP/SFTP service
  Server port       Port of the FTP/SFTP service
  Server username   Username credentials
  Server password   Password for the username

AWS S3 buckets
  Server clause             AWS S3 Bucket
  Server type               AWS S3 Server
  AWS S3 Bucket Region      Region of the S3 bucket
  AWS Access key ID         Provided by AWS when you create your S3 bucket
  AWS Secret access key     Provided by AWS when you create your S3 bucket
  AWS Credentials           Provided by AWS when you create your S3 bucket
  AWS Credentials provider  Provided by AWS when you create your S3 bucket



Worker Queues

Lynexus includes worker queues that process new jobs as they are pushed onto the queue. You can create and assign worker queues to specific scheduled tasks to manage the overall load of your server. Your account has three default worker queues: MW, the Main worker, responsible for scheduled tasks; AW, the API worker, responsible for API requests; and EW, the Email worker, for mailing purposes.

To create a new worker queue, go to Worker Queues, then click the New queue button.




Schedulers and Tasks

Tasks are the main component of your integration. They hold the actual processes that your integration performs, from extracting, to transforming, to loading and importing your data to whichever endpoint is needed, depending on your integration's scope and requirements.

You can create and modify tasks by going to Workers Tasks. For more information, proceed to the Process Mapping section of this documentation.

Schedulers pertain to threads, or lists of tasks, that need to run on a given schedule. Lynexus provides an easy user interface for this. Assuming you have already created your tasks, go to Workers Scheduler. Once there, create a new schedule and add your tasks to that scheduled thread. Provide the necessary information, such as the crontab expression, the return task (for API triggers), and the worker queue where your task will run.

Lynexus uses cron-based scheduling for its tasks. This means you'll need to provide a crontab expression for each scheduler. Lynexus also includes a guide for this, powered by crontab.guru; just click the button.
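For reference, a crontab expression has five fields (minute, hour, day of month, month, day of week). A few common examples:

```
*/15 * * * *    every 15 minutes
0 6 * * *       daily at 06:00
0 6 * * 1-5     at 06:00 on weekdays (Monday to Friday)
0 0 1 * *       at midnight on the first day of each month
```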






Process Mapping

Each task has its own process mapping. This holds the logic that your integration will follow. It consists of data blocks, or simply blocks, that allow you to extract, transform, merge, process, load, and export data throughout your integration. The following sections explain what each block does and how it can be used in your task's process map.




Inbound Blocks

Inbound blocks refer to blocks responsible for extracting data through files or records. This can be done through database connections, FTP/SFTP services, or AWS S3 buckets. Below are the different blocks classified as inbound blocks:


SQL Query Block allows you to execute MySQL or SQL Server queries, stored procedures, scripts, and native commands to pull and extract records from your database servers.
Hover over the form fields for more information.
API Request Block allows you to use the GET method to pull data through any API. It also has an interface for setting parameters, headers, and authentication methods such as Bearer token and Basic auth.
Hover over the form fields for more information.


Import File Block allows you to extract files from FTP and SFTP servers. Supported files include CSV, TSV, JSON, XML, and DWN (for JDA) files. To use this block, make sure you have an existing server for FTP/SFTP, or create a new server connection by going to Servers then clicking the New server button.
Hover over the form fields for more information.
S3 Bucket File Block allows you to extract files from AWS S3 servers. Supported files include CSV, TSV, JSON, XML, and DWN (for JDA) files. To use this block, make sure you have an existing server for AWS S3, or create a new server connection by going to Servers then clicking the New server button.
Hover over the form fields for more information.
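The API Request Block itself is configured entirely through the interface, but conceptually a GET extraction corresponds to a request like the following plain-Ruby sketch. The endpoint, token, and query here are hypothetical, not part of Lynexus:

```ruby
require 'net/http'
require 'uri'

# Hypothetical endpoint and token -- substitute your API's actual values.
uri = URI('https://api.example.com/v1/orders?status=open')

request = Net::HTTP::Get.new(uri)
request['Authorization'] = 'Bearer my-token'   # Bearer auth, as supported by the block
request['Accept']        = 'application/json'

# Sending the request (commented out so the sketch stays self-contained):
# response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
# records  = JSON.parse(response.body)
```

The block's interface fills in the same pieces for you: the URL, the query parameters, the headers, and the authentication method.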



Outbound Blocks

Outbound blocks refer to blocks responsible for pushing data through files or mapping it as database records. This can be done through database connections, FTP/SFTP services, or AWS S3 buckets. Below are the different blocks classified as outbound blocks:


You can use SQL Query Block to insert or update records. This block also provides an interface for mapping out data from other blocks into the database's tables and fields.
Hover over the form fields for more information.
API Request Block allows you to use the POST, PUT, and DELETE methods to push data through any API. It also has an interface for setting parameters, headers, the body, and authentication methods such as Bearer token and Basic auth.
Hover over the form fields for more information.


Export File Block allows you to export files to FTP and SFTP servers. Supported files include CSV, TSV, JSON, XML, and DWN (for JDA) files. You also have the option to disregard the file type for dynamic exporting. To use this block, make sure you have an existing server for FTP/SFTP, or create a new server connection by going to Servers then clicking the New server button.
Hover over the form fields for more information.
S3 Bucket File Block allows you to export files to AWS S3 servers. Supported files include CSV, TSV, JSON, XML, and DWN (for JDA) files. You also have the option to disregard the file type for dynamic exporting. To use this block, make sure you have an existing server for AWS S3, or create a new server connection by going to Servers then clicking the New server button.
Hover over the form fields for more information.
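As an illustration of what a file export produces, the sketch below (plain Ruby, not Lynexus's own code; the record fields are hypothetical) turns flat records into the kind of CSV content an Export File Block writes:

```ruby
require 'csv'

# Hypothetical flat records, as produced by inbound or transform blocks.
records = [
  { "sku" => "A-100", "qty" => 5 },
  { "sku" => "B-200", "qty" => 0 }
]

# Because the records are flat and share the same keys, the keys of the
# first record double as the CSV header row.
headers = records.first.keys
csv = CSV.generate do |out|
  out << headers
  records.each { |r| out << r.values_at(*headers) }
end
# csv now holds "sku,qty\nA-100,5\nB-200,0\n"
```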



Merge Blocks

Merge blocks refer to blocks responsible for joining one or more blocks. There are two kinds of merge blocks, the Full merge block and the Join merge block:


Full merge block appends the contents of each block connected to it. This is ideal for same-structure blocks, meaning blocks with the same keys/fields.
Hover over the form fields for more information.
Join merge block appends the content of one block to another. This is ideal for blocks with different structures or blocks that hold referencing IDs (e.g., users and user types).
Hover over the form fields for more information.
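In plain-Ruby terms (a sketch of the behavior, not the blocks' actual implementation; the record fields are hypothetical), the two merges behave roughly like this:

```ruby
# Full merge: append same-structure record sets end to end.
jan_orders = [{ "order_id" => 1 }]
feb_orders = [{ "order_id" => 2 }]
all_orders = jan_orders + feb_orders

# Join merge: attach fields from one block to another via a referencing ID,
# e.g. users and user types.
users      = [{ "id" => 1, "name" => "Ana", "type_id" => 10 }]
user_types = [{ "type_id" => 10, "type" => "admin" }]

joined = users.map do |u|
  match = user_types.find { |t| t["type_id"] == u["type_id"] } || {}
  u.merge(match)
end
# joined.first carries both "name" => "Ana" and "type" => "admin"
```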



Flatten Blocks

Most, if not all, blocks run on flat JSON data. This is needed so that a uniform set of keys and fields can be used throughout the task. If you need to incorporate structured data into your integrations, you will need to use Flatten blocks first so the data can be used with other blocks.


Flatten JSON block parses and flattens structured JSON file records into readable rows with the same keys and field names. This is done so the succeeding blocks will have a uniform number and set of keys.
Hover over the form fields for more information.
Flatten XML block parses and flattens structured XML file records into readable rows with the same keys and field names. This is done so the succeeding blocks will have a uniform number and set of keys.
Hover over the form fields for more information.
Flatten JDA block parses and flattens DWN(JDA) file records into readable rows with the same keys and field names. This is done so the succeeding blocks will have a uniform number and set of keys.
Hover over the form fields for more information.
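To illustrate what flattening means, here is a plain-Ruby sketch (a hypothetical helper, not the block's actual code): nested keys become dotted paths and array elements become indexed keys, so every row ends up with the same flat set of fields:

```ruby
require 'json'

# Recursively flatten a nested structure into a single-level hash
# with dotted key paths.
def flatten_record(obj, prefix = nil)
  case obj
  when Hash
    obj.each_with_object({}) do |(k, v), out|
      key = prefix ? "#{prefix}.#{k}" : k.to_s
      out.merge!(flatten_record(v, key))
    end
  when Array
    obj.each_with_index.each_with_object({}) do |(v, i), out|
      out.merge!(flatten_record(v, "#{prefix}.#{i}"))
    end
  else
    { prefix => obj }
  end
end

nested = JSON.parse('{"user":{"id":7,"roles":["admin","ops"]}}')
flat   = flatten_record(nested)
# flat == { "user.id" => 7, "user.roles.0" => "admin", "user.roles.1" => "ops" }
```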



Structure Blocks

When exporting data, you can structure your records to conform to whatever format your file or API request needs. You can use the following blocks to build structured data from your flat JSON files.


Structure JSON block creates structured JSON file records from flat JSON files.
Hover over the form fields for more information.
Structure XML block structures XML files from flat JSON records.
Hover over the form fields for more information.
Structure JDA block structures flat JSON files into DWN(JDA) file records.
Hover over the form fields for more information.
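Structuring is the reverse of flattening. A plain-Ruby sketch (a hypothetical helper, not Lynexus's code; array handling omitted for brevity) of rebuilding nested records from dotted keys:

```ruby
# Rebuild a nested hash from flat records whose keys are dotted paths.
def structure_record(flat)
  flat.each_with_object({}) do |(path, value), out|
    keys = path.split('.')
    last = keys.pop
    # Walk (and create) the intermediate hashes, then set the leaf value.
    node = keys.inject(out) { |h, k| h[k] ||= {} }
    node[last] = value
  end
end

structured = structure_record("user.id" => 7, "user.name" => "Ana")
# structured == { "user" => { "id" => 7, "name" => "Ana" } }
```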



Transform Blocks

Transform blocks refer to blocks used for editing, formatting, adding, and removing fields from your records. These blocks deal primarily with Ruby code, so more advanced knowledge may be needed. There are two kinds of transform blocks, the Data transform block and the Filter block:


Avoid using the Data Transform block unless necessary, to lessen the load on your workers. The more blocks you have, the longer the task will take, so for minimal changes to the records it is recommended to transform the data directly in inbound or outbound blocks.
Hover over the form fields for more information.
Filter block relies heavily on your knowledge of Ruby and its functions. This is done to maximize the flexibility and dexterity of the Ruby language.
Hover over the form fields for more information.
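The kind of Ruby these blocks run looks roughly like the following sketch (the record fields are hypothetical): a transform step edits and adds fields, and a filter step keeps only the rows you want to pass on:

```ruby
records = [
  { "sku" => "A-100", "qty" => "5", "price" => "19.99" },
  { "sku" => "B-200", "qty" => "0", "price" => "4.50" }
]

# Transform: cast string fields and add a computed field.
transformed = records.map do |r|
  r.merge("qty"   => r["qty"].to_i,
          "total" => (r["qty"].to_i * r["price"].to_f).round(2))
end

# Filter: keep only the rows that should continue to the next block.
in_stock = transformed.select { |r| r["qty"] > 0 }
```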



Process Blocks

Process blocks are advanced blocks that mainly deal with the flow and technical side of the integrations. There are two kinds of process blocks, the Email block and the Conditional block.


Conditional blocks are useful if your task has more than one flow. You can add multiple conditions in the text area, provided you have also added the endpoints on the block itself.
Hover over the form fields for more information.
Email block allows you to send email notifications, such as alerts with log details, from within your task's flow.
Hover over the form fields for more information.
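A condition is ordinary Ruby that decides which of the block's endpoints a record follows; for example (field and endpoint names are hypothetical):

```ruby
record = { "qty" => "0" }

# Route the record to one of the Conditional block's endpoints
# based on a field value.
endpoint = if record["qty"].to_i.zero?
             "notify_out_of_stock"
           else
             "continue_to_export"
           end
```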
