Lynexus is a web-based Ruby application that serves as an ETL and integration tool for transferring and integrating data from one system to another. It acts as middleware that lets systems communicate and exchange information without extensive coding or advanced technical skills.
Lynexus has several core features:
Lynexus supports direct database-to-database connections for MySQL and SQL Server. This means you can extract, transform, map, and load your data from one source to another without writing code yourself.
Lynexus also allows users to extract and load data from files through SFTP or FTP servers as well as AWS S3 buckets, allowing for more flexible integrations. Supported file formats include CSV, TSV, JSON, XML, and DWN (JDA).
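As a quick illustration of how a delimited file maps onto records, here is how a CSV payload parses into key/value rows in Ruby (the columns and values are hypothetical, not a Lynexus-specific format):

```ruby
require "csv"

# Sample CSV content as it might arrive from an SFTP server or S3 bucket.
csv_data = <<~CSV
  sku,description,qty
  A-100,Widget,25
  B-200,Gadget,40
CSV

# Parse with headers so each row becomes a key/value record,
# similar to the flat records that integration blocks operate on.
records = CSV.parse(csv_data, headers: true).map(&:to_h)

records.first
# => {"sku"=>"A-100", "description"=>"Widget", "qty"=>"25"}
```

Note that all CSV values arrive as strings; converting `qty` to an integer would be a job for a transform step.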
Data can also be extracted and loaded through API requests. This allows systems that use API protocols to integrate with other systems or databases directly. Authentication and authorization are included as security measures.
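For illustration, a minimal sketch of building an authenticated API request in Ruby; the endpoint and token here are hypothetical placeholders, not Lynexus defaults:

```ruby
require "net/http"
require "uri"

# Hypothetical endpoint and token; substitute the details of the API
# you are integrating with.
uri = URI("https://api.example.com/v1/orders")

request = Net::HTTP::Get.new(uri)
request["Authorization"] = "Bearer YOUR_API_TOKEN" # token-based authentication
request["Accept"] = "application/json"

# The request would then be sent with:
#   response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```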
Lynexus provides a wide variety of building blocks to extract, transform, and load your data to and from various endpoints such as files, API requests, and database records. For more information, go to the Process Mapping section.
Lynexus provides an internal cron-based scheduler. This allows users to run tasks automatically on a designated schedule without supervision. When tasks finish, email alerts with log details are sent to the users.
Logs are created whenever a task completes to ensure proper documentation of the integration. All logs are recorded and kept to provide backtraces whenever the user needs them.
| | Minimal Specifications | Optimal Specifications |
|---|---|---|
| Operating System | Ubuntu 16 or higher | Ubuntu 18.04 |
| Processor | 2 GHz dual-core processor | 2.4 GHz octa-core processor |
| System Memory | 2 GB RAM | 8 GB RAM |
| Hard Drive | 30 GB | 100 GB |
| Internet Access | | |
| Maximum records | 30,000 ~ 50,000 | 150,000 ~ 400,000 |
| | Minimal Specifications | Optimal Specifications |
|---|---|---|
| Operating System | macOS 10.12 Sierra (Fuji) | macOS 10.14 Mojave (Liberty) |
| Processor | 1.5 GHz Intel Core i3 | 1.8 GHz Intel Core i5 |
| System Memory | 4 GB RAM | 8 GB RAM |
| Hard Drive | 30 GB | 100 GB |
| Internet Access | | |
| Maximum records | 50,000 ~ 100,000 | 150,000 ~ 400,000 |
To get started, below are the concepts and sections you'll need to familiarize yourself with and configure before using Lynexus in your integrations.
Lynexus cannot function without its server connections. These include database servers, SFTP and FTP services, and AWS S3 bucket connections. You need to define and configure these servers depending on the scope of your integration.
To create a new server connection, go to Servers then click the New server button. You will need the following credentials when creating a server:
| Server clause | Database servers |
|---|---|
| Server type | MySQL or SQL Server |
| Server hostname | IP or servername of database instance |
| Server port | Port of the database |
| Server username | Username credentials |
| Server password | Password for username |
| Server clause | FTP/SFTP servers |
|---|---|
| Server type | FTP or SFTP Server |
| Server hostname | IP of FTP/SFTP service |
| Server port | Port of the FTP/SFTP service |
| Server username | Username credentials |
| Server password | Password for username |
| Server clause | AWS S3 Bucket |
|---|---|
| Server type | AWS S3 Server |
| AWS S3 Bucket Region | Region of the S3 Bucket |
| AWS Access key ID | Provided by AWS when you create your S3 Bucket |
| AWS Secret access key | Provided by AWS when you create your S3 Bucket |
| AWS Credentials | Provided by AWS when you create your S3 Bucket |
| AWS Credentials provider | Provided by AWS when you create your S3 Bucket |
Lynexus includes worker queues that process new jobs as they are pushed onto the queue. You can create and assign worker queues to specific scheduled tasks to manage the overall load of your server. Your account has three default worker queues: MW (Main worker), responsible for scheduled tasks; AW (API worker), responsible for API requests; and EW (Email worker), for mailing purposes.
To create a new worker queue, go to Workers Queues then click the New queue button.
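As a rough conceptual sketch (not Lynexus's actual implementation), a worker queue behaves like a dedicated thread consuming jobs in the order they were pushed:

```ruby
# Conceptual sketch of a worker queue: jobs pushed onto the queue are
# processed one at a time by a dedicated worker, independently of other queues.
queue = Thread::Queue.new
results = []

worker = Thread.new do
  # Process jobs until a nil "shutdown" signal arrives.
  while (job = queue.pop)
    results << "ran #{job}"
  end
end

%w[task_a task_b].each { |job| queue.push(job) }
queue.push(nil) # signal shutdown
worker.join

results # => ["ran task_a", "ran task_b"]
```

Because each queue drains independently, assigning heavy tasks to their own queue keeps them from delaying the default workers.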
Tasks are the main component of your integration. They hold the actual processes your integration performs, from extraction to transformation to loading and importing your data to whichever endpoint is needed, depending on your integration's scope and requirements.
You can create and modify tasks by going to Workers Tasks. For more information, proceed to the Process Mapping section of this documentation.
Schedulers pertain to threads, or lists of tasks, that need to run on a given schedule. Lynexus provides an easy user interface for this. Assuming you have already created your tasks, just go to Workers Scheduler. Once there, create a new schedule and add your tasks to that scheduled thread. Provide the necessary information, such as the crontab expression, return task (for API triggers), and the worker queue where your task will run.
Lynexus uses cron-based scheduling for its tasks. This means you'll need to provide a crontab expression for each scheduler. Lynexus also includes a guide for this, powered by crontab.guru; just click the button.
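For reference, a crontab expression has five fields (minute, hour, day of month, month, day of week). A few common examples:

```
# minute (0-59)  hour (0-23)  day of month (1-31)  month (1-12)  day of week (0-6, Sun=0)
0 6 * * *       # every day at 06:00
*/15 * * * *    # every 15 minutes
0 22 * * 1-5    # at 22:00 on weekdays (Monday through Friday)
```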
Each task has its own process mapping. This holds the logic your integration will follow. It consists of data blocks, or simply blocks, that allow you to extract, transform, merge, process, load, and export data throughout your integration. The following sections explain what each block does and how it can be used in your task's process map.
Inbound blocks refer to blocks responsible for extracting data from files or records. This can be done through database connections, FTP/SFTP services, or AWS S3 buckets. Below are the different blocks classified as inbound blocks:
Outbound blocks refer to blocks responsible for pushing data to files or mapping them as database records. This can be done through database connections, FTP/SFTP services, or AWS S3 buckets. Below are the different blocks classified as outbound blocks:
Merge blocks refer to blocks responsible for joining two or more blocks. There are two kinds of merge blocks, the Full merge block and the Join merge block:
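As a conceptual sketch of what a Join merge does, here is how records from two sources can be combined on a shared key in Ruby (the field names are hypothetical):

```ruby
# Records from two hypothetical source blocks.
orders = [
  { "order_id" => 1, "sku" => "A-100", "qty" => 2 },
  { "order_id" => 2, "sku" => "B-200", "qty" => 1 }
]
products = [
  { "sku" => "A-100", "price" => 9.99 },
  { "sku" => "B-200", "price" => 4.50 }
]

# Index one side by the join key, then merge matching records.
by_sku = products.to_h { |p| [p["sku"], p] }
joined = orders.map { |o| o.merge(by_sku.fetch(o["sku"], {})) }

joined.first
# => {"order_id"=>1, "sku"=>"A-100", "qty"=>2, "price"=>9.99}
```

A Full merge, by contrast, would simply concatenate the record sets rather than match them on a key.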
Most, if not all, blocks run on flat JSON data. This ensures a uniform set of keys and fields can be used throughout the task. If you need to incorporate structured data into your integrations, you will need to use Flatten blocks first to make the data usable by other blocks.
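The flattening idea can be sketched in Ruby as follows; the dotted-key naming is an assumption for illustration and may differ from the convention a Flatten block actually uses:

```ruby
# Recursively flatten nested JSON-like hashes into a single level,
# joining parent and child keys with a dot.
def flatten_record(hash, prefix = nil)
  hash.each_with_object({}) do |(key, value), flat|
    full_key = prefix ? "#{prefix}.#{key}" : key.to_s
    if value.is_a?(Hash)
      flat.merge!(flatten_record(value, full_key))
    else
      flat[full_key] = value
    end
  end
end

nested = { "order" => { "id" => 1, "customer" => { "name" => "Acme" } }, "total" => 14.49 }
flatten_record(nested)
# => {"order.id"=>1, "order.customer.name"=>"Acme", "total"=>14.49}
```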
When exporting data, you can structure your records to conform to whatever format your file or API request needs. You can use the following blocks to build structured data from your flat JSON records.
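Structuring is the reverse of flattening; a minimal Ruby sketch, again assuming a hypothetical dotted-key convention:

```ruby
# Rebuild nested structure from flat records whose keys use dot notation.
def structure_record(flat)
  flat.each_with_object({}) do |(key, value), nested|
    *parents, leaf = key.split(".")
    # Walk (or create) the intermediate hashes, then set the leaf value.
    target = parents.inject(nested) { |h, part| h[part] ||= {} }
    target[leaf] = value
  end
end

flat = { "order.id" => 1, "order.customer.name" => "Acme", "total" => 14.49 }
structure_record(flat)
# => {"order"=>{"id"=>1, "customer"=>{"name"=>"Acme"}}, "total"=>14.49}
```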
Transform blocks refer to blocks used for editing, formatting, adding, and removing fields from your records. These blocks primarily deal with Ruby code, so more advanced knowledge may be needed. There are two kinds of transform blocks, the Data transform block and the Filter block:
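To give a feel for the kind of Ruby involved, here is a sketch of a transform step followed by a filter step over flat records (field names are hypothetical):

```ruby
records = [
  { "sku" => "a-100", "qty" => "25" },
  { "sku" => "b-200", "qty" => "0" }
]

# Data transform: normalize field values and types on each record.
transformed = records.map do |r|
  r.merge("sku" => r["sku"].upcase, "qty" => r["qty"].to_i)
end

# Filter: keep only records matching a condition.
filtered = transformed.select { |r| r["qty"] > 0 }
# => [{"sku"=>"A-100", "qty"=>25}]
```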
Process blocks are advanced blocks that mainly deal with the flow and technical side of integrations. There are two kinds of process blocks: the Email block and the Conditional block.