Could Someone Give Me Advice for Integrating D2000 with External Data Sources?

Hello there.

I am working on a project where we are using the IPESOFT D2000 real-time application server to monitor and control various processes. While the platform has been fantastic in terms of flexibility and performance, I am facing some challenges when it comes to integrating external data sources.

Specifically, we are looking to pull data from a few different systems, including a SQL database, an MQTT broker, and a RESTful API. While I have been able to set up basic connections, I am unsure about the best practices for ensuring seamless integration and optimal performance.

What are the most efficient methods for ensuring data synchronization between D2000 and these external sources, particularly when dealing with varying data update rates? :thinking:
How does D2000 handle errors or interruptions in data flow from these external sources? Are there recommended strategies for error logging and recovery that I should be aware of? :thinking:

What are the best practices for securing data transmission between D2000 and external sources, particularly when dealing with sensitive information? :thinking:

Also, I have gone through this post, https://d2000.ipesoft.com/blog/d2000-and-working-with-databases-ccsp/, which definitely helped me out a lot.

Are there specific settings or configurations within D2000 that can help optimize performance when interfacing with external data sources, especially when dealing with large volumes of data?

Thank you in advance for your help. :innocent:

Hello,
these are three quite different data sources, and you are asking quite a range of questions, each of which could be answered for each data source. I’ll try to answer MQTT and SQL database (my colleagues will have to answer REST API in D2000).

As for MQTT: the D2000 KOM process implements the MQTT protocol (documentation here). One communication line = one connection to an MQTT server. So, should there be high data volumes, you can create multiple lines to the same MQTT server and thus process incoming data in parallel. Of course, each line would have a different “Topic Filter” parameter. This presumes that there are multiple topics/topic groups that can be expressed by several filters, so that the data can be split into several data flows - see the sketch below.
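For illustration, splitting one broker’s traffic across two KOM lines might look like this (the line names and topic structure are made up for the example; yours will depend on how your topics are organized):

```
Line L.MQTT_1: "Topic Filter" = plant/area1/#
Line L.MQTT_2: "Topic Filter" = plant/area2/#
```

Each line then subscribes to and processes only its own subset of topics, so the two data flows are handled in parallel.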
The MQTT protocol can also take care of communication interruptions and buffer the data during communication outages: with “Subscribe QoS” set to 1 or 2 (and a persistent session), the broker queues undelivered messages and redelivers them after reconnection - see more in our documentation and in the MQTT protocol documentation.
As for securing data transmissions, MQTT supports encryption (MQTTS). With D2000, you can use external tools to provide the encryption - the usage of stunnel is described in a blog; a configuration sketch follows below. An advantage of using a separate utility is that you don’t have to update D2000 should you need updated/newer encryption methods.
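For illustration only, a minimal stunnel client section for this setup might look roughly as follows (the broker host, port, and certificate path are placeholders). The KOM line then connects with plain MQTT to the local port, and stunnel wraps the traffic in TLS towards the broker:

```
; stunnel.conf - TLS wrapper between the D2000 KOM process and the broker
[mqtt-tls]
client = yes
accept = 127.0.0.1:1883             ; KOM's MQTT line connects here, unencrypted
connect = broker.example.com:8883   ; the broker's MQTTS endpoint
CAfile = /etc/stunnel/ca.pem        ; CA certificate used to verify the broker
verifyChain = yes
```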

As for SQL, things like encryption can often be achieved via ODBC configuration (supported e.g. by the ODBC drivers for PostgreSQL, SQL Server, or Sybase SQL Anywhere). And, of course, SQL gives you transactions, so you can ensure that data is written to the database in an “all-or-nothing” manner. As for reading from an external data source, the SQL actions in our ESL scripts return error codes should there be a connection error. If you are within a transaction, you have to close and reopen the transaction. If you are using “auto commit” mode (outside of a transaction), the DbManager automatically reconnects to the database, so you can just repeat the reading (or writing) - see the sketch below.
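A rough ESL sketch of such a retry in auto-commit mode could look like this. DB.Orders and SD.Order are hypothetical object names, and the exact parameter list of DBS_READ (including how the maximum row count is passed) should be verified against its reference documentation:

```
; retry a read after a connection error; in auto-commit mode the
; DbManager reconnects by itself, so repeating the action is enough
INT _retCode
INT _attempt
RECORD (SD.Order) _rows   ; local structured variable for the result

FOR _attempt = 1 TO 3 DO_LOOP
  DBS_READ DB.Orders, _retCode, _rows, 100 WHERE "processed = 0"
  IF _retCode = _ERR_NO_ERROR THEN
    EXIT_LOOP               ; success, stop retrying
  ENDIF
  DELAY 5                   ; wait a few seconds before the next attempt
END_LOOP
```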
As for performance, there can be multiple connections for one Database object (so that users/scripts working with a specific Database don’t have to wait for one another), as is written in our Working with databases document. Also, the actions working with the database usually support structured variables, so you can use a single DBS_INSERT to insert e.g. 1000 rows; the same is true for reading with DBS_READ - just specify a local structured variable and the maximum number of rows to be read. A bulk-insert sketch follows below.
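A minimal bulk-insert sketch under the same assumptions (hypothetical DB.Orders and SD.Order; `#` is the ESL not-equal operator):

```
; insert 1000 rows with a single DBS_INSERT call
INT _retCode
RECORD (SD.Order) _rows

REDIM _rows[1000]            ; size the local structure to 1000 rows
; ... fill _rows[1] .. _rows[1000] here ...
DBS_INSERT DB.Orders, _retCode, _rows
IF _retCode # _ERR_NO_ERROR THEN
  LOG "bulk insert into DB.Orders failed"
ENDIF
```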
Finally, you can use multiple Events to process the database data, or you can pipeline the processing - you can have one Event that reads the data and then use an asynchronous RPC CALL to send the data to another Event (or to one of a group of Events if the processing needs more time/CPU); see the sketch below.
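A rough sketch of that producer/consumer split, again with hypothetical names (E.Reader, E.Worker, DB.Orders, SD.Order) and with the exact RPC syntax to be checked against the Scripting howto mentioned below:

```
; in the reader Event (E.Reader) - read a batch, hand it off, continue
RECORD (SD.Order) _rows
INT _retCode

DBS_READ DB.Orders, _retCode, _rows, 1000 WHERE "processed = 0"
IF _retCode = _ERR_NO_ERROR THEN
  CALL [E.Worker] ProcessBatch(_rows) ASYNC   ; asynchronous - the reader is not blocked
ENDIF

; in the worker Event (E.Worker) - the RPC procedure does the heavy work
RPC PROCEDURE ProcessBatch(IN RECORD (SD.Order) _rows)
  ; ... process the rows and/or insert them into the target database ...
END ProcessBatch
```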
Again, see our Scripting howto for more details about Events.

Btw, several of our customers use D2000 to read thousands of orders/bids/trades per second from the Joule (Trayport) trading system, process them, and store the data in a PostgreSQL database. They use the pipeline architecture described above (one task for receiving, multiple tasks for processing and inserting into the PostgreSQL database).

As for logging, you can log into the D2000 Logging database (using the LOG or LOGEX actions), or you can log to a generic SQL database. Or, if you are only interested in the few latest errors, you can store them in a structured variable - a small sketch follows.
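A tiny sketch of both options; the structured variable SV.LastErrors and its Message column are hypothetical, and LOGEX (which offers more control) is described in its own reference:

```
; after some DBS_* action has set _retCode
IF _retCode # _ERR_NO_ERROR THEN
  LOG "database action failed"                  ; goes to the D2000 Logging database
  SV.LastErrors[1]^Message := "database action failed"   ; keep the latest error
ENDIF
```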