Orbit is a DeFi product for asset management across various asset classes, such as crypto, stocks, and commodities, powered by machine learning.
(For details about Orbit, please visit https://qwe321.moa.finance/orbit-products.)
Orbit performs many complicated internal tasks, ranging from data collection to generating orders for asset management. Orbit receives streaming data at a predefined interval, then performs data pre-processing to catch missing and anomalous data. After checking data integrity, Orbit moves on to data processing, analysis, feature engineering, and so on. All of these sophisticated tasks need to be completed within minutes, without errors.

Orbit is just the first product; it is a beginning. We will develop more valuable products and services to solve users' problems. So building engines that provide standard functions for asset management was a reasonable approach when we designed Orbit. Having the engines accelerates development and improves product quality by reusing proven code.
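The steps above (collect, pre-process, check integrity, then process) can be sketched as a small pipeline. The stage names and checks below are illustrative assumptions; Orbit's real pipeline is far more involved.

```python
# A minimal sketch of the pipeline: pre-process -> integrity check -> processing.
# Record fields ("ts", "price") are hypothetical examples.
def preprocess(batch):
    """Drop records with missing values; a stand-in for anomaly handling."""
    return [r for r in batch if r.get("price") is not None]

def check_integrity(batch):
    """Fail fast if the cleaned batch is empty or out of time order."""
    ts = [r["ts"] for r in batch]
    return len(batch) > 0 and ts == sorted(ts)

def run_pipeline(batch):
    clean = preprocess(batch)
    if not check_integrity(clean):
        raise ValueError("data integrity check failed")
    # Downstream: analysis, feature engineering, order generation.
    return clean
```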
Orbit fully utilizes the engines; most of the code written in Orbit itself is for machine learning training and testing, while the engines handle the routine tasks.
In a broad view, the process of asset management consists of three operations, and we have developed an engine for each: Gaia for data, Athena for analysis, and Mercury for execution.
Gaia is the engine that processes data in real time and delivers it wherever it is needed. Gaia is therefore the essential layer beneath the other engines, Athena and Mercury, since both need data to do their jobs.
Gaia has data-crawling components that collect data from various sources, from public websites to paid data services, at specified intervals. Each data source requires its own collection interval along with a certain level of data integrity in real time. This is not a simple problem to tackle when the data volume is large and the data is critical. Gaia adopted a parallel data-processing system to deal with this technical issue.
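A minimal sketch of per-source interval scheduling with parallel fetching, assuming an in-process thread pool; the source names, intervals, and fetcher below are hypothetical, and Gaia's actual crawler is not described in detail here.

```python
import concurrent.futures
import time

# Hypothetical sources with per-source collection intervals (seconds).
SOURCES = {
    "exchange_ticks": 1.0,
    "news_feed": 30.0,
    "fundamentals": 3600.0,
}

def fetch(source):
    """Placeholder collector; a real one would call an HTTP API."""
    return {"source": source, "ts": time.time(), "payload": None}

def collect_due(now, last_run):
    """Return sources whose interval has elapsed since their last run."""
    return [s for s, iv in SOURCES.items() if now - last_run.get(s, 0.0) >= iv]

def run_once(last_run):
    """Fetch all currently due sources in parallel, then record run times."""
    now = time.time()
    due = collect_due(now, last_run)
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fetch, due))
    for s in due:
        last_run[s] = now
    return results
```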
Data feeding is the core function of Gaia: it delivers the requested data within a second whenever there is a data request. For high-performance feeding, Gaia uses a hybrid data storage system that splits storage into two parts, one optimized for reads and one for writes. Routing each access to the appropriate store dramatically improves data-feeding performance.
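The read/write split can be illustrated with an in-memory model: writes append to a cheap sequential log, and a sync step folds the log into a lookup-optimized index that serves reads. This is a sketch of the general technique only; Gaia's actual storage engines are not described.

```python
# A minimal read/write-split store: the write path never blocks reads,
# and reads never touch the write path.
class HybridStore:
    def __init__(self):
        self._write_log = []   # append-optimized side (writes)
        self._read_index = {}  # lookup-optimized side (reads)

    def write(self, key, value):
        # Writes only append; cheap and sequential.
        self._write_log.append((key, value))

    def sync(self):
        # Periodically fold the log into the read-optimized index.
        for key, value in self._write_log:
            self._read_index[key] = value
        self._write_log.clear()

    def read(self, key):
        return self._read_index.get(key)
```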
At this point, Gaia collects the following data from all over the world at predefined intervals.
Gaia currently holds around 1,200,000,000 data items (as of 2020-08-24). Of course, this number keeps growing as we collect more data for higher performance.
Athena is an intelligence engine that analyzes data and decides what to do with assets. Unlike the other engines, Athena contains many mathematical algorithms for data processing and analysis. The performance of machine learning depends heavily on the data you feed it: if the data is good, the performance is good. So when it comes to machine learning, the key is not the algorithm but the data processing, known in technical jargon as feature engineering. This is the most significant and crucial part of Athena, and these processes are complex and take a long time to complete.
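As a generic illustration of feature engineering on a price series (Athena's actual features are not disclosed, so the features below are textbook examples, not Athena's):

```python
# Turn a raw price series into simple model-ready features.
def returns(prices):
    """One-step simple returns."""
    return [b / a - 1.0 for a, b in zip(prices, prices[1:])]

def rolling_mean(series, window):
    """Trailing moving average; one value per full window."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

def make_features(prices, window=3):
    """Combine raw prices into a feature dictionary."""
    r = returns(prices)
    return {
        "returns": r,
        "momentum": rolling_mean(r, window),  # smoothed return trend
    }
```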
The most sensitive and significant part of Athena is machine learning performance evaluation. After data processing, the machine learning models are trained on the processed data. Evaluating the results is the hardest part of the entire process, because there is no guarantee that a model will work in the real environment, especially in the chaotic world of finance.
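One standard guard against models that only look good in hindsight is walk-forward evaluation: train only on the past, test on the future, then roll the window forward. This is a generic illustration, not Athena's actual evaluation protocol.

```python
# Walk-forward splits over a time-ordered dataset of n_samples rows.
def walk_forward_splits(n_samples, train_size, test_size):
    """Yield (train_indices, test_indices) pairs rolling through time."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size  # roll forward by one test window
```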
Below is what Athena is doing.
Due to the nature of Athena's workload, it currently uses 224 cores for data processing and machine learning (as of 2020-08-24). The power of 224 cores is not small, but Athena will demand more computing power soon, because we are going to add more asset classes.
Mercury is an execution engine that takes orders from Athena: buying, selling, checking balances, etc. If Athena is the brain, Mercury is the hand that takes action. It is another essential part of the system: if Mercury makes an error, the loss is realized, so it must be robust and reliable. Mercury has to access multiple markets simultaneously to buy or sell assets, and the number of markets and assets will grow shortly, so besides robustness, extensibility is a key concern. To address this, Mercury has two abstraction layers: the virtual exchange and the virtual account. The virtual exchange abstracts markets; a client that wants to buy an asset does not need to know the details behind it, it just makes a request and the request gets done. The virtual account is a kind of logical account on top of the asset. Unlike a real account at an exchange, a virtual account can be transferred, split, combined, and so on. This feature will open the door to different DeFi services.
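The two layers can be sketched as follows. The class and method names are illustrative assumptions; Mercury's real interfaces are not public. The split/combine operations mirror the virtual-account capabilities described above.

```python
# A minimal sketch of Mercury's two abstraction layers.
class VirtualAccount:
    """A logical holding that can be split and combined freely."""
    def __init__(self, asset, amount):
        self.asset, self.amount = asset, amount

    def split(self, amount):
        # Carve a new logical account out of this one.
        assert 0 < amount <= self.amount
        self.amount -= amount
        return VirtualAccount(self.asset, amount)

    def combine(self, other):
        # Merge another account of the same asset into this one.
        assert other.asset == self.asset
        self.amount += other.amount
        other.amount = 0

class VirtualExchange:
    """Routes requests to concrete markets; clients see one interface."""
    def __init__(self, markets):
        self._markets = markets  # hypothetical: {"BTC": exchange_client, ...}

    def buy(self, asset, amount):
        # The client never touches the underlying market client directly.
        return self._markets[asset].buy(asset, amount)
```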
Another big part of Mercury is the execution algorithm. Buying or selling a large quantity of an asset will move that asset's price, and someone could profit by observing the activity. This is where the execution algorithm fits in: it minimizes market impact and hides the activity as much as possible. The algorithm monitors market conditions and handles the transactions smartly, without human intervention.
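One classic way to reduce market impact is to slice a large parent order into smaller child orders spread evenly over time (a TWAP-style schedule). This is a textbook technique shown for illustration only; the source does not reveal Mercury's actual algorithm.

```python
# Split a parent order of total_qty units into n roughly equal child orders,
# so no single slice dominates the book at any one time.
def twap_slices(total_qty, n_slices):
    base = total_qty // n_slices
    remainder = total_qty - base * n_slices
    # Distribute the remainder one unit at a time across the first slices.
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]
```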
Mercury can access the following markets in real time.
As with the other engines, the number of markets will grow soon.
Thanks to the three engines, Orbit can automatically handle four different asset classes: crypto and Korean, US, and Chinese stocks. Orbit can access the appropriate exchange within a second and watch the various asset markets 24 hours a day, 365 days a year.
But there is still much to be done across the three engines to deliver innovative DeFi products and services. Orbit and the products to come will be linked to various asset classes through Edenchain and other DeFi protocols for asset management. The three engines let us reduce the time and effort needed for upcoming products. I want to say thank you to Jacki (CTO) for his great work.