In-Memory Computing Can Digitally Transform Financial Services And FinTech Applications For Capital Markets

Forbes Technology Council

Nikita Ivanov is Founder and CTO of GridGain Systems. He is a frequent international speaker and an engineer at heart.

Financial services companies in the capital markets face three major application performance challenges. They must extract value from soaring volumes of data. They must handle a growing number of asset classes, the use of automated quantitative trading algorithms, and continually evolving regulatory and cybersecurity requirements. And to reach their business goals, they need multiple business applications to have real-time, massively scalable access to operational and historical data from disparate sources.

The Insatiable Demand For Performance

For many financial services companies, the investment book of record (IBOR) provides a single source of truth regarding positions, exposure, valuations and performance for investors and the company itself. IBORs support performance analysis, risk assessment, regulatory compliance and more. To accomplish this, trade transactions, account activities, reference and market data, and back-office activities must flow through the IBOR in real time.

Additionally, many applications and systems that power or analyze the data in the IBOR have their own performance and scale challenges.

• Portfolio management. Portfolio managers must track market positions and determine whether they are within their portfolio's granular limits, including limits based on hierarchical constraints such as book, legal entity or trader. "What if" analyses with sub-second response times are required to predict the impact of a trade on a portfolio prior to execution; a simplified sketch of such a pre-trade check appears after this list. To improve accuracy, managers may want to incorporate many years of historical data into their analyses.

• Pricing systems. Pricing systems are based on developing and running complex analytical models and libraries, which require high-performance, highly scalable infrastructure. The more complex models, such as those for structured products that require Monte Carlo simulations, compute billions of present values (PVs). These models can take hours to run on systems that aren't optimized with distributed high-performance computing (HPC) infrastructure; a minimal Monte Carlo pricing sketch appears after this list.

• Risk consolidation. Managing risk requires maintaining a holistic and real-time view of risk exposure. While the greatest concern is latency, other performance challenges include heavily loaded source systems that can't handle further data queries and limited source APIs that require additional processing following data extraction to interpret the data.

• Risk calculations. The current market risk regulatory framework requires banks to use more historical data and perform orders of magnitude more calculations than in the past. For example, determining the expected shortfall when a new trade is priced (a Fundamental Review of the Trading Book [FRTB] risk measure) can require 12,000 calculations. When implementing an X-Value Adjustment (XVA), another FRTB requirement, a portfolio of 10,000 derivative trades can generate up to 600 billion PV calculations and several terabytes of data. Legacy systems, which must copy data from disk, move it over the network, perform the calculations and then return the results, put tremendous performance demands on systems and networks; a simplified expected-shortfall calculation appears after this list.

• Fraud detection. Fraud detection requires post-trade analyses, such as monitoring to ensure trading algorithms do not participate in the next flash crash or tracking trades to detect suspicious patterns. These tasks require a highly scalable, extremely fast architecture with real-time alerts that can trigger corrective action.

• Electronic trading. Electronic trading systems require high-performance compute and the ability to ingest massive amounts of data from multiple data stores and data streams in real-time. These systems must then power real-time activity based on the analysis of the data to drive effective trading processes.
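
To make the portfolio management scenario concrete, here is a minimal sketch of a pre-trade "what if" limit check. It assumes positions and limits are already cached in memory and keyed by node in a trader, book and legal-entity hierarchy; the names and figures are purely illustrative, not a production risk model.

```python
# Illustrative in-memory exposures and limits per hierarchy node (hypothetical data).
exposure = {
    "trader:jdoe": 4_800_000,
    "book:rates-emea": 42_000_000,
    "entity:bank-uk": 310_000_000,
}
limits = {
    "trader:jdoe": 5_000_000,
    "book:rates-emea": 50_000_000,
    "entity:bank-uk": 350_000_000,
}

def what_if(trade_notional: float, hierarchy: list[str]) -> dict[str, bool]:
    """Return, per hierarchy node, whether the proposed trade stays within its limit."""
    return {node: exposure[node] + trade_notional <= limits[node] for node in hierarchy}

# Check a proposed 1.5M trade before execution; here the trader-level limit would be breached.
print(what_if(1_500_000, ["trader:jdoe", "book:rates-emea", "entity:bank-uk"]))
# {'trader:jdoe': False, 'book:rates-emea': True, 'entity:bank-uk': True}
```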
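
The pricing bullet mentions Monte Carlo simulations that compute billions of present values. As a rough illustration of why these runs are so compute-hungry, here is a minimal Monte Carlo pricer for a single European call under geometric Brownian motion. The parameters are illustrative, and real pricing libraries are far more involved, which is why they are typically parallelized across HPC or IMC clusters.

```python
import math
import random

def monte_carlo_call_pv(spot, strike, rate, vol, maturity, n_paths=100_000):
    """Estimate the present value of a European call by averaging discounted payoffs."""
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        terminal = spot * math.exp(drift + diffusion * random.gauss(0.0, 1.0))
        payoff_sum += max(terminal - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# One instrument, one scenario: already 100,000 paths. Structured products multiply
# this by thousands of instruments and risk scenarios.
print(round(monte_carlo_call_pv(spot=100, strike=105, rate=0.03, vol=0.2, maturity=1.0), 2))
```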
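
The risk calculations bullet refers to expected shortfall, the FRTB market risk measure. The sketch below shows the core of that calculation on a single set of P&L scenarios: average the losses in the tail beyond the chosen confidence level. FRTB implementations repeat this across liquidity horizons and risk-factor sets, which is what drives the calculation counts cited above; the randomly generated scenarios here are placeholders.

```python
import random

def expected_shortfall(pnl_scenarios, confidence=0.975):
    """Average loss in the worst (1 - confidence) fraction of P&L scenarios."""
    losses = sorted((-pnl for pnl in pnl_scenarios), reverse=True)  # worst losses first
    tail_size = max(1, int(len(losses) * (1 - confidence)))
    return sum(losses[:tail_size]) / tail_size

# 1,000 illustrative P&L scenarios (placeholder data, not market history).
scenarios = [random.gauss(0, 1_000_000) for _ in range(1_000)]
print(f"97.5% expected shortfall: {expected_shortfall(scenarios):,.0f}")
```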

In-Memory Computing Delivers Performance And Scale

For many organizations, in-memory computing (IMC) is the most effective path to affordably achieving the required levels of performance and scalability.

IMC platforms are deployed on a cluster of commodity servers on-premises, in public or private clouds, or on hybrid architectures. They can store vast amounts of data in memory and use massively parallel processing (MPP) to deliver performance up to 1,000 times faster than applications built directly on disk-based databases. Since data is stored and processed in the IMC cluster, the movement of data over the network is drastically reduced or eliminated. The computing cluster can scale horizontally to petabytes of in-memory data, and some IMC platforms offer multitiered computing that allows seamless processing of data cached in memory or stored on disk.
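
As a rough illustration of the MPP pattern described above, the sketch below partitions a position set across worker processes, computes a local aggregate next to each partition and combines only the small per-partition results. It uses plain Python multiprocessing rather than any vendor API; in an actual IMC cluster, each node already holds its partition in memory, so even the initial data shipping seen here disappears.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def partition_exposure(positions):
    # Runs against a single partition's rows and returns one number, not the rows.
    return sum(qty * price for qty, price in positions)

if __name__ == "__main__":
    # Four in-memory partitions of (quantity, price) rows (illustrative data).
    partitions = [
        [(random.randint(1, 100), random.uniform(10, 500)) for _ in range(100_000)]
        for _ in range(4)
    ]
    # Map the computation to the partitions in parallel, then combine the small aggregates.
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partition_exposure, partitions))
    print(f"Total exposure: {total:,.0f}")
```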

IMC platforms unify data from disparate sources, which may include relational and NoSQL databases, data warehouses, data lakes and streaming data. These sources may reside in public clouds, on-premises data centers, mainframes or SaaS applications. Some IMC platforms also offer a unified API with support for SQL (including ODBC/JDBC), key-value access, C++, .NET, PHP, REST, Java and more.
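
To show what a unified API can look like from the application side, here is a hedged sketch using the Apache Ignite Python thin client (pyignite) as one concrete example. It assumes an Ignite node is running locally on the default thin-client port 10800; other IMC platforms expose comparable key-value, SQL and JDBC/ODBC interfaces, and the data here is illustrative.

```python
from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)

# Key-value access: a low-latency operational cache of trade notionals.
trades = client.get_or_create_cache("trades_kv")
trades.put("T-1001", 93_625.0)
print(trades.get("T-1001"))

# SQL access over the same cluster, e.g. for reporting or risk aggregation.
# (One-shot illustration; rerunning the INSERT with the same key will fail.)
client.sql("CREATE TABLE IF NOT EXISTS positions (id INT PRIMARY KEY, symbol VARCHAR, notional DOUBLE)")
client.sql("INSERT INTO positions (id, symbol, notional) VALUES (1, 'AAPL', 93625.0)")
for row in client.sql("SELECT symbol, SUM(notional) FROM positions GROUP BY symbol"):
    print(row)

client.close()
```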

Three Steps To Getting Started With In-Memory Computing

• Get up to speed. As indicated above, IMC can deliver increased performance and scalability for a variety of applications. Developing a cost-effective, long-term IMC strategy requires an understanding of how the different types of solutions on the market apply to different use cases. Abundant online resources are available, including analyst reports and vendor whitepapers. Consider watching recordings of recent In-Memory Computing Summit keynotes or technical sessions, which offer a comprehensive introduction to the technologies, use cases and implementation strategies.

• Assess requirements. In addition to understanding the latency and concurrency challenges of your applications, assess how the applications connect to data sources and how these connections will evolve over time. A short-term fix could result in costly future rework. The solutions you choose should support all of the architectures and APIs you need for current and future use cases. For example, a digital integration hub (DIH) architecture, also known as an API platform, smart data hub or smart operational datastore, lets you aggregate data from multiple disparate sources and make it available to many consuming applications via a unified API; a simplified sketch of this pattern appears after this list.

• Consult third-party experts. Most IT organizations don't have in-house IMC expertise. Even after reviewing the available online resources, you may want to engage third-party IMC experts for unbiased advice on the best solutions and implementation strategies for your needs.
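
As mentioned in the requirements step above, a digital integration hub decouples consuming applications from the systems of record. The sketch below is a deliberately simplified, framework-free illustration of that pattern: one API serves a unified customer view from an in-memory store and refreshes it from stand-in source functions. In practice, the sources would be change-data-capture feeds, message streams or scheduled extracts into an IMC cluster, and all names here are hypothetical.

```python
import time

def fetch_from_core_banking(customer_id):        # stand-in for source system #1
    return {"accounts": ["ACC-1", "ACC-2"]}

def fetch_from_crm(customer_id):                 # stand-in for source system #2
    return {"segment": "institutional"}

class DigitalIntegrationHub:
    """One unified API consumed by many applications, backed by an in-memory layer."""

    def __init__(self, ttl_seconds=60):
        self._cache = {}                         # the in-memory data layer
        self._ttl = ttl_seconds

    def get_customer_view(self, customer_id):
        entry = self._cache.get(customer_id)
        if entry and time.time() - entry["loaded_at"] < self._ttl:
            return entry["view"]                 # served from memory; no load on source systems
        # Aggregate from the disparate sources and cache the combined view.
        view = {**fetch_from_core_banking(customer_id), **fetch_from_crm(customer_id)}
        self._cache[customer_id] = {"view": view, "loaded_at": time.time()}
        return view

hub = DigitalIntegrationHub()
print(hub.get_customer_view("CUST-42"))
```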

For financial services companies in the capital markets that need greater performance and scale from their business applications, or that need multiple applications to have real-time, massively scalable access to huge amounts of data from disparate sources, IMC can offer a cost-effective path to meeting ever-evolving business, customer and regulatory requirements.

