In-Memory Computing
In-memory computing is a high-performance approach to distributed systems that stores and processes data in RAM in real time, delivering performance orders of magnitude faster than disk-based systems. Because in-memory technologies speed up the processing of large volumes of data, they are becoming increasingly popular among enterprises as Big Data grows.
In-memory DBMS FAQ
In-memory database management systems (IMDS) are a growing segment of the global DBMS market. In-memory DBMSs were created in response to new application workloads and new requirements for systems and operating environments.
What is an in-memory DBMS?
An in-memory DBMS is a database management system that stores data directly in RAM. This is in radical contrast to traditional DBMSs, which are designed to store data on persistent media. Since processing data in RAM is faster than locating and reading it through a file system, an in-memory DBMS gives applications an order of magnitude higher performance. And because the in-memory design is much simpler than a traditional one, such systems also place far lower demands on memory size and CPU performance.
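As a minimal hands-on illustration, not tied to any particular vendor: SQLite, which appears in the product list below, can run entirely in RAM. The sketch assumes Python's standard sqlite3 module and a throwaway sensors table.

```python
import sqlite3

# The entire database lives in RAM: nothing touches the disk, and the
# data disappears when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (id INTEGER PRIMARY KEY, reading REAL)")
conn.executemany("INSERT INTO sensors (reading) VALUES (?)",
                 [(21.5,), (22.1,), (20.9,)])
print(conn.execute("SELECT AVG(reading) FROM sensors").fetchone()[0])
conn.close()
```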
If the goal is to eliminate I/O, why not achieve it through caching?
Caching is the process by which traditional DBMSs keep frequently used records in memory for quick access. However, caching speeds up only the lookup of information, not its processing, so the performance gain is considerably smaller. In addition, cache management is itself a resource-intensive process that consumes significant memory and processing power.
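A hedged sketch of the distinction, assuming a hypothetical key-value records table in a disk-based SQLite file: the cache below accelerates repeated lookups, but every update still pays the full disk I/O cost, and the cache itself must be maintained by hand.

```python
import sqlite3

# Hypothetical setup: a small key-value table in a disk-based database.
conn = sqlite3.connect("store.db")
conn.execute("CREATE TABLE IF NOT EXISTS records (key TEXT PRIMARY KEY, value TEXT)")

cache = {}  # the application-level read cache

def read_record(key):
    """Fast on repeat lookups: the disk is touched only on a cache miss."""
    if key not in cache:
        row = conn.execute("SELECT value FROM records WHERE key = ?",
                           (key,)).fetchone()
        cache[key] = row[0] if row else None
    return cache[key]

def update_record(key, value):
    """Still slow: every write goes through the full disk I/O path,
    and the cache must then be kept coherent."""
    conn.execute("INSERT OR REPLACE INTO records (key, value) VALUES (?, ?)",
                 (key, value))
    conn.commit()
    cache[key] = value
```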
Can an effect similar to an in-memory DBMS be achieved by creating a disk in memory (RAM disk) and deploying a traditional DBMS on it?
As a stopgap, placing the entire database on a RAM disk can speed up reads and writes. Yet this approach has a number of downsides. The database remains bound to a disk-drive interface, so processes such as caching and data I/O still occur even though they are redundant: every request to a database "located on disk" costs time and CPU cycles, and a traditional DBMS cannot avoid this even when that disk sits in RAM. An in-memory DBMS, by contrast, uses a single data transfer, which simplifies processing; removing redundant copies of data reduces memory load, improves reliability, and minimizes CPU requirements.
Is there data quantifying the performance difference between the three approaches described above?
According to published McObject tests comparing the same application, moving a traditional DBMS onto a RAM disk sped up database reads by 4 times and updates by 3 times relative to the same DBMS on a hard disk. The in-memory DBMS showed even more striking results against the DBMS on the RAM disk: reads were another 4 times faster, and writes a full 420 times faster.
[Figure: Performance of the eXtremeDB in-memory DBMS versus the db.linux DBMS on a RAM disk (McObject, 2009)]
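The sketch below is not a reproduction of McObject's benchmark; it merely illustrates the methodology using Python's sqlite3, timing the same bulk insert against a file-backed and an in-memory database. Absolute numbers will vary by machine and configuration.

```python
import os
import sqlite3
import time

def time_inserts(path, n=100_000):
    """Create a table and time a bulk insert of n rows."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v REAL)")
    start = time.perf_counter()
    conn.executemany("INSERT INTO t (v) VALUES (?)",
                     ((float(i),) for i in range(n)))
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

if os.path.exists("bench.db"):
    os.remove("bench.db")          # start from a clean file
disk = time_inserts("bench.db")    # file-backed (disk) database
ram = time_inserts(":memory:")     # in-memory database
print(f"disk: {disk:.3f}s  memory: {ram:.3f}s  ratio: {disk / ram:.1f}x")
```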
What else distinguishes an in-memory DBMS from a traditional one?
An in-memory DBMS carries no overhead from data I/O operations. Its architecture is more rational from the outset, with optimized memory usage and processor cycles.
For which applications is the use of an in-memory DBMS relevant?
In-memory DBMSs are typically used in applications that require ultra-fast access to, storage of, and manipulation of data, and in diskless systems that must nevertheless manage significant amounts of data.
How scalable are in-memory DBMSs? If an application manages a terabyte of data, is that too much for an in-memory DBMS?
According to a McObject report, in-memory DBMSs scale well past the terabyte mark. In those tests, a 64-bit in-memory DBMS installed on a 160-core SGI Altix 4700 server running Novell's SUSE Linux Enterprise Server 9 reached 1.17 terabytes and 15.54 billion rows with no visible barrier to further scaling. Moreover, performance remained practically unchanged as the database grew through hundreds of gigabytes and past a terabyte, indicating almost linear scalability.
Is it true that an in-memory DBMS is unsuitable for use across a network of several or more computers?
An in-memory DBMS can be either embedded or client-server. Client-server DBMSs are inherently multi-user, so an in-memory DBMS can likewise be shared across multiple threads, processes, and users, as sketched below.
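As an illustration of the multi-user point, this sketch assumes Python's sqlite3 and SQLite's shared-cache URI syntax to show two connections in one process sharing a single named in-memory database; a client-server in-memory DBMS extends the same idea across processes and machines.

```python
import sqlite3

# SQLite's shared-cache URI lets several connections in one process
# share a single named in-memory database.
uri = "file:shared_db?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)   # opened while `writer` is alive

writer.execute("CREATE TABLE events (msg TEXT)")
writer.execute("INSERT INTO events VALUES ('hello from the writer')")
writer.commit()

# The second connection sees the same in-memory data.
print(reader.execute("SELECT msg FROM events").fetchone()[0])

reader.close()
writer.close()
```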
In-memory computing: the facts
The new Aberdeen Research report draws attention not only to a few interesting facts about Big Data and the difficulties of processing and analyzing its growing volume, but also to how in-memory computing can play a key role in accelerating the collection, sharing, and management of information in the enterprise. At least in those businesses that can afford it.
- Each year, the volume of business data grows by 36%.
- The main problem with big data processing is getting results faster (data from the December 2011 report).
- Of the 196 Aberdeen customers surveyed about Big Data, 33 use in-memory computing. The most likely reason the rest forgo the technology is its high cost.
- With in-memory computing, obtaining information on demand takes 42 seconds instead of the 75 minutes required with conventional technologies.
- In-memory computing processes 1,200 TB per hour versus 3.2 TB per hour with conventional technologies: a 375-fold gain in efficiency.
- In-memory computing, simply put, makes information processing and analysis fast, and that is good for users and for IT organizations dealing with the growing amounts of information used in business decision-making.
[Figure: Data sources by company size (TechTarget, December 2011)]
According to TechTarget, in-memory DBMSs are most often used by medium-sized companies (23%) compared to small (18%) and large companies (15%).
In-memory computing issues
But in-memory computing, like any technology, has its own characteristics, problems, and pitfalls. First, it is not cheap: it requires powerful servers, multi-core processors, large amounts of RAM, and the appropriate software and analytical applications. All of these components are needed because terabytes of data are kept with near-zero access latency directly in server RAM rather than somewhere on disk.
Although vendors do not disclose prices for in-memory computing applications, and the report likewise gives none, one statistic from it is telling: an enterprise using in-memory computing spent about $850,000 on it over the preceding 12 months.
Another limitation of in-memory computing is that it is well suited only to transactions on structured data sets such as product SKUs, customer information, and sales reports.
If your company has the tools and understands the value of information in today's business strategy, in-memory computing technology can be the right choice for you.
Products
Work on in-memory DBMSs began in 1993 at Bell Labs, where the system was prototyped as the Dali Main-Memory Storage Manager. That research led to the first commercial in-memory DBMS, DataBlitz.
In subsequent years, in-memory DBMSs attracted the attention of the largest players in the database market. TimesTen, a startup founded by Marie-Anne Neimat in 1996 as a Hewlett-Packard spin-off, was acquired by Oracle in 2005, and Oracle still sells the product, including as an in-memory DBMS. IBM bought solidDB in 2008, and Microsoft is also working in the in-memory DBMS field.
VoltDB, founded by Michael Stonebraker, one of the pioneers of the DBMS market, announced its in-memory DBMS in May 2010 and currently offers both free and commercial versions of the system. SAP released its in-memory DBMS, SAP HANA, in June 2011.
In-memory DBMSs available on the market include:
- Adaptive Server Enterprise (ASE) 15.5
- Apache Derby
- Altibase
- BlackRay
- CSQL
- Datablitz
- DiAna: Digital Analytics Pro
- Eloquera
- EXASolution
- eXtremeDB
- Finances Without Problems
- FleetDB
- H2
- HSQLDB
- IBM TM1
- InfoZoom
- KDB
- #liveDB
- Membase
- Mercury
- Strategy
- MonetDB
- MySQL
- Oracle Berkeley DB
- Panorama
- ParAccel
- Polyhedra IMDB
- QlikView
- RDM Embedded
- RDM Server
- Redis
- SolidDB by IBM
- SAP HANA
- SQLite
- Starcounter
- Tarantool (in-memory computing platform)
- TimesTen by Oracle
- VertiPaq
- VoltDB
- WebDNA
- TREX
- Xcelerix by Frontex
- WX2 by Kognitio
- Xeround
Russian realities
Many in-memory solutions are available to Russian customers. Among the most widely used are solutions from Oracle, IBM Cognos TM1, SAP HANA, Microsoft PowerPivot, QlikView, and Pentaho Business Analytics. Such platforms work well when real-time data analysis is needed, since the data can change at any moment during analysis. They also suit cases where building a multidimensional data warehouse is impossible and the accounting system's data must be analyzed without modification. These systems offer various ways to scale horizontally, both with the platform's own tools and with additional software.
Specifically, the virtual cube feature in Pentaho Business Analytics can be scaled using the JBoss Data Grid industrial solution, which is designed to create distributed in-memory information stores.
With this approach, in-memory cubes of 1 TB and larger can be created. In terms of affordability, these solutions are quite feasible for SMB companies: for the SMB market IBM offers the comprehensive Cognos Express suite (which includes TM1), while Pentaho has a free version and special pricing for small companies.
Strictly speaking, in-memory technologies divide into two classes: data discovery solutions and in-memory proper, or more precisely in-memory database management systems (DBMS). QlikView is an example of a data discovery solution: it presents data in a convenient form, and in-memory technology makes its visual layer fast. But other tools cannot be connected to it, whether Microsoft Excel files, Cognos systems, or Oracle BI.
An in-memory DBMS is one where data is kept in RAM from the start, so access to it takes almost no time. For example, a company's chief accountant needs to see the report for the year before last broken down by day: on a classic DBMS this would take at least 10 minutes (assuming the system is configured correctly), whereas if the information is held in RAM the result appears instantly. SAP HANA is an example of such a solution. Being a DBMS, it provides in-memory access to any BI tool: you can load data from Excel tables, from Cognos and Oracle BI systems, and others.
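A hypothetical sketch of that day-by-day query, using an in-memory SQLite database and an invented transactions table as a stand-in: once the rows are resident in RAM, the breakdown is a single fast aggregate.

```python
import sqlite3

# Invented `transactions` table as a stand-in for the accounting data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (day TEXT, amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("2011-01-01", 100.0), ("2011-01-01", 50.0),
                  ("2011-01-02", 75.0)])

# With the rows already resident in RAM, the day-by-day report is a
# single aggregate query rather than a minutes-long disk scan.
for day, total in conn.execute(
        "SELECT day, SUM(amount) FROM transactions GROUP BY day ORDER BY day"):
    print(day, total)
```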
The cost of such solutions depends on many factors, from the project implementation timeline to the price of the technology itself. Some solutions are genuinely expensive, but they pay for themselves quickly through gains in speed and efficiency. Such products are in demand in any company where timely analytical reporting matters. For example, if an analyst needs to produce a report drawn from 30 Excel files, compiling it manually takes at least 3 days; with the right IT systems, he simply points to those 30 files and the system itself assembles a single report ready to work with.
Vladimir Itkin, Development Director of the Qlik (QlikTech) Russia partner network, told TAdviser that what distinguishes QlikView is its focus on simplicity and convenience in building reports. With this approach the implementation cycle shrinks significantly, and many partners can work in an extreme-programming mode: an iterative approach in which one cycle usually lasts no more than a week. The business user thus sees results from the first days of the project and takes part in shaping the solution.
"After 5-6 such iterations, the output turns out not just a BI solution, but an up-to-date and" live "analytics tool. Of the latest projects in this mode, we can call Geotek Holding and the A5 pharmacy chain, "the top manager explained. According to him, about 77% of all projects, from the joint creation of a TA to the launch of commercial operation, are implemented in less than 3 months. A third of customers implement QlikView on their own.
For example, CROC carried out a project to combine marketing information databases into a single information space at the pharmaceutical company Nycomed. Previously, searching and analyzing marketing data relied on various tools that were often inconvenient and unintuitive. After the implementation, work with the data warehouse was handled by the QlikView analytical system, making work with the disparate information fast and convenient.
In M.Video"," for example, the SAP HANA system was implemented with in-memory computing technology. The data storage and analysis system that the customer had before could no longer cope with this amount of information - data in more than 2.5 billion lines was downloaded for about 3 hours. After implementing SAP HANA, the system loads this data in less than 30 minutes.
Another example is the Tern Group project for Surgutneftegas. Its main objective was to cut the time spent preparing reports, from data processing to visualization of results. Report preparation time was reduced by hundreds of times, and users can now work with their analytical queries almost in real time.
Chronicle
2025: Global In-Memory Computing Market Size Reaches $15.16 Billion
By the end of 2025, spending in the global in-memory computing (IMC) market amounted to $15.16 billion, with more than a third of it in North America. These figures come from a Fortune Business Insights study whose results were published on January 15, 2026.
The IMC concept involves storing data and performing all computation directly in RAM, without traditional drives such as hard disks or solid-state drives. This approach reduces latency and improves performance, which is critical for artificial intelligence, machine learning, financial analytics, and a number of other workloads.
The study's authors name the rapid development of AI, including generative AI (GenAI), as one of the market's key drivers. Many AI systems require instant access to huge volumes of information, which traditional drives cannot provide. Against this background, hyperscalers and leading cloud providers are actively buying servers equipped with large amounts of RAM, expanding the industry. According to the experts, in 2025 approximately 88% of organizations regularly used AI in at least one business function, rapidly increasing the load on real-time data processing systems and, with it, the popularity of the IMC concept.
The growth of Internet of Things (IoT) infrastructure also benefits the industry. IoT Analytics estimates that the number of connected IoT devices worldwide reached 18.5 billion by the end of 2024, up 12% from 2023, and is expected to grow to 39 billion by 2030. IoT equipment generates colossal streams of information that often require real-time analysis, increasing the need for IMC solutions.
"Organizations are increasingly combining streaming data from IoT systems with analytical tools to predict equipment maintenance, personalize customer service processes, and automate decision-making. In such conditions, computing in RAM becomes critical because it significantly reduces latency compared to traditional methods of storing and processing information," the authors of the study note.
In addition, the growth of the market is facilitated by the use of hybrid and multi-cloud environments. Enterprises are increasingly distributing their workloads between private and public clouds, as well as local systems, to achieve a better balance between performance, cost, and control, which requires high-speed data access.
By application, the industry is segmented into BFSI (banking, financial services, and insurance), healthcare, manufacturing, IT and telecommunications, retail, and others. In 2025, BFSI accounted for the largest share of revenue at 23.6%. Geographically, North America leads with $5.76 billion, or 38%. The study also names the major global industry players.
In 2026, the global in-memory computing market is expected to reach $16.72 billion. Fortune Business Insights analysts forecast a compound annual growth rate (CAGR) of 11.8% going forward; thus, by 2034, spending may increase to $40.8 billion.[1]
See also
- Business Intelligence, BI (Global Market)
- Global BI Market Trends
- Business Intelligence (Russian market)
- CPM (Global Market)
- Big Data (Global Market)
- Big Data in Russia
- Big Data
- Self-Service BI
- Data visualization
- Predictive analytics
- Cloud/SaaS BI


