Solve the performance challenges of digital transformation applications
Even in times of economic uncertainty, some industries and businesses can grow rapidly. Today, for example, food delivery, e-commerce, remote access, and collaboration services are experiencing surging demand as consumers and businesses alike dramatically change their behaviors, and many of these changes may become permanent.
Businesses in these industries must quickly reach higher speed and scale to handle a significantly higher volume of website visitors, sales transactions, delivery requests, and video streams without users experiencing service degradation. It is a challenge that many more companies will face as their industries recover and grow.
To increase the speed and scalability of applications, most organizations will need to modify their infrastructure or applications. The strategy they use will determine the performance gains, time to value, and return on investment.
In my last post, I talked about a Digital Integration Hub (DIH) as a strategy to power real-time business processes. A DIH architecture is particularly beneficial for business processes involving multiple applications and/or multiple data sources. It creates an API service layer that aggregates data from one or more data stores, caches that data, uses massively parallel processing (MPP) to process the data in real time, makes the data available to one or more business applications, and synchronizes any changes the applications make back to the underlying data stores that were modified.
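To make the DIH flow concrete, here is a minimal Python sketch of that API service layer: aggregate from several stores, cache the aggregate, and write changes through to the backing store. The store names (`orders`, `crm`), field names, and class shape are hypothetical illustrations, not any vendor's API, and plain dictionaries stand in for real data stores and the MPP layer.

```python
class DigitalIntegrationHub:
    """Toy API layer that aggregates, caches, and syncs data across stores."""

    def __init__(self, data_stores):
        self.data_stores = data_stores  # name -> backing store (dict stands in for a DB)
        self.cache = {}                 # in-memory cache of aggregated views

    def get_customer_view(self, customer_id):
        """Serve the aggregated view from cache; build it from all stores on a miss."""
        if customer_id in self.cache:
            return self.cache[customer_id]
        view = {name: dict(store.get(customer_id, {}))
                for name, store in self.data_stores.items()}
        self.cache[customer_id] = view
        return view

    def update(self, store_name, customer_id, changes):
        """Apply a change to the cached view and sync it to the underlying store."""
        view = self.get_customer_view(customer_id)
        view[store_name].update(changes)
        # Write-through: propagate the change to the data store that owns it.
        self.data_stores[store_name].setdefault(customer_id, {}).update(changes)
```

A real DIH would add distributed caching, parallel query execution, and asynchronous write-behind, but the read-aggregate-cache-sync loop above is the core of the pattern.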
However, accelerating and evolving a single existing application to meet growing demand is a somewhat different challenge and requires thinking about different issues. Let’s take a look at the pros and cons of four different approaches to speeding up and scaling individual applications.
Redesign from scratch to deliver the performance and scalability you need
Advantages: This is the ideal approach if time and money are not an issue. You can use any new hardware and software strategy to achieve the performance and scalability you want.
Disadvantages: Redesigning an application from scratch can take years, cost a great deal, and impact several other business applications. During this time, your business must continue running on the existing application, so you risk losing customers to frustration and missed opportunities.
Perform infrastructure upgrades, such as adding RAM, processors, and servers
Advantages: This is a common stopgap approach. It can be cheaper and faster than a complete overhaul, and it can be a good solution for systems that do not require large performance gains.
Disadvantages: If you have a rapidly growing system, this approach can still consume significant time and money without ever delivering the long-term benefits of a complete overhaul. Additionally, this strategy only "widens the pipes" of the existing infrastructure, meaning it can support increased scale but not increased speed. For example, if the time an on-disk database takes to read from and write to disk is a limiting factor in application speed, adding more RAM, processors, and servers will not remove that limitation. Finally, scaling an application across multiple servers introduces considerably more complexity, as the application may need to be rewritten to handle database partitioning across a cluster of database servers.
Migrate to an in-memory database
Advantages: An in-memory database can dramatically increase the speed of an application. Since all data resides in RAM, an in-memory database eliminates all processing delays caused by disk access. Performance gains can also be sustained on a large scale.
Disadvantages: Migrating an existing application's database requires a huge amount of work and testing. Additionally, the application may need to be rewritten to work with the in-memory database. If other applications access the original application's database, this strategy will cause problems for them, since they too will need to be rewritten to use the in-memory database.
Deploy an in-memory data grid
Advantages: An in-memory data grid offers the same speed and scalability benefits as an in-memory database. However, it can be much faster to implement and much more cost-effective because you can insert the data grid between the existing application and database layers without significant changes to either. In addition, the data grid does not impact other applications that access the underlying database. Open source in-memory data grids are available, making it easy and cost-effective for your business to get started, and free online resources are available for help.
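The "between existing layers without significant changes" point can be sketched as a read-through cache that wraps the application's existing database read function. This is a generic illustration in Python, assuming a hypothetical `load_from_db` function; real data grids add distribution, eviction policies, and write-through/write-behind on top of this idea.

```python
import time

class ReadThroughCache:
    """Toy read-through cache placed between an app and its database layer."""

    def __init__(self, load_from_db, ttl_seconds=300):
        self.load_from_db = load_from_db  # the application's existing DB read path
        self.ttl = ttl_seconds            # how long a cached value stays fresh
        self._entries = {}                # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]               # cache hit: no database round trip
        value = self.load_from_db(key)    # cache miss: fall through to the DB
        self._entries[key] = (value, time.time() + self.ttl)
        return value
```

Because the cache exposes the same lookup the application already performs, the application code and the database schema stay untouched; only the call site changes.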
Disadvantages: Using an in-memory data grid involves moving to a distributed computing paradigm, which may require some changes in approach if distributed computing is new to your business. For example, clustered computing might require adding an "affinity key" to your data types so that all data sharing a key can be cached on the same server, minimizing network data movement and optimizing performance. While this doesn't involve an application or database rewrite, it may require a change in mindset. Additionally, the data grid is deployed on a server cluster and will require additional hardware investment as you scale your application to maintain the desired level of performance.
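The affinity-key idea can be shown in a few lines of Python: route each record to a node based only on its affinity key, so related records are colocated. The node count and the order/customer example are hypothetical, and real data grids use more sophisticated partition maps and rebalancing than this simple hash.

```python
import hashlib

NUM_NODES = 4  # hypothetical cluster size

def node_for(affinity_key):
    """Deterministically map an affinity key to one node in the cluster."""
    digest = hashlib.sha256(str(affinity_key).encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

# Orders keyed by (order_id, customer_id) are routed by customer_id alone,
# so every order for a given customer lands on the same node and a per-customer
# computation never has to pull data across the network.
def node_for_order(order_id, customer_id):
    return node_for(customer_id)
```

The mindset change the text describes is exactly this: choosing which field acts as the affinity key is a data-modeling decision, not an application rewrite.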
To choose the optimal approach to application performance and scalability in the face of rapid growth, companies need to assess the performance they need, the window of opportunity for reaching that level, the short- and long-term costs of the chosen approach, and how long the modified architecture will meet its goals before another major investment is required. Companies that do not take all of these factors into account risk wasting precious time and resources, frustrating their customers, and falling behind the competition.