Make 2025 Your Most Efficient Year Yet
Whether you're upgrading cloud services or scaling operations, our specialists can guide you in prioritizing and planning your IT infrastructure for maximum impact in 2025.
Skilful capacity management allows you to make the most of existing IT resources and prepare for the challenges of business growth. "As a result, you no longer need to pay for surplus you don't use, while making sure you have enough resources so that your potential is not blocked", says Jacek Rafalak of Comarch, a specialized IT services company. He goes on to discuss the best capacity management strategies for mainframes, the most advanced IT devices available today.
Capacity Management (CAP) is a process designed to ensure sufficient IT resources for current and future business needs in a cost-efficient manner. It applies to all the resources necessary to provide a given service, covering not only physical equipment, IT networks, hardware and software, but also specific business needs and human resources. For instance, a shortage of skilled technical staff may cause capacity problems.
An important objective of capacity management is to meet the current and future business needs of the organization. "On the one hand, these processes allow us to make the best use of existing IT resources, and on the other, to plan for the extra reserves that might be necessary in the near future and launch them in the most efficient and economical configuration. They also help us follow trends and identify and solve issues related to capacity incidents", explains Jacek Rafalak, senior system engineer at Comarch, who is an IBM Champion for Analytics.
CAP implementation is particularly vital for highly advanced environments and mainframe devices, used primarily by large organizations for the most critical applications (e.g. finance). "The technology relies on cooperation between many devices; when you link 32 mainframe computers together, you get a tool able to perform 569 million operations per second. Mainframe stands for speed and reliability", says Jacek Rafalak.
Mainframe solutions have been present on the market for more than 50 years. "The technology is designed to support key installations that need to have an availability of 99.99%. Throughout the world, these solutions are employed by financial institutions, insurance companies and large data processing centers", he adds.
IBM mainframe computers protect data through advanced safeguards; they are also designed to support blockchain applications and elements of machine learning. A single server is able to process transactions at 2.8 times the speed of alternative solutions. IBM mainframes combine top security and reliability with millisecond response times, hybrid cloud integration and scalable performance.
In order to tap their huge potential, mainframe solutions should be optimized on a regular basis. Their capacity tends to change over time, affected by hardware or software upgrades, changes in business data or the number of users, disk data growth and many other factors. It is also important that loads are evenly distributed across physical equipment, processors and working hours. The workload on every machine or processor should be similar at all times throughout the day to avoid sudden spikes known as peaks.
"We cannot allow a situation in which a machine has a load of 50% at one time and 100% at another, or where one processor has a maximum workload while others remain idle, because that reduces overall system performance. This may be influenced by the number of processors, the mainframe model, the software and the system itself", says Jacek Rafalak.
Workload control in large sites is very difficult, and may even prove impossible without the data delivered by well-organized capacity management (CAP) services. This is why the constant monitoring of capacity-related areas is best outsourced to a company with adequate resources and experience. Since mainframe devices are very expensive and not in common use, it may be hard to find experts who have actually encountered them in their career. This is where Comarch steps in: it stands out from the competition by offering not only the technology but also the top-class engineers to deliver the service.
IT department directors in large companies must continually monitor capacity management to check whether it is well planned or beset by capacity incidents (for example, too few disks to perform a given operation). Memory and processor usage also need to be watched.
"For example, in a bank that sends out regular balance reports to its clients, capacity management allows you to predict whether such deliveries will still be possible if the number of recipients goes up. Or how the system load will be affected, if you change the number of past records stored by a mobile app and decide you want to see 10 instead of the default five. CAP allows IT departments to determine whether they have enough computing power for the services they provide. For business, this is crucial information; at stake are decisions such as whether to expand or restrict certain mobile app functions. We can tell companies how the change will affect their IT, network, storage, CPU load, memory, etc. We can calculate the cost of necessary IT resources or determine how much they will gain, if they are after savings", says Jacek Rafalak.
Capacity management is also a way to boost service quality and reduce IT costs at the same time. Resource use can be optimized, for instance, through improvements in equipment performance and more even load distribution, as well as many other measures that allow us to reduce delays and ensure consistent reporting. These elements influence cost optimization and at the same time improve service quality from the perspective of the end user.
"Capacity management tools allow current resource use to be tracked and capacity incidents to be predicted. If a bank expands its customer base by 10 000 every month, CAP is able to determine at which point the existing system architecture is likely to experience performance incidents. This information will be an important factor in decisions about infrastructure expansion", says Jacek Rafalak.
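The projection described here can be sketched as a simple trend calculation. The numbers below (current customer base, per-customer CPU cost, 90% threshold) are illustrative assumptions, not measurements.

```python
# A minimal sketch of projecting customer growth against a capacity
# threshold. All numbers are invented for illustration.

customers = 1_000_000            # assumed current customer base
growth_per_month = 10_000        # monthly growth, from the example above
cpu_pct_per_customer = 0.00006   # assumed average CPU % consumed per customer
threshold_pct = 90               # assumed capacity alert threshold

month = 0
while customers * cpu_pct_per_customer < threshold_pct:
    month += 1
    customers += growth_per_month

print(f"Projected to cross {threshold_pct}% CPU in month {month} "
      f"(~{customers:,} customers)")
```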
Capacity management is also a useful tool for companies planning to purchase new mainframe solutions or replace existing ones with later versions, as may be the case when the existing architecture begins to experience performance issues. In the absence of a capacity management process, it is difficult to choose the right model (there are several dozen available versions of the latest z15), or to decide on the kind and number of processors required to ensure sufficient IT resources for the smooth running of business services. With a CAP process in place, Comarch experts are able to find the answer on the basis of service characteristics in the existing architecture. They can readily determine for how many years selected z15 models can be expected to ensure smooth business services at the current system parameters.
"The current CAP service includes, for example, regular reports that list all the capacity threats that occurred in a given month. We analyze various areas where such capacity incidents can happen and suggest how to solve them. We are able to estimate, for instance, that the client will run out of disk space in a month and needs to purchase extra space or move a certain pool from one environment to another. We can also identify applications likely to experience capacity issues as a result of increasing the customer base", the Comarch expert explains.
If a bank installs new software and attracts 10 000 new customers, the equipment needs to be scaled up accordingly. "To do so, we can build capacity management databases, track data, send reports and turn on additional data tracking for capacity scaling and management. Every data center has some reporting procedures, for example, to track memory use or disk space. But capacity management does more, because it uses analysis to determine the impact of modifications on existing infrastructure", explains Jacek Rafalak. He adds that information of this kind is particularly vital for clients who have to deal with a very high number of changes.
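As a sketch of what such a capacity management database might minimally look like, the snippet below stores utilization samples and produces a report-style summary. The schema and metric names are assumptions for illustration, not Comarch's actual CDB.

```python
# A minimal capacity database (CDB) sketch: store periodic utilization
# samples and query them for reporting. Schema and names are assumed.

import sqlite3

conn = sqlite3.connect(":memory:")  # a real CDB would be a persistent store
conn.execute("""
    CREATE TABLE samples (
        taken_at  TEXT,   -- ISO timestamp of the measurement
        resource  TEXT,   -- e.g. 'cpu', 'memory', 'disk_pool_1'
        used_pct  REAL    -- utilization at that moment, in percent
    )
""")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [
        ("2025-01-01T00:00", "cpu", 61.0),
        ("2025-01-02T00:00", "cpu", 63.5),
        ("2025-01-01T00:00", "disk_pool_1", 87.0),
        ("2025-01-02T00:00", "disk_pool_1", 88.2),
    ],
)

# Monthly-report style query: average and peak utilization per resource.
for resource, avg_pct, max_pct in conn.execute(
    "SELECT resource, AVG(used_pct), MAX(used_pct) "
    "FROM samples GROUP BY resource"
):
    print(f"{resource}: avg {avg_pct:.1f}%, max {max_pct:.1f}%")
```

Ordinary data center monitoring stops at queries like this one; the analysis layer on top, projecting the impact of planned changes, is what distinguishes capacity management from plain reporting.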
The first step in the implementation of capacity management services is to define their scope. "For instance, you need to decide whether you want to measure only the production environment or also the post-production environment. You also need to select the monitoring systems, build the capacity management database (CDB) and identify the whole infrastructure. A thorough analysis may take as long as several weeks, because, to generate information on a given trend, you need to collect enough data. This is why building a capacity management service from scratch may take up to three months or so", explains Jacek Rafalak.
Marcin Trzaskowski, Data Center Manager at Comarch, adds that the actual time will depend on the complexity of the client's infrastructure. "This requires preliminary analysis, followed by goal setting; you must decide on the scope of capacity management, select the areas it should cover and determine which databases and resources are in use. It is only with this data in hand that we can suggest specific solutions", the expert explains.
To support capacity management services, Comarch relies on the ITIL methodology (Information Technology Infrastructure Library), a code of conduct for IT departments, which includes guidelines on effective and efficient IT service provision.
ITIL is a collection of the best management methods, including tips, suggestions and practices, to ensure the most effective and efficient service provision. At Comarch, the following processes are covered by these measures: change management, incident management, problem management, service level management and configuration management.
"ITIL describes the processes that ensure correct IT infrastructure management. These measures allow us to provide users with IT services at the expected level of performance and system availability. To employ the best practices as defined by ITIL, you need to implement the recommended process map, along with a list of concepts and definitions, and use tested pathways and role definitions for individual processes. The objective of the methodology is to minimize IT service costs. And, because we operate on highly advanced computers, adequate cost rationalization may save us a lot of money", says Jacek Rafalak.
"We rely on ITIL in projects that require certification, which is agreed on within the framework of an SLA. But not everyone wants to use the methodology and nobody can be forced to do so. In each individual case, we adopt a customized approach to our clients and their needs; this is what makes us stand out in the market", adds Marcin Trzaskowski.