In this article, I highlight a situation familiar to many IT managers from their daily work. One of the biggest challenges is still to ensure that all applications run smoothly, at all times. The goal is to be able to say: "My users never experience problems!" And that applies to internal users, primarily the company's own employees, as well as to external users such as customers.
What sounds simple and self-evident is in fact one of the great challenges of our time: in the face of increasingly complex application landscapes, ever more dynamic structures, and trends such as virtualisation, cloud and containers, everything must remain under control. Making the "user experience", or "customer experience", a success is today more than ever a core task for IT managers and top management. After all, nothing is more annoying than losing a customer because the ordering process in the online shop is too slow or the website times out, or having demotivated employees because "the computer is hanging again".
These are the building blocks for assuring continuous service quality:
An early and comprehensive check is advisable especially after changes such as updates or a server move, which are usually carried out while users are inactive. For this purpose, synthetic robots can take over monitoring around the clock and immediately raise the alarm when a result is not as expected.
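The article names no specific tooling, so the following Python sketch is purely illustrative: it shows the core of such a synthetic check, where a probe (here a stand-in lambda; in practice it would drive a real transaction, such as an HTTP request against the shop's order page) is run and an alarm is raised whenever the result differs from what is expected. All names (`run_synthetic_check`, `probe`, `alert`) are hypothetical.

```python
import time

def run_synthetic_check(probe, expected, alert):
    """Run one synthetic transaction and alert on an unexpected result."""
    start = time.monotonic()
    try:
        result = probe()
    except Exception as exc:  # a crashed probe is also an alarm condition
        alert(f"probe failed: {exc}")
        return None
    elapsed = time.monotonic() - start
    if result != expected:
        alert(f"unexpected result: {result!r} (expected {expected!r})")
    return elapsed

# Two simulated runs: the first succeeds, the second returns a gateway timeout.
alerts = []
run_synthetic_check(lambda: "HTTP 200", "HTTP 200", alerts.append)
run_synthetic_check(lambda: "HTTP 504", "HTTP 200", alerts.append)
print(alerts)  # only the second check raised an alarm
```

In a real deployment, a scheduler would run such checks around the clock and route the alarms to an on-call channel rather than a list.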
The performance of an application should be measured on the end-user devices themselves, because only there can you determine how quickly the result of a user action actually appears on the screen.
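A minimal sketch of this idea, with hypothetical names: a real end-user monitoring agent would hook into the click and render pipeline of the device, but the principle is the same, timing from the user action to the visible result. Here a `time.sleep` stands in for the rendering work.

```python
import time

def measure_user_action(action):
    """Time how long a user action takes to produce its on-screen result."""
    start = time.perf_counter()
    action()  # in a real agent: click handled and screen repainted
    return time.perf_counter() - start

# Stand-in for an action whose on-screen result takes roughly 50 ms.
elapsed = measure_user_action(lambda: time.sleep(0.05))
print(f"{elapsed * 1000:.0f} ms")
```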
The measurement results should be evaluated against independent reference values obtained under known conditions. To be able to intervene and take corrective action, clearly presented visualisations (tables, diagrams, graphs, etc.) are needed.
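As an illustration of such a baseline comparison (all metric names, values and the 20 % tolerance are invented for the example), the following sketch flags every metric that deviates from its reference value by more than a tolerance and prints a simple table:

```python
def compare_to_baseline(measurements, baseline, tolerance=0.20):
    """Flag metrics that exceed their baseline by more than `tolerance`."""
    report = []
    for metric, value in measurements.items():
        ref = baseline[metric]
        deviation = (value - ref) / ref
        report.append((metric, value, ref, deviation, deviation > tolerance))
    return report

baseline = {"login_ms": 800, "search_ms": 1200}      # reference values, known-good conditions
measurements = {"login_ms": 850, "search_ms": 2100}  # current measurements

for metric, value, ref, dev, flagged in compare_to_baseline(measurements, baseline):
    status = "ALERT" if flagged else "ok"
    print(f"{metric:10} {value:6} (baseline {ref}, {dev:+.0%})  {status}")
```

The same report structure could feed a chart or dashboard instead of a printed table.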
Those who know how applications and systems will behave in the future, if current trends continue, can take measures early and in a planned manner. By automatically triggering an alarm when predefined thresholds are exceeded, or when systems behave differently than predicted, those responsible can intervene in good time.
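The forecasting described above can be sketched with a simple linear trend: fit a regression line to recent measurements and predict how many periods remain until a threshold is reached. This is an assumption-laden toy model (real capacity forecasting uses richer methods); the function name and the disk-usage figures are invented.

```python
def forecast_breach(history, threshold):
    """Fit a linear trend; return periods until `threshold` is reached, or None."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # no upward trend, no predicted breach
    return max((threshold - history[-1]) / slope, 0)

# Daily disk usage in percent: a steady upward trend of ~2 points per day.
usage = [62, 64, 66, 68, 70]
print(forecast_breach(usage, threshold=90))  # → 10.0 (days until 90 % is reached)
```

Triggering an alarm when the forecast drops below, say, 14 days gives the planned lead time the paragraph describes.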
Another recipe for success is to pinpoint the culprit before the end user even notices an emerging problem. If a company has leased the component in question from a network provider or a cloud provider, it can hold that service partner accountable.
Whoever is in charge of the carrier systems of an application (servers and components such as SANs or networks) needs not only information about the symptom but also the ability to identify the underlying cause. If, for example, an application's response does not meet expectations, a comparison of actual and target values can reveal correlations that point to the true cause of the problem.
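One simple way to surface such correlations, sketched here with invented data and a hand-rolled Pearson coefficient, is to correlate the off-target response times with metrics from each carrier-system component and rank the candidates. In the example, SAN latency rises in step with response time while CPU load stays flat, so the SAN emerges as the prime suspect.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Response times that miss their target, alongside candidate carrier metrics.
response_ms = [210, 230, 400, 650, 900]
candidates = {
    "cpu_load":    [0.31, 0.35, 0.30, 0.33, 0.32],  # flat: unlikely cause
    "san_latency": [2.0, 2.5, 6.0, 11.0, 16.0],     # rises with response time
}

suspect = max(candidates, key=lambda m: abs(pearson(candidates[m], response_ms)))
print(suspect)  # → san_latency
```

Correlation alone does not prove causation, of course; it narrows the search so that the responsible team can verify the suspect component directly.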
At the same time, IT's work is made harder by the standing requirement to "reduce costs". You can kill two birds with one stone, so to speak, by simultaneously optimising processes and cutting costs through a higher degree of digitalisation in both areas: IT and the core business. This is the benefit our software solutions are designed to provide.
What experiences have you had in your company? What works well, and what needs to be improved? Share your experiences with us, or let us look for a solution together.