Published on 18/05/2022 | Written by Heather Wright
Delivering digital in a complex multi-cloud world…
Monitoring has long been a requirement in the DevOps world. But the complexity of the new multicloud reality means it’s an increasing challenge.
Enter observability. The technology has been around for a while – it aims to discern system health by analysing a system's inputs and outputs, or in the words of Era Software CEO Todd Persen, 'is about ensuring that you can deliver reliable infrastructure and digital services in the face of increasing complexity of networks, systems, and applications' – but a new report claims it's 'about to have its moment'.
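As a rough illustration of what 'analysing inputs and outputs' means in practice, the sketch below – ours, not drawn from either report – shows a service emitting structured events carrying a trace id, latency and status per request: the kind of output an observability platform ingests to infer system health. The service and operation names are hypothetical.

```python
# A minimal sketch (not from the report) of the structured "outputs" observability
# tooling analyses: each request emits a machine-readable event carrying a trace id,
# latency and status, so system health can be inferred from outputs alone.
# The service/operation names below are illustrative.
import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("observability-demo")

def handle_request(service: str, operation: str) -> None:
    trace_id = uuid.uuid4().hex          # correlates this event with related ones
    start = time.perf_counter()
    status = "ok"
    try:
        time.sleep(0.05)                 # stand-in for real work
    except Exception:
        status = "error"
        raise
    finally:
        log.info(json.dumps({
            "timestamp": time.time(),
            "service": service,
            "operation": operation,
            "trace_id": trace_id,
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            "status": status,
        }))

handle_request("checkout", "create_order")
```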
“Observability leaders are more competitive, more resilient and more efficient.”
The State of Observability 2022 report, from Enterprise Strategy Group in collaboration with Splunk – which, granted, is certainly not an impartial player – claims a wealth of benefits are being seen by companies which have already embraced the technology.
It says leaders – defined as those who have had an observability practice for 24 months or more – are able to cut downtime costs by 90 percent, and have launched 60 percent more products or revenue streams from AppDev teams in the last year compared to beginners. ‘Leaders’ among the 1,250 ‘observability practitioners, managers and experts’ surveyed globally, are also 2.1 times more likely to say they can detect problems in internally developed applications in minutes, and have a 69 percent better mean time to resolution for unplanned downtime or performance degradation.
Local figures are a little less clear. While Splunk does reveal some details for the Australian and New Zealand market – notably a significantly higher likelihood of having consolidated application performance management tools and teams with their observability practice, at 38 percent versus 21 percent globally – there are no specifics provided on what A/NZ leaders have achieved using observability.
We are, however, apparently much less likely to use open-source tools for observability, at 42 percent versus 60 percent globally, despite being among those with the highest rates of using a single cloud. And just 44 percent of A/NZ organisations say application development teams have a say in the purchase of observability tools, versus 60 percent across other countries.
Spiros Xanthos, Splunk general manager of observability, says the most sophisticated observability practitioners have given themselves an edge in digital transformation, while ‘massively’ cutting costs associated with downtime and boosting their ability to out-innovate the competition.
“The observability leaders are more competitive, more resilient and more efficient as a result,” Xanthos says.
It’s the increased complexity of running hybrid architectures and multicloud operations that is driving observability. Seventy percent of survey respondents are using multiple cloud services, and 75 percent say they have many cloud-native applications running in multiple environments.
And while just 34 percent of internally developed applications are cloud-native (leaders are more likely to report commonly running cloud-native applications), 67 percent of organisations expect that number to increase over the next 12 months, with more than three-quarters expanding their observability tools and capabilities over the same timeframe.
For those organisations looking to invest in observability, a lack of staff is a key issue.
But companies across all maturity levels also reported struggling to correlate data from multiple sources in a timely fashion, and with collecting volumes of data that exceed human capacity to digest.
That’s a problem also highlighted in The 2022 State of Observability and Log Management report from Era Software, which says observability data volumes are exploding, fuelled by data from infrastructure such as storage, network, CPU and VMs, as well as security, cloud services, containerised applications, microservices and Kubernetes, among others. That report notes that log volumes could grow by anywhere from 50 percent to five-fold this year, leaving companies with exabytes of data to manage in five years.
The adoption of cloud, modern applications, Kubernetes and edge computing is behind this massive growth in observability data volumes. And for some organisations, log data volumes are already approaching the exabyte range.
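To put those growth rates in context, a back-of-the-envelope calculation – using a hypothetical starting volume, not a figure from either report, and assuming the growth rate persists each year – shows how quickly the top-end 'quintupling' compounds into the exabyte range the report warns about.

```python
# Back-of-the-envelope compounding of log volume growth.
# The 1 PB/year starting point is an assumption for illustration; the 50 percent
# and 5x growth rates come from the Era Software report, assumed here to repeat annually.
START_PB = 1.0          # assumed current annual log volume, in petabytes
YEARS = 5

for label, growth in [("50 percent per year", 1.5), ("quintupling per year", 5.0)]:
    volume_pb = START_PB * growth ** YEARS
    print(f"{label}: {volume_pb:,.1f} PB after {YEARS} years "
          f"({volume_pb / 1000:.2f} EB)")

# Quintupling: 1 PB * 5^5 = 3,125 PB, roughly 3.1 EB -- consistent with the
# report's claim of exabytes of data to manage within five years.
```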
“While it appears that observability is a straightforward goal, many companies are realising that existing monitoring tools are unable to keep up with the massive data volumes created by modern digital systems,” Persen says.
“CIOs and business leaders need to rethink how they can solve today’s high-volume data and infrastructure management problems with strategy and architecture around observability.”
As to how leaders are addressing the situation, Xanthos says those with more mature practices tend to use more tools but fewer vendors, consolidating on the vendors and tooling that best fit their needs to gain lower training costs, better interoperability, and simpler purchasing and onboarding processes.
“Other widely adopted practices among observability leaders include using CI/CD to automate the delivery of new code (96 percent) and AIOps to facilitate event correlation and analysis (71 percent),” Xanthos says.
“Organisations have turned to AIOps in particular to respond to incidents with greater intelligence and automation and detect anomalies faster. And they’re getting results: Faster mean time to detect and repair, faster root cause diagnosis, and an improved ability to gather data to gain a complete picture of the infrastructure.
“Topping the main challenges and concerns associated with observability is the astronomical volume of data. That points to the appeal – the necessity – of AI/ML solutions. AI/ML can address the skills gap as well, thus alleviating project delays, burnout, resignations and more.”
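For readers wondering what the anomaly-detection side of AIOps looks like in miniature, the sketch below – our own illustration, not Splunk's tooling – flags metric values that drift more than a few standard deviations from a rolling baseline, the simplest version of the detection work those platforms automate across millions of series.

```python
# Minimal rolling z-score anomaly detector over a latency metric.
# A toy stand-in for what AIOps platforms do at scale; the sample data and
# thresholds are illustrative, not drawn from the report.
from collections import deque
from statistics import mean, stdev

WINDOW = 20       # number of recent samples kept for the baseline
THRESHOLD = 3.0   # flag points more than 3 standard deviations from the mean

def detect_anomalies(samples):
    window = deque(maxlen=WINDOW)
    anomalies = []
    for i, value in enumerate(samples):
        if len(window) >= 5:                      # need a minimal baseline first
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
                anomalies.append((i, value))
        window.append(value)
    return anomalies

# 200 ms baseline latency with one obvious spike
latencies = [200, 205, 198, 210, 202, 199, 204, 201, 850, 203, 197, 206]
print(detect_anomalies(latencies))   # -> [(8, 850)]
```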