Software Failures and Lack of Runtime Visibility Prevent Engineering Teams from Trusting Coding Assistants and AI SREs
NEW YORK, April 14, 2026 — Lightrun, the leader in software reliability, today released its State of AI-Powered Engineering Report 2026, based on an independent poll of 200 SRE and DevOps leaders (Directors, VPs, and C-levels at enterprises in the US, UK, and EU). The report reveals that, until AI-powered engineering tools have live visibility into how code behaves at runtime, they cannot be trusted to autonomously ensure reliable systems.
Lightrun’s report reveals that deploying AI-generated code still demands substantial manual work: 43% of AI-generated code requires manual debugging in production, even after passing QA or staging tests. Furthermore, verifying a single AI-suggested code fix in production requires an average of three manual redeploy cycles.
With the volume of AI-generated code rapidly increasing, closing this verification loop is essential. In response, engineering teams are turning to AI SRE (site reliability engineering) tools. These agents reason over existing observability data, codebase changes, and infrastructure signals to propose incident causes and recommend fixes. However, the report found that 77% of engineering leaders lack confidence in current observability stacks to support automated root cause analysis and remediation.
Lightrun’s report, conducted with independent research firm Global Surveyz, captures the perspectives of senior engineering leaders on the AI-powered SDLC. It explores several timely issues, including:
● AI-Generated Code Reliability Concerns: 88% of companies require 2-3 manual redeploy cycles just to confirm an AI-generated fix actually works in production.
● Wasted Developer Time: Developers spend an average of 38% of their week (two days) on debugging, verification, and troubleshooting.
● The Runtime Visibility Gap: 60% of SRE and DevOps leaders identify a lack of runtime visibility as the primary bottleneck in resolving incidents. This is underscored by the fact that, in 44% of cases where AI SRE or APM tool investigations failed, the necessary execution-level data had not been captured.
● AI SREs, the Trust Wall: 97% of engineering leaders say AI SREs operate without significant visibility into what’s actually happening in production. Meanwhile, 54% of high-severity incidents are still resolved using tribal knowledge rather than diagnostic evidence from AI SREs or APMs.
This represents the core challenge of AI-accelerated engineering. Today’s AI agents operate probabilistically, reasoning their way toward conclusions. To ground that reasoning in reality, the report makes clear, they need real-time visibility into what’s actually happening: variable states, memory usage, and how requests move through a system.
“Engineering organizations need runtime visibility to embrace the possibilities offered by AI-accelerated engineering. Without this grounding, we aren’t slowed by writing code anymore, but by our inability to trust it,” said Ilan Peleg, CEO of Lightrun. “When almost half of AI-generated changes still need debugging in production, we need to fundamentally rethink how we expect our AI agents to solve complex challenges.”
The report is available online at http://lightrun.com/ebooks/