Does AI really make you more productive?
In October 2024, the DORA research programme published the 2024 Accelerate State of DevOps Report, which, for the first time, included a section on how AI adoption is affecting software development at an individual, team, and product level. With the recent emergence of the new “vibe coding” meta, a term introduced by Andrej Karpathy in February 2025, where AI is enabling people to ship apps from idea to production in record time, I wanted to take the time to reflect on the report and discuss whether AI is really making us more productive as software developers.


Most developers are relying on AI
The report found that almost 76% of participants are “relying on” some form of AI tooling in their daily responsibilities as a software developer. This can include writing, optimising, documenting and debugging code, explaining unfamiliar code, writing tests, data analysis, and summarising information. “[D]evelopers who trust gen AI use it more”, but almost 40% of participants “reported having little or no trust in AI”.

I do not trust AI. I have over 11 years of professional industry experience, and have been making websites in some form or another for almost 30 years. With all this considered, AI code generation has only ever made me feel less productive. I prefer to understand every single line of code I ship in my applications: it makes everything easier to debug, fix, and extend. I have found that AI-generated code is often sloppy, unnecessarily complex, and, a lot of the time, just plain wrong. For me, AI code generation is akin to mindlessly copy-pasting code snippets from Stack Overflow, and we all know how that goes. It usually takes me longer to understand AI-generated code than to write my own.


You may be sacrificing software delivery metrics by relying on AI
DORA’s software delivery metrics provide an effective way of measuring outcomes of software delivery processes. They are split into two categories: throughput and stability. The report found that “AI adoption is negatively impacting software delivery performance”, and the “negative impact on delivery stability is larger”.
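To make these metrics concrete, here is a minimal sketch of how the two stability measures could be computed from a deployment log. The log format, file path, and figures are invented for illustration; DORA does not prescribe any particular tooling.

```shell
# Hypothetical deployment log: one row per production deploy,
# columns: deploy_id, outcome (ok|failed), minutes_to_recover (0 if ok)
cat > /tmp/deploys.csv <<'EOF'
d1,ok,0
d2,failed,45
d3,ok,0
d4,ok,0
d5,failed,15
EOF

# Change fail percentage: failed deploys as a share of all deploys
awk -F, '$2=="failed"{f++} END{printf "change fail: %.0f%%\n", 100*f/NR}' /tmp/deploys.csv

# Failed deployment recovery time: mean minutes to recover, failed deploys only
awk -F, '$2=="failed"{s+=$3; f++} END{printf "mean recovery: %.0f min\n", s/f}' /tmp/deploys.csv
```

With this sample data, 2 of 5 deploys failed (a 40% change fail percentage) and recovery averaged 30 minutes; trending these two numbers over time is what the report means by measuring stability.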


Sacrificing throughput with AI
Throughput measures the velocity of changes in software that are being made, that is, how quickly and how frequently software teams can ship changes to production. Throughput is all about how efficient and responsive a team can be.

The report hypothesises that “the fundamental paradigm shift that AI has produced in terms of respondent productivity and code generation speed may have caused the field to forget one of DORA’s most basic principles — the importance of small batch sizes.” Since AI code generation will often spit out huge batches of code in one fell swoop, pull requests are getting larger. Larger pull requests and changes are much more difficult and time-consuming to review thoroughly for edge cases and potential issues.

Speaking from experience, code reviewers are more likely to skim over large changes and miss important details; walls of impenetrable code are much harder for a real human brain to process and interpret than smaller changes, amidst an already busy work-day. Whilst combing through that +11456 -7892 code review, you’re probably thinking about all those lines of code you need to write yourself in order to stay “productive”.

Of course, there are tools that provide AI-assisted code reviews. But if we, as developers, do not trust AI to produce maintainable and reviewable code, why should we trust AI to review it? If you find yourself constantly reviewing large pull requests with a lot of AI-generated code, you’re probably sacrificing throughput. You might not be shipping as fast.
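One practical guard against oversized batches is simply measuring a change before opening the pull request. This is a rough sketch over `git diff --numstat` output (a sample file stands in for a live `git diff --numstat main...HEAD`, and the 400-line threshold is an arbitrary illustration, not a DORA recommendation):

```shell
# git diff --numstat prints "added removed path" per file; summing the first
# two columns gives the batch size of a change. Sample output stands in here.
cat > /tmp/numstat.txt <<'EOF'
12 4 src/api.ts
230 180 src/generated.ts
8 2 README.md
EOF

awk '
  {added+=$1; removed+=$2}
  END {
    print "batch size: +" added " -" removed
    if (added + removed > 400) print "WARN: consider splitting this change"
  }' /tmp/numstat.txt
```

A pre-push hook built on something like this won't stop AI tooling from generating large diffs, but it does force the "small batch sizes" conversation before a reviewer ever sees the wall of code.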


Sacrificing stability with AI
Stability measures the quality of software changes delivered, and the team’s ability to fix bugs. Stability is measured using a combination of change fail percentage, which is the percentage of deployments to production that require hot-fixes or rollbacks, and failed deployment recovery time. A lower change fail percentage means the team has a reliable delivery process, and a lower recovery time indicates more resilient and responsive systems.

The report suggests that it’s “possible that we’re gaining speed through an over-reliance on AI for assisting in the process or trusting code generated by AI a bit too much.” I would argue that speed here is the illusion of speed, and that the concept of “over-reliance” is key, especially in the context of the new vibe coding meta. Vibe coding is about being dependent on AI code generation. Describe the app or code you want, and let an LLM take care of it for you. Ask it for a few edits. Ship it.

The report states that a developer’s productivity “is likely to increase by approximately 2.1% when an individual’s AI adoption is increased by 25%”. Vibe coding, and any sort of AI adoption, is attractive because it feels fast; it feels more productive. Now, I’m not proposing that professional software developers are going all-in on vibe coding, but increased adoption of AI and reported productivity increases pose a risk to software stability. We think we’re going faster, but we may be shipping poorer-quality software and more broken changes, because we’re putting too much trust in AI-generated code.

Reports from across the industry suggest that vibe coders are shipping unstable applications, suffering outages and critical security vulnerabilities in their apps, losing months of work that didn’t use version control, and, anecdotally, struggling to finish vibe-coded apps at all. If professional software development teams do not stay vigilant with their use of AI, they run the risk of sacrificing stability.

Plus, if you’re moving fast but always having to pick up the pieces left by over-zealous AI code generation, are you really being more productive?

Troubleshooting Common Monitoring Challenges and Errors: Reducing Downtime and Avoiding Costly Mistakes

We've all been there. The 3 AM phone call. The Slack channel exploding with messages. Customers reporting outages before your monitoring does.

Monitoring should be your early warning system, but too often it's just another source of frustration. After 10+ years managing production systems, I've seen every monitoring failure imaginable – and found ways to fix them.

Let's dive into the monitoring problems that are probably costing you sleep, money, and sanity right now.


The Real Cost of Poor Monitoring
Every minute of downtime costs you:
- Revenue from lost transactions
- Engineering time spent firefighting
- Customer trust (the hardest to rebuild)

The cost of downtime can be staggering, combining lost revenue, engineering time, and damaged customer trust. Yet despite these high stakes, most teams still use monitoring setups that are incomplete, noisy, or too slow.
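As a rough illustration of how the first two of those costs stack up, here is a back-of-the-envelope calculation; every figure below is hypothetical:

```shell
# Hypothetical incident: 85 minutes down, $120 revenue/minute,
# 3 engineers pulled into firefighting at $90/hour
minutes=85; revenue_per_min=120; engineers=3; rate_per_hour=90

lost_revenue=$((minutes * revenue_per_min))
firefighting=$((minutes * engineers * rate_per_hour / 60))

echo "lost revenue: \$${lost_revenue}"
echo "engineering time: \$${firefighting}"
echo "total (before churn): \$$((lost_revenue + firefighting))"
```

Even this modest scenario lands north of ten thousand dollars, and it leaves out the customer-trust line entirely, which is exactly why it is the hardest cost to put a number on.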


The Monitoring Nightmares Costing You Sleep (and Money)



Missing Critical Issues
The worst feeling in our industry: learning about an outage from your customers instead of your tools.


Real-world case study:
Tuesday, 2:15 PM: SSL certificate expires silently
Tuesday, 2:15 PM: Payment API goes down
Tuesday, 2:15 PM: Monitoring shows "All Systems Green"
Tuesday, 3:40 PM: Customer support tickets flood in
Tuesday, 4:20 PM: Team finally discovers the issue

Damage: Hours of lost revenue and frantic firefighting that could have been prevented.


Why this happens:

Incomplete monitoring coverage
Relying on basic ping checks instead of functional tests
Manually tracking certificates and dependencies (often in spreadsheets!)
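A functional check for the silent SSL expiry in the case study above can be as small as the sketch below. It checks a throwaway self-signed certificate so it runs anywhere; against a live endpoint you would fetch the certificate with `openssl s_client -connect host:443` instead:

```shell
# Create a throwaway self-signed cert valid for 30 days, standing in for the
# certificate a real check would fetch from the endpoint itself
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/key.pem -out /tmp/cert.pem -subj "/CN=example.test" 2>/dev/null

# Alert if the cert expires within 14 days (1209600 seconds) -- catching the
# "SSL certificate expires silently" failure before customers do
if openssl x509 -in /tmp/cert.pem -checkend 1209600 >/dev/null; then
  echo "cert ok"
else
  echo "ALERT: cert expiring within 14 days"
fi
```

Run on a schedule against every certificate you own, a check like this turns the spreadsheet tracking problem into an automated one.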



Alert Fatigue Is Real
Alert fatigue isn't just annoying – it's dangerous.


Real-world case study:
A fintech team I consulted with received over 200 alerts daily across their monitoring tools. Eventually, they started ignoring them all. When a critical database issue hit, the alert sat unnoticed for hours while customers couldn't access their accounts.
# What their alert flow looked like
$ grep -c "ALERT" /var/log/monitoring/alerts.log
237


Why this happens:

Poorly configured thresholds (often too sensitive)
No alert filtering or prioritization
Monitoring tools with limited customization options
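A quick way to see where the noise is coming from, before retuning any thresholds, is to count fires per rule. A sketch, assuming a simple one-alert-per-line log (the format here is invented):

```shell
# Hypothetical alert log: one line per fired alert, "timestamp rule_name"
cat > /tmp/alerts.log <<'EOF'
02:10 HighCPU
02:11 HighCPU
02:12 HighCPU
02:13 DiskFull
02:14 HighCPU
02:15 HighCPU
EOF

# Count fires per rule, noisiest first -- the top entries are the thresholds
# worth fixing (or routing to a digest) before anything else
awk '{print $2}' /tmp/alerts.log | sort | uniq -c | sort -rn
```

In the fintech team's case, a count like this would likely have shown that a handful of rules were generating most of the 200+ daily alerts, which is where the tuning effort pays off fastest.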



The Root Cause Treasure Hunt
The most time-consuming part of any incident isn't the alerts – it's the investigation.


Real-world case study:
- Website shows intermittent 500 errors
- APM shows normal response times (when successful)
- Database metrics look fine
- Load balancer metrics look fine
- 4 hours of investigation later: a third-party API was timing out

Four hours of multiple engineers searching while customers couldn't complete orders.


Why this happens:

Limited visibility across system boundaries
No clear incident timelines
Disconnected monitoring tools with no centralized view



Stop the Madness: Practical Solutions That Actually Work



Catch Everything (Yes, Everything)
No more excuses for missing critical issues:
# Step 1: Map your entire system
$ ./map-dependencies.sh dependencies.json

# Step 2: Verify every component has monitoring
$ ./check-monitoring-coverage.sh dependencies.json

# Step 3: Add functional checks, not just health checks
$ curl -s https://api.yourservice.com/v1/auth \
    -d '{"username":"test","password":"test"}' \
    | grep "token"

Key improvements:
Audit your entire system: document every endpoint, API, and dependency
Automate discovery: use network mapping tools to find endpoints you forgot
Monitor functionality: test critical user journeys, not just uptime



Smarter Alerts, Happier Teams
Here's how to implement smarter alerts using PromQL (Prometheus Query Language):
# Instead of this simplistic alert rule
instance:node_cpu_utilization:avg > 0.8

# Use something like this for more intelligent alerting
# Alert only if:
# - CPU has been high for 5 minutes (set `for: 5m` on the alerting rule)
# - It's happening on production, not testing
# - It's not during a known maintenance window
# - The service is showing actual impact (latency increase)

(
  instance:node_cpu_utilization:avg{environment="production"} > 0.8
  and on()
  (maintenance_window == 0)
)
and on(instance)
(
  rate(http_request_duration_seconds_sum{job="api-server"}[5m])
  /
  rate(http_request_duration_seconds_count{job="api-server"}[5m]) > 0.5
)

Key improvements:
Set contextual thresholds: based on actual patterns, not static numbers
Create escalation policies: different issues need different responses
Consolidate tools: fewer sources of alerts means better signal-to-noise ratio



Find Root Causes Fast
When minutes count, try these approaches:
1. Start with user impact (what exactly is failing?)
2. Check recent changes (deployments, config changes, etc.)
3. Look for correlated events across systems
4. Follow the request path (front to back)
5. Use distributed tracing if you have it

Key improvements:
Visualize dependencies: so you can quickly see what affects what
Maintain detailed incident timelines: to spot patterns
Correlate events across systems: to pinpoint the true culprit
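Correlating events can be as simple as merging logs onto one timeline when they share a sortable timestamp prefix. A minimal sketch with invented log data, mirroring the third-party-timeout case study above:

```shell
# Hypothetical per-service logs with a shared sortable timestamp prefix
cat > /tmp/api.log <<'EOF'
14:02:11 api 500 on /checkout
14:02:45 api 500 on /checkout
EOF
cat > /tmp/upstream.log <<'EOF'
14:02:05 payments-api timeout after 30s
14:02:40 payments-api timeout after 30s
EOF

# Merge into one timeline: each 500 is immediately preceded by an upstream
# timeout, pointing at the third-party API rather than the database or LB
sort /tmp/api.log /tmp/upstream.log
```

Real systems need consistent clocks and a shared timestamp format for this to work, which is an argument for centralized, structured logging long before the incident happens.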



The Monitoring Tool That Actually Works
After trying dozens of monitoring solutions over the years, I've found Bubobot to be the most effective at solving these real-world problems:


1. Complete coverage in minutes
- HTTP/HTTPS endpoints
- SSL certificate monitoring
- Backend services
- Specialized systems (Kafka, MQTT, etc.)
- Synthetic user flows

Setting up comprehensive monitoring shouldn't take days or require a PhD.


2. Alerts that make sense
Bubobot's approach to alerts focuses on signal, not noise:
Detects issues in seconds, not minutes
Intelligently routes notifications to the right people
Filters out false positives
Provides context so you know what to do next



3. Fast diagnosis when it matters
When something breaks, you need answers fast:
Detailed incident timelines
Clear dependency mapping
Performance comparisons to normal baselines



The Bottom Line
The best monitoring isn't the one with the most dashboards or the fanciest charts. It's the one that:
Detects real problems quickly
Filters out the noise
Helps you find and fix root causes fast
If your current monitoring setup isn't doing all three, it's time for a change.

For more detailed troubleshooting strategies and monitoring best practices, check out our full guide on the Bubobot blog.


#MonitoringErrors #DowntimeReduction #ITReliability
Read more at https://bubobot.com/blog/building-effective-on-call-rotations-to-maintain-uptime?utm_source=dev.to
Why you should learn multi-module architecture in compose as soon as possible!

Hi, I’m Mehedi Hasan, a professional software engineer and developer. I have been developing Android apps for nearly three years. Throughout my journey, I’ve faced many challenges and learned valuable lessons. I believe sharing my experiences and insights can help beginners navigate their own paths more effectively.


Let’s start with multi-module architecture,
Multi-module architecture is one of the best ways to organize an Android app. Throughout my journey, I’ve watched countless YouTube videos on getting started with Jetpack Compose, and most of them recommend ignoring multi-module architecture. While I somewhat agree with this argument, I don’t entirely support it.

I followed their advice when building Lumolight, one of my most popular apps. It’s free, open-source, and available on Google Play and GitHub for anyone to check out. However, looking back, I realize I could have done much better if I had started with a multi-module architecture from the beginning.

Some popular YouTubers also claim that you can always scale a single-module project into a multi-module architecture later. I strongly disagree with this idea. Retrofitting a multi-module structure into an existing monolithic app can be much more challenging than designing it that way from the start.


My Recommendation,
First of all, don’t build a production-grade app using a single-module architecture – unless you’re sure you won’t need to scale it later. As your app grows, adding new features and managing functionality becomes increasingly difficult, leading to more bugs and maintenance headaches.

If I had to start over, here’s how I would approach it:

Learn the fundamentals – Start with Kotlin, understand how Jetpack Compose works, and get comfortable with core concepts like ViewModel, Dependency Injection, and Coroutines.

Explore architectural patterns – Study different architectures like MVI, MVP, and MVVM to understand how to structure your app effectively.

Build small projects – Create a few simple apps using a single-module architecture to get hands-on experience.

Transition to multi-module architecture – Once you’re comfortable with the basics, move on to multi-module architecture for better scalability, maintainability, and performance.
By following this approach, you’ll have a strong foundation before diving into multi-module development, making the transition much smoother.

The biggest advantage of using a multi-module architecture is achieving a clear separation of concerns. At first, it might not seem like a big deal, but as you start breaking down complex problems into smaller, manageable modules, development becomes much more structured. This approach allows you to focus on solving one problem at a time, making it significantly easier to build and maintain a large-scale application. And if you encounter a bug, it is much easier to track down because of the multi-module design.
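As a rough sketch of what “one problem per module” can look like on disk, here is an illustrative feature-based layout; the module names are invented and not taken from any particular app:

```shell
# Illustrative feature-based multi-module layout (names invented):
# shared building blocks under core/, user-facing screens under feature/
mkdir -p /tmp/appdemo/app \
  /tmp/appdemo/core/designsystem /tmp/appdemo/core/data /tmp/appdemo/core/network \
  /tmp/appdemo/feature/player /tmp/appdemo/feature/library /tmp/appdemo/feature/settings

# Each directory becomes a Gradle module wired up in settings.gradle.kts,
# e.g. include(":core:data") and include(":feature:player")
find /tmp/appdemo -mindepth 1 -type d | sort
```

The point of the split is that `:feature:player` can depend on `:core:data` without ever seeing `:feature:settings`, which is what keeps a bug's blast radius to one module.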


My Achievements
I currently own two applications built using a multi-module architecture.

The first one is Groovy – Music Player, which is far more feature-packed than Lumolight, yet I was able to develop it in less time. At the time, I was still learning multi-module architecture, so not everything went as smoothly as I had hoped.

The second app, Everplan – Expense Tracker, is where I fully embraced multi-module architecture with a well-structured modular design. And now, it’s finally paying off! Compared to my previous apps, Everplan has significantly fewer bugs, is much easier to scale, and allows for seamless feature additions.

This experience has reinforced my belief that starting with a well-planned multi-module architecture can save time, reduce complexity, and make long-term development much more manageable.

timered-counter – An animated value change web component

A game launch countdown page I built months ago became obsolete when the timer ended, but developers love modularizing reusable code - hence timered-counter.