Well now… did you spot the deliberate mistake?
If you attempted to read the latest edition of AWSCQ this morning, you may have noticed that we only got an opening paragraph from Tom before reverting to Sam Coles' article from last week.
I’d love to blame this on some AI gremlin but this was (refreshingly?) 100% human error (mine).
So let's try again!
Welcome to AWSCQ.
For this edition we’re delighted to welcome the wonderful Tom Misiukanis as Guest Editor.
Tom is an Engineer, Architect and Principal Consultant at Steamhaus.
You can catch Tom at this month's AWS Community Summit, where he'll be delivering the workshop ‘Spec-Driven Development with Kiro'.
Until then, over to Tom with (his words) Yet Another GenAI Post.
Enjoy!
Introduction
First off, I know what you're thinking: another post on Generative AI. Believe me, I get it. It seems like every tech blog, LinkedIn post, and email newsletter is saturated with the same buzzwords: disruptive, transformative, game-changing. It's hard not to feel a sense of fatigue when your feed is flooded with proclamations about how GenAI will revolutionise everything from customer service to software development.
But bear with me. This isn’t another “GenAI is going to change the world” sermon. Instead, I want to offer a few thoughts that might help you navigate all the noise, particularly from the perspective of someone who’s been building and deploying systems on AWS for years.
The truth is, I’ve been wrestling with my own relationship with GenAI. On one hand, I’m genuinely excited about its potential. On the other hand, I’m sceptical of the hype machine around it. What I want to explore is the middle ground, where the technology delivers value without the foghorn.
The Tech Hype Cycle: A Familiar Story
Generative AI is just the latest to run the gauntlet of the tech hype cycle. If you’ve been in this business a while, you can almost set your watch by it.
First comes the early excitement. This new technology will change everything. Then the peak of inflated expectations, where it can do no wrong. Every problem seems solvable, every process automatable. Inevitably, reality sets in and we tumble into the trough of disillusionment. Finally, we emerge with steady, realistic applications and a grounded understanding of where it fits.
We saw it with blockchain: 2017’s fever pitch of “we’ll revolutionise finance, supply chains, and democracy itself.” We saw it with cloud computing: “just someone else’s computer” versus “death to on-prem.” Even AI itself has been through multiple winters, the hangovers from hype highs in the 70s and 80s.
GenAI is firmly in that hype phase now. The question isn’t if it will be useful, it already is. The question is whether it will match the stratospheric expectations we’ve piled on it.
GenAI as a Performance Enhancer, Not a Job Replacer
A lot of the conversation is framed in doom terms: “AI will take your job.” My take? That’s oversimplified.
Think of it as a performance enhancer. In cycling, no amount of supplements or advanced training tools turns a novice into a Tour de France champion. You still need skill, endurance, and experience. The tools just amplify what’s already there.
Same with GenAI. Skilled developers, writers, and analysts won't be replaced by it. Developers who learn to use it effectively should find they can tackle problems more quickly and spend more time on the work that really matters. It's not about doing your job for you; it's about helping you do more of the high-value work and less of the repetitive grind.
How I’ve Started Using GenAI
I didn’t expect to be using GenAI in my daily work as much as I do now. Not to hand over the whole job, but to get me unstuck.
When drafting an article or a talk, I’ll throw an outline at it to explore different angles or spot gaps. It’s like having a colleague who’s always ready to brainstorm, never gets tired, and doesn’t mind you asking “does this make sense?” ten times in a row.
For technical work, it’s brilliant at boilerplate such as docs, simple code stubs, test cases, and summaries. It’s not glamorous, but it’s valuable because it frees up focus for architecture, design, and problem-solving.
Building an AI Modernisation Assessment Tool
While the GenAI conversation often lives in the abstract, I’ve been working on something concrete: an AI-powered application modernisation assessment system.
The challenge is a common one. Large organisations often have hundreds of legacy applications to evaluate for cloud migration. Traditional consulting approaches don’t scale. Manual reviews can take weeks per application, and the outcomes are often subjective, varying depending on who takes part in the assessment.
Using Amazon Bedrock as the backbone, I built a system that can analyse application codebases and infrastructure configurations to:
Identify the technology stack by detecting programming languages, frameworks, databases, and architectural patterns from available artefacts.
Map out modernisation pathways by evaluating each application against patterns like containerisation, cloud-native adoption, and managed service integration, with confidence scores for each approach.
Estimate risk and effort with realistic timelines and resource implications for the options available.
The aim wasn't to replace architects, but to take away the repetitive groundwork. Instead of weeks of manual, in-depth discovery across application code and infrastructure configurations, plus subjective input from stakeholders, the system applies a consistent framework built on years of architecture consulting experience, which includes:
Structured templates to guide evaluation criteria.
Confidence scoring that differentiates between evidence-based and inferred findings.
Tiered recommendations, because modernisation is rarely all-or-nothing.
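The real system runs on Bedrock, but the framework ideas above (evidence-based versus inferred findings, and tiered recommendations) can be sketched in plain Python. Everything here is illustrative: the file heuristics, the thresholds, and the tier names are mine, not the actual system's.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One observation about an application, with its provenance."""
    name: str
    evidence_based: bool  # True if backed by an artefact, False if inferred
    confidence: float     # 0.0 to 1.0

def detect_stack(files: dict[str, str]) -> list[Finding]:
    """Toy stack detection: a manifest file yields an evidence-based finding,
    while file extensions alone only support an inferred one."""
    findings = []
    if "requirements.txt" in files:
        findings.append(Finding("python", evidence_based=True, confidence=0.95))
    elif any(path.endswith(".py") for path in files):
        findings.append(Finding("python", evidence_based=False, confidence=0.6))
    if "Dockerfile" in files:
        findings.append(Finding("containerised", evidence_based=True, confidence=0.9))
    return findings

def recommend_tier(findings: list[Finding]) -> str:
    """Tiered recommendation: only promote aggressive modernisation when
    the findings clear an evidence/confidence bar."""
    if any(f.name == "containerised" and f.evidence_based for f in findings):
        return "replatform: move existing containers to ECS/EKS"
    if findings and all(f.confidence >= 0.9 for f in findings):
        return "refactor: strong evidence supports cloud-native rework"
    return "rehost first: gather more evidence before committing"
```

The split between `detect_stack` and `recommend_tier` mirrors the point of the framework: evidence gathering is mechanical and repeatable, while the tiering rule encodes judgment that can be reviewed and tuned by the architects using it.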
The results have been promising. The system completes an assessment in minutes rather than weeks, surfacing not only the obvious technical characteristics but also the architectural patterns, gaps, and opportunities that matter for a successful migration. Most importantly, the output is actionable: clear, prioritised recommendations with effort levels attached, blending technical depth with business context in a way traditional automation has rarely achieved.
The tool is designed to give consultants data-driven insights rather than relying solely on subjective input, and to help guide the decision-making process. The output is not treated as final or unquestionable. Instead, it becomes the starting point for workshops, executive briefings, and collaborative planning sessions with customers, refining the strategy into something that is both technically sound and aligned with business priorities.
This is the kind of human-AI partnership I keep coming back to. The AI handles the systematic analysis and pattern recognition, while humans bring the business priorities, risk tolerance, and strategic decision-making. It’s a performance enhancer in action.
Real-World Applications Beyond the Hype (and Their Limits)
While the media focuses on splashy AI art and viral demos, the most valuable GenAI work I’ve seen is practical, repeatable, and embedded into workflows. None of it is perfect, and that’s the point. Each example comes with caveats you need to understand to use it well.
Code scaffolding - Generating service stubs, boilerplate handlers, and templates so you can focus on the hard problems. Useful, but it can overfit to patterns that aren’t a match for your architecture. You still need to review the output for maintainability and security.
Test automation - Creating unit, integration, and regression tests without slowing development. Great for coverage, but only if you verify that the tests are actually testing what matters rather than just checking happy paths.
Log and event triage - Summarising CloudWatch or X-Ray output to help reach root cause faster. It’s quick, but it can miss subtle correlations or misinterpret anomalies, especially if your logging conventions aren’t consistent.
Documentation upkeep - Keeping READMEs, API docs, and diagrams up to date alongside code changes. Saves time, but the AI won’t know if the meaning of a change requires deeper explanation or context.
Data processing - Cleaning and transforming data on the fly without building new ETL pipelines. Effective, but accuracy depends entirely on the quality and consistency of the input data.
These aren’t headline-grabbers, but they’re force multipliers. They cut interruptions, reduce context switching, and keep momentum high.
The catch is that GenAI often acts like “the dumbest smart person you know.” It has great recall, quick answers, and can mimic the structure of a good solution, but without contextual judgment it can still lead you astray. The best results come when the AI handles the structured, repeatable parts and you handle the judgment calls.
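To give the log-triage pattern some shape, here's a minimal sketch using the Amazon Bedrock Converse API via boto3. The model ID and prompt wording are placeholders of mine; the design point is to keep prompt construction as a pure, testable function and confine the AWS call to one thin wrapper, so the judgment call on the summary stays with you.

```python
def build_triage_prompt(log_lines: list[str], max_lines: int = 200) -> str:
    """Pure function: turn raw log lines into a triage prompt.
    Truncation guards against blowing the model's context window."""
    sample = "\n".join(log_lines[:max_lines])
    return (
        "Summarise the likely root cause in these application logs. "
        "List correlated errors and note anything you are unsure about.\n\n"
        f"{sample}"
    )

def summarise_logs(log_lines: list[str],
                   model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Thin wrapper around the Bedrock Converse API (needs AWS credentials)."""
    import boto3  # imported lazily so the prompt builder stays dependency-free
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": build_triage_prompt(log_lines)}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Because `build_triage_prompt` never touches AWS, you can unit-test the truncation and wording locally, which is exactly where inconsistent logging conventions tend to bite.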
Agentic AI: The Next Frontier
For the technically inclined, Agentic AI is the logical next step. These systems don’t just respond, they act. They monitor, adapt, and make decisions without you scripting every scenario.
Imagine DevOps monitoring that not only alerts you but analyses logs, correlates changes, and suggests fixes. Or document processing that adapts to new formats automatically. This is already happening in certain contexts.
The AWS Perspective: Building with GenAI
AWS has been embedding AI into tools developers already use.
Amazon Bedrock lets you experiment with and integrate models without standing up infrastructure.
Amazon Q Developer lives in IDEs, your terminal, and the AWS Console, providing coding assistance and AWS service guidance in context.
Textract and Comprehend ingest and structure text data within existing pipelines.
It’s additive, not disruptive. Enhancements without a workflow overhaul.
Kiro: AWS’s GenAI-Powered IDE
And then there’s Kiro, AWS’s new GenAI-powered IDE. If Bedrock is the foundation and Q Developer is the assistant, Kiro is the workspace rebuilt around AI capabilities.
What makes Kiro different:
Specs before code - From your prompt, Kiro produces specs, user stories, diagrams, and API contracts before writing code.
Proactive automation - Save a component and it updates tests. Change an API and docs refresh automatically.
Familiar base - Built on VS Code OSS, so your plugins work, but with deeper AWS integration.
Agentic behaviour - It observes, suggests, and acts, keeping you in flow.
High demand - Preview launched July 2025, waitlist capped almost immediately.
Kiro isn’t about replacing developers. It’s about making sure you spend more of your day solving meaningful problems and less of it wrestling with structural tasks.
Looking Forward: Realistic Expectations
We’re somewhere between hype peak and reality check. The teams that win will use GenAI deliberately, testing in low-risk areas, learning its strengths and weaknesses, and integrating it where it fits.
The future isn’t “humans vs AI.” It’s humans who can use AI well versus humans who can’t.
Final Thoughts
GenAI is powerful, but like any tool, its value depends on how you use it. Approach it with optimism and realism.
If you want to try these tools in your workflow, here are some great ways to get started:
Kiro Preview - Join the waitlist for AWS’s AI-powered IDE
Amazon Q Developer (IDE & CLI) - The integrated GenAI assistant in your IDE and terminal
Building Generative AI with Amazon Bedrock - Hands-on guidance from prototype to production
Creating Asynchronous AI Agents with Bedrock - A practical guide to building agentic AI workflows
And that’s a wrap on another AWSCQ.
A massive thanks to Tom for putting this one together.
As we mentioned you can see Tom at the AWS Community Summit later this month.
If you’re interested in meeting our vibrant, friendly community (which, by reading this, you’re already a part of!) then grab a ticket now using the code ‘AWSCQ’ for £15 off at checkout.
See you there!
Before you go be sure to give our sponsors a click!
AWS Community Summit Events and AWSCQ are only possible with their generous support.
Ready?
CLICK!