We're joined by Deepak Singh from the Kiro team. Kiro is AWS's attempt at building an AI coding environment to take you from prototype to production. It does that by bringing structure to your agentic workflow with spec-driven development. Their aim: the flow of AI coding, leveled up with mature engineering practices. :link: https://changelog.fm/662
| Ch | Start | Title | Length |
|---|---|---|---|
| 01 | 00:00 | This week on The Changelog | 01:10 |
| 02 | 01:10 | Sponsor: CodeRabbit | 01:07 |
| 03 | 02:16 | Start the show! | 07:26 |
| 04 | 09:43 | The idea resonates | 03:24 |
| 05 | 13:07 | How Kiro looks/works | 05:15 |
| 06 | 18:22 | Sponsor: Outshift by Cisco | 01:17 |
| 07 | 19:39 | Don't get in the agent's way | 06:01 |
| 08 | 25:40 | Where should the agent live | 04:11 |
| 09 | 29:51 | Which model to use? | 04:38 |
| 10 | 34:30 | Model cost perspective | 04:11 |
| 11 | 38:40 | AWS teams using Kiro | 05:00 |
| 12 | 43:40 | Full ecosystem plans? | 06:54 |
| 13 | 50:34 | Hooks are interesting | 01:50 |
| 14 | 52:24 | Compaction | 02:53 |
| 15 | 55:16 | Let's talk stack | 05:31 |
| 16 | 1:00:47 | Can current models get us there? | 04:19 |
| 17 | 1:05:06 | Let's talk pricing | 06:49 |
| 18 | 1:11:54 | Still figuring it out | 02:03 |
| 19 | 1:13:58 | The looming tollbooth | 08:54 |
| 20 | 1:22:52 | Wrapping up | 00:45 |
| 21 | 1:23:37 | Closing thoughts | 01:41 |
Interesting take from Adam about the toll booths. I agree that the new tools will drive down costs in some places, but I hadn't thought about how these tools may become the bare minimum. Kind of like how everyone just has to have a data plan to participate in the world. There could be a world where developers need a Claude subscription to be competitive.
I'm not sure how it'll play out. Indie movies do exist alongside summer blockbusters, but even in that case, one is far more prevalent and better funded.
Regarding the agent flow: has anybody tried just writing a test and then asking the agent to write code that passes it? Of course you'd need to give it an extended prompt about what the app is trying to do, tell it not to use any mocks or stubs, etc.
Instead of writing specs, why not write a test suite and have the agent toil until all tests pass?
This would be a good test of an agent, I bet.
I've had success (based on an internal blog post) having the AI generate the failing tests for me based on a spec, and then run TDD on its own (after I prompted it to do so).
But you're right, pulling the Red phase of TDD into the spec/design phase is a big win.
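The "write the failing test first, then let the agent toil" loop described above can be sketched with an ordinary unit test. Everything here (the `slugify` function and its behavior) is a hypothetical example, not something from the episode; the point is just that the human-authored test defines "done" before the agent writes a line:

```python
import re

# The test below is what the human writes first (the Red phase).
# The implementation is what an agent might produce when told to
# keep iterating until the test passes (the Green phase).

def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single
    # hyphens, and trim any leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Human-authored acceptance criteria, written before any code.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Spec-Driven  Development ") == "spec-driven-development"
```

With a runner like pytest, the agent's stop condition is simply "the suite is green"; no mocks or stubs are involved, so the test exercises real behavior.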
Once again I'll mention cucumber :)
Is there a link somewhere to the specification format that was mentioned? Potentially created by Atlassian? I've had a bit of a search, but can only find Confluence templates.
oh, EARS is from Rolls-Royce, not Atlassian: https://alistairmavin.com/ears/
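For anyone skimming: EARS (Easy Approach to Requirements Syntax) is a small set of sentence templates rather than a tool. A rough sketch of two of its patterns, with entirely made-up requirements for illustration:

```
# Event-driven pattern: "When <trigger>, the <system> shall <response>"
When the user submits the checkout form, the payment service shall
validate the card details.

# Unwanted-behavior pattern: "If <condition>, then the <system> shall <response>"
If card validation fails, then the payment service shall display an
error message and retain the entered form data.
```

The linked page documents the full set of patterns (ubiquitous, state-driven, optional-feature, and so on).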
@Adam Stacoviak i share your "toll road" concerns, but open-source ethically-trained offline-only models will hopefully catch up sufficiently
they don't need to be as polished as long as they still allow us to compete within the same ballpark as folks that are able to afford the subscriptions
Last updated: Dec 16 2025 at 01:26 UTC