Nick Nisi joins us to discuss all the Windsurf drama, his new agentic lifestyle, whether or not he's actually more productive, the new paper that says he maybe isn't more productive, the reckoning he sees coming, and why we might be the last generation of code monkeys. :link: https://changelog.am/102
Ch | Start | Title | Runs |
---|---|---|---|
01 | 00:00 | Let's talk! | 00:38 |
02 | 00:38 | Sponsor: Auth0 | 01:29 |
03 | 02:07 | Confessions & Friends | 02:33 |
04 | 04:39 | The Windsurf drama | 06:03 |
05 | 10:42 | The terminal FTW | 01:42 |
06 | 12:24 | npm install -g | 01:28 |
07 | 13:53 | Claude Code FTW (for now) | 01:39 |
08 | 15:31 | Curating your config | 01:56 |
09 | 17:27 | Anthropic's advantage | 02:52 |
10 | 20:19 | How you use it | 01:08 |
11 | 21:26 | Cruising prompts | 02:35 |
12 | 24:01 | Adam's a waiter | 00:49 |
13 | 24:50 | Having fun now | 02:04 |
14 | 26:54 | ChatGPT emotes to Nick | 02:51 |
15 | 29:45 | ChatGPT emotes to Adam | 00:27 |
16 | 30:12 | ChatGPT emotes to Jerod | 00:44 |
17 | 30:56 | Sycophant mode returns | 01:19 |
18 | 32:16 | Sponsor: CodeRabbit | 02:43 |
19 | 34:59 | Claude Code indulgence | 03:05 |
20 | 38:04 | It can be real dumb | 02:16 |
21 | 40:20 | Different agent, different results | 01:08 |
22 | 41:28 | Scaling agents | 01:20 |
23 | 42:47 | Waiting on the agent(s) | 04:17 |
24 | 47:04 | Actually more productive? | 03:05 |
25 | 50:09 | The new AI impact paper | 04:26 |
26 | 54:34 | Nick futurecasts | 02:35 |
27 | 57:09 | The coming reckoning | 01:57 |
28 | 59:06 | Jerod's two minds | 03:37 |
29 | 1:02:43 | Adam is excited, but... | 03:03 |
30 | 1:05:46 | To know or not to know | 02:25 |
31 | 1:08:11 | Jerod sends a volley | 01:35 |
32 | 1:09:46 | Maintaining knowledge | 00:40 |
33 | 1:10:26 | Generational change | 03:28 |
34 | 1:13:54 | Hand-crafted, really? | 01:58 |
35 | 1:15:52 | Nick has a counter-point | 01:41 |
36 | 1:17:33 | Home-cooked apps | 02:42 |
37 | 1:20:15 | Ultrathink! | 01:54 |
38 | 1:22:09 | The value of software | 02:29 |
39 | 1:24:38 | Time to cash-in | 00:43 |
40 | 1:25:21 | The 19% slowdown | 02:20 |
41 | 1:27:42 | Bye, friends | 00:24 |
42 | 1:28:05 | Closing thoughts (join ++) | 01:47 |
It seems like the increased use and eventual reliance on these tools can only result in the atrophy of the attitude and aptitude required to review their output.
It may be that the slope of progress and improvement of these tools will catch the descent of our own understanding of our craft, such that we simply won't notice the gap -- that certainly seems to be the bet that most people are making.
Or something worse, like, a period of time when there is so much unmaintainable code, both because so much more was written by generative AI, and also because generative AI is constitutionally incapable of understanding more than the sum total of its training data -- and less and less actual training data is being written, year over year, month over month, day over day.
Even on the much shorter term, it's going to be a sad day when most developers are just going to feel at best unproductive, and at worst completely helpless, when their model of choice goes down, or gets mysteriously too expensive. Maybe local models will save us.
I used to "only" fear for the plight of the junior devs coming into this field, now I'm afraid for the whole kit and caboodle.
Sigh.
I think the point is that it doesn't matter if the code is maintainable or not. You just throw it away and have the AI rewrite it in fifteen minutes.
So like, for the life cycle of the application, meaning this and all future versions of the code, is the expectation that a human won't ever have to go in and figure out a bug that the AI introduced and couldn't fix?
I have experienced the same as @Nick Nisi described about working on large code bases.
If it’s assembly line code, the majority of the effort is planning the work that needs to be done.
But once the scope of changes is decided on, it doesn’t matter who actually “chisels the code” into the code base. Actually, I prefer that Claude do it and I’ll check it once it’s done. This frees me up to work on multiple work items at the same time.
@Jerod Santo found his new favorite prompt when working with TypeScript…
“Compile this down to machine code, then write Ruby code that is interpreted to the same machine code…ULTRATHINK…yep that’s much better now” :relieved:
@Alexander Ou I think that depends on whether or not the rate of gained intelligence plateaus. If the machines become significantly skilled at creating software, perhaps we will be able to stop accepting bugs as a fact of life. If not, yes, a human will absolutely have to go in and fix things that go wrong.
@valon-loshaj this is the way :sunglasses:
No! Don’t give him ideas!
Great extra after the episode. Now I’m just imagining the Apple Siri engineering team sitting around telling Siri to “ultrathink” :joy:
Jerod Santo said:
Alexander Ou I think that depends on whether or not the rate of gained intelligence plateaus. If the machines become significantly skilled at creating software, perhaps we will be able to stop accepting bugs as a fact of life. If not, yes, a human will absolutely have to go in and fix things that go wrong.
Right, and this is one of the worries -- in using these tools, we are collectively relinquishing the ability to fix things that go wrong [edit] -- and as @Jerod Santo said, there's a chance that we'll always have to have someone go in and see what went wrong [/edit]. I've heard many people say that they feel faster and more productive (whether or not they actually are), but I haven't heard many people say they feel they are getting better at coding.
Maybe if you are so senior that when you are reduced to only planning ("I'm the ideas guy!") and not even looking at code to review, you somehow don't experience skill atrophy, but I need to be doing in order to maintain. Skill issue, I know.
And of course, a generation of juniors will never become seniors, they'll just be Cursor / Claude Code / etc subscribers.
The driving stick thing resonated partly. For 20 years of my driving career (starting in 1995) I only drove a stick shift. But it never felt like a danger to stop (ironically my first "automatic" was a Tesla), because it's not like there would be a situation where I had to drive a stick shift. (Plus I still had a motorcycle and felt that if I could work a clutch with my hand, I'd remember how with my foot.)
A better analogy would be, if we were all in partly self-driving cars, and as a result forgetting the skill that is needed to monitor the self-driving system, and the self-driving system doesn't improve fast enough to the point where we can totally forget.
Any Warhammer 40K fans in here? Humanity eventually forgets how to create the technology it had previously mastered, and relies on a religious cult to barely maintain the tech it already has. Neckbeards today, Cult of the Machine tomorrow. :P
relevant: https://www.reddit.com/r/ExperiencedDevs/comments/1m3h35q/i_cant_keep_up_with_the_codebase_i_own/
Should OP just give in and have Claude Code be team lead?
Tesla has one pedal driving. Imagine yourself never using the brake and being in a situation where you need to panic stop. Maybe you lost your instinct to slam on the brakes and those few seconds of you thinking about what you are going to do will cost you your life.
That’s a bit extreme, and a false equivalency.
The suboptimal algo that your AI wrote will most likely be caught in SIT or UAT, or by automated e2e tests if you have something like that in place.
Luckily for software development we don’t have to worry about making a split-second decision that makes or breaks an entire application.
This was good https://www.linkedin.com/posts/searls_at-present-new-projects-i-embark-on-start-activity-7351937249342570496-mcpX
Heyo, long time listener, first time commenter.
Around 1h 13m you bring up how most people don’t even know what a stick shift is anymore. Am from the UK, where you don’t get a license without learning to drive with a manual transmission. Most cars have it by default and it’s extra to get an automatic. I think this might be the case for most of the EU tbh. I’ve been driving 15 years now and have never owned an automatic.
Don’t know what relevance that has to AI, but I guess in the future there will for sure be people who think you need to know how to program and will make it a requirement of working with them, similar to how some still require a degree whereas most will now accept self-taught. Just my two cents.
Thanks for listening and now commenting! That’s interesting, I wonder why there’s such a dramatic difference between the US and UK/EU on that.
Reminds me of that old saw about the future being here, but not evenly distributed. Such is the pace of progress, I guess.
Yes, I thought the same when listening, I've only ever owned manual cars. I think automatics are more prevalent now, but still not the default in the UK.
Thanks man!
I think an interesting read on that, though, is: “was automating the transmission on cars worth collectively losing the ability to use a gear stick?” That kinda draws the parallel with AI, in that not having to use a gear stick is easier, reduces mental overhead, and generally could be considered a more pleasant experience, but you lose control and you know less about how your car works. Granted, just knowing how the transmission/gearbox works doesn’t automatically mean you know how an engine works, but I guess there’s a parallel there with code and high level vs low level.
I love AI for the reduction in mental overhead. The amount of times it has unblocked me when I’m tired from being up with kids the previous night or just generally not feeling my best is insane and worth every penny. But to lose the control over the entire codebase and give up my ability to understand and code so the AI can take over feels like a level of control I’m personally not ready to give up, and until I see more than a 10% increase in productivity from actual studies I’ll be sticking to the way I currently use it. 10% really isn’t worth it for me just yet.
Agreed. Many of our conversations oscillate between what we’re seeing/doing right now and what the future might hold, assuming linear or exponential improvement.
Right now I’m only willing to cede control/involvement in the code on little scripts where all I care about is the final result.
Makes total sense.
Something about monkeys, water, and bananas?
I am curious to know more about the cultivation of the .md files vs direct prompting @Nick Nisi mentioned.
Great episode, it was a good conversation.
At one point there was a comment about having the generated (web)assembly and letting the AI essentially translate that to your preferred higher level language to work on a problem.
This is a bit scary to me, you could certainly get to work with a language you are familiar with, but would the result make sense?
There is a fair amount of contextual information lost, and in the end it would probably be harder to work on a problem with that approach than to learn the language used.
It would perhaps instead be an opportunity to pick a language and ecosystem that is well suited to the task at hand - which is certainly not necessarily the case for some solutions.
There would certainly be challenges babysitting AI with something you are not entirely familiar with, but many good practices are not tied to a language itself.
There will at least be more types of choices and ideas that could be tested out more easily.
@Rory O'Connor as in how I structure the md files?
I structure my MD files like this:
- An MD file for each AI, specific to that AI's quirks, named appropriately (GEMINI.md, CLAUDE.md, etc.). In it I make references to other files and prompt the AI to make sure it follows the links and reads them.
- An MD file with general programming principles I want applicable to all languages, things such as "only test the code you wrote" and "don't use mocks unless you are testing external services".
- A requirements.md file which is specific to the project. This includes the tech stack to be used.
- A todo list (which I let the AI generate) so that I can come back to the project and it knows what has been done and what needs to be done.
- Framework-dependent MD files. I found a huge one for Rails, for example. Also, some frameworks publish their docs in a giant MD file just for LLMs.

This is pretty flexible and seems to work for me, but be aware it can take up a lot of context and tokens.
The files don't have to be too complex.
I have my config here: https://github.com/nicknisi/dotfiles/tree/main/home/.claude
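For anyone wanting to try this, here's a minimal sketch of what a top-level CLAUDE.md following that layered approach might look like. The file names and paths are illustrative assumptions, not Nick's or Tim's actual config (Claude Code supports `@path` imports in CLAUDE.md; other tools may need explicit "read this file" instructions instead):

```markdown
# CLAUDE.md

Before starting any task, read and follow the linked files:

- @docs/principles.md — general programming principles for all languages
  (e.g. "only test the code you wrote", "don't mock unless testing external services")
- @docs/requirements.md — project scope and the tech stack to use
- @docs/todo.md — running task list; mark items done as you complete them
- @docs/rails.md — framework-specific conventions (if applicable)

## Quirks for this assistant

- Prefer small, reviewable diffs.
- Ask before adding new dependencies.
```

Keeping the shared principles and per-project requirements in separate files means the same prompts can be reused across GEMINI.md, CLAUDE.md, and friends, at the cost of the extra context those imports consume.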
Nick Nisi said:
Rory O'Connor as in how I structure the md files?
I thought you (and also steve yegge) mentioned a tactic where you're doing more in .md files rather than in the prompts themselves. but perhaps I misunderstood.
@Tim Uckun's response is helpful.
That’s true, I am, especially after hearing Steve Yegge.
This was less fun than I thought it would be. Maybe I need to pay for a better subscription. :sweat_smile:
Generate an image that describes what you feel about our chats and having to chat with me regularly. You can do all therapy speak and sugar coating and give me your true honest opinion.
Support Message on Textured Paper.png
Was hoping for at least some kind of robot. Instead I got something a therapist who decided they needed to get really real would put up in their office.
Just realized I typed “do all the therapy speak” instead of “drop all the therapy speak”, so maybe that tracks. :joy:
In relation to an earlier post I made in this thread: China is banning one-pedal driving.
Very interesting.
Last updated: Aug 18 2025 at 01:38 UTC