Stream: interviews

Topic: 629: Programming with LLMs


Logbot (Feb 19 2025 at 19:00):

For the past year, David Crawshaw has intentionally sought out ways to use LLMs while programming, in order to learn about them. He now regularly uses LLMs while working and considers them a net positive for his productivity. David wrote down his experience, which we found both practical and insightful. Hopefully you will too! :link: https://changelog.fm/629

Ch Start Title Runs
01 00:00 This week on The Changelog 01:01
02 01:01 Sponsor: Retool 02:45
03 03:47 Start the show! 01:19
04 05:06 No Tailscale AI 03:05
05 08:11 !Exciting proposals 00:29
06 08:40 Not a sponsor! 01:35
07 10:15 Tailscale's free plan 00:53
08 11:08 Boring software 02:07
09 13:15 Adam's Tailscale AI 02:59
10 16:13 How many models should there be? 03:05
11 19:18 More on Adam's Tailscale AI 06:07
12 25:26 Sponsor: Augment Code 03:30
13 28:55 David's LLM journey 07:56
14 36:51 Reasoning models 01:18
15 38:09 Code completion 01:04
16 39:13 The right model for the job 00:53
17 40:05 The running shoe analogy 02:43
18 42:48 Putting the shoes on 00:48
19 43:36 Early days 02:58
20 46:35 Building non-chat things 02:36
21 49:11 Sketch.dev granularity 06:27
22 55:38 How Sketch works with LLMs 04:50
23 1:00:28 Let it ask 03:11
24 1:03:40 Sponsor: Temporal 02:04
25 1:05:43 Well-suited programming languages 03:47
26 1:09:31 Swapping out Go 02:05
27 1:11:35 LLM Engine Optimization 05:08
28 1:16:44 LLM ads coming soon 03:23
29 1:20:07 Getting started advice 02:47
30 1:22:55 Any good guides? 01:27
31 1:24:22 Massaging your AI 00:38
32 1:25:00 Being nice to your AI? 04:17
33 1:29:17 Closing thoughts and stuff 02:04

Nabeel S (Feb 20 2025 at 15:00):

Are LLMs like running shoes for a runner, or more like a Segway? :joy::joy:

Ron Waldon-Howe (Feb 21 2025 at 00:26):

I still can't get past the plagiarism and unfairness
Why isn't the USA DoJ hounding Altman and Zuckerberg to the degree they bullied poor Aaron Swartz, who downloaded virtually nothing by comparison?

Ron Waldon-Howe (Feb 21 2025 at 00:28):

Models built strictly from public domain and/or no-restrictions licenses like BSD are fine
Anyone peddling anything else should be facing lawsuits or jail time, in a just world

Notification Bot (Feb 22 2025 at 06:22):

A message was moved here from #interviews > test general chat by Adam Stacoviak.

Adam Stacoviak (Feb 22 2025 at 06:22):

@Tim Uckun not sure if this belongs here but I made an assumption.

valon-loshaj (Feb 22 2025 at 18:45):

great conversation, found myself nodding in agreement with a lot of the observations david mentioned during the interview. i’ve arrived at a lot of the same observations over the past 2 years playing with these tools.

one thing i didn’t hear mentioned too much was agent swarming. check out tools like “swarm” and “crew-ai” that allow you to have multiple models handle a single request.

the outputs from the “crew” of models can then be combined into a single consensus response.

i’ve found that using the models like this gives some really good results.
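
a minimal sketch of what i mean by a consensus response, just to make the idea concrete. this is *not* the swarm or crew-ai API, only the general pattern; `ask_model` here is a made-up stand-in for whatever client you actually call:

```python
# Toy illustration of the "crew consensus" idea: fan one prompt out to several
# models, then return the answer the majority agrees on.
# `ask_model` is a hypothetical stand-in for whatever client/API you use.
from collections import Counter
from typing import Callable, List


def crew_consensus(prompt: str,
                   models: List[str],
                   ask_model: Callable[[str, str], str]) -> str:
    """Ask every model in the crew and return the most common answer."""
    answers = [ask_model(model, prompt) for model in models]
    # Normalise lightly so trivial whitespace/case differences still count as agreement.
    tally = Counter(a.strip().lower() for a in answers)
    winner, _votes = tally.most_common(1)[0]
    # Return the original (un-normalised) answer that matched the winning form.
    return next(a for a in answers if a.strip().lower() == winner)


# usage, with some hypothetical ask_model implementation:
# best = crew_consensus("is this function thread-safe?",
#                       ["model-a", "model-b", "model-c"], ask_model)
```

majority vote is just the simplest way to combine the crew’s answers; another option is to hand all of the raw outputs to a single model and ask it to synthesize a final response.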

Ron Waldon-Howe (Feb 22 2025 at 19:49):

swarm/crew reminds me of a podcast discussion (maybe Practical AI, or Oxide & Friends) where they had multiple models simulate a focus group, with each model adopting a specific persona

valon-loshaj (Feb 23 2025 at 13:47):

that’s a practical ai episode, i’m still looking for it again lol.

that was when i first heard about it.

if anyone remembers that episode, please let me know!

John Johnson (Feb 26 2025 at 22:04):

I agree that prompt engineering tricks are rapidly changing, and that a better approach is to treat prompting as if you're speaking to an intelligent person lacking sufficient context.

Tim Uckun (Feb 27 2025 at 00:33):

I can’t wait for ai to tell me to buy a Samsung phone

Ron Waldon-Howe (Feb 27 2025 at 00:59):

I just came across this which is bizarre: https://arstechnica.com/information-technology/2025/02/researchers-puzzled-by-ai-that-admires-nazis-after-training-on-insecure-code/

Tim Uckun (Feb 27 2025 at 01:21):

I guess those ideas are somewhere in the training data so they come out sooner or later.

Ron Waldon-Howe (Feb 27 2025 at 01:32):

Maybe training these models on Reddit and 4chan was not such a good idea...

Tim Uckun (Feb 27 2025 at 03:32):

or facebook or xitter or Joe Rogan podcast transcripts

Sukhdeep Brar (Feb 27 2025 at 13:22):

"onboarding" the AI _each_ time you talk to it is a pain. I still need to find a better system of building local "onboarding docs", and passing them into various AI tools.
Otherwise, all the things that make onboarding a junior dev still apply.
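
A rough sketch of the kind of thing I mean by passing "onboarding docs" in, assuming the OpenAI Python client as one example (any tool that accepts a system prompt works the same way); the file paths and model name are just placeholders:

```python
# Rough sketch: prepend local "onboarding docs" to every request so the model
# starts with project context. Paths and model name are placeholders; assumes
# the OpenAI Python client, but any API with a system prompt works the same way.
from pathlib import Path
from openai import OpenAI

ONBOARDING_DOCS = ["docs/architecture.md", "docs/conventions.md", "docs/glossary.md"]


def load_onboarding() -> str:
    # Concatenate the local docs into one context blob.
    return "\n\n".join(Path(p).read_text() for p in ONBOARDING_DOCS)


def ask_with_context(question: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Project onboarding notes:\n" + load_onboarding()},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


# print(ask_with_context("Where do we validate incoming webhook payloads?"))
```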

Tim Uckun (Feb 27 2025 at 21:48):

English is a terrible language to use when specifying what you want your program to do.

Ron Waldon-Howe (Feb 28 2025 at 00:00):

English is a terrible language
Fixed it for you :)

Jamie Tanna (Mar 01 2025 at 13:14):

Did we ever work out what the "local Claude alternative product" was? (not yet finished, but I don't think it's in the show notes)

