Stream: gotime

Topic: 331: How I lost my (old) job to AI


view this post on Zulip Jerod Santo (Sep 18 2024 at 15:06):

:link: https://gotime.fm/331

In this follow-up to episode #306, "How soon until AI takes my job?", the gang of (grumpy?) veteran software engineers candidly chat about how their day to day is changing in the midst of improving AI tooling & hype.

Matthew Sanabria (Sep 22 2024 at 15:19):

I forget which guest it was that said AI is not intelligence, but I find myself agreeing with their views a lot. It's nice to hear people challenging what AI is.

Tillman Jex (Sep 24 2024 at 20:00):

A welcome and sobering discussion!

Tim Uckun (Sep 25 2024 at 04:07):

Honestly, I was disappointed with this episode. They spent too much time fighting straw men instead of having a serious discussion about the topic.

The show started by saying something astonishing: that AI can generate code at the level of the average programmer. Wow. If that's true, it's profound and should have warranted a long discussion on that topic alone.

They didn't spend enough time talking about the various AIs out there and only seemed to be familiar with Copilot and OpenAI. The general consensus is that both Google and Anthropic do a better job at code generation. They also skipped over the fact that OpenAI can actually test the code it generates and iterate.

They talked a lot about how their jobs are not at risk, but if AI makes them ten percent more efficient, that means ten percent fewer programmers are needed. If the AI can generate a buggy app and the average programmer can generate a buggy app, why would you use a programmer when the AI takes a day of iteration and the developer takes a month? Why couldn't I hire one programmer, write twenty apps myself, and then give them to the programmer to fix the bugs?

They talked about how it's impossible to get detailed enough specs for an app, but in real life those specs are given incrementally over many months of development. Why can't I do that with an AI, with a turnaround time of minutes instead of a two-week sprint?

Honestly, there is much to talk about, and I have no doubt AI has already replaced jobs and will replace many more in the future. This was a blown opportunity to have a serious conversation about it.

Oh, and too many giggles. That's just a personal gripe though. Maybe it's an unpopular opinion.

Matthew Sanabria (Sep 29 2024 at 03:26):

Similar to @Tim Uckun I was also a bit disappointed with this episode. Since it's on GoTime I'd have liked to hear more about how AI is impacting the need for Go experience.

The one guest who held a more negative view towards AI seemed to have the most to say about it, particularly around it not being intelligent at all. I enjoyed their takes overall. Everyone else was just noting how they felt about AI without much substance. It was a missed opportunity to compare and contrast AI models, use cases, or even present data on job loss due to AI. I would have liked to hear a more serious conversation as well.

No shade on the hosts or guests at all. The episode felt more like a filler episode rather than one with a topic.

Tim Uckun (Sep 29 2024 at 04:11):

I just listened to another podcast about AI, and the guest had something really interesting to say. He said something like:

AI is already better than an average human because the average human can't code at all.

Similarly, the average human can't summarize a scientific paper, write a haiku about a hairless cat, diagnose an illness, or come up with 25 ideas for starting a new SaaS company.

His point was that we are rushing too fast and have no idea how to control AI and its impact on the world right now, and this is only going to get more dire as time goes on.

My opinion is that AI has already had a horribly negative impact on the world. The Twitter algorithm is AI, the YouTube algorithm is AI, the Facebook algorithm is AI, and combined these have already caused earth-shaking results in elections, social division, riots, war, and strife all over the world.

Also, of course, in Gaza AI literally gave orders to soldiers to kill people, and they did it, resulting in the deliberate targeting and execution of innocent people. Somebody is going to argue that the AI wasn't the one giving orders, but any honest view of the situation is that the AI observed the situation, selected a target, and gave the targeting information to a human, who then pulled the trigger unquestioningly. The only reason the human was in that chain in the first place is that the AI didn't control the weaponry directly, and that's going to change very soon.

Maybe this topic is too heady for Go Time and more suitable for the Practical AI podcast, but very few people seem to be willing to talk about it at all.

Tillman Jex (Sep 30 2024 at 13:32):

I think the reason I found this sobering is the same reason you both found it frustrating. I enjoyed listening to an episode where a group of "veteran" programmers discuss how AI is affecting their day-to-day (as is the podcast's description). The episode wasn't set up to be a serious discussion. But IMO there's still so much speculation about what the software industry will look like once the AI hype has settled that it all gets very nihilistic.

So hearing that, at least for these people, their jobs haven't changed at all and the tooling is more often than not leading them into bugs and problems, is sobering.

I listen to enough other news and podcasts on this topic that make me super anxious about my career change... This episode, with all its giggles, was very welcome.

Matthew Sanabria (Sep 30 2024 at 15:19):

That's a fair take. I feel similarly, in that I'm happy that most people's jobs haven't changed much with the advent of AI. There's a lot of noise out there about AI, and it's nice for people to speak plainly about it. That being said, I still feel disappointed and that the episode was filler without much of a topic.

Tim Uckun (Sep 30 2024 at 21:37):

@Tillman Jex Are you using it? Has it changed your workflow at all? I have found it very useful myself. I use the Codeium VS Code plugin, and the autocomplete alone is pretty amazing. I also like having the window right there where I can ask a question and get an answer that includes a function or even a class. I've used other AI tools that actually plan out an entire app and write the code and the tests and everything.

All in all right now I find it a huge boost in productivity and I think it's only going to get better.

Tim Uckun (Sep 30 2024 at 21:39):

One other thing that occurs to me is that eventually there will be models fine-tuned for individual languages. Could you imagine a coding AI that's fine-tuned for Go or Ruby or whatever? It would be regularly updated with changes to the language and new code on GitHub.

I think that would be really amazing.

Tillman Jex (Oct 01 2024 at 08:50):

Tim Uckun said:

Tillman Jex Are you using it? Has it changed your workflow at all?

I do use llama3 and deepseek-coder via ollama within Neovim (the gen.nvim plugin). But I have pre-defined prompts that don't necessarily end up giving me direct answers (e.g. writing code). Or if they do generate code, I have an explanation generated as well. And I use them sparingly. I'll go to the docs first, because I want to learn. I find AI to be a combination of a teacher and a search engine. If your maths teacher was a robot AI which you had 24/7 access to, would you really bother to understand its explanation of Pythagoras' theorem? Or would you just prompt it again the night before you needed to hand your homework in?

I think the whole game with AI tooling is much more dangerous for developers who aren't yet established (senior and above?). For us, the ability to actually put together working solutions quickly is driven by our desire to break into the industry (where people want you to write code that works). But it's a double-edged sword to use these tools to get there, because committing to increasingly powerful and optimised tools, like the language- or user-specific models you suggest, creates a false sense of ability.

Being two or three years into a career and needing to solve a bug in front of my manager when the internet is momentarily down and I can't access whatever ML tool I depend on, or I'm off-site somewhere, or working on a different computer that doesn't have _my_ custom model, and suddenly realising how little I can _actually_ do, is a scary thought.

I can't remember where I heard it, but it was the statement that it's more fun to be competent. I believe this to be true. I saw it a lot in the music industry, for example, where people use all sorts of tools to correct their timing and pitch in the recording process (to make a "splash" in the industry with a great release), and then have a really tough, stressful, and disappointing time performing live because the concert-goers realize they can't actually play.

Tim Uckun (Oct 01 2024 at 21:06):

Interesting that my perspective is so opposite to this. Maybe it's because I have been doing this for so long.

I haven't been able to memorize the standard library of any language, so I absolutely can't code without the internet, AI or not. I tried once when I was going to go on a long flight. I downloaded all the docs I thought I might need, but I still couldn't do it. If I had an AI downloaded I think I could have, though. So the AI is actually the better tool when you don't have the internet, in my opinion.

I also find programming tedious and boring for the most part these days. It's always a slog to go through stupid trivial shit most of the day to do mundane things like moving data back and forth, validating user input, dealing with messy data from external sources, and handling every niggling thing that could go wrong. The joy of programming has been replaced with drudgery, and AI promises to take that away. Let it write all that boring code and let me concentrate on the interesting stuff. Let it write the tests, the fixtures, the mocks, etc., if not wholly then 90%, so I don't have to deal with that boilerplate.

This is even more useful in a language like Go, which is so insanely verbose and tedious and frankly hard to type. I recently had to write a web scraper. I wrote it in Ruby and it went smooth like butter. It was so easy to go incrementally and debug in a console, where I could try out different XPath expressions and see what resulted, over and over, until I found just the right incantation to narrow in on the div I needed. I then asked myself if I could do this in Go, and holy crap, the difference is night and day. If I hadn't already figured out the XPath using Ruby, I am convinced it would have taken me weeks to do it in Go. During my spike, the setup for the library alone was longer than the whole class I wrote in Ruby.
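That iterate-on-the-XPath-until-it-matches loop isn't tied to any one language, though. As a rough illustration (a Python sketch using the stdlib's ElementTree, which supports a limited XPath subset; the HTML fragment and class names are made up):

```python
import xml.etree.ElementTree as ET

# A made-up, well-formed HTML fragment standing in for a scraped page.
page = """
<html>
  <body>
    <div class="header">Acme Widgets</div>
    <div class="listing">
      <div class="price">$19.99</div>
      <div class="name">Sprocket</div>
    </div>
  </body>
</html>
"""

root = ET.fromstring(page)

# Iterate on the expression until it narrows in on the right div:
# ".//div" matches every div, ".//div[@class='price']" just the price.
for div in root.findall(".//div[@class='price']"):
    print(div.text)  # -> $19.99
```

In a REPL you re-run the `findall` line with progressively narrower expressions, which is exactly the console workflow described above; the Ruby-vs-Go pain is mostly library setup and ceremony around this same loop.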

AI is perfect for Go. Go is a simple language, so there are fewer ways for the AI to hallucinate. There is a ton of ceremony in Go, and the AI can generate all of that for you; there is a lot of tedium in Go, and AI can ease that for you.

It wouldn't surprise me if Google already has an internal AI specifically made for writing Go at this point.

Tillman Jex (Oct 02 2024 at 07:38):

Ah yep, that makes sense then. For you, with an established career, AI is a huge productivity boost while most likely not affecting your job security, partly because your time in the industry and professional network help with that security. While for me it could well be snake oil: outsourcing my learning in favour of fast execution with AI, only to find that n years down the track, managers want people who can code without AI (to some degree).

I would confidently make one prophetic point though: if you'd begun learning to code using today's AI tools, it would've taken much less time for you to find programming tedious and boring.

Matous Michalik (Oct 13 2024 at 22:30):

In my experience, the usefulness of AI tools like Copilot is highly dependent on the problem domain you are in.

When Copilot first came out I was working on simple CRUD APIs, and Copilot felt like magic. Most of the time it could generate a whole function just from its name. (Moving data with a pitchfork from A to B.)

Then I moved to fintech, where the code was expected to adhere to strict requirements and most of the work was integrating arcane interbanking protocols. Copilot was still somewhat useful for boilerplate in tests, but it didn't understand the problem domain well. (Moving data from A to B with steps in between that you cannot find answers for on SO.)

Now I am working on an orchestration layer for a bespoke hardware product, and it's borderline useless. The best it can do is guess a somewhat useful log-line string half of the time. (There is a handful of orgs on the planet with a similar problem domain.)

If I need to go fast lately, I just disable the thing because the suggestions distract me.

All three examples are from Go codebases, so my anecdotal evidence suggests AI isn't good at Go. It's good at solving problems that have been solved millions of times before.

And in each company there is at least one cursed problem no one saw before that AI will not be able to solve.

Tillman Jex (Oct 14 2024 at 07:34):

That makes total sense! And I'm glad to hear it (for my sake), as I need these kinds of validations to maintain the ethos of not relying on it too much, so I actually learn how to program (I'm only three or so years in).

I've since written a short blog post to this effect.

It's also like (golang) Johnny said in the last episode on Changelog: he's happy if people spend time generating AI slop (not in any way suggesting that's what you were doing!!) so that developers like him (experienced and knowledgeable without AI tooling) have plenty of work in the future, haha. I see the truth in that for sure.

There was another post way back (on Go Time, I'm pretty sure) about a team that came in to rescue the US Department of Health's website because the current team couldn't figure it out. And that was pre-AI.

Tim Uckun (Oct 14 2024 at 21:09):

Here is what I would do if I had the resources. Since I am not the brightest of bulbs, I presume somebody has already done this.

  1. Take a coding AI and fine-tune it for a language.
  2. Take that and fine-tune it again on all the codebases in your company directory (Sourcegraph?).
  3. Now build a RAG for each individual project, which includes all open tickets, design specs, email discussions, meeting minutes, etc.

Now you have an AI which is hyper-focused on this particular project and "knows" the business domain.
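Step 3 of that list is something you can prototype today without any fine-tuning. A minimal sketch of retrieval-augmented prompting, using naive keyword overlap in place of a real embedding index (every name and document below is made up for illustration):

```python
# Naive RAG sketch: score project documents by keyword overlap with the
# question, then prepend the best matches to the prompt. A real system
# would use embeddings and a vector store instead of word overlap.
import re

def tokenize(text):
    """Lowercase the text and return its set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, documents, k=2):
    """Return the k documents sharing the most words with the question."""
    q = tokenize(question)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, documents):
    """Assemble a context-stuffed prompt for the model."""
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Made-up project artifacts: tickets, specs, meeting minutes.
docs = [
    "TICKET-42: exporter retries forever when the billing API times out",
    "Design spec: the billing exporter batches invoices every 5 minutes",
    "Minutes 2024-09-01: decided to migrate the frontend to TypeScript",
]

prompt = build_prompt("why does the billing exporter retry forever?", docs)
print(prompt)
```

The relevant ticket and spec get pulled into the prompt while the unrelated meeting minutes are dropped, which is the "knows the business domain" effect, just implemented crudely.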

Andrei Jiroh Halili (Oct 24 2024 at 16:13):

Tim Uckun said:

I just listened to another podcast about AI and the guest had something really interesting to say. He said something like

Can you please link that podcast episode here in the thread for context? (Maybe it's already posted here, will backread again, so apologies if that's the case.)

Tim Uckun (Oct 24 2024 at 20:31):

@Andrei Jiroh Halili

It was not a Changelog podcast. It was from Universe Today. Fraser Cain puts out amazing content IMHO. This is the episode: https://share.fireside.fm/episode/2YAGasSP+qA4Yv5Dp


Last updated: Apr 04 2025 at 01:15 UTC