Stream: interviews

Topic: 672: From GitLab to Kilo Code


view this post on Zulip Logbot (Jan 07 2026 at 21:00):

We're joined by Sid Sijbrandij, founder of GitLab who led the all-in-one coding platform all the way to IPO. In late 2022, Sid discovered that he had bone cancer. That started a journey he's been on ever since... a journey that he shares with us in great detail. Along the way, Sid continued founding companies including Kilo Code, an all-in-one agentic engineering platform, which he also tells us all about. :link: https://changelog.fm/672

Ch Start Title Runs
01 00:00 Welcome to The Changelog 00:56
02 00:56 Sponsor: Depot 02:22
03 03:19 Start the show! 00:55
04 04:14 Sid's story 02:09
05 06:23 A health crisis 03:29
06 09:52 Doing it in parallel 03:17
07 13:09 Why go to China 01:13
08 14:22 Sid's health today 00:58
09 15:20 The useful medicines 04:28
10 19:48 The type of cancer 06:52
11 26:40 What's next? 03:18
12 29:58 Sponsor: Tiger Data 02:29
13 32:27 Working on Kilo 03:02
14 35:29 Open core Kilo 00:41
15 36:09 Approaching models 03:33
16 39:42 The all-in-one challenge 03:01
17 42:43 More parallels 01:59
18 44:42 Sounds pricey 02:56
19 47:38 Future budgets 01:34
20 49:12 The ultimate polymath 07:00
21 56:12 Sponsor: Notion 02:50
22 59:02 Competing with Cursor 04:44
23 1:03:45 Kilo's UX 01:22
24 1:05:08 Claude Web has repos 03:16
25 1:08:24 Sid the polymath? 00:30
26 1:08:54 The developer's future 03:43
27 1:12:37 Hiring? 01:57
28 1:14:34 Wrapping up 01:20
29 1:15:55 Closing thoughts 01:22

view this post on Zulip Don MacKinnon (Jan 12 2026 at 03:38):

Glad Sid's cancer is in remission; as someone who's had a lot of family go through it, I know how difficult it can be. It sounds like he's doing a lot of amazing work with the companies he's building and the cancer treatment initiatives he's pushing forward. I did have one nitpick about the conversation: Sid mentioned that AGI is "already here", but if he's referring to the 2023 paper, that opinion is not generally accepted by experts in the field. Most are estimating AGI to arrive somewhere around 2040. I know a lot of the VC-backed companies would like folks to think otherwise, or want to change the definition of what AGI is, but that's not the case.

view this post on Zulip Ron Waldon-Howe (Jan 12 2026 at 07:33):

yeah, they've backed themselves into a corner by hyping up "AGI" all whilst pouring money into a technology that seems less and less likely to actually lead to AGI

not without making the term meaningless, at least

personally, i would want a successor to LLM technology to actually understand and process facts and instructions in some way, rather than pretending that it does, so that hallucinations/fabrications and injection attacks are 100% eliminated

view this post on Zulip Don MacKinnon (Jan 12 2026 at 15:26):

The sentiment I have heard is that LLMs likely won't be the path to AGI; like you mentioned, they have no understanding or reasoning. They're merely statistically predictive completions.
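The "statistically predictive completions" point can be made concrete with a toy sketch. Nothing here is a real model: the vocabulary and scores are invented, but the mechanism (a probability distribution over next tokens, with the likeliest one picked) is the core of what an LLM's decoder does:

```python
# Toy illustration (not a real LLM): next-token prediction is sampling
# from a probability distribution over a vocabulary. All numbers here
# are made up for the example.
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of France is".
vocab = ["Paris", "London", "banana", "the"]
logits = [6.0, 2.5, -1.0, 0.5]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding picks the mode
print(next_token)  # the statistically likeliest completion,
                   # with no claim of "understanding" behind it
```

A "hallucination" in this picture is just a high-probability continuation that happens to be false; the mechanism is identical either way.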

view this post on Zulip Jerod Santo (Jan 12 2026 at 16:47):

I also took issue with the claim, hence my surprised response. I chose not to drill down after he stated the definition he was using, because:

A) I don’t know the reference
B) I didn’t want to derail the conversation

view this post on Zulip Ron Waldon-Howe (Jan 12 2026 at 22:56):

yeah, there's an art to interviews that I don't pretend to grasp <3

view this post on Zulip Tim Uckun (Jan 13 2026 at 03:32):

I don't know where I heard this from but..... "Everything the LLM says is a hallucination, it's just that you only recognize some of them". The point is that LLMs don't have an internal representation of the world. They are continually hallucinating.

view this post on Zulip Tim Uckun (Jan 14 2026 at 10:42):

I got really excited after hearing this episode, so I installed Kilo into VS Code. I don't know why or how, but after struggling for a while I could not get it to work properly. I got API keys for Mistral, Gemini, and Gemini CLI and put them all in, I signed up for Kilo and put that API key in, etc., but it still won't work. I think the product needs a little more polish to make onboarding easier.

view this post on Zulip James McNally (Jan 23 2026 at 19:08):

Gave Kilo a go after this, as I wanted to try setting an agent running in the background, working directly against GitHub. I set it on a reasonable alteration to an open source library.

It's been my first experience of API pricing for this, and I quickly burnt through $30. With estimates that the real price is many times this, it was interesting to see the actual cost on a problem.

My estimate is this would have taken me about 4 hours, and I probably spent at least 0.5-1 hours babysitting it, pointing out issues, and cleaning up after. It's actually still not working; it needs one last clean-up.

It's interesting I'm hearing a lot more positivity around this use case and will be trialling more over the coming month but I'll probably stick to my Claude subscription for now!

view this post on Zulip James McNally (Jan 23 2026 at 19:09):

I am excited about having a central platform to work across GitLab and GitHub though - I like the philosophy behind it, though there are certainly some rough edges still.

view this post on Zulip Tim Uckun (Jan 24 2026 at 09:12):

I set up LM Studio and Ollama on the Mac. I then installed continue.dev and, after a bit of fiddling, connected it to LM Studio to use with my locally installed LLMs. I did run into a problem: apparently I ran out of context, so I fiddled with my settings, but I haven't tried it again to see what happens if I push it a bit. The performance of the local LLM seemed fine, a little slower than gemini on my machine but not much slower.

view this post on Zulip Ron Waldon-Howe (Jan 24 2026 at 09:34):

Tim Uckun said:

The performance of the local LLM seemed fine, a little slower than gemini on my machine but not much slower.

oh, which model(s) are you using locally? how many parameters?

view this post on Zulip Tim Uckun (Jan 24 2026 at 20:47):

I am using the gpt30b and qwen 2.5b for completions, both in MLX format. The 30B models are at the limit of what my machine can do, though, so I am going to downgrade to a 20B (or smaller) tonight so I have some spare RAM. I am still learning all this stuff, and there are a lot of buttons to push to make things better or worse: quantization, temperature, etc. Most of the MLX models are provided by the community too, so it's always a bit of a mystery what you are getting and how it's going to work.
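For anyone wondering what the quantization knob actually trades away, here's a toy sketch: storing weights as 8-bit integers instead of 32-bit floats saves roughly 4x memory at the cost of a small rounding error. Real schemes (GGUF k-quants, MLX 4-bit, etc.) are fancier, but the idea is the same. The weight values below are made up:

```python
# Toy absmax int8 quantization: map the weight range onto [-127, 127],
# round to integers, and reconstruct approximate floats on the way back.
weights = [0.82, -0.31, 0.05, -1.20, 0.66]

scale = max(abs(w) for w in weights) / 127   # one float scale per tensor

quantized = [round(w / scale) for w in weights]   # ints in [-127, 127]
dequantized = [q * scale for q in quantized]      # approximate originals

# Rounding moves each value by at most half a quantization step.
max_error = max(abs(w - d) for w, d in zip(weights, dequantized))
assert max_error <= scale / 2
```

More aggressive quantization (fewer bits) shrinks memory further but widens that error bound, which is why heavily quantized models can feel noticeably dumber.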

BTW, AFAIK Ollama doesn't support MLX out of the box yet, which is why I am using LM Studio.

view this post on Zulip Ron Waldon-Howe (Jan 24 2026 at 22:55):

I'm guessing continue.dev is an alternative to https://block.github.io/goose/ ? sort of an orchestrator for agents/sessions?

view this post on Zulip Ron Waldon-Howe (Jan 24 2026 at 22:56):

yeah, i've heard that folks are getting decent results with newer 3B-parameter models these days, but i've still not had anything basic work
keen to figure it out though
it also doesn't help that i haven't been able to get ollama to use Vulkan or ROCm, so it's all on my CPU
I should look into ollama alternatives like you have

view this post on Zulip Tim Uckun (Jan 25 2026 at 03:28):

I haven't given ollama a serious go, but I will for sure, because they are continually working on it and it is the "standard". I also plan on using smaller models with bigger context, but I admit I still have a lot to learn about the nuances of these models and all the parameters I can tune when running them.

view this post on Zulip Siddhartha Golu (Jan 26 2026 at 03:29):

The consensus in the community, or at least on /r/LocalLLaMA, is that Ollama has really gone down the drain with recent updates (or lack thereof). It was always a pretty wrapper around llama.cpp, and now that llama.cpp has a built-in web UI and a way to hot-switch between different models, it makes sense to default to llama.cpp. Everything else is using llama.cpp as its base anyway.

I use a local autocompletion model in neovim, using llama.vim and llama.cpp, and it works surprisingly well! Using qwen2.5-coder-7b along with qwen2.5-coder-0.5b as a draft model for speculative decoding.
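For anyone wanting to try that setup, it boils down to one llama-server invocation: the main model plus a small draft model for speculative decoding (llama-server's `-m` / `-md` flags). The file paths below are placeholders for wherever your GGUF files live, and the port is an assumption based on llama.vim's documented default endpoint - check the llama.vim README for your version:

```shell
# Serve qwen2.5-coder-7b with the 0.5b model as a speculative-decoding
# draft model. The draft model proposes tokens cheaply; the big model
# verifies them in one pass, which speeds up completion latency.
llama-server \
  -m ~/models/qwen2.5-coder-7b-instruct-q4_k_m.gguf \
  -md ~/models/qwen2.5-coder-0.5b-instruct-q4_k_m.gguf \
  -c 8192 \
  --port 8012
```

llama.vim then talks to that local endpoint for fill-in-the-middle completions.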

view this post on Zulip Ron Waldon-Howe (Jan 26 2026 at 05:42):

yeah, i was using lsp-ai (EOL?) to supply LLM autocompletions from ollama into helix, and that was working quite well
another constraint i have is that i want to use the same setup between personal and work, so i have to avoid anything with commercial limitations in its licence (although I think Qwen is Apache?)

view this post on Zulip Tim Uckun (Jan 27 2026 at 20:22):

I found out that 24GB is just not enough to run a decent-sized model. It runs the 30B models, but there isn't enough room left for context to make them useful. I tried the 14B models, and they do have enough context, but on my machine they're just too slow to be useful. I am still going to try running small Qwen coders for code completion, but am going to revert to Gemini for planning and larger tasks. It may not be the best, but it's good enough, and I don't use it enough to go above the free tier.
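The 24GB squeeze is roughly weights plus KV cache, and the KV cache grows linearly with context. A back-of-envelope sketch, using invented but 30B-class model dimensions (real architectures with different layer counts, GQA ratios, or quantized KV caches will shift the numbers):

```python
# Rough memory arithmetic for a 30B-class model on a 24 GB machine.
# All dimensions are illustrative assumptions, not a specific model.
layers = 48
kv_heads = 8          # grouped-query attention keeps this small
head_dim = 128
bytes_per_value = 2   # fp16 KV cache

def kv_cache_gb(context_tokens):
    # 2x for keys and values, per layer, per KV head, per head dimension
    b = 2 * layers * kv_heads * head_dim * bytes_per_value * context_tokens
    return b / 1024**3

# ~30B params at roughly 4-bit quantization (~0.55 bytes/param)
weights_gb = 30e9 * 0.55 / 1024**3

for ctx in (4096, 32768, 131072):
    total = weights_gb + kv_cache_gb(ctx)
    print(f"{ctx:>6} tokens -> ~{total:.1f} GB")
```

Under these assumptions the weights alone take ~15 GB, so a short context fits in 24 GB but a long one blows well past it, which matches the "runs, but not enough context to be useful" experience.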

view this post on Zulip Ron Waldon-Howe (Feb 01 2026 at 09:21):

wow, i've just had something actually work 100% locally for the first time
it took 43 minutes due to difficulty getting ollama to use my AMD GPU, but goose+ollama+ministral-3:3b used ls and cat to look at my dotfiles project and return a not-completely-wrong summary of it
hilariously, it must have used my $SHELL, which is nushell, because I saw the tabular ls output along the way


Last updated: Feb 17 2026 at 17:33 UTC