Over the past two months, we’ve seen some of the most serious supply chain attacks in npm history: phishing campaigns, maintainer account takeovers, and malware published to packages with billions of weekly downloads. What is going on?! What can we do about it? Our old friend, Feross Aboukhadijeh, joins us to help make sense of it all. :link: https://changelog.am/111
Ch | Start | Title | Runs |
---|---|---|---|
01 | 00:00 | Let's talk! | 00:38 |
02 | 00:38 | Sponsor: Depot | 02:12 |
03 | 02:49 | Feross & Friends | 01:14 |
04 | 04:04 | The big picture | 01:46 |
05 | 05:50 | Why now? Why this? | 02:32 |
06 | 08:21 | Phishing maintainers! | 03:30 |
07 | 11:51 | Not for the lulz | 03:37 |
08 | 15:28 | Maximal profit | 03:31 |
09 | 18:59 | The most surprising hack | 04:03 |
10 | 23:03 | exfiltrate and extrude | 02:42 |
11 | 25:44 | Exploiting GitHub Actions | 04:12 |
12 | 29:56 | It all happened so fast | 01:14 |
13 | 31:10 | How Socket discloses | 01:20 |
14 | 32:30 | Disclosing 0days vs malware | 02:19 |
15 | 34:49 | Scanning GitHub Actions | 01:29 |
16 | 36:18 | GH Actions footguns | 03:46 |
17 | 40:04 | Socket's future GH Actions feature | 01:45 |
18 | 41:48 | Evil genius move | 01:25 |
19 | 43:14 | What devs can do | 04:16 |
20 | 47:30 | Staying off the bleeding edge | 02:51 |
21 | 50:21 | How many typosquats | 02:37 |
22 | 52:58 | How we got here | 01:36 |
23 | 54:33 | Was it worth it? | 02:36 |
24 | 57:09 | GitHub's responsibility | 01:28 |
25 | 58:37 | GitHub's roadmap | 05:17 |
26 | 1:03:54 | Why doesn't npm do this | 02:23 |
27 | 1:06:17 | A package vetting period | 01:51 |
28 | 1:08:08 | Publisher opt-in | 03:55 |
29 | 1:12:03 | We figured it out! | 00:33 |
30 | 1:12:36 | Adam goes GH Karen | 02:41 |
31 | 1:15:17 | Codegen everything instead | 04:51 |
32 | 1:20:08 | More companies vendoring | 02:08 |
33 | 1:22:16 | Proxies, mirrors, options | 01:06 |
34 | 1:23:22 | New tool! sfw | 04:15 |
35 | 1:27:37 | The next big thing? | 01:13 |
36 | 1:28:50 | The criteria for free | 02:17 |
37 | 1:31:07 | sfw is a great name | 00:23 |
38 | 1:31:29 | Bye, friends | 01:05 |
39 | 1:32:34 | Next week on the pod | 02:45 |
Unless I missed it, it wasn't mentioned in the episode: https://docs.zizmor.sh/ is an open source static analysis tool for GitHub Actions that will flag (and sometimes even auto-fix) common vulnerabilities like code injection via template expansion or the pull_request_target trigger discussed in the episode.
It runs as a GitHub Action itself, so it's straightforward to adopt, and most issues an initial run surfaces are quick to fix.
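For reference, adopting it can be as small as one workflow file. A minimal sketch, assuming the PyPI package name `zizmor` and a plain CLI invocation; check https://docs.zizmor.sh/ for the currently recommended installation and flags:

```yaml
# Hypothetical minimal workflow; see docs.zizmor.sh for the
# currently recommended setup (there is also an official action).
name: zizmor
on: [pull_request]
permissions:
  contents: read
jobs:
  zizmor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install zizmor        # assumption: distributed on PyPI
      - run: zizmor .github/workflows/  # scan all workflows in the repo
```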
In the long long ago, in the before times, Docker Hub had verified builds
You would register your source code repository with Docker Hub, and your source forge (GitHub, GitLab, etc) would notify Docker Hub via webhook whenever you made a commit
Then Docker Hub would use its own compute resources to fetch your code, build your Dockerfile, and store the resulting image in the Hub
No custom CI on your part, no possibility that the image contains things that were not in the source code repository
Then cryptocurrency came and ruined everything, by chasing all the free compute away: https://drewdevault.com/2021/04/26/Cryptocurrency-is-a-disaster.html
I really really wish there was a way to have verified packages in npm, crates, gems, etc where the package repository performed the build using its own trusted resources and directly from the source code without any tampering
Well, I suppose if package authors paid the central repository to help finance the compute resources, then the repository would be able to build the packages directly from source code and mark them as verified
So, we end up with "verified" being a paid indicator, as on social networks, but with actual functional differences instead of just being a weird flex
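Worth noting: npm has a partial mechanism in this direction already. When publishing from a supported CI environment (e.g. GitHub Actions with OIDC), `npm publish --provenance` attaches a Sigstore attestation linking the published tarball to the source repo and build workflow; it doesn't make the registry do the build, but it does let consumers check that what's on the registry came from a known build. A sketch of both sides:

```shell
# In CI only: publish with a provenance attestation (requires a
# supported, OIDC-enabled CI environment such as GitHub Actions).
npm publish --provenance

# On the consumer side: verify registry signatures and provenance
# attestations for everything in the lockfile.
npm audit signatures
```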
I don't understand why this is such a hard problem. Here is my simple, easy solution.
This would be completely decentralized and packages would be cached by your laptop or CI server.
We need to get away from having all of our packages and source code hosted by one corporation or another.
1-5 and 7 don't offer any protection if a package is tampered with prior to publication, which we've seen with some of the npm malware
I agree that they seem to be good general improvements, nevertheless
Can 6 be solved without also solving the (unsolvable) Halting Problem? We might have some control regarding build-time / publish-time capabilities, but some of the malware we've seen tampered with runtime behaviour, too
Perhaps we're back to the issue of trust, and having to develop chains of trust, something like https://github.com/crev-dev/cargo-crev but for any programming language ecosystem?
GPG key signing parties, anyone? :P
I don't think anything is going to solve the problem of a malicious actor completely, but steps six and seven can effectively minimize this problem. Step 6 is basically code review. If code can be examined, then we can isolate certain system calls such as file I/O, network access, etc. and present a report to the user saying "this package contacts the URL xyz" or "this package attempts to open a file in directory xyz" etc. Step seven can ensure trust by checking a signature.
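The "report of capabilities" idea can be approximated with static analysis of a package's imports. A minimal Python sketch; the set of risky modules is illustrative only, and real malware can of course obfuscate its way past a scanner this naive:

```python
import ast

# Modules whose presence warrants a closer look before installing.
# Illustrative, not exhaustive.
RISKY_MODULES = {"subprocess", "socket", "urllib.request", "ctypes"}

def scan_capabilities(source: str) -> set[str]:
    """Return the set of risky top-level imports found in the source."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name in RISKY_MODULES:
                    found.add(alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module in RISKY_MODULES:
            found.add(node.module)
    return found

sample = "import socket\nimport json\nfrom subprocess import run\n"
print(sorted(scan_capabilities(sample)))  # ['socket', 'subprocess']
```

Socket and similar tools do a much deeper version of this (including behavioral signals), but even a shallow scan like this catches a surprising amount of copy-paste malware.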
In my scenario packages are URLs. http://microsoft.com/os/some_package/2.1.3 for example. Each version would be a unique URL. If the URL is an IPFS URL or a Namecoin URL or something like that then it's immutable, which is even better. When you pull up that URL you get a manifest you can examine which details the author, package version, dependencies, public keys etc. In it there is an entry which is the hash of the zip file. You pull up the zip file using the hash from IPFS or Freenet. You can then open the zip file and compare the manifest inside of it with the manifest you pulled to ensure you are getting the right version of the software.
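The manifest-plus-hash check described here is straightforward to sketch. The manifest format below is hypothetical, following the scheme in the comment (a JSON document carrying name, version, and the hash of the archive):

```python
import hashlib
import json

def verify_archive(manifest_json: str, archive_bytes: bytes) -> bool:
    """Check that an archive matches the sha256 recorded in its manifest.

    Hypothetical manifest format: a JSON document with an
    "archive_sha256" field, as sketched in the thread above.
    """
    manifest = json.loads(manifest_json)
    expected = manifest["archive_sha256"]
    actual = hashlib.sha256(archive_bytes).hexdigest()
    return actual == expected

# Demo: build a manifest for some bytes, then verify both the
# original archive and a tampered copy.
archive = b"fake package contents"
manifest = json.dumps({
    "name": "some_package",
    "version": "2.1.3",
    "archive_sha256": hashlib.sha256(archive).hexdigest(),
})
print(verify_archive(manifest, archive))                 # True
print(verify_archive(manifest, archive + b"tampered"))   # False
```

Content-addressed stores like IPFS give you this property for free, since the URL *is* the hash; the manifest layer adds the human-readable metadata (author, version, keys) on top.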
As you pointed out it's not perfect in that people could camp on domain misspellings, post malicious binaries, etc. I guess for that we need an additional layer of protection from the OS, like requiring permissions before accessing external directories, pausing apps that consume too much CPU, etc.
@Tim Uckun can you explain what vulnerabilities in existing centralized package registries your solution solves? I’m not following
If I’m understanding correctly it sounds like early steps towards (the now abandoned) Pyrsia https://youtu.be/ec8vvD1SG-s?si=uUqQUFWBN-SDUk69
Finally. Under ideal circumstances you should be able to put a price on your published code. Ideally there would be some crypto which is premined like XRP, so transactions are cheap and fast and don't require a lot of computation. You publish your library and if people want to use it they can pay a tiny amount for it. Of course, ideally they would also be paid to run the file system node (i.e. Storj or Filecoin), so it would balance out by and large.
In the container and infrastructure ecosystem, it's increasingly popular to sign artefacts using cosign: https://github.com/sigstore/cosign
But that's another step in the publication process
It seems like the solutions in this space all introduce friction for good faith sharing, and additional friction for consumption
@Kilian Kluge hadn't heard of zizmor, thanks for sharing!
I just received this email today. ECR Login Renewal is an open source tool I wrote a long time ago. Sounds very shady.
image.png
Last updated: Oct 16 2025 at 05:39 UTC