https://arxiv.org/abs/2601.20245
How AI Impacts Skill Formation (1).pdf
These past couple of weeks while using Claude, this paper kept popping up in my head. Am I really learning anything, or just using what I know to guide it to solve my problems... I'm starting to feel more and more dependent on it. I can get loads more done, like spin up an entire Proxmox cluster with a Talos Kubernetes virtual cluster via OpenTofu on a self-hosted GitLab instance, plus a Proxmox Backup Server, all while having every step documented. It would probably have taken me weeks to figure this all out without LLMs, and my documentation would look like crap. I'm very torn between the tech and how to use it in a balanced manner where I can still learn and gain skills.
I also asked Gemini to clean it up for me:
It is a common dilemma for engineers right now—the "efficiency vs. expertise" trade-off. Here are three ways to refine your message while keeping your original voice and specific technical examples intact.
Over the past few weeks, this paper on AI and skill formation has been stuck in my head every time I use Claude. It makes me wonder: Am I actually learning, or just using my existing knowledge to guide the AI toward a solution?
I’m feeling increasingly dependent on it. On one hand, the productivity is insane—I can spin up an entire Proxmox cluster with a Talos Kubernetes virtual cluster via OpenTofu on a self-hosted GitLab instance, complete with a backup server and full documentation. Without LLMs, that would have taken weeks of trial and error, and the documentation wouldn't be nearly as polished. I’m torn between the power of the tech and finding a balance where I can still actually gain new skills.
I’ve been thinking a lot about the impact of AI on skill formation lately. I’ve noticed a growing dependency on Claude—while it allows me to get loads more done (like architecting a Proxmox/Talos/OpenTofu stack on GitLab in a fraction of the usual time), I'm starting to question the "learning" aspect.
Is the AI teaching me, or am I just managing a tool to solve problems? I'm struggling to find that balance where I can leverage this efficiency without losing the ability to build those deep, foundational skills through the "grind" of manual configuration.
These past couple of weeks using Claude, this paper has been popping up in my head constantly. Am I really learning anything, or just guiding the AI to solve my problems? I’m feeling more dependent on it every day.
The output is undeniable—I can deploy a Proxmox cluster with Talos and OpenTofu on a self-hosted GitLab instance, fully documented, in record time. Without LLMs, that’s a multi-week project with mediocre documentation. I’m very torn; the tech is incredible for throughput, but I’m searching for a balanced way to use it where I’m still building my own expertise rather than just "managing" the output.
Technical Note: In all versions, I kept OpenTofu and Talos, as those are the specific tools you used to replace the "weeks of work" you mentioned.
The edits do sound "smart", but I wouldn't write it like that. :dotted_line_face: Do we now have to consider what we present to the world through an LLM lens, or should we proceed with our "original" voice? :thinking:
in terms of having AI adjust communications based on the target audience, have you read https://www.goodreads.com/book/show/13184491-avogadro-corp ?
@Ron Waldon-Howe South Park also had some very interesting takes on LLMs and generative AI (s28e3, s26e4, s26e2, s17e10). The last one was about how the kids all started using Photoshop to represent their real selves :flushed: it was incredibly jarring. I sort of feel like that's what is happening with LLMs nowadays, a lot of social media being digitally revised.
Last updated: Feb 17 2026 at 17:33 UTC