all posts tagged 'artificial intelligence'

How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt


🔗 a linked post to margaretstorey.com » — originally shared here on

But by weeks 7 or 8, one team hit a wall. They could no longer make even simple changes without breaking something unexpected. When I met with them, the team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as we dug deeper, the real problem emerged: no one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together. The code might have been messy, but the bigger issue was that the theory of the system, their shared understanding, had fragmented or disappeared entirely. They had accumulated cognitive debt faster than technical debt, and it paralyzed them.

A very appropriate piece for me right now (thanks for sharing it, Simon!).

I set off earlier this week to build an iOS music player. It seemed like an ambitious-enough project to help me become a better agentic programmer, built around an idea that interests me deeply yet one I’d realistically never be able to tackle on my own.

What I learned was that the glitz and glamour of seeing tokens fly by and then seeing code materialize into existence is addicting. It feels like a slot machine: perhaps this spin will be the thing that eliminates the UI lag! … ope, nope, just ran completely out of tokens. Better upgrade to Max!

I also learned that I’ve been missing something in my life: the joy of making something. I remember seeing my Plex library show up on my iPhone inside the app for the first time. It reminded me of how it felt when I figured out how to change the Windows 95 “It is now safe to power off your computer.” screen back in the day. I made the computer do that!

But yeah, cognitive debt.

I got the MVP up and working, but then attempted a refactor that left the whole codebase a giant goop of spaghetti. I wasn’t paying any attention to the architecture of the app, and pretty soon, I found myself with three different queues for storing media. Completely untenable slop.

So I’m gonna wipe the repo clean and start fresh. This time, I will be armed with a better plan: one that keeps me closer to the action, focused and engaged with the architecture.

Let’s see how this goes.

Continue to the full article


We mourn our craft


🔗 a linked post to nolanlawson.com » — originally shared here on

We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”

I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.

If you would like to grieve, I invite you to grieve with me. We are the last of our kind, and those who follow us won’t understand our sorrow. Our craft, as we have practiced it, will end up like some blacksmith’s tool in an archeological dig, a curio for future generations. It cannot be helped, it is the nature of all things to pass to dust, and yet still we can mourn. Now is the time to mourn the passing of our craft.

Last night, I started work on a project I’m calling Lunara. It’s my own personal iOS client for my Plex music library.

I basically rattled off a whole bunch of wishlist items at an LLM and had it organize them into a README.

I decided my goals for the project are two-fold:

  1. Experiment mightily. Use unfamiliar technologies in a domain I am no longer actively being paid to be an expert in.
  2. Use the LLMs to teach me how to work with them.

So far, in perhaps 2 hours of work, I’ve got a shell of an app that can communicate with my Plex library. All it can do right now is list out the albums, but that would’ve taken me a week or two of diligent troubleshooting before having Codex.
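To give a flavor of what the app is doing under the hood, here’s a minimal sketch of the kind of request involved. This is purely illustrative, not code from Lunara: the `/library/sections/{id}/all` endpoint and the `type=9` (album) filter follow Plex’s usual API conventions, but the server address, section id, and token are all made up, and you’d want to verify the details against your own server.

```python
# Hypothetical sketch: build the URL that asks a Plex library section
# for its albums. In Plex's media-type scheme, type=9 means "album"
# (verify against your server; this is an assumption, not Lunara's code).
from urllib.parse import urlencode

def album_list_url(server: str, section_id: int, token: str) -> str:
    """Return the album-listing URL for one Plex library section."""
    query = urlencode({"type": 9, "X-Plex-Token": token})
    return f"{server}/library/sections/{section_id}/all?{query}"

# Placeholder server, section, and token:
url = album_list_url("http://192.168.1.10:32400", 3, "MY_TOKEN")
print(url)
```

From there, the response is XML (or JSON, with the right `Accept` header) that the app parses into its album list.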

These “woe is my craft” posts make sense to me when you view them through the lens of an engineer who truly cares about the code.

But as someone who has never really cared much about the code, these are tools of liberation. I can come up with a harebrained idea that’ll work specifically for me and prototype something into existence in a couple of days.

There’s time to lament what once was. But like my friend Carrie used to say after losing a big race: you have twenty-four hours to feel bad for yourself. Then, you gotta get back up and keep moving forward.

LLMs are here. They enable a completely different form of developer: the homebaked variety.

Did you watch all those AI commercials yesterday during the Super Bowl? Almost all of them featured people doing their normal, boring jobs, suddenly able to get computers to do all the things we, as engineers, take for granted.

This does mean we engineers won’t be paid as well as we once were, but that’s okay. Now we can go out and solve more complex problems!

You can either adapt or be relegated to the myriad other forms of artistry that hang their hats on craft. You can hire a master woodworker to build you a table, or you can go to IKEA and buy a cheap one.

Figure out what your problem is first. Then find the right approach (and tools) to solve that problem.

Continue to the full article


Prompt caching: 10x cheaper LLM tokens, but how?


🔗 a linked post to ngrok.com » — originally shared here on

What's going on in those vast oceans of GPUs that enables providers to give you a 10x discount on input tokens? What are they saving between requests? It's not a case of saving the response and re-using it if the same prompt is sent again, it's easy to verify that this isn't happening through the API. Write a prompt, send it a dozen times, notice that you get different responses each time even when the usage section shows cached input tokens.

Not satisfied with the answers in the vendor documentation, which do a good job of explaining how to use prompt caching but sidestep the question of what is actually being cached, I decided to go deeper. I went down the rabbit hole of how LLMs work until I understood the precise data providers cache, what it's used for, and how it makes everything faster and cheaper for everyone.

After reading the Joan Westenberg article I posted yesterday, I decided I’m going to read more technical articles and focus my attention on them.

This post from the ngrok blog was very helpful in explaining how LLMs work up through the attention phase, which is where prompt caching happens.

It also got me to go down a rabbit hole to remember how matrix multiplication works. I haven’t heard the phrase “dot product” since high school.
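For anyone else rusty on the same point: an attention score is just a dot product between the newest token’s query vector and each earlier token’s key vector. What providers cache are those per-token key/value vectors, so a shared prompt prefix never needs them recomputed. A toy illustration (nothing like a real model’s dimensions or weights):

```python
# Toy illustration of attention scoring, not a real LLM.
# The "KV cache" that prompt caching stores is, per prompt token,
# its projected key (and value) vectors; here we fake two of them.
import math

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

cached_keys = [[1.0, 0.0, 1.0],   # key vector for prompt token 1
               [0.0, 1.0, 0.0]]   # key vector for prompt token 2
query = [1.0, 1.0, 0.0]           # query vector for the newest token

# Scaled dot-product scores against every cached key:
scores = [dot(query, k) / math.sqrt(len(query)) for k in cached_keys]
print(scores)
```

Because the cached keys depend only on the tokens before them, reusing a prefix means all of this work is already done when your next request arrives.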

Continue to the full article


Why We've Tried to Replace Developers Every Decade Since 1969


🔗 a linked post to caimito.net » — originally shared here on

Here’s the paradox that makes this pattern particularly poignant. We’ve made extraordinary progress in software capabilities. The Apollo guidance computer had 4KB of RAM. Your smartphone has millions of times more computing power. We’ve built tools and frameworks that genuinely make many aspects of development easier.

Yet demand for software far exceeds our ability to create it. Every organization needs more software than it can build. The backlog of desired features and new initiatives grows faster than development teams can address it.

This tension—powerful tools yet insufficient capacity—keeps the dream alive. Business leaders look at the backlog and think, “There must be a way to go faster, to enable more people to contribute.” That’s a reasonable thought. It leads naturally to enthusiasm for any tool or approach that promises to democratize software creation.

The challenge is that software development isn’t primarily constrained by typing speed or syntax knowledge. It’s constrained by the thinking required to handle complexity well. Faster typing doesn’t help when you’re thinking through how to handle concurrent database updates. Simpler syntax doesn’t help when you’re reasoning about security implications.

Continue to the full article


The grief when AI writes most of the code


🔗 a linked post to blog.pragmaticengineer.com » — originally shared here on

I’m coming to terms with the high probability that AI will write most of my code which I ship to prod, going forward. It already does it faster, and with similar results to if I’d typed it out. For languages/frameworks I’m less familiar with, it does a better job than me.

It feels like something valuable is being taken away, and suddenly. It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should.

It’s been a love-hate relationship, to be fair, based on the amount of focus needed to write complex code. Then there’s all the conflicts that time estimates caused: time passes differently when you’re locked in and working on a hard problem.

Now, all that looks like it will be history.

Early in my career, I helped start a company that conducted autonomous vehicle research. As increasingly complex driving tasks became possible to automate, I’d think about how this technology would one day render truck drivers obsolete. Which quickly turned into wondering when this tech would make me obsolete.

There’s no sitting still when it comes to software engineering. Every ten years or so, a new breakthrough comes along and forces a decision: do I evolve my engineering practice to keep up with modern times, or do I double down on my current practice and focus on the fundamentals?

The choice comes down to what you value. Are you someone who enjoys artisanally crafting code, painstakingly optimizing each line to result in a beautiful tool? Are you someone who smashes things until they make the shape of a tool that helps someone accomplish a task?

When it comes to our economic structure, however, it doesn't matter what you value, it matters what someone is willing to pay you to solve their problem.

Some employers will value bespoke, artisanal ("clean") code, but I bet most will not care about what the code looks like. They will want whoever can quickly smash something into the shape of the tool that gets the job done.

As they say: don't hate the player, hate the game.

Continue to the full article


i ran Claude in a loop for three months, and it created a genz programming language called cursed


🔗 a linked post to ghuntley.com » — originally shared here on

The programming language is called "cursed". It's cursed in its lexical structure, it's cursed in how it was built, it's cursed that this is possible, it's cursed in how cheap this was, and it's cursed through how many times I've sworn at Claude.

Absolutely dying at this.

Continue to the full article


Fix the News issue 309


🔗 a linked post to fixthenews.com » — originally shared here on

I’ve cut social media almost entirely out of my life (10/10 recommend), but I still drop into LinkedIn every so often. And honestly? I get exhausted fast by all the heavy, depressing posts.

Yes, there’s a lot of real suffering and injustice in the world. If you’re in the thick of it right now, I hope you’re able to keep hanging in there.

But if you’d like a little break from the bleak hellscape that is 21st-century journalism, check out the latest issue of Fix the News. Or, if you just want the highlights, here are a few that stood out to me:

  • Billions of people have gained clean water, sanitation, and hygiene in the last nine years. (Billions with a B.)

  • In the 12 months prior to June, Africa imported over 15 GW of solar panels. Sierra Leone alone imported enough to cover 65% of its entire generating capacity.

  • Google estimates the median LLM prompt uses 0.24 Wh (about nine seconds of TV), emitting 0.03 g of CO₂ and five drops of water. (How many of you leave the TV on while doing chores?)

  • Wildfires are terrifying, but between 2002 and 2021, global burned area actually fell 26%.
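
The “nine seconds of TV” comparison checks out as a quick back-of-envelope calculation, assuming a television drawing roughly 100 W (my assumption, not a figure from the article):

```python
# Back-of-envelope check of the "nine seconds of TV" comparison.
# The 100 W TV power draw is an assumed round number.
prompt_wh = 0.24   # median energy per LLM prompt, per Google
tv_watts = 100     # assumed TV power draw

# Energy (Wh) divided by power (W) gives hours; convert to seconds.
seconds = prompt_wh / tv_watts * 3600
print(round(seconds, 1))  # comes out to roughly nine seconds
```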

A gentle reminder: news and social media are designed to keep you engaged by stoking fear, outrage, and anxiety. That cycle is hard to break, and a lot of my friends worry that looking away even for a moment means we will collectively slide into totalitarianism and ruin.

That’s a lot of weight to carry alone. Yes, we need to stay vigilant and hold leaders accountable, but we can’t live paralyzed by fear. There are countless good people stepping up, trying to make the world better (including many of you). Try to hold onto that truth alongside the bleak!

Continue to the full article


A Treatise on AI Chatbots Undermining the Enlightenment


🔗 a linked post to maggieappleton.com » — originally shared here on

I think we’ve barely scratched the surface of AI as intellectual partner and tool for thought. Neither the prompts, nor the model, nor the current interfaces – generic or tailored – enable it well. This is rapidly becoming my central research obsession, particularly the interface design piece. It’s a problem I need to work on in some form.

When I read Candide in my freshman humanities course, Voltaire might have been challenging me to question naïve optimism, but he wasn’t able to respond to me in real time, prodding me to go deeper into why it’s problematic, rethink my assumptions, or spawn dozens of research agents to read, synthesise, and contextualise everything written on Panglossian philosophy and Enlightenment ethics.

In fact, at eighteen, I didn’t get Candide at all. It wasn’t contextualised well by my professor or the curriculum, and the whole thing went right over my head. I lacked a tiny thinking partner in my pocket who could help me appreciate the text; a patient character to discuss, debate, and develop my own opinions with.

I can’t agree more. I would love to help as well in this area of research. It sounds extremely rewarding.

Continue to the full article


Christina Wodtke on AI exciting the old guard


🔗 a linked post to linkedin.com » — originally shared here on

The old timers who built the early web are coding with AI like it's 1995.

Think about it: They gave blockchain the sniff test and walked away. Ignored crypto (and yeah, we're not rich now). NFTs got a collective eye roll.

But AI? Different story. The same folks who hand-coded HTML while listening to dial-up modems sing are now vibe-coding with the kids. Building things. Breaking things. Giddy about it.

We Gen X'ers have seen enough gold rushes to know the real thing. This one's got all the usual crap—bad actors, inflated claims, VCs throwing money at anything with "AI" in the pitch deck. Gross behavior all around. Normal for a paradigm shift, but still gross.

The people who helped wire up the internet recognize what's happening. When the folks who've been through every tech cycle since gopher start acting like excited newbies again, that tells you something.

Really feels weird to link to a LinkedIn post, but if it’s good enough for Simon, it’s good enough for me!

It’s not just Gen Xers who feel it. I don’t think I’ve been as excited about any new technology in years.

Playing with LLMs locally is mind-blowingly awesome. There’s not much need to use ChatGPT when I can host my own models on my own machine without fearing what’ll happen to my private info.

Continue to the full article


Does AI Make Us Lazy?


🔗 a linked post to calnewport.com » — originally shared here on

Put simply, writing with AI reduces the maximum strain required from your brain. For many commentators responding to this article, this reality is self-evidently good. “The spreadsheet didn’t kill math; it built billion-dollar industries. Why should we want to keep our brains using the same resources for the same task?”

My response to this reality is split. On the one hand, I think there are contexts in which reducing the strain of writing is a clear benefit. Professional communication in email and reports comes to mind. The writing here is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it? 

But in the context of academia, cognitive offloading no longer seems so benign. In a learning environment, the feeling of strain is often a by-product of getting smarter. To minimize this strain is like using an electric scooter to make the marches easier in military boot camp; it will accomplish this goal in the short term, but it defeats the long-term conditioning purposes of the marches.

I wrote many a journal entry in college complaining about this exact point, except we were still arguing about graphing calculator and laptop use.

Now that I’m older, I understand the split that Cal talks about here.

When I’m writing software to accomplish a task for work, then it’s more important for me to spend my brain energy on building the context of the problem in my head.

When I’m writing an essay and trying to prove that I understand a concept, then it’s more important for me to get the words out of my head and onto paper. Then, I can use tools to help me clean it up later.

Maybe this points to a larger problem I’ve had with our education system. Imagine a spectrum of the intent of college. The left end of the spectrum represents “learning how to critically think about ideas”. The right end represents “learning skills that will help you survive in the real world”.

When someone makes fun of a film studies major, it’s because their evaluation of the spectrum is closer to the right end.

When someone makes fun of students using ChatGPT for writing their essays for them, it’s because their evaluation is closer to the left.

Continue to the full article