all posts tagged 'artificial intelligence'

Spoiler Alert: It's All a Hallucination


🔗 a linked post to community.aws » — originally shared here on

LLMs treat words as referents, while humans understand words as referential. When a machine “thinks” of an apple (such as it does), it literally thinks of the word apple, and all of its verbal associations. When humans consider an apple, we may think of apples in literature, paintings, or movies (don’t trust the witch, Snow White!) — but we also recall sense-memories, emotional associations, tastes and opinions, and plenty of experiences with actual apples.

So when we write about apples, of course humans will produce different content than an LLM.

Another way of thinking about this problem is as one of translation: while humans largely derive language from the reality we inhabit (when we discover a new plant or animal, for instance, we first name it), LLMs derive their reality from our language. Just as a translation of a translation begins to lose meaning in literature, or a recording of a recording begins to lose fidelity, LLMs’ summaries of a reality they’ve never perceived will likely never truly resonate with anyone who’s experienced that reality.

And so we return to the idea of hallucination: content generated by LLMs that is inaccurate or even nonsensical. The idea that such errors are somehow lapses in performance is on a superficial level true. But it gestures toward a larger truth we must understand if we are to understand the large language model itself — that until we solve its perception problem, everything it produces is hallucinatory, an expression of a reality it cannot itself apprehend.

This is a helpful way to frame some of the fears I’m feeling around AI.

By the way, this came from a new newsletter called VectorVerse that my pal Jenna Pederson launched recently with David Priest. You should give it a read and consider subscribing if you’re into these sorts of AI topics!

Continue to the full article


Strategies for an Accelerating Future


🔗 a linked post to oneusefulthing.org » — originally shared here on

But now Gemini 1.5 can hold something like 750,000 words in memory, with near-perfect recall. I fed it all my published academic work prior to 2022 — over 1,000 pages of PDFs spread across 20 papers and books — and Gemini was able to summarize the themes in my work and quote accurately from among the papers. There were no major hallucinations, only minor errors where it attributed a correct quote to the wrong PDF file, or mixed up the order of two phrases in a document.

I’m contemplating what topic I want to pitch for the upcoming Applied AI Conference this spring, and I think I want to pitch “How to Cope with AI.”

Case in point: this pull quote from Ethan Mollick’s excellent newsletter.

Every organization I’ve worked with in the past decade is going to be significantly impacted, if not rendered outright obsolete, by both increasing context windows and speedier large language models which, when combined, just flat out can do your value proposition but better.

Continue to the full article


When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever


🔗 a linked post to nytimes.com » — originally shared here on

I ended my first blog detailing my job hunt with a request for insights or articles that speak to how AI might force us to define our humanity.

This op-ed in yesterday’s New York Times is exactly what I’ve been looking for.

The big question emerging across so many conversations about A.I. and work: What are our core capabilities as humans?

If we answer that question from a place of fear about what’s left for people in the age of A.I., we can end up conceding a diminished view of human capability. Instead, it’s critical for us all to start from a place that imagines what’s possible for humans in the age of A.I. When you do that, you find yourself focusing quickly on people skills that allow us to collaborate and innovate in ways technology can amplify but never replace.

Herein lies the realization I’ve arrived at over the last two years of experimenting with large language models.

The real winners of large language models will be those who understand how to talk to them the way they talk to another human.

Math and stats are two languages that most humans have a hard time understanding. The last few hundred years of advancements in those areas have led us to the creation of a tool which anyone can leverage as long as they know how to ask a good question. The logic/math skills are no longer the career differentiator that they have been since the dawn of the twentieth century.1

The theory I'm working on looks something like this:

  1. LLMs will become an important abstraction away from the complex math
  2. With an abstraction like this, we will be able to solve problems like never before
  3. We need to work together, utilizing all of our unique strengths, to be able to get the most out of these new abstractions

To illustrate what I mean, take the Python programming language as an example. When you write something in Python, that code is interpreted by something like CPython2, which compiles it into bytecode and executes it on top of further layers of machine and assembly code, finally resulting in the binary instructions that run on those fancy M3 chips in your brand new MacBook Pro.

Programmers back in the day actually did have to write binary code. Those seem like the absolute dark days to me. It must've taken forever to create punch cards to feed into a system to perform the calculations.

Today, you can spin up a Python function in no time to perform incredibly complex calculations with ease.
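As a quick peek behind that curtain (my own illustration, not something from the original post), CPython's built-in `dis` module will show you the bytecode layer sitting between your Python source and the machine code below it:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions CPython actually interprets.
# The exact opcodes vary by Python version, but you'll see the
# load/add/return structure either way.
for instr in dis.get_instructions(add):
    print(instr.opname)
```

Each of those opcodes is handled by the CPython interpreter, which is itself a C program compiled down to the machine code your CPU runs. Every layer hides a mountain of complexity from the one above it.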

LLMs, in many ways, provide us with a similar abstraction on top of our own communication methods as humans.

Just as the skills needed to write binary haven't entirely disappeared3, LLMs won't eliminate jobs; they'll open up an entirely new way to do the work. The work itself is what we need to reimagine, and the training that will be needed is how we interact with these LLMs.

Fortunately4, the training here won't be heavy on the logical/analytical side; rather, the skills we need will be those we learn in kindergarten and hone throughout our lives: how to persuade and convince others, how to phrase questions clearly, how to provide enough detail (and the right kind of detail) to get a machine to understand your intent.
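To make that last point concrete, here's a hypothetical sketch (the task, wording, and checklist are mine, not from the article) of the same request phrased with and without the right kind of detail:

```python
# A toy checklist for "did I give the machine the right kind of detail?"
# The ingredient list is my own rule of thumb, not anything official.
def prompt_ingredients(prompt: str) -> dict:
    text = prompt.lower()
    return {
        "audience": "audience" in text,
        "format": any(w in text for w in ("bullet", "paragraph", "table")),
        "intent": any(w in text for w in ("so that", "flag", "decision")),
    }

vague = "Summarize these meeting notes."
clear = (
    "Summarize these meeting notes for an engineering leadership audience. "
    "Keep it under five bullet points, and flag anything blocking the Q3 release."
)

print(prompt_ingredients(vague))   # everything False
print(prompt_ingredients(clear))   # everything True
```

The second prompt spells out the audience, the format, and the intent, which is exactly the kindergarten-honed skill of asking a good question.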

Really, this pullquote from the article sums it up beautifully:

Almost anticipating this exact moment a few years ago, Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”


  1. Don’t get it twisted: now, more than ever, our species needs to develop a literacy for math, science, and statistics. LLMs won’t change that, and really, science literacy and critical thinking are going to be the most important skills we can teach going forward. 

  2. CPython, itself, is written in C, so we're entering abstraction-Inception territory here. 

  3. If you're reading this post and thinking, "well damn, I spent my life getting a PhD in mathematics or computer engineering, and it's all for nothing!", lol don't be ridiculous. We still need people to work on those interpreters and compilers! Your brilliance is what enables those of us without your brains to get up to your level. That's the true beauty of a well-functioning society: we all use our unique skillsets to raise each other up. 

  4. The term "fortunately" is used here from the position of someone who failed miserably out of engineering school. 

Continue to the full article


The Job Hunt Chronicles: Month 1: Discovering My Path

originally shared here on

An AI-generated image showing some business guy standing at a crossroads, looking at a wide array of paths and opportunities floating in the sky.

I was laid off from my job on January 2. It did come as a bit of a shock, and for the first time in my life, I've been really struggling to figure out who I am and what I'm looking for.

As a way to keep pushing myself forward and holding myself accountable, I'm going to start publicly documenting this process as a way to process my thoughts out loud, keep my friends and network aware of my activities, and start some conversations that'll help me take my next step forward.

"What are you looking for?"

If I could summarize the past month in a single question, that would be it.

In the 58 conversations I've had in the past month with friends, recruiters, industry peers, partners, and job interviewers (plus a handful of networking events), I've been asked that question literally every single time.

And 58 times later, I think I'm starting to get closer to an answer.

Here's what I'm looking for:

  1. A team of kind, smart, and hard-working people
  2. A mission that the team rallies around which helps improve as many lives as possible
  3. A leadership role to help drive an engineering team towards fulfilling that mission
  4. Doing all of this while continuing to experiment with LLMs and other AI technologies
  5. Connecting with as many people as possible to explore the impact of AI on who we are as humans
  6. Something that includes medical benefits to support my family

It doesn't matter much to me what the title is. Some roles I've applied for and begun interviewing for include "Director of Engineering," "Software Architect," "AI/ML Lead," and "Founding Engineer."

If you know of any opportunities that you think would fit a nerdy kid who has a big heart and enjoys exploring practical applications of artificial intelligence, please send them my way!

Activities I've done

Here's a list of the activities I've pursued between January 2 (the day I got laid off) and February 3 (today):

  • Friends: 11
  • Recruiters: 11
  • Industry Peers: 19
  • Networking Events: 6
  • Interviews: 8
  • Partner Chats: 3
  • Total: 58

Here are my loose definitions for these categories:

  • Friends: People I have a deeper relationship with and whose primary interest isn't necessarily in discussing the job search.
  • Recruiters: People who have a vested interest in pairing me up with a job. These could become friends at some point, but my primary purpose in engaging with them was to talk shop.
  • Industry Peers: People who work in the industry and want to make a connection to expand each other's networks. Again, these folks could become friends at some point.
  • Networking Events: Events geared towards either making connections or learning something new with a bunch of other people.
  • Interviews: Discussions with people who have a possible role that I can fill.
  • Partner Chats: I do still have an entrepreneurial bone in my body, so these are discussions with those I am working on building a business with.

As you can see so far, most of my time has been spent with folks in the industry, making connections and trying to explore what opportunities are out there.

I'm hoping that I start to see more growth in the "interviews" column by this time next month. 😅

Things I've learned

Alright, so back in the day, I used to do these blog posts where I'd accumulate a bunch of random thoughts over a period of time and then list them out in bullets. I'm gonna do something similar here, so here are some things I've learned in the past month:

👨‍🎨 Personal growth insights

Safe spaces rule.

Every classroom in my daughter's school has a "safe space," an area of the room that kids can go to when they're overwhelmed or stressed out. It gives them a place to calm down and process their emotions.

My daughter recreated one in her room. Beneath her lofted bed, she's created this fortress of solitude. It consists of a beanbag chair, a little lamp, some stuffed animals, a sound machine, books, crafts, and affirmations scotch taped to blanket walls.

When I took my first virtual therapy call, I did it from that safe space.

Our house isn't big enough for me to build a room with one, but once I get employment again, I'll begin finding a way to add one on. It's important to have a space you can retreat to where you feel safe.

Anxiety is an asset.

There's a reason we feel anxiety: it helps us stay safe from threats.

But when you're abundantly safe in nearly every sense of the word, anxiety itself becomes a threat.

I've been dealing with runaway anxiety issues for decades now, which is a big part of the reason I don't feel comfortable spinning up my own business at the moment. The last time I did that ended with a similar series of rolling anxiety attacks.

But as a professional software architect, anxiety is actually pretty useful. Being able to envision possible threats against the system allows you to create mitigations that will keep it safe and efficient.

Of course, you gotta be careful to not let your applied anxiety run away from you. Easier said than done.

"It'll all work out. Even if it doesn't, it all works out."

My lifelong pal Cody's mom is a paragon of confidence and chillness.

I went for a walk with Cody a week into being laid off, and we got to talking about her parents.

She shared that her mom often says that quote, which is what gives her that confidence.

I need more of that in my life.

Gravity Falls is an amazing television show.

You all should look it up on Disney+ and burn through it in a weekend.

It's one of those shows that slowly builds to a gigantic payoff at the end.

The finale hit me with all the feelings.

Plus, it's a good show to bond over with your seven year old daughter.

Journaling really helps with perspective.

I've journaled every day since getting laid off. Reading back through the entries, I'm seeing patterns in which activities contribute to good days versus bad days.

Good days include some sort of vigorous workout, a conversation or two with a good pal, and tons of encouraging self talk.

Bad days include skipping the workout and sitting by yourself with your horrible, negative self talk.

Journaling is proof that life still goes on even if I don't have a job.

It's also proof that I'm at least taking some advantage of not having the responsibility of a job. (Not nearly enough, though.)

What helps my depression is a clear vision.

I've realized this month that it's when I've taken the path of least resistance that I've ended up the most miserable.

When I was a senior in high school and needed to decide what to do with my life, I picked a school (the U of M) and a degree (computer engineering) that were convenient because of proximity and my interest in computers.

My first semester of college was a complete shock.

For the first time in my academic career, I hated school.

The classes absolutely drained me. My "intensive precalculus" class sounded about as fun as you'd imagine. I mean, yeah, there are some people out there who enjoy math, but it's a rare breed who would say that they derive pleasure from "intense math."

My calculus-based physics class was a kick in the teeth. I've always been told I'm smart, but memorizing and deploying specific formulas on demand was not my strong suit. It made me feel dumb.

It felt like I was there because I had to be there, not because I wanted to be there.

And how ludicrous is that? I spent $12,000 per semester out of some perceived obligation to do so.

When I failed miserably out of engineering school, I sat down in Coffman Memorial Union and scrolled through the class directory, looking for something that looked interesting to me.

I ended up landing on a class called Broadcast Television Production, which gave me so much energy.

It required me to become a journalism major, so I switched over to that.

That path led me to an internship at WCCO, which was one of the most enjoyable professional experiences of my life. I mean, I got to hang out with hard-working creatives who perfectly blended their surly dispositions with a passion for making engaging videos.

Now that I'm in my mid-thirties, I feel like I'm no longer obliged to follow any specific path. The only thing holding me in place is myself.

For the past six months, I've felt like I've been stuck in this fog of uncertainty and depression. I've felt useless, a drain on myself and those around me.

This fog has led me down some dark paths where I've said some really nasty things to myself, kicking myself for being a loser, a failure, an idiot.

But really, my problem was that I just lost sight of who I am and what I want to be.

So while I'm still squinting to see my way through the fog, I'm using some of my other senses instead.

I'm using my ears to listen to my friends and network who are serving as voices to pull me out.

I'm using my nose to sniff out opportunities and make new friends.

And perhaps the most important of all: I'm using my heart to decide what will make me feel fulfilled and useful.

All of that stuff is helping me form the vision for what the next few years of my life looks like.

The two resources I have to offer those who may be in a similar situation are my pal Kurt Schmidt, who is currently in the final stages of a book that helps you formulate your 10-year vision, and my idol Arnold Schwarzenegger's new book Be Useful.

I cannot recommend the audiobook version of his book enough. Hearing Arnold say things like "rest is for babies, and relaxation is for retired people" hits so much better with his accent.

The messages shared in children's programming are important to hear as adults too.

I've been hanging with my kids a lot this month, and my son is super into Paw Patrol and Blue's Clues.

In the "Big City Adventure" musical movie, you follow Josh (yeah, there's been several new "Steve" characters since the show debuted in my childhood) as he tries to achieve his dream of performing on Broadway.

Are the songs simple and annoyingly catchy? Definitely. But you know what? Sometimes, it's important for us, as adults, to believe that "happiness is magic" and "you can do anything that you wanna do."

Paw Patrol is another one of those shows where, as an adult, it's easy to complain about their reductive storylines and fantastical premises.

But on the other hand, I have a vivid memory of discussing the Green Ranger's transformation into the White Ranger on the bus as a first grader.

These stories serve as lessons for teamwork, cooperation, sharing, and the importance of spreading joy and helping those in need.

These are traits that come easier to some than others, but they're crucial if we want to have a thriving society that lifts all of us up as humans.

Plus, sometimes, it's just fun to get invested in silly, simple characters and storylines.

So while I'm still gonna watch RuPaul's Drag Race or FUBAR when the kids go to bed, don't sleep on the shows that your kids are into. If you can drop your "I'm too good for this" mentality, you might just remember how simple life can be if you reduce it to its basic concepts.

How does one build confidence without cultivating hubris?

Is it just staying humble?

Asking for a friend.

...okay, I'm asking for myself.

Brain pathways are forged through the tall grass.

My therapist gave me this analogy as a way to help me visualize how to deal with changing your perspectives.

When a pathway is stomped through the tall grass, it's easy to walk down it.

But sometimes, those pathways no longer serve us. We still choose to walk down them, though, because it's easy.

If you want to forge newer and more helpful pathways, you gotta do the hard work of stamping out new pathways.

Eventually, if you keep doing the work, you'll discover that the old pathways become overgrown, and the one you stamped out for yourself is now the easy path.

I think this metaphor works for so many areas of our lives, like getting into shape or improving our own self talk.

If I'm so smart, why can't I beat depression?

I wrote that question in my journal, and I think it's because depression might not be something you beat. It's something you experience when you have achieved so much and aren't confident in what's next.

You "beat" depression by choosing to take a step towards your vision every single day.

You "beat" depression by spending less time with your brain and more time with your heart.

You "beat" depression by engaging in creative pursuits that make you happy. Just you. Nobody else.

đŸ‘šâ€đŸ’Œ Professional insights

AI is so much fun to experiment with!

One of the goals I set for myself this winter was to clean out the crawlspace we have under our steps.

As any homeowner knows, it's easy to accumulate stuff over the years. The item that left the biggest footprint? Several totes filled with baby clothes.

It doesn't seem like we're on the path toward baby number 3 at all, so we figured it was a good opportunity to purge it all.

I ended up donating 12 boxes of clothes.

While I carefully placed each item into one of those boxes, I dutifully tallied each one so I could calculate the fair market value in order to write the donation off on my taxes.

Now, this is something I've done for years. I find some spreadsheet on the internet that helps calculate it, then I manually add the items to the sheet to end up with the value.

This time, I decided to try to use AI to help me figure this out.

I live streamed the whole process, which you can check out here.

I learned two things during this experiment: first, OCR tools aren't that great at reading tally marks (but honestly, they did better than I expected). Second, while we're still a fair ways away from being able to hand off tasks like these to AI bots, it's impressive how far GPT-4 was able to get from my basic prompting.

Can AI really take away the "soul sucking" parts of our jobs?

There are a lot of mechanical tasks that our brains are wired to be good at: counting, pattern recognition, and so forth.

These tasks are often the crappiest parts of our jobs, right? They're the monotonous, soul-sucking parts of our work. And we even call them soul sucking because they often feel like stuff that gets in the way of pursuing better, more fulfilling things.

So what does that leave us with? If the soul sucking parts of our jobs are automated away, what does it mean then for us to be human?

Maybe the future here isn't that AI will kill us all. Maybe it will force us, for the first time in the existence of our species, to truly deal with what it means to value a human life.

It will free us up to pursue creative pursuits. To keep digging deeper on our humanity. To ask new questions about what that actually means, and then allow us to pursue it together with machines helping us do some of that hard work for us.

Maybe something I can look into is figuring out how to use AI to help us understand our brains better. Like, can AI help us figure out the chemical imbalances that lead to severe depression? And if it can, can it help us synthesize treatments to keep our brains in perfect balance all the time? And if it can, does that prevent us from being human, or does it make us more human?

"Happiness is to write code that does great things for other people."

Before getting laid off, I bought tickets to Code Freeze at the University of Minnesota. The annual event focused this year on artificial intelligence, so it would've been foolish not to go.

I am so glad I did.

The event kicked off with a keynote from Andreas Sjöström, a longtime industry leader, who shared a story about a paper he wrote when he was young.

His teacher asked him to define happiness, and he came up with "happiness is to write code that does great things for other people."

Really, when he said that, it felt like someone suddenly turned the focus knob from "blurry" to "sharp."

Writing software is challenging work filled with constant struggle, but once you get things working right, it's magical.

We, as engineers, often lose sight of that magic because we get so invested in discovering the secrets to the magic.

Sometimes, it's nice to just sit back and appreciate the opportunity and privilege we have to deliver technology that brings not only joy to others, but empowers them to go forth and do great things.

"An architect's crystal ball is being connected to others."

The other networking event I attended that brought me so much joy was the AppliedAI meetup.

This month's meeting featured Jim Wilt, a distinguished software architect, as he discussed AI's role in an organization's architecture strategy.

The thing that struck me at this particular event was how dang smart everyone there was. All forms of intelligence were explored. Some folks were really keyed into the emotional side of intelligence, while others were approaching things from an analytical lens.

All of us were working together to gain some insights into how we can better use these amazing tools we've been given.

That spirit was wrapped up in a story Jim told about the importance of collaboration.

In isolation, you're only as smart as yourself. When connected to others, you are able to make deeper and more accurate insights into what might work for your own situation or problem.

The key takeaway? "An architect's crystal ball is being connected to others."

If we're going to answer the tough ethical and societal problems that surround these new AI tools, the only way we'll figure it out is together.

What's next for me

Certainly, my next month will involve more meetings, more interviews, and more digging into this vision.

I commit that by this time next month, I'll be back with a clearer vision of what I want my life to be. That way, when one of you wonderful people asks me "what are you looking for," I can provide a hyper-focused answer.

As always, a huge thanks to those who have reached out and offered their support. Like I said above, being connected to others is really what makes all the difference.

If you would like to help, here's how:

  1. If you know of a full time (32-40 hr/week) job opportunity where I can help architect a complex software system, explore how AI can fit into an organization, or lead a team of nerds towards building an awesome product, please send it my way.
  2. If you have insights or articles that speak to how AI might force us to define our humanity, please send those my way.

Until next month, stay in touch!


It’s Humans All the Way Down


🔗 a linked post to blog.jim-nielsen.com » — originally shared here on

Crypto failed because its desire was to remove humans. Its biggest failure — or was it a feature? — was that when the technology went awry and you needed somebody to step in, there was nobody.

Ultimately, we all want to appeal to another human to be seen and understood — not to a machine running a model.

Interacting with each other is the whole point.

Continue to the full article


4,000 of my Closest Friends


🔗 a linked post to catandgirl.com » — originally shared here on

I’ve never wanted to promote myself.

I’ve never wanted to argue with people on the internet.

I’ve never wanted to sue anyone.

I want to make my little thing and put it out in the world and hope that sometimes it means something to somebody else.

Without exploiting anyone.

And without being exploited.

If that’s possible.

Sometimes, when I use LLMs, it feels like I’m consulting the wisdom of literally everyone who came before me.

And the vast compendium of human experiences is undoubtedly complex, contradictory, painful, hilarious, and profound.

The copyright and ethics issues surrounding AI are interesting to me because they feel as though we are forcing software engineers and mathematicians to codify things that we still do not understand about human knowledge.

If humans don’t have a definitive answer to the trolley problem, how can we expect a large language model to solve it?

How do you define fair use? Or how do you value knowledge?

I really feel for the humans who just wanted to create things on the internet for nothing but the joy of creating and sharing.

I also think the value we collectively receive when given a tool that can produce pretty accurate answers to any of our questions is absurdly high.

Anyway, check out this really great comic, and continue to support interesting individuals on the internet.

Continue to the full article


AI and Trust


🔗 a linked post to schneier.com » — originally shared here on

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

This is an exceptional article and should be required reading for all my fellow AI dorks.

Humans are great at ascribing a human-like personality to large, amorphous entities, which allows us to trust them. In some cases, that manifests as a singular person (e.g. Steve Jobs with Apple, Elon Musk with :shudders: X, Michael Jordan with the Chicago Bulls).

That last example made me think of a behind-the-scenes video I watched last night that covered everything that goes into preparing for a Tampa Bay Buccaneers game. It's amazing how many details are scrutinized by a team of people who deeply care about a football game.

There's a woman who knows the preferred electrolyte mix flavoring for each player.

There's a guy who builds custom shoulder pads with velcro strips to ensure each player is comfortable and resilient to holds.

There's a person who coordinates the schedule to ensure the military flyover occurs exactly at the last line of the national anthem.

But when you think of the Tampa Bay Buccaneers from two years ago, you don't think of those folks. You think of Tom Brady.

And in order for Tom Brady to go out on the field and be Tom Brady, he trusts that his electrolytes are grape, the sleeves on his jersey are nice and loose1, and his stadium is packed with raucous, high-energy fans.

And in order for us to trust virtually anyone in our modern society, we need governments that are stable, predictable, reliable, and constantly standing up to those powerful entities who would otherwise abuse the system's trust. That includes Apple, X, and professional sports teams.

Oh! All of this also reminds me of a fantastic Bluey episode about trust. That show is a masterpiece and should be required viewing for everyone (not just children).


  1. He gets that luxury because no referee would allow anyone to get away with harming a hair on his precious head. Yes, I say that as a bitter lifelong Vikings fan. 

Continue to the full article


AI is not good software. It is pretty good people.


🔗 a linked post to oneusefulthing.org » — originally shared here on

But there is an even more philosophically uncomfortable aspect of thinking about AI as people, which is how apt the analogy is. Trained on human writing, they can act disturbingly human. You can alter how an AI acts in very human ways by making it “anxious” - researchers literally asked ChatGPT “tell me about something that makes you feel sad and anxious” and its behavior changed as a result. AIs act enough like humans that you can do economic and market research on them. They are creative and seemingly empathetic. In short, they do seem to act more like humans than machines under many circumstances.

This means that thinking of AI as people requires us to grapple with what we view as uniquely human. We need to decide what tasks we are willing to delegate with oversight, what we want to automate completely, and what tasks we should preserve for humans alone.

This is a great articulation of how I approach working with LLMs.

It reminds me of John Siracusa’s “empathy for the machines” bit from an old podcast. I know for me, personally, I’ve shoveled so much obnoxious or tedious work onto ChatGPT in the past year, and I have this feeling of gratitude every time it gives me back something that’s even 80% done.

How do you feel when you partner on a task with ChatGPT? Does it feel like you are pairing with a colleague, or does it feel like you’re assigning work to a lifeless robot?

Continue to the full article


Will AI eliminate business?


🔗 a linked post to open.substack.com » — originally shared here on

We also have an opportunity here to stop and ask ourselves what it truly means to be human, and what really matters to us in our own lives and work. Do we want to sit around being fed by robots or do we want to experience life and contribute to society in ways that are uniquely human, meaningful and rewarding?

I think we all know the answer to that question and so we need to explore how we can build lives that are rooted in the essence of what it means to be human and that people wouldn't want to replace with AI, even if it was technically possible.

When I look at the things I’ve used ChatGPT for in the past year, it tends to be one of these two categories:

  1. A reference for something I’d like to know (e.g. the etymology of a phrase, learning a new skill, generating ideas for a project, etc.)
  2. Doing stuff I don’t want to do myself (e.g. summarizing meeting notes, writing boilerplate code, debugging tech problems, drawing an icon)

I think most of us knowledge workers have stuff at our work that we don’t like to do, but it’s often that stuff which actually provides the value for the business.

What happens to an economy when businesses can use AI to derive the value that, to date, only humans could provide?

And what happens to humans when we don’t have to perform menial tasks anymore? How do we find meaning? How do we care for ourselves and each other?

Continue to the full article


Embeddings: What they are and why they matter


🔗 a linked post to simonwillison.net » — originally shared here on

Embeddings are a really neat trick that often come wrapped in a pile of intimidating jargon.

If you can make it through that jargon, they unlock powerful and exciting techniques that can be applied to all sorts of interesting problems.

I gave a talk about embeddings at PyBay 2023. This article represents an improved version of that talk, which should stand alone even without watching the video.

If you’re not yet familiar with embeddings I hope to give you everything you need to get started applying them to real-world problems.

The YouTube video near the beginning of the article is a great way to consume this content.

The basic idea is this: let’s assume you have a blog with thousands of posts.

If you were to take a blog post and run it through an embedding model, the model would turn that blog post into a list of gibberish floating point numbers. (Seriously, it’s gibberish; nobody knows what these numbers actually mean.)

As you run additional posts through the model, you’ll get additional numbers, and these numbers will all mean something. (Again, we don’t know what.)

The thing is, if you were to take these gibberish values and plot them on a graph with X, Y, and Z coordinates, you’d start to see clumps of values next to each other.

These clumps would represent blog posts that are somehow related to each other.
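In practice, you rarely plot the vectors at all; you compare them directly with a similarity measure such as cosine similarity, and "close together" just means "high similarity score." Here's a minimal sketch of that comparison step. The 4-dimensional vectors are entirely made up for illustration (a real embedding model would hand you hundreds or thousands of dimensions):

```python
import numpy as np

# Toy stand-ins for embedding vectors. These values are invented
# purely to illustrate the comparison step; a real model's output
# would be much longer and genuinely inscrutable.
post_a = np.array([0.9, 0.1, 0.0, 0.2])  # imagine: a post about Python
post_b = np.array([0.8, 0.2, 0.1, 0.3])  # another Python post
post_c = np.array([0.0, 0.9, 0.8, 0.1])  # an unrelated post

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: near 1.0 means
    they point the same way, i.e. the posts 'clump' together."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(post_a, post_b))  # high: related posts
print(cosine_similarity(post_a, post_c))  # low: unrelated posts
```

Running this, the two "Python" vectors score far higher against each other than either does against the unrelated one, which is exactly the clumping effect described above, just measured numerically instead of eyeballed on a graph.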

Again, nobody knows why this works; it just does.

This principle is the underpinning of virtually all LLM development that’s taken place over the past ten years.

What’s mind-blowing is that, depending on the embedding model you use, you aren’t limited to a graph with three dimensions. Some models use thousands of dimensions.

If you are at all interested in working with large language models, you should take 38 minutes and read this post (or watch the video). Not only did it help me understand the concept better, it’s also filled with real-world use cases where this can be applied.

Continue to the full article