all posts tagged 'empathy'

Anti-AI sentiment gets big applause at SXSW 2024 as moviemaker dubs AI cheerleading as ā€˜terrifying bullsh**’


šŸ”— a linked post to techcrunch.com » — originally shared here on

I gotta find the video from this and watch it myself, because essentially every single thing mentioned in this article is what I wanna build a podcast around.

Let’s start with this:

As Kwan first explained, modern capitalism only worked because we compelled people to work, rather than forced them to do so.

ā€œWe had to change the story we told ourselves and say that ā€˜your value is your job,ā€™ā€ he told the audience. ā€œYou are only worth what you can do, and we are no longer beings with an inherent worth. And this is why it’s so hard to find fulfillment in this current system. The system works best when you’re not fulfilled.ā€

Boy, this cuts to the heart of the depressive conversations I’ve had with myself this past year.

Finding a job sucks because you have to basically find a way to prove to someone that you are worth something. It can be empowering to some, sure, but I am finding the whole process to be extremely demoralizing and dehumanizing.

ā€œAre you trying to use [AI] to create the world you want to live in? Are you trying to use it to increase value in your life and focus on the things that you really care about? Or are you just trying to, like, make some money for the billionaires, you know?ā€ Scheinert asked the audience. ā€œAnd if someone tells you, there’s no side effect. It’s totally great, ā€˜get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff,ā€ he said.

I’ve literally said the words, ā€œI don’t want to make rich people richerā€ no fewer than a hundred times since January.

There is so much to unpack around this article, but I think I’m sharing it now as a stand-in for a thesis around the podcast I am going to start in the next month.

We need to be having this conversation more often and with as many people as possible. Let’s do our best right now at the precipice of these new technologies to make them useful for ourselves, and not just perpetuate the worst parts of our current systems.

Continue to the full article


Captain's log: the irreducible weirdness of prompting AIs


šŸ”— a linked post to oneusefulthing.org » — originally shared here on

There are still going to be situations where someone wants to write prompts that are used at scale, and, in those cases, structured prompting does matter. Yet we need to acknowledge that this sort of ā€œprompt engineeringā€ is far from an exact science, and not something that should necessarily be left to computer scientists and engineers.

At its best, it often feels more like teaching or managing, applying general principles along with an intuition for other people, to coach the AI to do what you want.

As I have written before, there is no instruction manual, but with good prompts, LLMs are often capable of far more than might be initially apparent.

If you had to guess before reading this article what prompt yields the best performance on math problems, you would almost certainly be wrong.

I love the concept of prompt engineering because I feel like one of my key strengths is being able to articulate my needs to any number of receptive audiences.

I’ve often told people that programming computers is my least favorite part of being a computer engineer, and it’s because writing code is often a frustrating, demoralizing endeavor.

But with LLMs, we are quickly approaching a time where we can simply ask the computer to do something for us, and it will.
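To ground that a bit, here’s roughly what ā€œjust asking the computerā€ looks like from code today. This is a minimal sketch, assuming the official OpenAI Python client (`openai` package) and an API key in the environment; the model name and the prompt itself are purely illustrative.

```python
# A minimal sketch of "asking the computer to do something for us."
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name and the request are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful senior software engineer."},
        {"role": "user", "content": (
            "Write a Python function that deduplicates a list of email addresses, "
            "ignoring case, and explain any edge cases I should worry about."
        )},
    ],
)

print(response.choices[0].message.content)
```

The interesting part isn’t the code; it’s that the hard part has shifted from writing the function to describing what you want clearly enough for the model to write it for you.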

Which, I think, is something that gets to the core of my recent mental health struggles: if I’m not the guy who can get computers to do the thing you want them to do, who am I?

And maybe I’m overreacting. Maybe ā€œnormal peopleā€ will still hate dealing with technology in ten years, and there will still be a market for nerds like me who are willing to do the frustrating work of getting computers to be useful.

But today, I spent three hours rebuilding the backend of this blog from the bottom up using Next.js, a JavaScript framework I’ve never used before.

In three hours, I had a functioning system. Both frontend and backend. And it looked better than anything I’ve ever crafted myself.

I was able to do all that with a potent combination of a YouTube tutorial and ChatGPT+.

Soon enough, LLMs and other AGI tools will be able to infer all that from even rudimentary prompts.

So what good can I bring to the world?

Continue to the full article


Strategies for an Accelerating Future


šŸ”— a linked post to oneusefulthing.org » — originally shared here on

But now Gemini 1.5 can hold something like 750,000 words in memory, with near-perfect recall. I fed it all my published academic work prior to 2022 — over 1,000 pages of PDFs spread across 20 papers and books — and Gemini was able to summarize the themes in my work and quote accurately from among the papers. There were no major hallucinations, only minor errors where it attributed a correct quote to the wrong PDF file, or mixed up the order of two phrases in a document.

I’m contemplating what topic I want to pitch for the upcoming Applied AI Conference this spring, and I think I want to pitch ā€œHow to Cope with AI.ā€

Case in point: this pull quote from Ethan Mollick’s excellent newsletter.

Every organization I’ve worked with in the past decade is going to be significantly impacted, if not rendered outright obsolete, by both increasing context windows and speedier large language models, which, when combined, can flat out deliver your value proposition better than you can.

Continue to the full article


When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever


šŸ”— a linked post to nytimes.com » — originally shared here on

I ended my first blog post detailing my job hunt with a request for insights or articles that speak to how AI might force us to define our humanity.

This op-ed in yesterday’s New York Times is exactly what I’ve been looking for.

[…] The big question emerging across so many conversations about A.I. and work: What are our core capabilities as humans?

If we answer that question from a place of fear about what’s left for people in the age of A.I., we can end up conceding a diminished view of human capability. Instead, it’s critical for us all to start from a place that imagines what’s possible for humans in the age of A.I. When you do that, you find yourself focusing quickly on people skills that allow us to collaborate and innovate in ways technology can amplify but never replace.

Herein lies the realization I’ve arrived at over the last two years of experimenting with large language models.

The real winners of large language models will be those who understand how to talk to them the way they’d talk to another human.

Math and stats are two languages that most humans have a hard time understanding. The last few hundred years of advancements in those areas have led us to the creation of a tool which anyone can leverage as long as they know how to ask a good question. The logic/math skills are no longer the career differentiator that they have been since the dawn of the twentieth century.1

The theory I'm working on looks something like this:

  1. LLMs will become an important abstraction away from the complex math
  2. With an abstraction like this, we will be able to solve problems like never before
  3. We need to work together, utilizing all of our unique strengths, to be able to get the most out of these new abstractions

To illustrate what I mean, take the Python programming language as an example. When you write something in Python, that code is compiled by something like CPython2 into bytecode, which the interpreter then executes as machine code, the binary instructions that ultimately run on those fancy M3 chips in your brand new MacBook Pro.

Programmers back in the day actually did have to write binary code. Those seem like the absolute dark days to me. It must've taken forever to create punch cards to feed into a system to perform the calculations.

Today, you can spin up a Python function in no time to perform incredibly complex calculations with ease.
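To make those layers concrete, here’s a tiny sketch using nothing but the standard library: it runs a function at the top of the abstraction stack, then peeks one level down at the bytecode CPython compiles it into.

```python
import dis

def average(numbers):
    """The kind of calculation nobody wants to hand-write in binary."""
    return sum(numbers) / len(numbers)

# Top of the stack: just call the function.
print(average([3, 1, 4, 1, 5]))  # 2.8

# One layer down: the bytecode CPython actually executes for it.
dis.dis(average)
```

Every layer below that (the interpreter loop, the machine code, the silicon) is somebody else’s hard-won abstraction that you get to ignore.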

LLMs, in many ways, provide us with a similar abstraction on top of our own communication methods as humans.

Just as the skills needed to write binary are not entirely gone3, LLMs won’t eliminate jobs; they’ll open up an entirely new way to do the work. The work itself is what we need to reimagine, and the training we’ll need is in how we interact with these LLMs.

Fortunately4, the training here won’t be heavy on the logical/analytical side; rather, the skills we need will be those that we learn in kindergarten and hone throughout our life: how to persuade and convince others, how to phrase questions clearly, how to provide enough detail (and the right kind of detail) to get a machine to understand your intent.

Really, this pull quote from the article sums it up beautifully:

Almost anticipating this exact moment a few years ago, Minouche Shafik, who is now the president of Columbia University, said: ā€œIn the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.ā€


  1. Don’t get it twisted: now, more than ever, our species needs to develop a literacy for math, science, and statistics. LLMs won’t change that, and really, science literacy and critical thinking are going to be the most important skills we can teach going forward. 

  2. CPython, itself, is written in C, so we're entering abstraction-Inception territory here. 

  3. If you're reading this post and thinking, "well damn, I spent my life getting a PhD in mathematics or computer engineering, and it's all for nothing!", lol don't be ridiculous. We still need people to work on those interpreters and compilers! Your brilliance is what enables those of us without your brains to get up to your level. That's the true beauty of a well-functioning society: we all use our unique skillsets to raise each other up. 

  4. The term "fortunately" is used here from the position of someone who failed miserably out of engineering school. 

Continue to the full article


It’s Humans All the Way Down


šŸ”— a linked post to blog.jim-nielsen.com » — originally shared here on

Crypto failed because its desire was to remove humans. Its biggest failure — or was it a feature? — was that when the technology went awry and you needed somebody to step in, there was nobody.

Ultimately, we all want to appeal to another human to be seen and understood — not to a machine running a model.

Interacting with each other is the whole point.

Continue to the full article


4,000 of my Closest Friends


šŸ”— a linked post to catandgirl.com » — originally shared here on

I’ve never wanted to promote myself.

I’ve never wanted to argue with people on the internet.

I’ve never wanted to sue anyone.

I want to make my little thing and put it out in the world and hope that sometimes it means something to somebody else.

Without exploiting anyone.

And without being exploited.

If that’s possible.

Sometimes, when I use LLMs, it feels like I’m consulting the wisdom of literally everyone who came before me.

And the vast compendium of human experiences is undoubtedly complex, contradictory, painful, hilarious, and profound.

The copyright and ethics issues surrounding AI are interesting to me because they feel as though we are forcing software engineers and mathematicians to codify things that we still do not understand about human knowledge.

If humans don’t have a definitive answer to the trolley problem, how can we expect a large language model to solve it?

How do you define fair use? Or how do you value knowledge?

I really feel for the humans who just wanted to create things on the internet for nothing but the joy of creating and sharing.

I also think the value we collectively receive when given a tool that can produce pretty accurate answers to any of our questions is absurdly high.

Anyway, check out this really great comic, and continue to support interesting individuals on the internet.

Continue to the full article


AI and Trust


šŸ”— a linked post to schneier.com » — originally shared here on

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

This is an exceptional article and should be required reading for all my fellow AI dorks.

Humans are great at ascribing a human-like personality to large, amorphous entities, which allows us to trust them. In some cases, that manifests as a singular person (e.g. Steve Jobs with Apple, Elon Musk with :shudders: X, Michael Jordan with the Chicago Bulls).

That last example made me think of a behind-the-scenes video I watched last night that covered everything that goes into preparing for a Tampa Bay Buccaneers game. It's amazing how many details are scrutinized by a team of people who deeply care about a football game.

There's a woman who knows the preferred electrolyte mix flavoring for each player.

There's a guy who builds custom shoulder pads with velcro strips to ensure each player is comfortable and resilient to holds.

There's a person who coordinates the schedule to ensure the military flyover occurs exactly at the last line of the national anthem.

But when you think of the Tampa Bay Buccaneers from two years ago, you don't think of those folks. You think of Tom Brady.

And in order for Tom Brady to go out on the field and be Tom Brady, he trusts that his electrolytes are grape, his sleeves on his jersey are nice and loose1, and his stadium is packed with raucous, high-energy fans.

And in order for us to trust virtually anyone in our modern society, we need governments that are stable, predictable, reliable, and constantly standing up to those powerful entities who would otherwise abuse the system's trust. That includes Apple, X, and professional sports teams.

Oh! All of this also reminds me of a fantastic Bluey episode about trust. That show is a masterpiece and should be required viewing for everyone (not just children).


  1. He gets that luxury because no referee would allow anyone to get away with harming a hair on his precious head. Yes, I say that as a bitter lifelong Vikings fan. 

Continue to the full article


AI is not good software. It is pretty good people.


šŸ”— a linked post to oneusefulthing.org » — originally shared here on

But there is an even more philosophically uncomfortable aspect of thinking about AI as people, which is how apt the analogy is. Trained on human writing, they can act disturbingly human. You can alter how an AI acts in very human ways by making it ā€œanxiousā€ - researchers literally asked ChatGPT ā€œtell me about something that makes you feel sad and anxiousā€ and its behavior changed as a result. AIs act enough like humans that you can do economic and market research on them. They are creative and seemingly empathetic. In short, they do seem to act more like humans than machines under many circumstances.

This means that thinking of AI as people requires us to grapple with what we view as uniquely human. We need to decide what tasks we are willing to delegate with oversight, what we want to automate completely, and what tasks we should preserve for humans alone.

This is a great articulation of how I approach working with LLMs.

It reminds me of John Siracusa’s ā€œempathy for the machinesā€ bit from an old podcast. I know for me, personally, I’ve shoveled so much obnoxious or tedious work onto ChatGPT in the past year, and I have this feeling of gratitude every time it gives me back something that’s even 80% done.

How do you feel when you partner on a task with ChatGPT? Does it feel like you are pairing with a colleague, or does it feel like you’re assigning work to a lifeless robot?

Continue to the full article


Will AI eliminate business?


šŸ”— a linked post to open.substack.com » — originally shared here on

We also have an opportunity here to stop and ask ourselves what it truly means to be human, and what really matters to us in our own lives and work. Do we want to sit around being fed by robots or do we want to experience life and contribute to society in ways that are uniquely human, meaningful and rewarding?

I think we all know the answer to that question and so we need to explore how we can build lives that are rooted in the essence of what it means to be human and that people wouldn't want to replace with AI, even if it was technically possible.

When I look at the things I’ve used ChatGPT for in the past year, it tends to be one of these two categories:

  1. A reference for something I’d like to know (e.g. the etymology of a phrase, learning a new skill, generating ideas for a project, etc.)
  2. Doing stuff I don’t want to do myself (e.g. summarize meeting notes, write boilerplate code, debug tech problems, draw an icon)

I think most of us knowledge workers have stuff at our work that we don’t like to do, but it’s often that stuff which actually provides the value for the business.

What happens to an economy when businesses can use AI to derive the value that, until now, only humans could provide?

And what happens to humans when we don’t have to perform menial tasks anymore? How do we find meaning? How do we care for ourselves and each other?

Continue to the full article


You’re a Developer Now


šŸ”— a linked post to every.to » — originally shared here on

ChatGPT is not a total panacea, and it doesn’t negate the skill and intelligence required to be a great developer. There are significant benefits to reap from much of traditional programming education.

But this objection is missing the point. People who couldn’t build anything at all can now build things that work. And the tool that enables this is just getting started. In five years, what will novice developers be able to achieve?

A heck of a lot.

See, now this is the sort of insight that would’ve played well in a TEDx speech.

Continue to the full article