all posts tagged 'empathy'

AI isn't useless. But is it worth it?


🔗 a linked post to citationneeded.news » — originally shared here on

There are an unbelievable number of points Molly White makes with which I found myself agreeing.

In fact, I feel like this is an exceptionally accurate assessment of the current state of AI, and LLMs in particular. If you're curious about AI, give this article a read.

A lot of my personal fears about the potential power of these tools come from the speculation that the LLM CEOs make about their forthcoming updates.

And I don't think that fear is completely unfounded. I mean, look at what tools we had available in 2021 compared to April 2024. We've come a long way in three years.

But right now, these tools are quite hard to use without spending a ton of time learning their intricacies.

The best way to fight fear is with knowledge. Knowing how to wield these tools helps me deal with my fears, and I enjoy showing others how to do the same.

One point Molly makes about the generated text got me to laugh out loud:

I particularly like how, when I ask them to try to sound like me, or to at least sound less like a chatbot, they adopt a sort of "cool teacher" persona, as if they're sitting backwards on a chair to have a heart-to-heart. Back when I used to wait tables, the other waitresses and I would joke to each other about our "waitress voice", which were the personas we all subconsciously seemed to slip into when talking to customers. They varied somewhat, but they were all uniformly saccharine, with slightly higher-pitched voices, and with the general demeanor as though you were talking to someone you didn't think was very bright. Every LLM's writing "voice" reminds me of that.

"Waitress voice" is how I will classify this phenomenon from now on.

You know how I can tell when my friends have used AI to make LinkedIn posts?

When all of a sudden, they use emoji and phrases like "Exciting news!"

It's not even that waitress voice is a negative thing. After all, we're expected to communicate with our waitress voices in social situations when we don't intimately know somebody.

Calling a customer support hotline? Shopping in person for something? Meeting your kid's teacher for the first time? New coworker in their first meeting?

All of these are situations in which I find myself using my own waitress voice.

It's a safe play for the LLMs to use it as well when they don't know us.

But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?

This is what I often brag about to people when I speak highly of LLMs.

These systems are incredible at the BS work. But they're currently terrible with the stuff humans are good at.

I would love to live in a world where the technology industry widely valued making incrementally useful tools to improve peoples' lives, and were honest about what those tools could do, while also carefully weighing the technology's costs. But that's not the world we live in. Instead, we need to push back against endless tech manias and overhyped narratives, and oppose the "innovation at any cost" mindset that has infected the tech sector.

Again, thank you, Molly White, for penning such a poignant manifesto, seeing as I was having trouble articulating one of my own.

Innovation and growth at any cost are concepts which have yet to lead to a markedly better outcome for us all.

Let's learn how to use these tools to make all our lives better, then let's go live our lives.

Continue to the full article


Happy 20th Anniversary, Gmail. I'm Sorry I'm Leaving You.


🔗 a linked post to nytimes.com » — originally shared here on

I am grateful — genuinely — for what Google and Apple and others did to make digital life easy over the past two decades. But too much ease carries a cost. I was lulled into the belief that I didn't have to make decisions. Now my digital life is a series of monuments to the cost of combining maximal storage with minimal intention.

I have thousands of photos of my children but few that I've set aside to revisit. I have records of virtually every text I've sent since I was in college but no idea how to find the ones that meant something. I spent years blasting my thoughts to millions of people on X and Facebook even as I fell behind on correspondence with dear friends. I have stored everything and saved nothing.

This is an example of what AI, in its most optimistic state, could help us with.

We already see companies doing this. In the Apple ecosystem, the Photos widget is perhaps the best piece of software they've produced in years.

Every single day, I am presented with a slideshow of a friend who is celebrating their birthday, a photo of my kids from this day in history, or a memory that fits with an upcoming event.

All of that is powered by rudimentary1 AI.

Imagine what could be done when you unleash a tuned large language model on our text histories. On our photos. On our app usage.

AI is only as good as the data it is provided. We've been trusting our devices with the most intimate and vulnerable parts of ourselves for two decades.

This is supposed to be the payoff for the last twenty years of surveillance capitalism, I think?

All those secrets we share, all of those activities we've done online for the last twenty years, this will be used to somehow make our lives better?

The optimistic take is that we'll receive better auto-suggestions for text responses to messages that sound more like us. We'll receive tailored traffic suggestions based on the way we drive. We'll receive a "long lost" photo of our kid from a random trip to the museum.

The pessimistic take is that we'll give companies the exact words which will cause us to take action. Our own words will be warped to get us to buy something we've convinced ourselves we need.

My hunch is that both takes will be true. We need to be smart enough to know how to use these tools to help ourselves and when to put them down.

I haven't used Gmail as my primary email for years now2, but this article is giving me more motivation to finally pull the plug and shrink my digital footprint.

This is not something the corporations did to me. This is something I did to myself. But I am looking now for software that insists I make choices rather than whispers that none are needed. I don't want my digital life to be one shame closet after another. A new metaphor has taken hold for me: I want it to be a garden I tend, snipping back the weeds and nourishing the plants.

My wife and I spent the last week cleaning out our garage. The clutter had accumulated to the point where you could only park one car inside, strategically aligned so you could squeeze through a narrow pathway and open a door.

As of this morning, we donated ten boxes of items and are able to comfortably move around the space. While there is more to be done, the garage now feels more livable, useful, and enjoyable to be inside.

I was able to clear off my work bench and mount a pennant above it. The pennant is autographed by the entire starting defensive line of the 1998 Minnesota Vikings.

Every time I walk through my garage, I see it hanging there and it makes me so happy.

Our digital lives should be the same way.

My shame closet is a 4 terabyte hard drive containing every school assignment since sixth grade, every personal webpage I've ever built, multiple sporadic backups of various websites I am no longer in charge of, and scans of documents that ostensibly may mean something to me some day.

Scrolling through my drive, I'm presented with a completely chaotic list that is too overwhelming to sort through.

Just like I cleaned out my garage, I ought to do the same to this junk drawer.

I'll revert to Ezra's garden metaphor here: keep a small, curated garden that contains the digital items that are truly important and meaningful to you. Prune the rest.

(Shout out to my friend Dana for sharing this with me. I think she figured out my brand.)


  1. By today's standards. 

  2. I use Fastmail. You should give it a try (that link is an affiliate link)! 

Continue to the full article


npm install everything, and the complete and utter chaos that follows


🔗 a linked post to boehs.org » — originally shared here on

We tried to hang a pretty picture on a wall, but accidentally opened a small hole. This hole caused the entire building to collapse. While we did not intend to create a hole, and feel terrible for all the people impacted by the collapse, we believe it's also worth investigating what failures of compliance testing & building design could allow such a small hole to cause such big damage.

Multiple parties involved, myself included, are still students and/or do not code professionally. How could we have been allowed to do this by accident?

It's certainly no laughing matter, either to the people who rely on npm or to the kids who did this.

But man, it is comical to see the Law of Unintended Consequences when it decides to rear its ugly head.

I applaud the students who had the original idea and decided to see what would happen if you installed every single npm package at once. It's a good question, and the answer turned out to be: you uncover a fairly significant issue with how npm maintains integrity across all of its packages.

But I guess the main reason Iā€™m sharing this article is as a case study on how hard it is to moderate a system.

I'm still a recovering perfectionist, and the older I get, the more I come across examples (both online like this and also in my real life) where you can do everything right and still end up losing big.

The best thing you can do when you see something like this is to pat your fellow human on the back and say, "man, that really sucks, I'm sorry."

The worst thing you can do, as evidenced in this story, is to cuss out some teenagers.

Continue to the full article


Anti-AI sentiment gets big applause at SXSW 2024 as moviemaker dubs AI cheerleading as 'terrifying bullsh**'


🔗 a linked post to techcrunch.com » — originally shared here on

I gotta find the video from this and watch it myself, because essentially every single thing mentioned in this article is what I wanna build a podcast around.

Let's start with this:

As Kwan first explained, modern capitalism only worked because we compelled people to work, rather than forced them to do so.

"We had to change the story we told ourselves and say that 'your value is your job,'" he told the audience. "You are only worth what you can do, and we are no longer beings with an inherent worth. And this is why it's so hard to find fulfillment in this current system. The system works best when you're not fulfilled."

Boy, this cuts to the heart of the depressive conversations I've had with myself this past year.

Finding a job sucks because you have to basically find a way to prove to someone that you are worth something. It can be empowering to some, sure, but I am finding the whole process to be extremely demoralizing and dehumanizing.

"Are you trying to use [AI] to create the world you want to live in? Are you trying to use it to increase value in your life and focus on the things that you really care about? Or are you just trying to, like, make some money for the billionaires, you know?" Scheinert asked the audience. "And if someone tells you, there's no side effect. It's totally great, 'get on board' — I just want to go on the record and say that's terrifying bullshit. That's not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff," he said.

I've literally said the words, "I don't want to make rich people richer," no fewer than a hundred times since January.

There is so much to unpack in this article, but I think I'm sharing it now as a stand-in for a thesis around the podcast I am going to start in the next month.

We need to be having this conversation more often and with as many people as possible. Let's do our best right now at the precipice of these new technologies to make them useful for ourselves, and not just perpetuate the worst parts of our current systems.

Continue to the full article


Captain's log: the irreducible weirdness of prompting AIs


🔗 a linked post to oneusefulthing.org » — originally shared here on

There are still going to be situations where someone wants to write prompts that are used at scale, and, in those cases, structured prompting does matter. Yet we need to acknowledge that this sort of "prompt engineering" is far from an exact science, and not something that should necessarily be left to computer scientists and engineers.

At its best, it often feels more like teaching or managing, applying general principles along with an intuition for other people, to coach the AI to do what you want.

As I have written before, there is no instruction manual, but with good prompts, LLMs are often capable of far more than might be initially apparent.

If you had to guess, before reading this article, what prompt yields the best performance on math problems, you would almost certainly be wrong.

I love the concept of prompt engineering because I feel like one of my key strengths is being able to articulate my needs to any number of receptive audiences.

I've often told people that programming computers is my least favorite part of being a computer engineer, and it's because writing code is often a frustrating, demoralizing endeavor.

But with LLMs, we are quickly approaching a time where we can simply ask the computer to do something for us, and it will.

Which, I think, is something that gets to the core of my recent mental health struggles: if I'm not the guy who can get computers to do the thing you want them to do, who am I?

And maybe I'm overreacting. Maybe "normal people" will still hate dealing with technology in ten years, and there will still be a market for nerds like me who are willing to do the frustrating work of getting computers to be useful.

But today, I spent three hours rebuilding the backend of this blog from the bottom up using Next.js, a JavaScript framework I've never used before.

In three hours, I had a functioning system, both front end and back end. And it looked better than anything I've ever crafted myself.

I was able to do all that with a potent combination of a YouTube tutorial and ChatGPT+.

Soon enough, LLMs and other AGI tools will be able to infer all that from even rudimentary prompts.

So what good can I bring to the world?

Continue to the full article


Strategies for an Accelerating Future


🔗 a linked post to oneusefulthing.org » — originally shared here on

But now Gemini 1.5 can hold something like 750,000 words in memory, with near-perfect recall. I fed it all my published academic work prior to 2022 — over 1,000 pages of PDFs spread across 20 papers and books — and Gemini was able to summarize the themes in my work and quote accurately from among the papers. There were no major hallucinations, only minor errors where it attributed a correct quote to the wrong PDF file, or mixed up the order of two phrases in a document.

I'm contemplating what topic I want to pitch for the upcoming Applied AI Conference this spring, and I think I want to pitch "How to Cope with AI."

Case in point: this pull quote from Ethan Mollick's excellent newsletter.

Every organization I've worked with in the past decade is going to be significantly impacted, if not rendered outright obsolete, by both increasing context windows and speedier large language models, which, when combined, can just flat-out do your value proposition, but better.

Continue to the full article


When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever


🔗 a linked post to nytimes.com » — originally shared here on

I ended my first blog detailing my job hunt with a request for insights or articles that speak to how AI might force us to define our humanity.

This op-ed in yesterday's New York Times is exactly what I've been looking for.

[…] The big question emerging across so many conversations about A.I. and work: What are our core capabilities as humans?

If we answer that question from a place of fear about what's left for people in the age of A.I., we can end up conceding a diminished view of human capability. Instead, it's critical for us all to start from a place that imagines what's possible for humans in the age of A.I. When you do that, you find yourself focusing quickly on people skills that allow us to collaborate and innovate in ways technology can amplify but never replace.

Herein lies the realization I've arrived at over the last two years of experimenting with large language models.

The real winners of large language models will be those who understand how to talk to them like you talk to a human.

Math and stats are two languages that most humans have a hard time understanding. The last few hundred years of advancements in those areas have led us to the creation of a tool which anyone can leverage as long as they know how to ask a good question. Logic and math skills are no longer the career differentiator that they have been since the dawn of the twentieth century.1

The theory I'm working on looks something like this:

  1. LLMs will become an important abstraction away from the complex math
  2. With an abstraction like this, we will be able to solve problems like never before
  3. We need to work together, utilizing all of our unique strengths, to be able to get the most out of these new abstractions

To illustrate what I mean, take the Python programming language as an example. When you write something in Python, that code is interpreted by something like CPython2, which then is compiled into machine/assembly code, which then gets translated to binary code, which finally results in the thing that gets run on those fancy M3 chips in your brand new MacBook Pro.

Programmers back in the day actually did have to write binary code. Those seem like the absolute dark days to me. It must've taken forever to create punch cards to feed into a system to perform the calculations.

Today, you can spin up a Python function in no time to perform incredibly complex calculations with ease.
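To see those abstraction layers for yourself, here's a small sketch using Python's built-in `dis` module, which shows the bytecode CPython compiles your source into (the `average` function is just a hypothetical example of mine):

```python
import dis

def average(numbers):
    """One line of Python that hides several layers of machinery."""
    return sum(numbers) / len(numbers)

# At the top of the stack, the function just works:
print(average([1, 2, 3]))  # → 2.0

# One layer down, dis shows the bytecode instructions the CPython
# virtual machine actually executes on your behalf:
dis.dis(average)
```

You never have to think about the bytecode, let alone the machine code below it, which is exactly the kind of leverage the analogy above suggests LLMs could give us over plain-language instructions.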

LLMs, in many ways, provide us with a similar abstraction on top of our own communication methods as humans.

Just like the skills that were needed to write binary are not entirely gone3, LLMs won't eliminate jobs; they'll open up an entirely new way to do the work. The work itself is what we need to reimagine, and the training that will be needed is in how we interact with these LLMs.

Fortunately4, the training here won't be heavy on the logical/analytical side; rather, the skills we need will be those that we learn in kindergarten and hone throughout our lives: how to persuade and convince others, how to phrase questions clearly, how to provide enough detail (and the right kind of detail) to get a machine to understand your intent.

Really, this pullquote from the article sums it up beautifully:

Almost anticipating this exact moment a few years ago, Minouche Shafik, who is now the president of Columbia University, said: "In the past, jobs were about muscles. Now they're about brains, but in the future, they'll be about the heart."


  1. Don't get it twisted: now, more than ever, our species needs to develop a literacy for math, science, and statistics. LLMs won't change that, and really, science literacy and critical thinking are going to be the most important skills we can teach going forward. 

  2. CPython, itself, is written in C, so we're entering abstraction-Inception territory here. 

  3. If you're reading this post and thinking, "well damn, I spent my life getting a PhD in mathematics or computer engineering, and it's all for nothing!", lol don't be ridiculous. We still need people to work on those interpreters and compilers! Your brilliance is what enables those of us without your brains to get up to your level. That's the true beauty of a well-functioning society: we all use our unique skillsets to raise each other up. 

  4. The term "fortunately" is used here from the position of someone who failed miserably out of engineering school. 

Continue to the full article


It's Humans All the Way Down


🔗 a linked post to blog.jim-nielsen.com » — originally shared here on

Crypto failed because its desire was to remove humans. Its biggest failure — or was it a feature? — was that when the technology went awry and you needed somebody to step in, there was nobody.

Ultimately, we all want to appeal to another human to be seen and understood — not to a machine running a model.

Interacting with each other is the whole point.

Continue to the full article


4,000 of my Closest Friends


🔗 a linked post to catandgirl.com » — originally shared here on

I've never wanted to promote myself.

I've never wanted to argue with people on the internet.

I've never wanted to sue anyone.

I want to make my little thing and put it out in the world and hope that sometimes it means something to somebody else.

Without exploiting anyone.

And without being exploited.

If that's possible.

Sometimes, when I use LLMs, it feels like I'm consulting the wisdom of literally everyone who came before me.

And the vast compendium of human experiences is undoubtedly complex, contradictory, painful, hilarious, and profound.

The copyright and ethics issues surrounding AI are interesting to me because they feel as though we are forcing software engineers and mathematicians to codify things that we still do not understand about human knowledge.

If humans don't have a definitive answer to the trolley problem, how can we expect a large language model to solve it?

How do you define fair use? Or how do you value knowledge?

I really feel for the humans who just wanted to create things on the internet for nothing but the joy of creating and sharing.

I also think the value we collectively receive when given a tool that can produce pretty accurate answers to any of our questions is absurdly high.

Anyway, check out this really great comic, and continue to support interesting individuals on the internet.

Continue to the full article


AI and Trust


🔗 a linked post to schneier.com » — originally shared here on

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain — any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can't function without it. And that we don't even think about it is a measure of how well it all works.

This is an exceptional article and should be required reading for all my fellow AI dorks.

Humans are great at ascribing human-like personalities to large, amorphous entities, which allows us to trust them. In some cases, that manifests as a singular person (e.g. Steve Jobs with Apple, Elon Musk with :shudders: X, Michael Jordan with the Chicago Bulls).

That last example made me think of a behind-the-scenes video I watched last night that covered everything that goes into preparing for a Tampa Bay Buccaneers game. It's amazing how many details are scrutinized by a team of people who deeply care about a football game.

There's a woman who knows the preferred electrolyte mix flavoring for each player.

There's a guy who builds custom shoulder pads with velcro strips to ensure each player is comfortable and resilient to holds.

There's a person who coordinates the schedule to ensure the military flyover occurs exactly at the last line of the national anthem.

But when you think of the Tampa Bay Buccaneers from two years ago, you don't think of those folks. You think of Tom Brady.

And in order for Tom Brady to go out on the field and be Tom Brady, he trusts that his electrolytes are grape, the sleeves on his jersey are nice and loose1, and his stadium is packed with raucous, high-energy fans.

And in order for us to trust virtually anyone in our modern society, we need governments that are stable, predictable, reliable, and constantly standing up to those powerful entities who would otherwise abuse the system's trust. That includes Apple, X, and professional sports teams.

Oh! All of this also reminds me of a fantastic Bluey episode about trust. That show is a masterpiece and should be required viewing for everyone (not just children).


  1. He gets that luxury because no referee would allow anyone to get away with harming a hair on his precious head. Yes, I say that as a bitter lifelong Vikings fan. 

Continue to the full article
