all posts tagged 'artificial intelligence'

Generative AI is for the idea guys


🔗 a linked post to rachsmith.com » — originally shared here on

When I started working in tech in the early 2010s I began running into idea guys. Upon discovering I was a developer, they would get very excited and pitch me their big idea for an app/website/service to see what I thought.

After receiving a polite reaction they would often say something like: "I think it would do really well, if I could only figure out how to get it made". Like the idea was the main thing that mattered, and the pesky task of actually building it was an insignificant little bump along the road to success. At this point I would usually tell them that ideas are worth nothing until they are implemented.

This post is brilliant. Sometimes, I use generative AI like cardboard to prototype an idea.

Testing out an idea is the first step toward doing the countless hours of work needed to make an idea a real thing.

Since I don't think I've said it on this blog yet: "Ideas are worthless. Execution is everything."

(I also love Rachel's footnote after the word "guys", saying "it was literally always guys" 😂)

Continue to the full article


A Plea for Sober AI


🔗 a linked post to dbreunig.com » — originally shared here on

Below all this hype, there's a quiet revolution happening. I keep meeting new companies and seeing new products that make practical use of LLMs. They use them to solve narrow problems and prioritize consistency and efficiency over giant, all-singing, all-dancing models. I keep meeting people who are amplifying their capacity and abilities by handing simple, mundane tasks to AIs, which they then refine and improve. AI-assisted coding has been a boon for early-stage start-ups, which are shipping products with a maturity and polish beyond their years.

This is the world of Sober AI.

Sober AI is an incredible concept.

There's so much progress that people can make using this tool. While it might not be able to solve all our problems, knowing how to wield it is a superpower in this economy.

Continue to the full article


AI is not like you and me


🔗 a linked post to zachseward.com » — originally shared here on

Aristotle, who had a few things to say about human nature, once declared, "The greatest thing by far is to have a command of metaphor," but academics studying the personification of tech have long observed that metaphor can just as easily command us. Metaphors shape how we think about a new technology, how we feel about it, what we expect of it, and ultimately how we use it.

I love metaphors. I gotta reflect on this idea a bit more.

There is something kind of pathological going on here. One of the most exciting advances in computer science ever achieved, with so many promising uses, and we can't think beyond the most obvious, least useful application? What, because we want to see ourselves in this technology?

Meanwhile, we are under-investing in more precise, high-value applications of LLMs that treat generative A.I. models not as people but as tools. A powerful wrench to create sense out of unstructured prose. The glue of an application handling messy, real-world data. Or a drafting table for creative brainstorming, where a little randomness is an asset not a liability. If there's a metaphor to be found in today's AI, you're most likely to find it on a workbench.

Bingo! AI is a tool, not a person.

The other day, I made a joke on LinkedIn about the easiest way for me to spot a social media post that was written with generative AI: the phrase "Exciting News!" alongside one of these emojis: 🚀, 🎉, or 🚨.

It's not that everyone who uses those things necessarily used ChatGPT.

It's more like how I would imagine a talented woodworker would be able to spot a rookie mistake in a novice's first attempt at a chair.

And here I go, using a metaphor again!

Continue to the full article


AI isn't useless. But is it worth it?


🔗 a linked post to citationneeded.news » — originally shared here on

There are an unbelievable number of points Molly White makes with which I found myself agreeing.

In fact, I feel like this is an exceptionally accurate perspective on the current state of AI and LLMs in particular. If you're curious about AI, give this article a read.

A lot of my personal fears about the potential power of these tools come from the speculation that the LLM CEOs make about their forthcoming updates.

And I don't think that fear is completely unfounded. I mean, look at what tools we had available in 2021 compared to April 2024. We've come a long way in three years.

But right now, these tools are quite hard to use without spending a ton of time learning their intricacies.

The best way to fight fear is with knowledge. Knowing how to wield these tools helps me deal with my fears, and I enjoy showing others how to do the same.

One point Molly makes about the generated text got me to laugh out loud:

I particularly like how, when I ask them to try to sound like me, or to at least sound less like a chatbot, they adopt a sort of "cool teacher" persona, as if they're sitting backwards on a chair to have a heart-to-heart. Back when I used to wait tables, the other waitresses and I would joke to each other about our "waitress voice", which were the personas we all subconsciously seemed to slip into when talking to customers. They varied somewhat, but they were all uniformly saccharine, with slightly higher-pitched voices, and with the general demeanor as though you were talking to someone you didn't think was very bright. Every LLM's writing "voice" reminds me of that.

ā€œWaitress voiceā€ is how I will classify this phenomenon from now on.

You know how I can tell when my friends have used AI to make LinkedIn posts?

When all of a sudden, they use emoji and phrases like ā€œExciting news!ā€

Itā€™s not even that waitress voice is a negative thing. After all, itā€™s expected to communicate with our waitress voices in social situations when we donā€™t intimately know somebody.

Calling a customer support hotline? Shopping in person for something? Meeting your kidā€™s teacher for the first time? New coworker in their first meeting?

All of these are situations in which I find myself using my own waitress voice.

Itā€™s a safe play for the LLMs to use it as well when they donā€™t know us.

But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?

This is what I often brag about to people when I speak highly of LLMs.

These systems are incredible at the BS work. But they're currently terrible at the stuff humans are good at.

I would love to live in a world where the technology industry widely valued making incrementally useful tools to improve peoples' lives, and were honest about what those tools could do, while also carefully weighing the technology's costs. But that's not the world we live in. Instead, we need to push back against endless tech manias and overhyped narratives, and oppose the "innovation at any cost" mindset that has infected the tech sector.

Again, thank you Molly White for penning such a poignant manifesto, seeing as I was having trouble articulating one of my own.

Innovation and growth at any cost are concepts which have yet to lead to a markedly better outcome for us all.

Let's learn how to use these tools to make all our lives better, then let's go live our lives.

Continue to the full article


Happy 20th Anniversary, Gmail. I'm Sorry I'm Leaving You.


🔗 a linked post to nytimes.com » — originally shared here on

I am grateful — genuinely — for what Google and Apple and others did to make digital life easy over the past two decades. But too much ease carries a cost. I was lulled into the belief that I didn't have to make decisions. Now my digital life is a series of monuments to the cost of combining maximal storage with minimal intention.

I have thousands of photos of my children but few that I've set aside to revisit. I have records of virtually every text I've sent since I was in college but no idea how to find the ones that meant something. I spent years blasting my thoughts to millions of people on X and Facebook even as I fell behind on correspondence with dear friends. I have stored everything and saved nothing.

This is an example of what AI, in its most optimistic state, could help us with.

We already see companies doing this. In the Apple ecosystem, the Photos widget is perhaps the best piece of software they've produced in years.

Every single day, I am presented with a slideshow of a friend who is celebrating their birthday, a photo of my kids from this day in history, or a memory that fits with an upcoming event.

All of that is powered by rudimentary1 AI.

Imagine what could be done when you unleash a tuned large language model on our text histories. On our photos. On our app usage.

AI is only as good as the data it is provided. We've been trusting our devices with the most intimate and vulnerable parts of ourselves for two decades.

This is supposed to be the payoff for the last twenty years of surveillance capitalism, I think?

All those secrets we share, all of those activities we've done online for the last twenty years, this will be used to somehow make our lives better?

The optimistic take is that we'll receive better auto suggestions for text responses to messages that sound more like us. We'll receive tailored traffic suggestions based on the way we drive. We'll receive a "long lost" photo of our kid from a random trip to the museum.

The pessimistic take is that we'll give companies the exact words which will cause us to take action. Our own words will be warped to get us to buy something we've convinced ourselves we need.

My hunch is that both takes will be true. We need to be smart enough to know how to use these tools to help ourselves and when to put them down.

I haven't used Gmail as my primary email for years now2, but this article is giving me more motivation to finally pull the plug and shrink my digital footprint.

This is not something the corporations did to me. This is something I did to myself. But I am looking now for software that insists I make choices rather than whispers that none are needed. I don't want my digital life to be one shame closet after another. A new metaphor has taken hold for me: I want it to be a garden I tend, snipping back the weeds and nourishing the plants.

My wife and I spent the last week cleaning out our garage. The clutter had accumulated to the point where you could only park one car inside, strategically angled so you could squeeze through a narrow pathway and open a door.

As of this morning, we donated ten boxes of items and are able to comfortably move around the space. While there is more to be done, the garage now feels more livable, useful, and enjoyable to be inside.

I was able to clear off my work bench and mount a pendant above it. The pendant is autographed by the entire starting defensive line of the 1998 Minnesota Vikings.

Every time I walk through my garage, I see it hanging there and it makes me so happy.

Our digital lives should be the same way.

My shame closet is a 4 terabyte hard drive containing every school assignment since sixth grade, every personal webpage I've ever built, multiple sporadic backups of various websites I am no longer in charge of, and scans of documents that ostensibly may mean something to me some day.

Scrolling through my drive, I'm presented with a completely chaotic list that is too overwhelming to sort through.

Just like how I cleaned out my garage, I ought to do the same to this junk drawer.

I'll revert to Ezra's garden metaphor here: keep a small, curated garden that contains the digital items that are truly important and meaningful to you. Prune the rest.

(Shout out to my friend Dana for sharing this with me. I think she figured out my brand.)


  1. By today's standards.

  2. I use Fastmail. You should give it a try (that link is an affiliate link)! 

Continue to the full article


Confessions of a Viral AI Writer


🔗 a linked post to wired.com » — originally shared here on

Lately, I've sometimes turned to ChatGPT for research. But I've stopped having it generate prose to stand in for my own. If my writing is an expression of my particular consciousness, I'm the only one capable of it. This applies, to be clear, to GPT-3's line about holding hands with my sister. In real life, she and I were never so sentimental. That's precisely why I kept writing over the AI's words with my own: The essay is equally about what AI promises us and how it falls short. As for Sudowrite's proposal to engineer an entire novel from a few keywords, forget it. If I wanted a product to deliver me a story on demand, I'd just go to a bookstore.

But what if I, the writer, don't matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn't like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world — and maybe we're already in it — in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

This is a good example of exactly why I think software engineers are in trouble for many roles.

People hire nerds to get computers to do things for them. Things that are not especially novel to those of us who program computers, but extremely valuable to those who cannot.

Will AI be able to create bespoke, artisanal software better than a creative engineer?

Will AI be able to create bespoke, artisanal music better than a creative musician?

Maybe. Maybe not.

But it will surely be able to create adequate software and music faster than humans can.

I'm afraid the days of being paid insane amounts of money because I can get computers to do what I want are approaching an end.

Continue to the full article


What I Do When I Can't Sleep


🔗 a linked post to every.to » — originally shared here on

Listening to him name exactly what he liked about the wine turned up the hairs on the back of my neck. Suddenly, my own experience of the wine felt deeper and more satisfying. I wanted to know: How was he able to identify these flavors?

He told me the secret was to blindly smell things and try to label them. You see, the part of your brain that's responsible for smells is naturally mute. It's called the olfactory bulb, and it's an ancient fist of neurons just behind your eyes. It has only indirect backroad connections to the parts of the brain that control language, like Broca's area. So, even though you might have an easy time knowing whether or not you like a scent, it's not easy to label that scent with a word. It'll feel like groping through a dark closet for something you know is there, but can't quite find.

I did not know this, and now I will be nerding out on ways to train my taste buds.

But this article isn't just about developing your sense of taste. It's also about defining your style.

In much the same way the author describes how he is, for the first time in his life, articulating what he likes as a writer, I am starting to do the same with several areas of my life.

Case in point: I have a huge collection of quotes that you can see on the main page of this site.

I just fed all of them through ChatGPT with this prompt:

I've attached a series of quotes which are meaningful to me. First off, tell me the consistent themes you recognize across all of the quotes. Then, tell me what this says about the things that inspire me.

Here is the result:

Across all the quotes, several consistent themes emerge:

  1. Growth and Self-improvement: Many quotes focus on the importance of personal growth, overcoming challenges, and continuous improvement.
  2. Perseverance and Resilience: There's a strong theme of enduring hardship, staying strong in the face of adversity, and the importance of perseverance.
  3. Wisdom and Perspective: The quotes often reflect on acquiring wisdom, gaining perspective, and understanding life's deeper meanings and truths.
  4. Happiness and Contentment: There's an emphasis on finding happiness, being content with what one has, and the pursuit of joy in life.
  5. Community and Relationships: Several quotes highlight the value of community, the importance of relationships, and the strength that comes from vulnerability and connection with others.

These themes suggest that you are inspired by ideas related to personal growth, resilience in the face of adversity, the pursuit of wisdom and understanding, the importance of happiness and contentment, and the value of community and relationships. It paints a picture of someone who values both internal development and external connections, seeking both personal fulfillment and meaningful interactions with others.

:extremely Marge Simpson voice: "Well, duh."

(For real though, pretty awesome that we have a tool that is exceptionally good at summarizing large amounts of data and picking out patterns throughout the dataset.)

Continue to the full article


The Robot Report #1 — Reveries


🔗 a linked post to randsinrepose.com » — originally shared here on

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, "Well, how do you know it's true? How do you know the robot isn't hallucinating?" Before I explain my process, I want to point out that I don't believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it's not the robot's responsibility to tell the truth; it's your job to understand what is and isn't true.

You're being changed by the things you see and read for your entire life, and hopefully, you've developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it's stopped, and you wonder, "Is this true?"

Knowing when to question truth is fundamental to being a human. Unfortunately, we've spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn't robots doing it better; it's the humans getting worse.

I'm working on an extended side quest, and in the past few hours of pairing with ChatGPT, I've found myself constantly second-guessing a large portion of the decisions and code that the AI produced.

This article pairs well with this one I read today about a possible social exploit that relies on frequently hallucinated package names.

Simon Willison writes:

Bar Lanyado noticed that LLMs frequently hallucinate the names of packages that don't exist in their answers to coding questions, which can be exploited as a supply chain attack.

He gathered 2,500 questions across Python, Node.js, Go, .NET and Ruby and ran them through a number of different LLMs, taking notes of any hallucinated packages and if any of those hallucinations were repeated.

One repeat example was "pip install huggingface-cli" (the correct package is "huggingface[cli]"). Bar then published a harmless package under that name in January, and observed 30,000 downloads of that package in the three months that followed.

I'll be honest: during my side quest here, I've 100% blindly run npm install on packages without double checking official documentation.
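It got me thinking about what a guardrail for that laziness might look like. Here's a rough sketch (the helper names and the requirements format are just illustrative, not anything from the article): refuse to install a package that isn't already pinned in a reviewed requirements file, so a hallucinated name fails loudly instead of silently pulling down whatever squats on it.

```python
# Sketch: only allow installs of packages already pinned in a reviewed
# requirements.txt, so a hallucinated name fails loudly instead of
# silently pulling an attacker's package. Names here are hypothetical.
import re

def pinned_packages(requirements_text: str) -> set[str]:
    """Extract package names from pinned lines like 'requests==2.31.0'."""
    names = set()
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        match = re.match(r"^([A-Za-z0-9._-]+)", line)
        if match:
            # PyPI treats -, _, and . as equivalent and names as case-insensitive
            names.add(re.sub(r"[-_.]+", "-", match.group(1)).lower())
    return names

def safe_to_install(package: str, requirements_text: str) -> bool:
    """True only if the requested package is on the reviewed, pinned list."""
    normalized = re.sub(r"[-_.]+", "-", package).lower()
    return normalized in pinned_packages(requirements_text)

reqs = "requests==2.31.0\nhuggingface_hub==0.20.3\n"
print(safe_to_install("requests", reqs))         # True
print(safe_to_install("huggingface-cli", reqs))  # False: not on the reviewed list
```

The same idea applies to npm: check against the lockfile before blindly installing whatever the model suggests.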

These large language models truly are mirrors to our minds, showing all sides of our personalities from our most fit to our most lazy.

Continue to the full article


Claude and ChatGPT for ad-hoc sidequests


🔗 a linked post to simonwillison.net » — originally shared here on

I'm an unabashed fan of Simon Willison's blog. Some of his posts admittedly go over my head, but I needed to share this post because it gets across the point I have been trying to articulate myself about AI and how I use it.

In the post, Simon talks about wanting to get a polygon object created that represents the boundary of Adirondack Park, the largest park in the United States (which occupies a fifth of the whole state!).

That part in and of itself is nerdy and a fun read, but this section here made my neck hurt from nodding aggressively in agreement:

Isn't this a bit trivial? Yes it is, and that's the point. This was a five minute sidequest. Writing about it here took ten times longer than the exercise itself.

I take on LLM-assisted sidequests like this one dozens of times a week. Many of them are substantially larger and more useful. They are having a very material impact on my work: I can get more done and solve much more interesting problems, because I'm not wasting valuable cycles figuring out ogr2ogr invocations or mucking around with polygon libraries.

Not to mention that I find working this way fun! It feels like science fiction every time I do it. Our AI-assisted future is here right now and I'm still finding it weird, fascinating and deeply entertaining.
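For the curious, the artifact at the end of a sidequest like Simon's is usually a small, boring function you can lean on forever. Here's a minimal ray-casting point-in-polygon sketch with a made-up rectangular boundary (not the actual Adirondack Park polygon, which has far more vertices):

```python
# Minimal ray-casting point-in-polygon test, the kind of check a boundary
# polygon like Simon's enables. Coordinates below are made up for illustration.
def point_in_polygon(lon: float, lat: float, polygon: list[tuple[float, float]]) -> bool:
    """Return True if (lon, lat) falls inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A toy rectangular "park" boundary
park = [(-75.0, 43.0), (-73.5, 43.0), (-73.5, 44.5), (-75.0, 44.5)]
print(point_in_polygon(-74.2, 43.7, park))  # True: inside the rectangle
print(point_in_polygon(-76.0, 43.7, park))  # False: west of the boundary
```

In practice you'd load the real boundary from a GeoJSON file and hand this job to a tested library, but the five-minute version really is about this small.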

Frequent readers of this blog know that a big part of the work I've been doing since being laid off is reflecting on what brings me joy and happiness.

Work over the last twelve years of my life represented a small portion of something that used to bring me a ton of joy (building websites and apps). But somewhere along the way, building websites was no longer enjoyable to me.

I used to love learning new frameworks, expanding the arsenal of tools in my toolbox to solve an ever expanding set of problems. But spending my free time developing a new skill with a new tool began to feel like I was working but not getting paid.

And that notion really doesn't sit well with me. I still love figuring out how computers work. It's just nice to do so without the added pressure of building something to make someone else happy.

Which brings me to the "side quest" concept Simon describes in this post, which is something I find myself doing nearly every day with ChatGPT.

When I was going through my album artwork on Plex, my first instinct was to go to ChatGPT and have it help me parse through Plex's internal thumbnail database to build me a view which shows all the artwork on a single webpage.

It took me maybe 10 minutes of iterating with ChatGPT, and now I know more about the workings of Plex's internal media caching database than I ever would have before.
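The script itself was the throwaway kind you'd never blog about on its own. A sketch of the shape it took (the cache path here is a placeholder I made up; Plex's actual layout varies by install):

```python
# Sketch of the kind of throwaway script ChatGPT helped with: collect cached
# artwork images and emit a single HTML page to eyeball them all at once.
# The Plex cache location varies by install; the path below is a placeholder.
import html
from pathlib import Path

def build_gallery(image_paths: list[str]) -> str:
    """Render a bare-bones HTML page with one <img> per artwork file."""
    tags = "\n".join(
        f'<img src="{html.escape(p)}" width="200" loading="lazy">'
        for p in sorted(image_paths)
    )
    return f"<!DOCTYPE html>\n<html><body>\n{tags}\n</body></html>"

if __name__ == "__main__":
    cache = Path.home() / "plex-artwork-cache"  # placeholder, not the real path
    images = [str(p) for p in cache.glob("**/*.jpg")] if cache.exists() else []
    Path("gallery.html").write_text(build_gallery(images))
```

Ten minutes of iterating with an LLM gets you exactly this kind of single-purpose tool: ugly, disposable, and perfectly useful.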

Before ChatGPT, I would've had to spend several hours poring over open source code or out-of-date documentation. In other words: I would've given up after the first Google search.

It feels like another application of Moravec's paradox. As Garry Kasparov observed with chess bots, it feels like the winning approach here is one where LLMs and humans work in tandem.

Simon ends his post with this:

One of the greatest misconceptions concerning LLMs is the idea that they are easy to use. They really aren't: getting great results out of them requires a great deal of experience and hard-fought intuition, combined with deep domain knowledge of the problem you are applying them to. I use these things every day. They help me take on much more interesting and ambitious problems than I could otherwise. I would miss them terribly if they were no longer available to me.

I could not agree more.

I find it hard to explain to people how to use LLMs without sitting down for more than an hour and going through a bunch of examples of how they work.

These tools are insanely cool and insanely powerful when you bring your own knowledge to them.

Left to their own devices, they simply parrot back what they believe to be the most statistically correct response to whatever prompt was provided.

I haven't been able to come up with a good analogy for that sentiment yet, because the closest I can come up with is "it's like a really good personal assistant", which feels like the same analogy the tech industry always uses to market any new tool.

You wouldn't just send a personal assistant off to go do your job for you. A great assistant is there to compile data, to make suggestions, to be a sounding board, but at the end of the day, you are the one accountable for the final output.

If you copy and paste ChatGPT's responses into a court brief and it contains made-up cases, that's on you.

If you deploy code that contains glaring vulnerabilities, that's on you.

Maybe I shouldn't be lamenting that I lost my joy of learning new things about computers, because I sure have been filled with joy learning how to best use LLMs these past couple years.

Continue to the full article


Anti-AI sentiment gets big applause at SXSW 2024 as moviemaker dubs AI cheerleading as 'terrifying bullsh**'


🔗 a linked post to techcrunch.com » — originally shared here on

I gotta find the video from this and watch it myself, because essentially every single thing mentioned in this article is what I wanna build a podcast around.

Let's start with this:

As Kwan first explained, modern capitalism only worked because we compelled people to work, rather than forced them to do so.

"We had to change the story we told ourselves and say that 'your value is your job,'" he told the audience. "You are only worth what you can do, and we are no longer beings with an inherent worth. And this is why it's so hard to find fulfillment in this current system. The system works best when you're not fulfilled."

Boy, this cuts to the heart of the depressive conversations I've had with myself this past year.

Finding a job sucks because you have to basically find a way to prove to someone that you are worth something. It can be empowering to some, sure, but I am finding the whole process to be extremely demoralizing and dehumanizing.

"Are you trying to use [AI] to create the world you want to live in? Are you trying to use it to increase value in your life and focus on the things that you really care about? Or are you just trying to, like, make some money for the billionaires, you know?" Scheinert asked the audience. "And if someone tells you, there's no side effect. It's totally great, 'get on board' — I just want to go on the record and say that's terrifying bullshit. That's not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff," he said.

I've literally said the words, "I don't want to make rich people richer" no fewer than a hundred times since January.

There is so much to unpack around this article, but I think I'm sharing it now as a stand-in for a thesis around the podcast I am going to start in the next month.

We need to be having this conversation more often and with as many people as possible. Let's do our best right now at the precipice of these new technologies to make them useful for ourselves, and not just perpetuate the worst parts of our current systems.

Continue to the full article