My "bicycle of the mind" moment with LLMs


🔗 a linked post to birchtree.me » — originally shared here on

So yes, the same jokers who want to show you how to get rich quick with the latest fad are drawn to this year’s trendiest technology, just like they were to crypto and just like they will be to whatever comes next. All I would suggest is that you look back on the history of Birchtree where I absolutely roasted crypto for a year before it just felt mean to beat a clearly dying horse, and recognize that the people who are enthusiastic about LLMs aren’t just fad-chasing hype men.

Continue to the full article


The Year in Quiet Quitting


🔗 a linked post to newyorker.com » — originally shared here on

As we approach the sixth month of debate over this topic, what’s interesting to me is not the details of quiet quitting, or even the question of how widespread the phenomenon actually is, but our collective reaction to its provocations: we’re simultaneously baffled and enthusiastic. To understand this complicated reality, it helps to adopt a generational lens.

Though quiet quitting has gathered diverse adherents, its core energy comes from knowledge workers who are members of Generation Z (born between 1997 and 2012). This is reflected in the movement’s emergence on TikTok, and in the survey data.

Indeed, a look backward reveals that knowledge workers in every previous generation seem to have experienced a similar pattern of work crisis followed by reconceptualization.

It’s probably no surprise to readers of this site that I am a Cal Newport fan, but I really appreciate his summary of the quiet quitting movement.

The interesting part of this article is how he discusses each generation's view of employment. It appears every generation since WWII has experienced a similar crisis.

Continue to the full article


A Coder's Sprint: Behind The Scenes of the Twin Cities Marathon Graphics

originally shared here on

a road along a river with full fall foliage where many people are running a race

(Editor's note: That graphic is Midjourney's interpretation of what the Twin Cities Marathon looks like. Can you imagine if the Twin Cities Marathon actually looked like that? Running on top of the Mississippi River lmao)

Growing up, I took every chance I could get to be around live TV production.

The thing that keeps drawing me back to the medium is that you basically get one chance to tell a story whose conclusion is uncertain. The pressure to get it right is exhilarating.

Even though I haven't been part of a live production in roughly a decade, I had a unique opportunity this past weekend to be part of the live coverage of the Medtronic Twin Cities Marathon.

My role was to be the liaison between the marathon and the production crew that was filming, directing, and producing the show that would ultimately be broadcast on KARE 11 (the local NBC station). I was to watch the race unfold and keep the crew informed of any interesting moves that we should mention on air.

I was also the liaison between the production crew and the timing crew. I would take periodic data dumps from the timing team, run them through a script I wrote, and pump out some graphics to help keep the audience up to date with the current leaders.

As you may or may not know, the race itself was unfortunately cancelled, so our collective efforts never made it to air.

But even though we didn't get to try out our system live, I wanted to share some of the behind the scenes process for how I was able to get all this stuff to speak to each other. I'm mostly writing this for myself for the coming year, as I'd like to keep improving this process so the 2024 version of the race is chock full of awesome graphics that help to tell the story of the race.

The final product

Every good post should show the results first, right? Well, here are the two graphics I was able to get built in about 72 hours:

A large leaderboard graphic for television

This is a leaderboard intended to be a full-screen graphic, likely to be used with a blurred static shot under it.

A small leaderboard graphic for television

This is a leaderboard intended to be used while on top of a single shot with the leader in full frame.

The timing data

I was fortunate to spend the beginning of my career working with the crew at Mtec Results. They are the team that helps time many of the major races around the country, most notably the Twin Cities Marathon and Grandma's Marathon, but they're also often called on to help out with other high-profile races like the marathons in Boston and New York City.

It took about 3 minutes of explaining the idea of using "real time data"[^tcm-2023-recap-1] to the team before it was met with a resounding "how can we help?"

We went back and forth around file formats and specs, and after we worked our way through uninteresting technical challenges[^tcm-2023-recap-2], we ultimately settled on a CSV format that looked something like this:

BIB,FIRST NAME,LAST NAME,GENDER,AGE,CITY,STATE,NATIONALITY,TEAM,TEAMTYPE,TIME OF DAY FINISH,GUN TIME,NET TIME,5K,10K,15K,20K,HALF,25K,30K,35K,40K,FIRST_HALF,SECOND_HALF,5 MILE
103,Rosalynne,Sidney,F,31,Burnsville,MN,USA,,,10:33:07.73,2:33:09,2:33:09,18:10,36:24,54:43,1:12:48,1:16:51,1:30:56,1:48:57,2:07:38,2:25:40,1:16:51,,

We decided that, given our time constraints, we would just keep that CSV in a shared Dropbox folder, and that file would get periodically updated throughout the race.

The graphics

The production team at Freestyle Productions uses an open source tool called SPX Graphics to generate and play back graphics during broadcasts.

SPX Graphics is a fascinating tool that uses HTML, JS, and CSS along with layers to help display all sorts of useful graphics like bugs, lower thirds, and crawls.

It took a little troubleshooting to understand the template structure that SPX uses, but in conjunction with ChatGPT, I was able to build out some basic HTML to create a table that I could dynamically populate:[^tcm-2023-recap-3]

<body>
  <div id="viewport">
    <div id="spxTable">
      <header>
        <div class="logo" id="marathon-logo">
          <img src="./TCM/tcm-logo.png">
        </div>
        <div class="logo" id="ten-mile-logo" style="display: none;">
          <img src="./TCM/ten-mile-logo.png">
        </div>
        <div id="title-container">
          MEN-FINISH
        </div>
      </header>

      <section class="table-body">
        <div class="table-row">
          <div>1</div>
          <div>Rosalynne SIDNEY</div>
          <div>USA</div>
          <div>🇺🇸</div>
          <div>2:33:09</div>
          <div>--</div>
        </div>
        <!-- Add more table-row divs as needed -->
      </section>
    </div>
  </div>
</body>

Hooray, we now have a basic table for a full screen leaderboard! If you throw a little fancy CSS on top of it, you have a really nice looking table.

...but how do we populate it?

Translating the timing data

The CSV that I showed above contains some great data, but it's not particularly useful at the moment.

For starters, if I want to show the current leaders at 25K, do I use the values in the 25K column or do I use the values in the GUN TIME column?

If I want to show how far back each racer is from each other (the time differential between each person), how do I generate that?

What happens if the racer's last name got entered in ALL CAPS instead of Title Case?

I figured I needed to write a tool that helped me transform this data into something a little easier to manipulate from the leaderboard template... so I did!

Behold, csvToJson.html in all its glory!

A screenshot of my rudimentary JSON generator

Because I know I'm going to forget what all these fields are for come next year, here's an explanation of what each field does:

  • CSV File: This is a basic input field to grab the CSV file from a local disk.
  • Header row: This is the name of the CSV column from which I want to pull the timing value (e.g. GUN TIME, which would pull 2:33:09 from the CSV example above)
  • Race: This allows me to tell the front end template which race to style it as
  • Title: This is the title in the top right corner for the full screen version or the first title on the smaller version
  • Subtitle: This is the second title on the smaller version (basically the name of the race)
  • Mile Split: In the smaller graphic, there's a little notch in the top right corner that contains the mile split for the most recently passed timing mat. This field lets me fill that in with the split.
  • Show time difference: On the full screen graphic, we may (or may not) want to show the time difference (e.g. +2:09).
  • Max number of elements: This should've probably said "max number of rows" because that's what this field controls. The full screen version of this graphic looks best with 10 entries, whereas the smaller version of the graphic looks best with 5.

Once you click "Load CSV", I fire off a Javascript method which loads the CSV and converts each row into a JSON object that looks something like this:

{
  "race": "marathon",
  "title": "Women's Leaders - 25K",
  "subtitle": "MEDTRONIC TWIN CITIES MARATHON",
  "mile_split": "25K",
  "show_time_difference": true,
  "table_data": [
    {
      "position": "1",
      "name": "Rosalynne SIDNEY",
      "time": "2:33:09",
      "difference": "--",
      "state": "MN",
      "country_name": "USA",
      "country_flag": "🇺🇸"
    },
    // More entries here
  ]
}
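The conversion method itself isn't anything fancy. I won't reproduce the real thing here, but a minimal sketch of the core loop looks something like this (the option names mirror the form fields above; the helper names are illustrative, not my actual code):

// A sketch of the core CSV-to-JSON conversion. Assumes the CSV is already
// sorted by the chosen split (something I did by hand this year).
function toSeconds(t) {
  // "2:33:09" or "36:24" -> total seconds
  return t.split(":").map(Number).reduce((acc, n) => acc * 60 + n, 0);
}

function formatDiff(seconds) {
  const m = Math.floor(seconds / 60);
  const s = seconds % 60;
  return "+" + m + ":" + String(s).padStart(2, "0");
}

function titleCase(s) {
  return s.charAt(0).toUpperCase() + s.slice(1).toLowerCase();
}

// Only the five countries from my representative sample; codes are assumed.
const FLAGS = { USA: "🇺🇸", CAN: "🇨🇦", MEX: "🇲🇽", KEN: "🇰🇪", NZL: "🇳🇿" };

function csvToLeaderboard(csvText, opts) {
  const lines = csvText.trim().split("\n");
  const headers = lines[0].split(",");
  const col = name => headers.indexOf(name);
  const timeCol = col(opts.header); // e.g. "GUN TIME"

  const rows = lines.slice(1)
    .map(line => line.split(","))
    .filter(cells => cells[timeCol]) // drop racers missing this split
    .slice(0, opts.maxElements);     // 10 for full screen, 5 for small

  const leader = rows.length ? toSeconds(rows[0][timeCol]) : 0;

  return {
    race: opts.race,
    title: opts.title,
    subtitle: opts.subtitle,
    mile_split: opts.mileSplit,
    show_time_difference: opts.showTimeDifference,
    table_data: rows.map((cells, i) => ({
      position: String(i + 1),
      // Normalize casing: Title Case first name, ALL CAPS last name
      name: titleCase(cells[col("FIRST NAME")]) + " " + cells[col("LAST NAME")].toUpperCase(),
      time: cells[timeCol],
      difference: i === 0 ? "--" : formatDiff(toSeconds(cells[timeCol]) - leader),
      state: cells[col("STATE")],
      country_name: cells[col("NATIONALITY")],
      country_flag: FLAGS[cells[col("NATIONALITY")]] || ""
    }))
  };
}

The important bit is that the casing and time-differential questions above all get answered in this one pass, so the template never has to think about them.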

I would then take that JSON and paste it into a file stored on a remote server.

Now that I had both a beautiful-looking template and a beautiful-looking source of data, I was able to whip up some Javascript on the template side to read that file on page load and populate the table with all the customizations included on it.
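That template-side script is only a handful of lines. It's roughly this shape (the JSON URL is a placeholder for wherever I pasted the file):

// Roughly what the template-side loader does: fetch the JSON on page load
// and build one table-row div per entry. The URL is a placeholder.
async function loadLeaderboard() {
  const res = await fetch("https://example.com/leaderboard.json");
  const data = await res.json();

  document.getElementById("title-container").textContent = data.title;

  const body = document.querySelector(".table-body");
  body.innerHTML = "";
  for (const row of data.table_data) {
    const div = document.createElement("div");
    div.className = "table-row";
    const cells = [row.position, row.name, row.country_name, row.country_flag,
                   row.time, data.show_time_difference ? row.difference : ""];
    for (const text of cells) {
      const cell = document.createElement("div");
      cell.textContent = text;
      div.appendChild(cell);
    }
    body.appendChild(div);
  }
}

document.addEventListener("DOMContentLoaded", loadLeaderboard);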

What's next?

It was truly a bummer that the race didn't get started. As someone who has gotten heat stroke at mile 21 of a marathon, I know that the organizers of the race did the right thing by cancelling it outright.

As someone who was in charge of building and displaying these graphics, though, I am a bit relieved that I get another year to iterate on this foundation.

Here are the obvious areas for improvement:

Automate the fetching of the data from Dropbox

In case it wasn't clear, this process was brittle and prone to human error. I had to load Dropbox on the web, download a CSV, manually sort it in Numbers by gun time, remove all but the top 10 or so rows of data, and then save a sanitized version.

A tool could automate this process by continually polling for updates to the file and, once it finds them, automatically doing the sorting and converting so I don't need to touch anything.
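A hypothetical sketch of what that could look like, assuming the shared file exposes a direct-download link (the URL is a placeholder and the interval is a guess):

// Hypothetical polling loop for next year (Node): download the shared CSV
// and regenerate the JSON whenever the contents change.
const CSV_URL = "https://www.dropbox.com/s/placeholder/results.csv?dl=1";
let lastSeen = "";

async function poll() {
  const res = await fetch(CSV_URL);
  const csv = await res.text();
  if (csv === lastSeen) return; // nothing new since the last check
  lastSeen = csv;
  // Sort by the chosen split, trim to the top rows, and regenerate the JSON
  // (e.g. by reusing something like csvToLeaderboard from earlier).
}

setInterval(poll, 60 * 1000); // minute-level freshness is all we need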

Automate the creation of the JSON from that timing data

Similar to above, I shouldn't need a csvToJson.html file. Because I'm sharing the data between the two templates, I should hard code the number of rows I want each template to read, and then I can fully automate the creation of the JSON it uses to populate the table.

Also, because of how SPX works, I need to host that JSON file somewhere remotely that the graphics system can access whenever the director calls for the graphic. That process should be similarly automated.

Improve the flag display

The Twin Cities Marathon attracts professional marathoners from all over the world, but it's not uncommon to see Minnesotans and other Americans finish in the top 10. It might be cool to use state-level flags instead of the US flag for the top athletes.

Another little annoying thing: I only had five countries hard-coded in my JSON creator because that was what I had from the representative data sample (USA, Canada, Mexico, Kenya, and New Zealand). I should probably support more flags because you should always be prepared for an unexpected performance from someone not from one of those five countries, right?

MOAR GFX PLZ KTHX

This leaderboard only scratches the surface of what's possible.

With the timing data we're getting, I should be able to have a permanent graphic built that shows the top 10 runners at all times.

I should also have more graphics that you see in most professional marathon broadcasts:

  • An omnipresent clock[^tcm-2023-recap-4]
  • Biographic slides that show a runner's photo along with some of their professional highlights
  • Slides with historical facts (course record holders and whatnot)
  • A map showing where runners are along the course

But I want more!

If we start planning now, we could attach biometric gear to some of the runners and show things like current heart rate, current pace, current stride count, and more.

Even if we aren't able to pull that off, we could still use the existing data to tell interesting stories like how the hill on Summit Avenue affects pace and how many runners are actually hitting "the wall".

Gearing up for 2024

I am so pleased with what we were able to pull together in basically a week.

Now that we have a better understanding of the technology that powers the graphic system, I am beyond excited at the possibilities ahead of us next year.

The team at Twin Cities in Motion truly cares about putting on a best-in-class event for runners. Their commitment and investment in this broadcast are evidence of this, and I am honored to be part of the team responsible for telling the story of the two races that take place that day.

Mark your calendars for next October. It's gonna be an exciting race to watch live!

[^tcm-2023-recap-1]: For our purposes, we basically mean up to date within a minute or two of capturing the data. Getting updates to the leaderboard within milliseconds of a racer crossing a timing mat is not yet technically feasible. Besides, time is an arbitrary construct, right, maaan?

[^tcm-2023-recap-2]: The software used to capture timing/scoring data for races is necessarily archaic. I say "necessarily" because it's both a feature and a bug; you don't want to put your trust in some fancy pants, brand new, untested Javascript framework to calculate results for an event that depends on those results for attracting big name runners, sponsors, and money. Of course, you can wrap all sorts of transforming layers on top of the data you collect from the timing systems, which is what Mtec does to power their results pages. But creating an API on top of that layer was not really feasible in the time we had.

[^tcm-2023-recap-3]: You might notice in that HTML that I have two logos: one for the marathon and one for the ten mile. This allows me to reuse the same leaderboard graphic but style it orange or green to fit the relevant race. Also, stop judging my HTML!

[^tcm-2023-recap-4]: Do you know how hard it is to get an accurate clock to display on screen? The homies that create professional football graphics are insanely talented. Again, time is an arbitrary construct.


Andrew Ng: Opportunities in AI


🔗 a linked post to youtube.com » — originally shared here on

Andrew Ng is probably the most respected AI educator out there today. I am certainly one of the 8 million students they tout at the beginning of the video.

This 30 minute chat describes some of the opportunities out there for AI right now.

While his insights on AI alone are worth your time, I found a ton of value in his approach to product development and getting a startup off the ground towards the end of the talk.


The Never-Ending Then


🔗 a linked post to ofdollarsanddata.com » — originally shared here on

So, rather than living in ‘the never-ending then’, you have to learn to avert your focus elsewhere. You have to enjoy the present a bit more and stop trying to plan your idealized path through life. You won’t get that path either way. Something always comes up and sends you on a detour.

Accepting this is hard and something I still struggle with regularly. However, once you do, you will realize that the ideal life is not one that exists solely in the past, present, or future, but one that moves seamlessly between the three. If you can appreciate the past, live in the present, and plan for the future, then what more can you ask for?

Today, I went with my wife and kids up to the recently remodeled playground at my daughter’s school.

Right before we left, my son started playing a game he was making up on the spot.

I got so into it. It was totally engrossing, and my attention was solely on being in character, climbing across obstacles, having fun.

Financial wealth is surely important, but true wealth is being able to shut off the monkey brain for as long as possible.

Continue to the full article


Buggin'


🔗 a linked post to youtube.com » — originally shared here on

The very first album I ever bought was the Space Jam soundtrack.

While I was making my daughter's lunch this morning, I got this line stuck in my head from the song:

I'm the only bunny that's still goin'

Know what I'm sayin'?

I had no idea what that meant.

For decades now, I've been stumped by one cartoon bunny dissing another.


A year after the disastrous breach, LastPass has not improved


🔗 a linked post to palant.info » — originally shared here on

In September last year, a breach at LastPass’ parent company GoTo (formerly LogMeIn) culminated in attackers siphoning out all data from their servers. The criticism from the security community has been massive. This was not so much because of the breach itself, such things happen, but because of the many obvious ways in which LastPass made matters worse: taking months to notify users, failing to provide useful mitigation instructions, downplaying the severity of the attack, ignoring technical issues which have been publicized years ago and made the attackers’ job much easier. The list goes on.

Now this has been almost a year ago. LastPass promised to improve, both as far as their communication goes and on the technical side of things. So let’s take a look at whether they managed to deliver.

TL;DR: They didn’t. So far I failed to find evidence of any improvements whatsoever.

If you aren’t using a password manager, the likelihood of every single one of your online accounts getting hacked is extremely high.

If you’re using a bad password manager, I guess it’s just as high? 😬

Continue to the full article


This time, it feels different


🔗 a linked post to nadh.in » — originally shared here on

More than everything, my increasing personal reliance on these tools for legitimate problem solving convinces me that there is significant substance beneath the hype.

And that is what is worrying; the prospect of us starting to depend indiscriminately on poorly understood blackboxes, currently offered by megacorps, that actually work shockingly well.

I keep oscillating between fear and excitement around AI.

If you saw my recent post where I used ChatGPT to build a feature for my website, you’ll recall how trivial it was for me to get it built.

I think I keep falling back on this tenet: AI, like all our tech, is a tool.

When we get better tools, we can solve bigger problems.

Systemic racism and prejudice, climate change, political division, health care, education, political organization… all of these broad scale issues that have plagued humanity for ages are on the table to be addressed by solutions powered by AI.

Of course there are gonna be jabronis who weaponize AI for their selfish gain. Nothing we can really do about that.

I’d rather focus on the folks who will choose to use AI for the benefit of us all.

Continue to the full article


Half-assing it with everything you've got


🔗 a linked post to lesswrong.com » — originally shared here on

If you're trying to pass the class, then pass it with minimum effort. Anything else is wasted motion.

If you're trying to ace the class, then ace it with minimum effort. Anything else is wasted motion.

If you're trying to learn the material to the fullest, then mine the assignment for all its knowledge, and don't fret about your grade. Anything else is wasted motion.

If you're trying to achieve some combination of good grades (for signalling purposes), respect (for social reasons), and knowledge (for various effects), then pinpoint the minimum quality target that gets a good grade, impresses the teacher, and allows you to learn the material, and hit that as efficiently as you can. Anything more is wasted motion.

Ah, an engineer’s approach to optimizing life.

There is a good section in here as well about how to deal with the associated guilt when you take this approach.

Continue to the full article


Blazing Trails with Rails, Strava, and ChatGPT

originally shared here on

a cute animated bicycle using a laptop that has a helmet on it

The main page of my personal website features a couple of lists of data that are important or interesting to me.

The "recent posts" section shows my five most recent blog entries. Rails makes that list easy to cobble together.

The "recent listens" section shows my five most recent songs that were streamed to Last.fm. This was a little more complex to add, but after a couple of hours of back and forth with ChatGPT, I was able to put together a pretty hacky solution that looks like this:

  1. Check to see if your browser checked in with last.fm within the last 30 seconds.
      a. If so, just show the same thing I showed you less than 30 seconds ago.
  2. Make a call to my server to check the recent last.fm plays.
  3. My server reaches out to last.fm, grabs my most recent tracks, and returns the results.

Pretty straightforward integration. I could probably do some more work to make sure I'm not spamming their API[^1], but otherwise, it was a feature that took a trivial amount of time to build and helps make my website feel a little more personal.
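I won't pretend this is the exact code, but the browser half of that hack is shaped something like this (the endpoint path and storage key are illustrative):

// A sketch of the browser half: a 30-second cache in front of my server's
// last.fm endpoint.
async function recentListens() {
  const cached = JSON.parse(localStorage.getItem("recentListens") || "null");
  if (cached && Date.now() - cached.fetchedAt < 30 * 1000) {
    return cached.tracks; // same thing I showed you less than 30 seconds ago
  }

  const res = await fetch("/lastfm/recent"); // my server talks to last.fm
  const tracks = await res.json();
  localStorage.setItem("recentListens",
    JSON.stringify({ fetchedAt: Date.now(), tracks }));
  return tracks;
}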

Meanwhile, I've been ramping up my time on my bike. I'm hoping to do something like RAGBRAI or a century ride next year, so I'm trying to build as much base as I can at the moment.

Every one of my workouts gets sent up to Strava, so that got me thinking: wouldn't it be cool to see my most recent workouts on my main page?

How the heck do I get this data into my app?

Look, I've got a confession to make: I hate reading API documentation.

I've consumed hundreds of APIs over the years, and the documentation varies widely from "so robust that it makes my mind bleed" to "so desolate that it makes my mind bleed".

Strava's API struck me as closer to the former. As I was planning my strategy for using it, I actually read about a page and a half before I just said "ah, nuts to this."

A Frinkiac-generated image repurposing a Smithers quote where he says "Aw, nuts to this, I'll just get Homer Simpson", but gsub Homer Simpson for ChatGPT.

Knowing my prejudice against reading documentation, this seemed like the perfect sort of feature to build hand-in-hand with a large language model. I can clearly define my output and I can ensure that the API was built before GPT-4's training data cutoff of September 2021, meaning ChatGPT is at least aware of this API even if some parts of it have changed since then.

So how did I go about doing this?

A brief but necessary interlude

In order to explain why my first attempt at this integration was a failure, I need to explain this other thing I built for myself.

I've been tracking every beer I've consumed since 2012 in an app called Untappd.

Untappd has an API[^2] which allows you to see the details about each checkin. I take those checkins and save them in a local database. With that, I was able to build a Timehop-esque interface that shows the beers I've had on this day in history.

A sample of my This Day in Untappd History dashboard

I have a scheduled job that hits the Untappd API a handful of times per day to check for new entries.[^3] If it finds any new checkins, I save the associated metadata to my local database.

Now, all of the code that powers this clunky job is embarrassing. It's probably riddled with security vulnerabilities, and it's inelegant to the point that it is something I'd never want to show the world. But hey, it works, and it brings me a great deal of joy every morning that I check it.

As I started approaching my Strava integration, I did the same thing I do every time I start a new software project: vow to be less lazy and build a neatly-architected, well-considered feature.

Attempt number one: get lazy and give up.

My first attempt at doing this happened about a month ago. I went to Strava's developer page, read through the documents, saw the trigger word OAuth, and quickly noped my way out of there.

...

It's not like I've never consumed an API which requires authenticating with OAuth before. Actually, I think it's pretty nifty that we've got this protocol that allows us to pass back and forth tokens rather than plaintext passwords.

But as a lazy person who is writing a hacky little thing to show my workouts, I didn't want to go through all the effort to write a token refresh method for this seemingly trivial thing.
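In hindsight, the refresh I was dreading boils down to a single POST against Strava's token endpoint. A rough sketch, with my credentials stashed in environment variables:

// A rough sketch of a Strava token refresh: trade the long-lived refresh
// token for a fresh short-lived access token.
async function refreshStravaToken(refreshToken) {
  const res = await fetch("https://www.strava.com/oauth/token", {
    method: "POST",
    body: new URLSearchParams({
      client_id: process.env.STRAVA_CLIENT_ID,
      client_secret: process.env.STRAVA_CLIENT_SECRET,
      grant_type: "refresh_token",
      refresh_token: refreshToken,
    }),
  });
  const data = await res.json();
  // data.access_token is valid until data.expires_at, and data.refresh_token
  // replaces the old refresh token, so both need to be stored.
  return data;
}

Of course, I didn't know that at the time.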

I decided to give up and shelve the project for a while.

Attempt number two: Thanks, ChatGPT.

After a couple of weeks of doing much more productive things like polishing up my upcoming TEDx talk, I decided I needed a little change of context, so I picked this project back up.

Knowing that ChatGPT has my back, I decided to write a prompt to get things going. It went something like this:

You are an expert Ruby on Rails developer with extensive knowledge on interacting with Strava's API. I am working within a Rails 5.2 app. I would like to create a scheduled job which periodically grabs any new activities for a specific user and saves some of the activity's metadata to a local database. Your task is to help me create a development plan which fulfills the stated goal. Do not write any code at this time. Please ask any clarifying questions before proceeding.

I've found this style of prompt yields the best results when working on a feature like this one. Let me break it down line by line:

You are an expert Ruby on Rails developer with extensive knowledge on interacting with Strava's API.

Here, I'm setting the initial context for the GPT model. I like to think of interacting with ChatGPT like I'm able to summon the exact perfect human in the world that could solve the problem I'm facing. In this case, an expert Ruby on Rails developer who has actually worked with the Strava API should be able to knock out my problem in no time.

I am working within a Rails 5.2 app.

Yeah, I know... I really should upgrade the Rails app that powers this site. A different problem for a different blog post.

Telling ChatGPT to focus its answers on the specific framework provides me with a better answer.

I would like to create a scheduled job which periodically grabs any new activities for a specific user and saves some of the activity's metadata to a local database.

Here, I'm describing what should result after a successful back and forth. A senior Rails developer would know what job means in this context, but if you aren't familiar with Rails, a job is a function that can get scheduled to run on a background process.

All I should need to do is say, "go run this job", and then everything needed to reach out to Strava for new activities and save them to the database is encapsulated entirely in that job.

I can then take that job and run it on whatever schedule I'd like!

Your task is to help me create a development plan which fulfills the stated goal.

Here, I'm telling ChatGPT that I don't want it to write code. I want it to think through[^4] and clearly reason out a development plan that will get me to the final result.

Do not write any code at this time.

The most effective way I've used ChatGPT is to first ask it to start high level (give me the project plan), then dig into lower levels as needed (generate code). I don't want it to waste its reasoning power on code at this time; I'd rather finesse the project plan first.

Please ask any clarifying questions before proceeding.

I toss this in after most of my prompts because I've found that ChatGPT often asks me some reasonable questions that challenge my assumptions.

Now, after a nice back and forth with ChatGPT, I was able to start down a path that was similar to my Untappd polling script.

As I was approaching the point where I could first test my example, I went to read the documentation and came across an entire section that discussed webhooks.

[cue record scratch]

Wait up... webhooks?!

A sojourn into webhooks

If you've made it this far into the article, I'm assuming you're a little bit technical, but in the interest of not making assumptions, I'll briefly explain the concept of webhooks.

If you look at how I'm integrating with the Untappd API, you can see I'm periodically reaching out to their API to see if there's any new checkin activity. This method is called polling.

Webhooks are kind of the opposite: when I complete an activity within Strava, Strava can reach out to my app and say, "there's a new activity for you." Once I get that notification, I can reach out to their API to fetch the new activity details.

For my Strava app, this is a much better[^5] way of doing things. I no longer need to poll for new information; I can just wait for Strava to tell me when action needs to be taken.
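My actual implementation is a Rails controller, but the shape of the endpoint is the same in any stack. Here's a minimal Node/Express sketch of the two things Strava's webhook API asks of you (route and variable names are mine):

// A minimal sketch of a Strava webhook receiver.
const express = require("express");
const app = express();
app.use(express.json());

// Strava verifies the callback URL with a GET before it will send events
app.get("/strava/webhook", (req, res) => {
  if (req.query["hub.verify_token"] === process.env.STRAVA_VERIFY_TOKEN) {
    res.json({ "hub.challenge": req.query["hub.challenge"] });
  } else {
    res.sendStatus(403);
  }
});

// Each event names an object and what happened to it; fetching the full
// activity details is a follow-up call to the regular API
app.post("/strava/webhook", (req, res) => {
  const { object_type, object_id, aspect_type, updates } = req.body;
  if (object_type === "activity") {
    if (aspect_type === "create") {
      // fetch /api/v3/activities/:object_id and save the metadata locally
    } else if (aspect_type === "update") {
      // apply `updates` to the locally saved copy
    }
  }
  res.sendStatus(200); // acknowledge quickly; Strava expects a prompt 200
});

app.listen(3000);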

But, of course, when I first told ChatGPT to help me with this solution, it didn't mention webhooks. And why would it? I never told it "give me the best solution for this problem." I said "here's my problem and a proposed solution, go ahead and give me the path forward."

I'm a bit frustrated that I didn't see the webhook option initially. I'm even more frustrated that ChatGPT only mentioned it well after we started down this path, and only after I asked it point blank:

Does Strava offer a web hook API?

Yes, Strava does offer a webhook API. Webhooks allow you to receive real-time updates about athlete activities, which can be more efficient than periodically polling for new data. Here's a brief overview of how to set up and use Strava webhooks:

[brief outline complete with things to consider with this approach]

Attempt number three: finally.

While considering my original prompt, I gave it this new one:

Okay, I'd like to incorporate webhooks into this workflow. Here's what I'd like to have happen:

1. Let's add the infrastructure in place to subscribe to webhook notifications within my Rails 5.2 app.
2. When a webhook is sent to my server, I'd like to either:
    a. make a call to Strava's API to fetch that activity's information and save that information in my local database, or;
    b. use the updates field to update the locally saved information to reflect the changes

Knowing this simple walkthrough, first create me a detailed development plan for setting my app to be able to fully handle webhook notifications from Strava.

What resulted here was a detailed walkthrough of how to get webhooks incorporated into my original dev plan.

As I walked through the plan, I asked ChatGPT to go into more detail, providing code snippets to fulfill each step.

There were a few bumps in the road, to be sure. ChatGPT was happy to suggest code to reach out to the Strava API, but it had me place it within the job instead of the model. If I later want to reuse the "fetch activities" call in some other part of my app, or I want to incorporate a different API call, it makes sense to have that all sitting in one abstracted part of my app.

But eventually, after an hour or so of debugging, I ended up with this:

The final result: a list of my 5 most recent activities on Strava.

Lessons learned

I would never consider myself to be an A+ developer or a ninja rock star on the keyboard. I see software as a means to an end: code exists solely so I can have computers do stuff for me.

If I'm being honest, if ChatGPT didn't write most of the code for this feature, I probably wouldn't have built it at all.

At the end of the day, once I was able to clearly articulate what I wanted, ChatGPT was able to deliver it.

I don't think most of my takeaways are all that interesting:

  • I needed to ask ChatGPT to make fixes to parts of code that I knew just wouldn't work (or I'd just begrudgingly fix them myself).
  • Occasionally, ChatGPT would lose its context and I'd have to remind it who it was[^6] and what its task was.
  • I would not trust ChatGPT to write a whole app unsupervised.

If I were a developer who only took orders from someone else and wrote code without having the big picture in mind, I'd be terrified of this technology.

But I just don't see LLMs like ChatGPT ever fully replacing human software engineers.

If I were a non-technical person who wanted to bust out a proof of concept, or was otherwise unbothered by slightly buggy software that doesn't fully do what I want it to do, then this tech is good as-is.

I mean, we already have no-code and low-code solutions out there that serve a similar purpose, and I'm not here to demean or denigrate those; they can be the ideal solution to prove out a concept and even outright solve a business need.

But the thing I keep noticing when using LLMs is that they're only ever good at spitting out the past. They're just inferring patterns against things that have already existed. They rarely generate something truly novel.

The thing they spit out serves as a stepping stone to the novel idea.

Maybe that's the thing that distinguishes us from our technology and tools. After all, everything is a remix, but humans are just so much better at making things that appeal to other humans.

Computers and AI and technology still serve an incredibly important purpose, though. I am so grateful that this technology exists. As I was writing this blog post, OpenAI suffered a major outage, and I found myself feeling a bit stranded. We've only had ChatGPT for, like, 9 months now, but it already is an indispensable part of my workflow.

If you aren't embracing this technology in your life yet, I encourage you to watch some YouTube videos and figure out the best way to do so.

It's like having, as an assistant, an overconfident child that actually knows everything about everything that happened prior to Sept. 2021. You won't be able to just say "take my car and swing over to the liquor store for me", but when you figure out that sweet spot of tasks it can accomplish, your output will be so much more fruitful.

I'm really happy with how this turned out. It's already causing me to build a healthy biking habit, and I think it helps reveal an interesting side of myself to those who are visiting my site.

[^1]: Maybe I can cache the data locally like I'm doing for Untappd? I dunno, probably not worth the effort. 😅

[^2]: Their documentation is a little confusing to me and sits closer to the "desolate" end of the spectrum because I'm not able to make requests that I would assume I can make, but hey, I'm just grateful they have one and still keep it operational!

[^3]: If we wanna get specific, I ping the Untappd API at the following times every day: 12:03p, 1:04p, 2:12p, 3:06p, 4:03p, 5:03p, 6:02p, 7:01p, 8:02p, 9:03p, 10:04p, and 12:01a. I chose these times because (a) I wanted to be a good API consumer and not ping it more than once an hour, (b) I didn't want to do it at the top of every hour, (c) I don't typically drink beers before 11am or after 11pm, (d) if I didn't check it hourly during my standard drinking time, then during the times I attend a beer festival, I found I was missing some of the checkins because the API only returns 10 beers at a time and I got lazy and didn't build in some sort of recursive check for previous beers.

[^4]: Please don't get it twisted; LLMs do not actually think. But they can reason. I've found that if you make an LLM explain itself before it attempts a complex task like this, it is much more likely to be successful.

[^5]: Baga Chipz saying "much better" on an episode of RuPaul's Drag Race

[^6]: Mufasa telling Simba to remember who he is in The Lion King