I ended my first blog detailing my job hunt with a request for insights or articles that speak to how AI might force us to define our humanity.
This op-ed in yesterday's New York Times is exactly what I've been looking for.
[…] The big question emerging across so many conversations about A.I. and work: What are our core capabilities as humans?
If we answer that question from a place of fear about what's left for people in the age of A.I., we can end up conceding a diminished view of human capability. Instead, it's critical for us all to start from a place that imagines what's possible for humans in the age of A.I. When you do that, you find yourself focusing quickly on people skills that allow us to collaborate and innovate in ways technology can amplify but never replace.
Herein lies the realization I've arrived at over the last two years of experimenting with large language models.
The real winners of large language models will be those who learn to talk to them the way they talk to another human.
Math and stats are two languages that most humans have a hard time understanding. The last few hundred years of advancements in those fields have led us to a tool that anyone can leverage, as long as they know how to ask a good question. Logic and math skills are no longer the career differentiator they have been since the dawn of the twentieth century.1
The theory I'm working on looks something like this:
- LLMs will become an important abstraction away from the complex math
- With an abstraction like this, we will be able to solve problems like never before
- We need to work together, utilizing all of our unique strengths, to be able to get the most out of these new abstractions
To illustrate what I mean, take the Python programming language as an example. When you write something in Python, an interpreter like CPython2 compiles that code to bytecode and executes it. The interpreter itself is a program that has been compiled down to machine code, which is ultimately just binary, the thing that actually runs on those fancy M3 chips in your brand-new MacBook Pro.
Programmers back in the day actually did have to write binary code. Those seem like the absolute dark days to me. It must've taken forever to create punch cards to feed into a system to perform the calculations.
Today, you can spin up a Python function in no time to perform incredibly complex calculations with ease.
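To make that layering concrete, here's a small sketch: one readable Python function that hides an entire stack of machinery beneath it. The standard library's `dis` module even lets you peek one layer down at the bytecode CPython actually executes.

```python
import dis
import statistics

def summarize(values):
    # One readable line; beneath it sit bytecode, CPython's C internals,
    # and ultimately the machine code running on your chip.
    return statistics.mean(values), statistics.pstdev(values)

print(summarize([2, 4, 4, 4, 5, 5, 7, 9]))  # (5.0, 2.0)

# Peek one layer down: the bytecode CPython compiles this function into.
dis.dis(summarize)
```

Nobody writing that function thinks about registers or punch cards; the abstraction does the translating for them.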
LLMs, in many ways, provide us with a similar abstraction on top of our own communication methods as humans.
Just as the skills needed to write binary never entirely disappeared3, LLMs won't eliminate jobs; they'll open up an entirely new way to do the work. The work itself is what we need to reimagine, and the training that will be needed centers on how we interact with these LLMs.
Fortunately4, the training here won't be heavy on the logical/analytical side; rather, the skills we need will be those that we learn in kindergarten and hone throughout our lives: how to persuade and convince others, how to phrase questions clearly, how to provide enough detail (and the right kind of detail) to get a machine to understand your intent.
Really, this pullquote from the article sums it up beautifully:
Almost anticipating this exact moment a few years ago, Minouche Shafik, who is now the president of Columbia University, said: "In the past, jobs were about muscles. Now they're about brains, but in the future, they'll be about the heart."