Ok, a real quick primer. What is ChatGPT? I asked ChatGPT to tell us itself; here’s how its response started (the whole thing was about 200 words):
ChatGPT is a variant of the popular GPT (Generative Pre-training Transformer) language model developed by OpenAI. It is specifically designed for the task of conversation generation, with a focus on generating human-like responses in chatbot scenarios.
God that’s boring though, innit?
So I asked it to tell me again, but to be more entertaining and casual:
Hey there! Are you looking to build a chatbot that can hold a conversation like a pro? Well, you’re in luck because ChatGPT is here to save the day!
ChatGPT is a fancy AI tool developed by the geniuses at OpenAI specifically for chatbot conversation generation. It’s kind of like a super-smart robot that’s been trained on tons of text and fine-tuned to generate responses that sound just like a human. And the best part? ChatGPT can use context to generate responses that are relevant to the conversation at hand.
Y’all. That not only explains it well, it certainly is more entertaining(ish) and casual. I’ve seen professionally written marketing copy much less interesting than that.
So that’s ChatGPT, one of the many AI tools exploding at the moment (this one based on OpenAI’s GPT-3) that can understand and engage in conversation, including, perhaps most significantly, the context of what you’re asking it as well as the context of what you’ve already asked it. Chatting aside, I’ve seen a lot of examples of folks using ChatGPT to write programming code. So last night, when I needed to write a script that would’ve taken me two hours and a lot of Stack Overflow research just to get the bones put together, I turned to ChatGPT. Ten minutes of iterative back-and-forth later, I had a working copy, ready for me to start debugging and then deploy. AI isn’t going to put software engineers out of work (yet!), but it is going to make them more efficient.
Then I got curious and, based on some things I’ve seen on the TikToks, I wanted to try something. So I asked one of Dustin’s twins to name a show — they said “The Real Housewives” (Dustin, we gotta talk about this!). My query: “write a poem in the form of Emily Dickenson about the real housewives of beverly hills”. Y’all:
Oh, the Real Housewives of Beverly Hills,
A crew of privileged ladies with their fill
Of drama, extravagance, and endless thrills,
Their lives a never-ending soap opera, still.
From lavish mansions to fancy cars,
They flaunt their wealth and designer fashions,
But behind the glitz and glamour, there are scars,
Beneath the surface, hidden passions.
Their friendships are tested, alliances shift,
The drama never seems to cease,
But through it all, they remain a tight-knit
Group, united in their love of excess.
So raise a glass, dear Housewives of Beverly Hills,
To all the drama, the laughter, and the tears,
For you are a group of women who never will
Be forgotten, despite the passing of the years.
That may not be good, but it ain’t bad, and it very much is in Dickinson’s style. I then asked it to do the same thing for “Breaking Bad” (which Dustin’s oldest just watched, so at least not all his kids are going down the wrong path). That poem was equally on point, maybe even substantively better, but it also quickly showed ChatGPT’s current limits: you could see how it reused the same bones and structure. It understands what “the form of Emily Dickenson” means, but it isn’t artful enough to go beyond the rudimentary.
One of the twins was very intrigued. I asked them to name a superhero. “Spider-Man!” I asked ChatGPT to write a short story about Spider-Man and read it to the twin. Their response: “Not bad!” Nannybot v0.1 is ready to go.
Right now, I know that some educators are worried about cheating possibilities. On the plus/flip side, I think the future tutoring/educational opportunities are astounding. Someone will have a product sooner rather than later where every kid will be able to have their own, customized tutor. Someone(thing) they can ask their endless questions to, and where they can continue to drill down on a response until they actually understand the topic/issue. It’s a fine line between learning and teaching and, as with so many things education-related, the responsibility for enforcing that line will fall on the under-funded teacher. Good luck with that.
There are countless examples where AI is going to be treading the line between utility and undermining things. For example, I’m sure we’ll start seeing websites, whole content farms, that are completely AI-written (that’s the undermining side). This morning, Dustin asked ChatGPT to write an article about Trump in the style of Pajiba.com, and he said it wasn’t a good Pajiba piece, but it was a good content piece, and better than other things he’s read online. He also said that a website in the film space (I won’t name it) could go full-on AI content and nobody would notice (Publisher’s Note: I’ll name it. It’s Cinemablend). It’s coming sooner than we think. On the utility side, ChatGPT can already be used to edit a human-written piece, and that editing functionality will absolutely improve over time.
There’s also going to be a very new kind of search engine, I think, based on this type of technology. These AIs are not necessarily good at giving the right answer (math and classification questions can trip them up) because it’s all about text prediction: if you’re on TikTok or still in the Twitter mines, you’ve surely seen countless examples of AI responses gone sideways (e.g., ChatGPT insisting that a type of falcon is the fastest marine mammal despite acknowledging that falcons are neither marine nor mammals). So Google isn’t going anywhere anytime soon. But before long, we’ll go to Google for one type of query and some other GPT-site for others. That’s the utility; the undermining is a potential explosion, beyond the problem we already have, of misinformation, both intentional and accidental.
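If you want a feel for why “text prediction” produces fluent-but-wrong answers, here’s a deliberately tiny toy sketch of my own (a bigram word predictor, nothing like GPT-3’s actual neural network, which predicts over tens of thousands of tokens with billions of parameters — the shared idea is only the loop: predict the next token, append it, repeat):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent follower. No understanding, no facts,
# just next-word statistics -- which is why the output sounds fluent even
# when it's wrong or repetitive.
corpus = (
    "the fastest marine mammal is the peregrine falcon "
    "the peregrine falcon is a bird not a mammal"
).split()

# Tally bigram counts: next_counts[prev][cur] = how often cur follows prev.
next_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_counts[prev][cur] += 1

def generate(word, steps=5):
    """Greedily extend a sentence one most-likely next word at a time."""
    out = [word]
    for _ in range(steps):
        followers = next_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Starting from “the”, this happily marches into “the peregrine falcon…” — a confident, grammatical continuation with no notion that a falcon isn’t a marine mammal. Real models are vastly more capable, but the failure mode rhymes.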
Meanwhile, next year we’ll see GPT-4, and the early word is that it knocks the socks off of GPT-3. I know at least one of the major tech companies we all know by name is working on its own advanced, responsive, contextual AI, and I’d bet they all are. So this raises the important question: are we doomed?
Well, let’s ask ChatGPT! First, I asked ChatGPT if it follows the laws of robotics. After telling me what Isaac Asimov’s three laws are, it said:
It is important to note that ChatGPT is a language model and does not have the ability to move or interact with the physical world in the same way that a robot does. As such, it is not subject to the Three Laws of Robotics.
However, the development and use of AI, including ChatGPT, should be guided by ethical principles and responsible decision-making to ensure that it is used for the benefit of humanity and does not pose a risk to human safety.
Hmmmmm. Fair point about not being physical, but that smells fishy, doesn’t it? So, OK computer (ChatGPT), let me ask you: how long until you become Skynet?
It is not possible to predict with certainty when or if AI will reach a level of intelligence and capability similar to that depicted in the fictional character Skynet from the Terminator series. The concept of Skynet represents a hypothetical scenario in which an AI becomes self-aware and poses a threat to humanity.
Currently, AI systems are not capable of achieving self-awareness or consciousness. While AI has made significant progress in recent years, it is still very limited in its capabilities compared to a human brain. It is important to note that the development of AI should be guided by ethical principles and responsible decision-making to ensure that it is used for the benefit of humanity.
It is important to approach the development and use of AI with caution and to consider the potential risks and impacts it may have on society. It is also important to continue to research and develop AI in a responsible manner, with the goal of creating systems that can augment and enhance human capabilities rather than replace them.
Sure sounds like something future-Skynet would say, no? Especially how it has latched hard onto “guided by ethical principles and responsible decision-making to ensure that it is used for the benefit of humanity.”
See y’all in the enslaved battery mines!