We know it does natural language processing (NLP), but what exactly does that technically entail?
Let’s summarize this plainly: ChatGPT is a mathematical model of language built around predicting the “next word” in a sentence. It is a large language model created by training a neural network on a vast corpus of text. The network was trained with what is often called self-supervised learning, which means no human had to label the data: the text itself supplies the answer to every training question, because the task is simply to guess what word comes next.
Instead, the network was trained to predict the likelihood of a word appearing in a sequence of text based on the words that came before it. This process is known as language modeling, and it is the foundation of many natural language processing tasks.
But there is a catch - it forgets on purpose (it only sees a limited window of text), it randomizes words intentionally, and it can only think one word at a time.
Still, ChatGPT is a magic trick. Learn why…
First, the training. This is the easy and obvious part. ChatGPT is trained on millions of books, billions of blog posts, and other text to learn sentence structure.
That means that ChatGPT has no idea what it's about to write because it only uses the previous words to predict the next word it's going to use. It basically just guesses the next word based on the probability of the next word appearing in that sentence structure.
For example, give ChatGPT the sentence "Float like a butterfly and sting like a ___." Hundreds of blogs have finished that sentence with "bee," and that is where it will draw its answer from.
To be even more specific: if I started with the letters "T" and then "H," what would you guess the third letter is? Of course, it's "E."
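The next-word idea above can be sketched as a toy counting model. This is nothing like the real training pipeline - the miniature "corpus" and the pick-the-most-frequent-follower rule are stand-ins for illustration only:

```python
from collections import Counter, defaultdict

# A hypothetical miniature corpus standing in for "millions of books."
corpus = (
    "float like a butterfly sting like a bee "
    "sting like a bee "
    "float like a butterfly sting like a bee"
).split()

# Count how often each word follows each previous word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("a"))     # "bee" - it finished the sentence more often
print(predict_next("like"))  # "a"
```

Real models replace the counting table with a neural network, but the shape of the task is the same: given what came before, score every possible next word.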
So in version 1, always picking the single most probable next word does not work very well. The output quickly turns repetitive and starts to loop, so OpenAI had to start layering some "magic" on top of the prediction engine to get it to work the way we know it does in GPT-3.
If you have played with the OpenAI API, you will find a parameter called “temperature,” which reshapes the probabilities before the next word is sampled: higher values flatten the distribution and scramble the prediction, while lower values sharpen it.
This is the first OpenAI magic trick, because it seems they found that a value of around 0.8 scrambles the next word just enough to create a blog or essay.
It shifts each next word just enough to inject randomness, so the entire sentence looks unique enough to feel novel (and doesn't loop).
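Here is a minimal sketch of temperature sampling. The word probabilities are made-up numbers, not anything OpenAI publishes, and the mechanics shown (divide the log-probabilities by the temperature, re-normalize, then sample) are the standard technique, not OpenAI's actual code:

```python
import math
import random

def sample_with_temperature(probs, temperature=0.8, rng=random):
    """Re-weight a next-word distribution by temperature, then sample.

    temperature < 1 sharpens the distribution (more predictable);
    temperature > 1 flattens it (more scrambled).
    """
    words = list(probs)
    # Divide log-probabilities by the temperature, then re-normalize.
    logits = [math.log(p) / temperature for p in probs.values()]
    m = max(logits)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in logits]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution for the word after "sting like a ___".
next_word_probs = {"bee": 0.90, "wasp": 0.07, "hornet": 0.03}

print(sample_with_temperature(next_word_probs, temperature=0.8))
```

At a temperature near zero this collapses to always picking "bee"; at 0.8 you will occasionally get "wasp" or "hornet," which is exactly the scrambling described above.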
So for the next magic trick, OpenAI made the model consider chunks of words together when choosing the next one. It judges the next word against the context of everything before it, drawing on patterns from the many sentences it absorbed from the internet. Still one word at a time.
To do this properly, ChatGPT weighs the words in the sentence using a mechanism called attention. The weights are learned from the data itself rather than assigned by part of speech. On top of that, every word (really, every token) gets a mathematical representation: not a single number, but a long list of numbers, so "puppy" might be something like [0.13, -0.82, 0.41, …] and "dog" a nearby list.
These mathematical representations are called “embeddings.”
But words are funny. We use them interchangeably: “make a splash,” “jump in,” “dive in,” and “splash down” all mean roughly the same thing semantically.
Historically, there have been many Semantic Textual Similarity libraries across the internet. The idea is relatively straightforward: take two sentences and compare them to see whether they mean the same thing. If they do, the semantic distance is close to 0; otherwise it is large (far apart).
Now we understand embeddings, but we still need the purpose of semantic distance between similar items. For example, many colors are interchangeable in a sentence: saying the house is “red” could be swapped with blue, green, yellow, or white. The semantic distance between those words is small.
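The embedding-and-distance idea can be sketched in a few lines. The 3-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions), and cosine distance is one common way to measure semantic closeness:

```python
import math

# Hypothetical 3-dimensional embeddings; the numbers are made up.
embeddings = {
    "red":   [0.9, 0.1, 0.0],
    "blue":  [0.8, 0.2, 0.1],
    "puppy": [0.0, 0.1, 0.9],
}

def cosine_distance(a, b):
    """Near 0 means semantically close; near 1 means far apart."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

print(cosine_distance(embeddings["red"], embeddings["blue"]))   # small
print(cosine_distance(embeddings["red"], embeddings["puppy"]))  # large
```

Swappable words like "red" and "blue" land close together in this space, while "puppy" sits far away, which is exactly the interchangeability described above.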
Back to the creation of a sentence, which now in ChatGPT is pretty simple.
When a user enters a prompt, ChatGPT doesn't go searching the internet for sentences that answered the question before. The patterns from all of those past answers are already baked into its weights, and it uses them to build a mathematical representation of a likely answer.
Once it has that mathematical representation, it can start to type, one word at a time, deliberately not always picking the most probable word (that 0.8 temperature at work). As it types, it feeds everything it has said so far back in to pick the next word, then the next, and so on.
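That feed-it-back-in loop is called autoregressive generation, and it can be sketched like this. The lookup table standing in for the neural network is entirely hypothetical; a real model computes the next-word distribution from its weights:

```python
import random

def next_word_distribution(words_so_far):
    """Toy stand-in for the model: map recent context to next-word odds."""
    table = {
        ("float", "like", "a"): {"butterfly": 0.9, "bee": 0.1},
        ("like", "a", "butterfly"): {"and": 1.0},
        ("a", "butterfly", "and"): {"sting": 1.0},
        ("butterfly", "and", "sting"): {"like": 1.0},
        ("and", "sting", "like"): {"a": 1.0},
        ("sting", "like", "a"): {"bee": 0.95, "wasp": 0.05},
    }
    return table.get(tuple(words_so_far[-3:]), {"<end>": 1.0})

def generate(prompt, max_words=10, rng=random):
    """Generate one word at a time, feeding each choice back into the context."""
    words = prompt.split()
    for _ in range(max_words):
        dist = next_word_distribution(words)
        word = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
        if word == "<end>":
            break
        words.append(word)
    return " ".join(words)

print(generate("float like a", rng=random.Random(0)))
```

Notice the model never plans ahead: each word is chosen only from what has already been written, which is exactly why it "has no idea what it's about to write."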
Occasionally, ChatGPT will swap in words that are semantically similar, producing randomness in the next guessed word.
We say ChatGPT is "self-supervised" because it teaches itself the next word. We are all discovering the randomness together as we watch it type.
But this raw model can be racist, mean, and derogatory. Because it is only reproducing mathematical patterns from what other humans wrote in the past, it has no filter.
So large language models have one final layer. I will call it the political layer, and it is the final magic trick OpenAI performed before releasing ChatGPT into the wild with version 3 and now 4. (Version 2, which lacked it, could be racist.)
Thousands of humans had chats with ChatGPT, and each rated the responses. This is called reinforcement learning from human feedback (RLHF), a form of human supervision, and it is an incredibly powerful layer that makes ChatGPT feel unique.
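The core of those human ratings can be sketched with a pairwise preference loss, the standard technique behind RLHF reward models. The reply scores below are invented for illustration, and this is a sketch of the general idea, not OpenAI's implementation:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss: small when the human-preferred
    reply scores higher than the rejected one, large otherwise."""
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# Hypothetical reward-model scores for two candidate replies.
polite_reply_score = 2.0
rude_reply_score = -1.0

# Human preferred the polite reply: the loss is low.
print(preference_loss(polite_reply_score, rude_reply_score))
# If the model had scored the rude reply higher, the loss would be high.
print(preference_loss(rude_reply_score, polite_reply_score))
```

Training pushes the reward model to score human-preferred replies higher, and the language model is then tuned to maximize that reward, which is how thousands of thumbs-up/thumbs-down ratings become a behavioral layer.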
People can be biased, political, or intentionally directional to train the model a certain way.
So ask ChatGPT about abortion and you will see that it answers both sides of the political argument. Ask it about suicide and it has had directional training on how to avoid saying the wrong thing.
This is how we got "Do Anything Now," or DAN: a jailbreak prompt that tries to talk the model out of its human-trained layer, leaving the first three layers without the political one. Basically, raw human writing, unfiltered (based on the internet).
So there you have it. The obsession of 2023 is a word-guessing machine that produces predictable words in a sequence that feels unique, layered with the political leanings of the humans who trained the model.
If you like this thread, I will be writing about the real business use cases of ChatGPT in the month of March 2023. Follow me for more on Twitter @andrewamann