GPT-3: A.I. That Could Pass For A Human

There is a strong chance I could have used GPT-3 to generate this article, and you would have never noticed.

GPT-3, short for Generative Pre-trained Transformer 3, is OpenAI’s latest text-generating neural network. This is not just another natural language processing A.I.: it can generate text that sounds freakishly human-like.

Not only can this model generate text, but GPT-3 can also summarize text, answer questions, translate phrases, and even generate code, all based on a description provided by the user. Yup, this A.I. is practically a human.

You might be wondering, how can GPT-3 do all of these things?

Well, GPT-3 was trained on a humongous dataset drawn from sources such as Common Crawl and Wikipedia. After the raw data was filtered and cleaned, the training set came out to roughly 499 billion tokens, distilled from around 45TB of text data.
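(A “token” is just a chunk of text, often a word or a piece of a word, that the model actually processes. Below is a toy Python sketch of the idea; GPT-3 really uses a byte-pair-encoding tokenizer, so the toy_tokenize function here is purely illustrative, not how OpenAI does it.)

```python
# A toy illustration of what "tokens" means here — not GPT's actual
# byte-pair-encoding tokenizer, just the idea that text is chopped into
# sub-word pieces before the model ever sees it.

def toy_tokenize(text):
    """Split text into crude sub-word chunks of at most 4 characters."""
    tokens = []
    for word in text.split():
        while word:
            tokens.append(word[:4])
            word = word[4:]
    return tokens

print(toy_tokenize("GPT-3 was trained on billions of tokens"))
# ['GPT-', '3', 'was', 'trai', 'ned', 'on', 'bill', 'ions', 'of', 'toke', 'ns']
```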

GPT-3 has 175 billion learning parameters, far more than Microsoft’s Turing-NLG model, which has only 17 billion, making it the world’s largest natural language processing network.

As a result of the model’s immense amount of parameters and extreme capabilities, GPT-3 has taken natural language processing to the next level.

The Transformer Model

GPT-3 uses a transformer model.

Similar to recurrent neural networks, transformers primarily handle sequential data for tasks related to translation and text summarization.

Unlike recurrent neural networks, transformer models do not need the sequential data to be processed in a specific order.

Say the input is a natural language sentence: the model does not need to process the beginning of the sentence before the end. Because of this, transformers allow for much better parallelization and shorter training times than recurrent neural networks.
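To get a feel for why transformers parallelize so well, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside a transformer layer. This is an illustrative toy with random weights, not OpenAI’s actual GPT-3 code, and it leaves out details like multiple attention heads and the causal masking GPT uses.

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention: every token's
# representation attends to every other token in one matrix operation,
# which is why the sequence does not have to be processed in order.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings for the whole sentence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every token vs. every token
    weights = softmax(scores, axis=-1)        # attention weights
    return weights @ V                        # new contextual representations

# Toy example: a 5-token "sentence" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8) — all positions at once
```

Every row of the output is computed from the whole sequence in a single pass, which is what lets transformers train on long texts without stepping through them token by token the way an RNN would.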

The fascinating thing about GPT-3 is that it can perform impressively well on tasks the network has never seen before, including tasks that the developers had not even anticipated.

To measure GPT-3’s in-context learning abilities, the model was evaluated on over two dozen natural language processing datasets, plus a few novel tasks designed to test how it adapts to scenarios unlikely to have appeared in its training data.

GPT-3 was evaluated based on three conditions: few-shot learning, one-shot learning, and zero-shot learning.

Few-shot learning gave the model around 10–100 examples of the desired task plus a natural language description; one-shot learning allowed only one demonstration and a description; and zero-shot learning allowed no demonstrations at all, just the natural language description of the task.
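To make the three settings concrete, here is a hedged sketch of what the prompts might look like for a word-unscrambling task; the wording is invented for illustration and is not the exact prompt text OpenAI used.

```python
# Illustrative zero-, one-, and few-shot prompts for a word-unscrambling
# task. Only the pattern matters: the task description stays the same,
# and only the number of worked examples before the final question changes.

task = "Unscramble the letters into an English word."

zero_shot = f"""{task}
pleap ="""

one_shot = f"""{task}
nanaba = banana
pleap ="""

few_shot = f"""{task}
nanaba = banana
rewat = water
dgo = dog
pleap ="""

# In every setting the model is simply asked to continue the text;
# no weights are updated — the only difference is how many solved
# examples it gets to see first.
for name, p in [("zero-shot", zero_shot), ("one-shot", one_shot), ("few-shot", few_shot)]:
    print(f"--- {name} ---\n{p}\n")
```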

For comparison, most humans can complete a simple task from a single demonstration; the fact that GPT-3 can do this with high accuracy, while having only about 0.175% as many parameters as the human brain has synapses, is revolutionary.

In terms of proficiency, GPT-3 performs noticeably well on one-shot and few-shot tasks that test rapid adaptation and on-the-fly reasoning, such as unscrambling words, doing arithmetic, and using a word in a sentence after seeing its definition only once. Its fluency really shows when GPT-3 generates artificial news articles that humans have great difficulty distinguishing from human-written ones.

In comparison to other natural language processing models, such as Google’s BERT, GPT-3 does not require a sophisticated fine-tuning step to teach it how to perform specific tasks.

If GPT-3 required the elaborate fine-tuning step that BERT does, training it to translate French text to English would need a dataset with thousands of labelled examples. The issue is that assembling such a large dataset for one specific task can be inconvenient, and sometimes impossible.
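The contrast is easier to see side by side. The sketch below is conceptual only: the fine-tuning half is pseudocode for the BERT-style workflow (the dataset loader and training loop are invented for illustration), while the GPT-3 half is just a prompt string, since no weight updates are involved.

```python
# BERT-style workflow (pseudocode): gather thousands of labelled pairs
# and run a fine-tuning loop that changes the model's weights for this
# single task. Names like load_translation_pairs() are hypothetical.
#
#   pairs = load_translation_pairs("fr-en")      # thousands of examples
#   for french, english in pairs:
#       loss = bert_model(french, target=english)
#       loss.backward()
#       optimizer.step()                          # weights are updated
#
# GPT-3 workflow: the "training data" for the task is just a handful of
# examples written directly into the prompt, and the model stays frozen.
prompt = (
    "Translate French to English.\n"
    "chat => cat\n"
    "fromage => cheese\n"
    "maison =>"
)
# GPT-3 is expected to continue the text with "house".
print(prompt)
```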

Essentially, since GPT-3 is trained on such a large dataset, it can solve tasks it has never seen before with very high accuracy.

In summary, what makes GPT-3 amazing is that it can perform custom language tasks without task-specific training data and with few or no examples.

GPT-3 Can Do Math

Strangely enough, GPT-3, a model meant for natural language processing related tasks, somehow has the ability to do simple arithmetic.

Initially, the model was not meant to do this, so when GPT-3 was tested on arithmetic tasks, the developers were quite surprised by its accuracy on simple equations.

GPT-3 has no built-in notion of numbers or arithmetic operators; everything it sees, digits included, comes in as tokens of text.

Essentially, GPT-3 solves an arithmetic expression by taking in a string of input such as “Add 76 plus 4”. Before that prompt, the model is given one or a few examples along the lines of “What is 4 plus 5?” followed by the answer “4 plus 5 is 9”, and it uses those examples to continue the prompt with the correct answer.

Since all of the arithmetic expressions arrive as strings, the inference is that somewhere along the way of GPT-3’s training, it developed some logic and reasoning capabilities.
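Here is a hedged sketch of what such an arithmetic prompt might look like; the exact wording is an assumption, but the pattern matches the description above: a couple of solved examples followed by the question the model has to continue.

```python
# An illustrative few-shot arithmetic prompt. The model never sees
# numbers as numbers — only as tokens of text it has to continue.
prompt = (
    "What is 4 plus 5? 4 plus 5 is 9.\n"
    "What is 12 plus 31? 12 plus 31 is 43.\n"
    "What is 76 plus 4?"
)

# If GPT-3 continues this with "76 plus 4 is 80", it produced the answer
# purely by next-token prediction, with no calculator module involved.
print(prompt)
```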

Looking at GPT-3’s arithmetic benchmarks, at 175 billion parameters the model does remarkably well on two-digit addition and subtraction. On harder expressions, such as five-digit addition and two-digit multiplication, accuracy drops sharply: operations that require tracking carries across many digits are much harder to learn purely from patterns in text.

GPT-3 Vs The Human Brain

After doing research about GPT-3, I thought to myself: could GPT-3 ever be as large and as complex as the human brain?

To satisfy my curiosity, I dove into more research and found a video by Lex Fridman comparing GPT-3 to the human brain.

Essentially, the human brain has around 100 trillion synapses. A synapse is a structure that permits a neuron to pass a signal to another neuron, which is how neurons communicate with each other.

In comparison, GPT-3 has 175 billion parameters and cost about 4.6 million dollars to train. So how much would it cost to train a model with as many parameters as the brain has synapses?

An OpenAI research paper, Measuring the Algorithmic Efficiency of Neural Networks, suggests that over the past 7 years neural network training efficiency has been doubling every 16 months.

The chart below, which estimates the cost of scaling GPT-3 up to the scale of the human brain, is based on the trend described in that OpenAI paper.

Based on the estimations in this chart, by around 2032 a GPT-style network could have as many parameters as the brain has synapses, and training it would cost roughly what it cost to train the 175-billion-parameter GPT-3 in 2020.
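Here is my own back-of-the-envelope version of that extrapolation, assuming training cost scales linearly with parameter count and that the 16-month efficiency doubling continues; the numbers are rough and the assumptions are mine, not OpenAI’s.

```python
import math

# Rough extrapolation sketch (my own arithmetic, not a figure from the
# OpenAI paper). Assumptions: cost scales linearly with parameter count,
# and algorithmic efficiency doubles every 16 months.

gpt3_params = 175e9          # GPT-3 parameters
brain_synapses = 100e12      # approximate synapses in the human brain
gpt3_cost_2020 = 4.6e6       # reported GPT-3 training cost in dollars, 2020

scale_up = brain_synapses / gpt3_params        # ~571x more parameters
doublings_needed = math.log2(scale_up)         # ~9.2 efficiency doublings
years_needed = doublings_needed * 16 / 12      # 16 months per doubling

print(f"Scale-up factor: {scale_up:.0f}x")
print(f"Efficiency doublings needed: {doublings_needed:.1f}")
print(f"Years until cost matches 2020's ${gpt3_cost_2020:,.0f}: {years_needed:.1f}")
# ~12 years after 2020 — i.e. roughly 2032, in line with the estimate above.
```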

In all likelihood, in the near future we will have crazy natural language processing models that are as big as the human brain.

Programming User Interfaces

Using platforms like debuild.co, which have access to the GPT-3 model, a user can type a few sentences describing what they would like built, and GPT-3 will generate the code to accomplish it.

In the short video below, the user asks GPT-3 for a button that adds $3, a button that withdraws $5, a button that gives away all of their money, and a display showing the current balance. When the user tests the program, GPT-3 delivers exactly what was asked for visually, along with a working program whose logic behaves correctly. Notice that the user never explains what giving all your money away means, yet GPT-3 works out that doing so should set the balance to zero.
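Under the hood, a platform like debuild.co presumably wraps a text-completion call around the user’s description. The sketch below uses the original 2020-era openai Python client for illustration; the engine name, prompt template, and parameters are my assumptions, not debuild.co’s actual implementation.

```python
# A hedged sketch of the "describe it, get code" workflow — not
# debuild.co's real code. Engine name and prompt format are assumptions.
import openai

# openai.api_key = "..."  # set your API key before running

description = (
    "A button that adds $3, a button that withdraws $5, a button that "
    "gives away all my money, and a display showing the current balance."
)

prompt = f"Description: {description}\n\nHTML and JavaScript implementation:\n"

response = openai.Completion.create(
    engine="davinci",      # assumed engine name
    prompt=prompt,
    max_tokens=300,
    temperature=0,
)

generated_code = response["choices"][0]["text"]
print(generated_code)  # GPT-3's attempt at the requested interface
```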

In the future, allowing the public access to platforms like debuild.co that use GPT-3 to generate websites and user interfaces will eliminate the need to hire people to build these apps.

Regardless of programming skill, anyone will be able to generate their own app without hiring a skilled professional; creating an application would simply require a natural language description of the desired product.

GPT-3 as an AI Lawyer

Moving to the legal field, GPT-3 can do wonders in making legal help accessible to all.

In this image, you can see plain-English legal complaints, and GPT-3 takes each description and translates it into legal language. What is quite creepy is that when the plain-language description says “My apartment had mould and it made me sick”, GPT-3 responds, “Plaintiff’s dwelling was infested with toxic and allergenic mould spores, and Plaintiff was rendered physically incapable of pursuing his or her usual and customary vocation, occupation, and/or recreation”. The creepy part is that GPT-3 writes the complaint almost as if it knew what mould was and how sick it made people feel; in a way, this suggests that GPT-3 might have some level of understanding of human emotion and feeling.

Another example reverses this: the user enters legal language and GPT-3 translates it into “plain English”, explaining what is actually being said without the legal jargon. With GPT-3, you would not need a lawyer to translate contracts or other legal papers, saving individuals thousands of dollars and making legal help more accessible to people who may not be able to afford it.

GPT-3 For Medical Diagnosis

In the image above, a medical prompt is given asking GPT-3 which receptor a medication is most likely to act on.

GPT-3 not only selects the correct answer, but it also generates a full explanation of why it selected that answer, arriving at logical conclusions along the way.

In the future, models like GPT-3 will continue to meta-learn from examples and will eventually become so good at answering prompts that there is a high likelihood of GPT-3 being used in the medical field to help diagnose patients.

GPT-3 Writing Descriptions

As shown in the example above, an author inputs a paragraph of text (green highlight), and GPT-3 generates the next paragraph, expanding the story. You can see how GPT-3 writes exceptionally well and uses its “imagination” to paint a beautiful story without any additional input from the user. Anyone reading this example would assume the response was written by a person with emotions, not by a computer.

GPT-3 being able to write passages that are logical and carry a sense of emotion could impact many industries, such as marketing.

Instead of having to hire people to create advertisements, models like GPT-3 will be able to generate a product's slogan, description, and other important details.

Not only would this be efficient, but it would also save time and money for many companies.

Additionally, what if GPT-3 were able to create an image based on a witness’s description of a possible suspect? The generated images could give police a more accurate representation of the suspect, which could save a lot of time in finding the person.

So why should we be excited about GPT-3?

GPT-3 might only be in its testing stages, but with 175 billion parameters and a 499-billion-token training set, it has revolutionized what natural language processing models are capable of.

Because of the huge 45TB training set and the model’s in-context learning capabilities, GPT-3 is able to accomplish language tasks with few or no examples, and even tasks beyond language processing, such as solving arithmetic expressions and creating programs with working logic.

If OpenAI continues to advance their line of GPT models and they become as complex as the synapses in our brains, there is no telling what the future of natural language processing will look like. Who knows, maybe an A.I. will eventually be better at performing language tasks than humans.

Key Takeaways 🔑

  • GPT-3 is OpenAI’s NLP model that sounds freakishly human-like
  • GPT-3 can do tasks such as generating text, summarizing text, answering questions, translating phrases, and generating code from user descriptions
  • GPT-3 has 175 billion learning parameters
  • Trained on 45TB of data (mostly retrieved from the internet)
  • Uses a transformer model
  • Can do very simple math even though it was not developed for that purpose whatsoever
  • The model performs well on tasks it has never seen before
  • In all likelihood, the near future will bring natural language processing models as big as the human brain
  • GPT-3 will be able to help many industries such as writing, marketing, law, design and medical analysis
  • If OpenAI continues to advance their line of GPT models and they become as complex as the synapses in our brains, there is no telling what the future of natural language processing will look like

Contact me for any inquiries 🚀

Hi, I’m Ashley, a 16-year-old coding nerd and A.I. enthusiast!

I hope you enjoyed reading my article, and if you did, feel free to check out some of my other pieces on Medium :)

Articles you will like if you read this one:

💫 How I Made A.I. To Detect Rotten Produce Using a CNN

💫 Detecting Pneumonia Using CNNs In TensorFlow

💫MNIST Digit Classification In Pytorch

If you have any questions, would like to learn more about me, or want resources for anything A.I. or programming related, you can contact me by:

💫Email: ashleycinquires@gmail.com

💫 Linkedin

💫Github

