ChatGPT – understanding and using the revolution
26 Jan 23
“ChatGPT” is currently taking the world by storm. The interactive chatbot writes poems, corrects computer code or gives relationship tips – and often in amazing quality.
What’s behind it technically? And how can such a tool be used in a targeted manner?
The overview: What is ChatGPT and what can it do?
ChatGPT is an artificially intelligent program for generating texts. It was released in November 2022 by the US company OpenAI in the form of an interactive chatbot and has since caused a sensation worldwide, as it provides amazingly good answers on a wide variety of topics. For example, ChatGPT is able to
- give coherent and factually correct answers to various technical questions
- translate texts into various languages and language styles (e.g. understandable for children, highbrow or in the style of former Federal Councillor Ueli Maurer)
- debug computer code
- summarize documents precisely
- formulate letters and emails on the basis of a few catchwords
The qualities and possibilities of ChatGPT seem virtually endless, and the question quickly arises as to how such a powerful system is created.
In its own words: How does the program work?
In order to explain how ChatGPT works and to take a quality sample at the same time, we let the system itself explain how it works – in two different technical depths:
Datahouse: Explain briefly and simply how ChatGPT works.
ChatGPT:
ChatGPT is a computer program that is able to understand and create texts. It works by reading many texts and learning how words and sentences are related. When someone asks a question or needs a text, it uses the learned knowledge to create a meaningful and grammatically correct answer or text.
It is important to note that it does not necessarily understand what it is writing, it is a statistical model and therefore can sometimes make mistakes or produce unexpected results.
Datahouse: And now, please, the same in technical terminology.
ChatGPT:
ChatGPT (Generative Pre-trained Transformer) is a pre-trained neural network model based on the Transformer architecture used to generate natural language. The model uses the encoder-decoder structure, where the encoder is used to encode the given input words into a semantic representation, and the decoder is used to decode these encoded input words into natural language outputs.
The model was trained on a large set of texts, referred to as “training data.” This training data consists of a variety of texts such as books, articles, news feeds, etc. By analyzing this data, the model learns the structure and rules of language and can thus understand how words and sentences are related.
To perform text generation, the model is fed an initial sequence of words (prompt) and it generates the next word based on the learned knowledge and the given context by the prompt. This process continues until the model has generated a complete response or until a predetermined number of words is reached. It is important to note, however, that although ChatGPT has been trained on a large amount of text, it can make mistakes or produce unexpected results when confronted with unusual or new contexts.
Datahouse: Thank you very much! May we use these answers in our blog?
ChatGPT:
Yes, you are allowed to use the answers in your blog. I am a computer program and have no copyright on the information I share. However, it would be nice if you could mention the source (OpenAI) as they made the development and training of the model possible.
It can be seen that ChatGPT delivers output that is not only technically correct but also grammatically and stylistically (almost) perfect. This spectacular result is largely due to the system’s training process, which combines cutting-edge machine-learning algorithms with human feedback.
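The generation loop that ChatGPT describes above – feed in a prompt, repeatedly predict the next word, stop when done or when a word limit is reached – can be sketched with a toy stand-in model. The bigram counts below are invented for illustration; the real system uses a large Transformer network with learned parameters, not a lookup table:

```python
import random

# Toy "language model": bigram counts standing in for learned parameters.
# (Invented for illustration; ChatGPT predicts tokens with a Transformer.)
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 1},
}

def generate(prompt, max_words=10, seed=0):
    """Autoregressive generation: repeatedly sample the next word given
    the context, until no continuation exists or the word limit is hit."""
    rng = random.Random(seed)
    words = prompt.split()
    while len(words) < max_words:
        options = bigram_counts.get(words[-1])
        if not options:  # no known continuation -> stop generating
            break
        candidates = list(options)
        weights = [options[w] for w in candidates]
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The essential point the sketch preserves is that each new word is chosen from a probability distribution conditioned on the text generated so far – which is also why the same prompt can yield different continuations.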
Train ChatGPT: Interaction of human and machine
The training of the language model on which ChatGPT is built is based on the idea of “reinforcement learning”. Here, a computer program, a so-called “agent”, interacts independently with an environment (for example, a computer game) and receives feedback on its behavior (for example, scores after a win). The agent then makes its next decisions based on the feedback it has received so far, learning better and better strategies for mastering the environment over time. This learning strategy has already produced programs that can play chess or Go and, after countless training games against themselves, could even beat the reigning human world champions.
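The agent–environment feedback loop can be made concrete with tabular Q-learning on a tiny invented environment – a minimal sketch of the principle, not ChatGPT’s actual training procedure (the corridor world and all hyperparameters here are purely illustrative):

```python
import random

# Tiny environment: a corridor of 5 cells. The agent starts in cell 0;
# reaching cell 4 yields a reward of +1 and ends the episode.
N_STATES, ACTIONS = 5, [-1, +1]  # actions: move left / move right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

# Q-table: the agent's estimate of future reward per (state, action).
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise act greedily on current knowledge.
        a = rng.choice(ACTIONS) if rng.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        # The feedback (reward) updates the agent's strategy.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy should move right in every state.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent starts with no knowledge, stumbles around, and gradually concentrates on the actions that the feedback rewarded – the same principle that, scaled up enormously, drives the systems mentioned above.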
In the case of ChatGPT, the feedback in the training process came from humans who evaluated responses generated by the system (so-called “reinforcement learning from human feedback”). This human-in-the-loop approach ensures that the learning process is optimized to understand and serve human preferences. ChatGPT is thus trained to generate outputs that match the needs of human users in terms of style, content and form as closely as possible.
Due to the very large amount of training data and feedback that ChatGPT has seen and received during the training process, the system can now cover a considerable range of questions and styles. This makes it interesting for transfer to a wide variety of applications.
Transfer: How can we use ChatGPT?
The world is just discovering the numerous opportunities of ChatGPT. To exploit them as effectively as possible, one can “finetune” the language model behind ChatGPT. That means: the existing, already very powerful model can be brought even closer to specific applications by presenting it with additional training data, thus optimizing the model parameters for a specific use case.
For such finetuning, OpenAI provides different types of language models. To ensure their responsible use, the organization monitors the use of the models via a controlled interface. Misuse such as the writing of spam emails is thus detected and prevented.
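To illustrate what such additional training data can look like: at the time of writing, OpenAI’s finetuning interface accepted prompt–completion pairs in JSONL format (one JSON object per line). The example texts and the file name below are invented for illustration; only the general {"prompt": ..., "completion": ...} layout follows OpenAI’s then-current documentation:

```python
import json

# Hypothetical domain-specific training examples for finetuning.
# The texts are invented; each line in the output file is one JSON object
# with a "prompt" and a "completion" field.
examples = [
    {"prompt": "Customer asks about opening hours ->",
     "completion": " We are open Monday to Friday, 8 a.m. to 6 p.m."},
    {"prompt": "Customer asks about the return policy ->",
     "completion": " Purchases can be returned within 30 days with a receipt."},
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A file prepared in this way would then be uploaded through OpenAI’s controlled interface, which is also where the misuse monitoring described above takes place.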
If you find this topic interesting or want to read more about general applications of artificial intelligence in various fields, then have a look at our blog.