How Does ChatGPT Work?
Updated: Feb. 18
ChatGPT is the new buzz in town. So what is it all about?
This post draws on an article by Marco Ramponi of Assemblyai.com.
What is ChatGPT?
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques.
It’s the latest language processing model from OpenAI.
What is unique about ChatGPT?
According to OpenAI, ChatGPT is trained using Reinforcement Learning from Human Feedback (RLHF), the same method as InstructGPT, but with slight differences in the data collection setup.
Human AI trainers provided conversations in which they played both sides—the user and an AI assistant. The trainers had access to model-written suggestions to help them compose their responses. OpenAI then mixed this new dialogue dataset with the InstructGPT dataset, which was transformed into a dialogue format.
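The supervised step above boils down to turning each two-sided conversation into (prompt, target-response) training examples. Here is a minimal sketch of that data-preparation idea; the `User:`/`Assistant:` turn markers and the function name are illustrative assumptions, not OpenAI's actual format:

```python
# Sketch: converting a trainer-written dialogue into supervised
# fine-tuning examples. Each assistant turn becomes a target completion
# conditioned on all preceding turns.

def dialogue_to_examples(turns):
    """turns: list of (speaker, text) tuples, e.g. ("user", "Hi")."""
    examples = []
    history = []
    for speaker, text in turns:
        if speaker == "assistant":
            # The model is trained to produce this turn given the history.
            prompt = "\n".join(history) + "\nAssistant:"
            examples.append((prompt, " " + text))
        history.append(f"{speaker.capitalize()}: {text}")
    return examples

conversation = [
    ("user", "What is RLHF?"),
    ("assistant", "Reinforcement Learning from Human Feedback."),
    ("user", "Who uses it?"),
    ("assistant", "OpenAI used it to train InstructGPT and ChatGPT."),
]

pairs = dialogue_to_examples(conversation)
```

Each resulting pair would then feed a standard next-token cross-entropy objective during fine-tuning.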
To create a reward model for reinforcement learning, they collected comparison data, which consisted of two or more model responses ranked by quality.
To collect this data, they took conversations that AI trainers had with the chatbot and randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, they can fine-tune the model using Proximal Policy Optimization. There were several iterations of this process.
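The ranked comparisons described above are typically turned into a pairwise objective: the reward model should score a higher-ranked response above a lower-ranked one. Below is a minimal sketch of that pairwise logistic ranking loss, with plain scalar scores standing in for the reward model's outputs (the scores and function name are illustrative assumptions):

```python
import math
from itertools import combinations

def ranking_loss(scores_in_rank_order):
    """Pairwise logistic loss over reward scores, best response first.

    For every (better, worse) pair the loss is -log(sigmoid(s_b - s_w)),
    so the model is pushed to score preferred responses higher.
    """
    pairs = list(combinations(scores_in_rank_order, 2))
    total = 0.0
    for s_better, s_worse in pairs:
        total += -math.log(1.0 / (1.0 + math.exp(-(s_better - s_worse))))
    return total / len(pairs)

# A reward model that agrees with the human ranking gets a low loss...
good = ranking_loss([2.0, 0.5, -1.0])
# ...while one that inverts the ranking is penalized heavily.
bad = ranking_loss([-1.0, 0.5, 2.0])
```

Once trained this way, the reward model supplies the scalar signal that Proximal Policy Optimization uses to fine-tune the chat model.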
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
ChatGPT is sensitive to tweaks in the input phrasing and to the same prompt being attempted multiple times. For example, given one phrasing of a question, the model can claim not to know the answer, yet given a slight rephrasing, it can answer correctly.
The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
Ideally, the model would ask clarifying questions when the user provides an ambiguous query. Instead, the current models usually guess what the user intended.
While efforts have been made to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. They’re using the Moderation API to warn or block certain types of unsafe content, but some false negatives and positives are accepted for now.
Here’s an interesting video on how to use ChatGPT to learn and practice English.
How to access ChatGPT
Currently, it appears that ChatGPT is at full capacity. What a bummer!
However, you can fill out a form to be notified when it’s back, and follow up here.