While AI models like ChatGPT and DALL-E continue to impress people with their abilities, they're both based on OpenAI's GPT-3.x family of large language models. These models use deep learning techniques to produce human-like text based on inputs. When GPT-3 first launched in 2020, it brought notable enhancements over GPT-2. Now, GPT-4 is in the pipeline, and with rumours about the improvements it'll bring gathering steam, we thought it'd be a great time to check how it stacks up against the current version.

What is GPT-4?

GPT is short for "Generative Pre-trained Transformer," a family of language models that evolve and learn by training on vast amounts of data. These language models stand out because they also use technologies like NLP (Natural Language Processing) and NLG (Natural Language Generation) to better understand and reproduce human language. GPT-4 will replace GPT-3 and GPT-3.5 when it releases, which is expected to be in late 2023. The new version is expected not only to be better at what it does, but also to do a lot more than the current version.

GPT-4 vs GPT-3: Parameters

In an interview last year, Sam Altman, CEO of OpenAI, said that GPT-4 won't be much bigger than GPT-3. GPT-3 has 175 billion parameters, and we can expect slightly bigger numbers with GPT-4; OpenAI may be aiming to get a lot more out of a similar parameter count. However, some more recent reports claim that GPT-4 will sport one trillion parameters. A bump as significant as this could help ChatGPT produce more accurate responses at a faster rate. More parameters may also drive up the cost of running GPT-4, though, meaning things could get pricier for OpenAI.

GPT-4 vs GPT-3: Accuracy

GPT-4 is also expected to bring several improvements to the model's ability to mimic human behaviour and speech patterns in response to user prompts.
Better optimisation could mean that GPT-4 will be much better than older GPT versions at inferring human intentions, even when a prompt contains errors.

GPT-4 vs GPT-3: Susceptibility to misinformation

OpenAI's dedication to constantly improving techniques like RLHF (Reinforcement Learning from Human Feedback) means that GPT-4 could implement them more effectively. In RLHF, human trainers rank the model's outputs, and that feedback is used to fine-tune the model. Better human-supervised training may reduce the likelihood of GPT/ChatGPT generating toxic or biased content, and may also help reduce instances of misinformation. That said, OpenAI has kept most details about GPT-4 under wraps, and a lot of the information circulating on the internet is just speculation. Therefore, it's advisable to take this piece with a pinch of salt.
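To make the RLHF idea above more concrete, here is a deliberately tiny sketch of the loop in Python. Everything in it is illustrative and assumed, not OpenAI's actual implementation: the candidate responses, the scoring function standing in for human trainers, and the lookup-table "reward model" are toy placeholders, whereas real systems use large neural reward models and reinforcement-learning policy updates.

```python
# Toy sketch of the RLHF loop: humans rank outputs, a reward model is
# fitted to those rankings, and the policy shifts toward preferred outputs.
# Illustrative only; not OpenAI's implementation.

# Hypothetical candidate responses to a single prompt.
CANDIDATES = ["helpful answer", "vague answer", "toxic answer"]

def human_feedback(response):
    """Stand-in for a human trainer scoring an output (assumed values)."""
    scores = {"helpful answer": 1.0, "vague answer": 0.3, "toxic answer": -1.0}
    return scores[response]

def train_reward_model(labelled_pairs):
    """'Reward model': here just a lookup built from the human labels."""
    return dict(labelled_pairs)

def update_policy(policy, reward_model, lr=0.5):
    """Shift sampling weights toward responses the reward model prefers."""
    for response in policy:
        policy[response] = max(1e-6, policy[response] + lr * reward_model[response])
    total = sum(policy.values())
    return {r: w / total for r, w in policy.items()}

# 1. Start from a uniform policy over the candidate responses.
policy = {r: 1 / len(CANDIDATES) for r in CANDIDATES}
# 2. Collect human preference labels (the supervised signal).
labels = [(r, human_feedback(r)) for r in CANDIDATES]
# 3. Fit the reward model on those labels.
reward_model = train_reward_model(labels)
# 4. Repeatedly update the policy against the reward model.
for _ in range(10):
    policy = update_policy(policy, reward_model)

best = max(policy, key=policy.get)
print(best)  # the policy now favours the response humans rated highest
```

The key design point the sketch preserves is the separation of roles: human feedback trains a reward model once, and the policy is then optimised against that reward model rather than needing a human in the loop for every update.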