GPT-4 Turbo is the biggest update since ChatGPT’s launch

A person typing on a laptop showing the ChatGPT Generative AI website.
Matthew Bertelli/Pexels

OpenAI has unveiled the latest update to its large language model (LLM) at its first developer conference, and the most notable addition is GPT-4 Turbo, which is now available in preview. GPT-4 Turbo is an update to the existing GPT-4, bringing with it a greatly enlarged context window and access to much more recent knowledge. Here’s everything you need to know about GPT-4 Turbo.

OpenAI claims the new model will be cheaper as well as more powerful than its predecessors. Unlike previous versions, it is trained on information up to April 2023 – a major update in itself, given that the previous version’s knowledge cut off in September 2021. I just tested this myself, and GPT-4 in ChatGPT can indeed pull up information about events as recent as April 2023, so the update is already live.

GPT-4 Turbo has a significantly larger context window than previous versions – essentially, the amount of text the model considers before generating a response. It now has a 128,000-token context window (a token being the unit of text or code that LLMs read), which, as OpenAI points out in its blog post, is equivalent to about 300 pages of text.

That’s an entire novel you could potentially feed to ChatGPT in a single conversation, and far more than the 8,000- and 32,000-token windows of previous GPT-4 versions.
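If you want a rough sense of whether a given document fits, you can count its tokens locally before sending anything to the model. Below is a minimal sketch using the open-source tiktoken library and its cl100k_base encoding (the one used by GPT-4-family models); the file name is just a placeholder.

```python
# Minimal sketch: count tokens in a text file and check whether it fits in
# GPT-4 Turbo's 128,000-token context window. Requires `pip install tiktoken`.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-family models

with open("novel.txt", encoding="utf-8") as f:   # placeholder file name
    text = f.read()

num_tokens = len(encoding.encode(text))
print(f"{num_tokens:,} tokens")

if num_tokens <= 128_000:
    print("Fits inside GPT-4 Turbo's context window")
else:
    print("Too long to send in a single request")
```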

Context windows are important for LLMs because they help them stay on topic. If you interact with older language models, you’ll find that they can drift off topic if the conversation goes on too long. This can lead to some very unhinged and disturbing responses, like the time Bing Chat told us it wanted to be human. GPT-4 Turbo, if all goes well, will keep the madness at bay for much longer than the current model.

It’s also going to be cheaper for developers to run GPT-4 Turbo, with the cost dropping to $0.01 per 1,000 input tokens (roughly 750 words), while output will cost $0.03 per 1,000 tokens. OpenAI estimates that this makes the new version three times cheaper than GPT-4 for input tokens.
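To put those rates in perspective, here’s a quick back-of-the-envelope calculation using the per-1,000-token prices quoted above; treat it as an estimate, since actual billing depends on exact token counts.

```python
# Rough cost estimate at the quoted GPT-4 Turbo preview rates:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_RATE = 0.01 / 1_000    # dollars per input token
OUTPUT_RATE = 0.03 / 1_000   # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request, in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a maxed-out 128,000-token prompt with a 1,000-token reply.
print(f"${estimate_cost(128_000, 1_000):.2f}")   # -> $1.31
```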

The company also says that GPT-4 Turbo does a better job of carefully following instructions, and that it can be told to respond in a specific format, such as XML or JSON. GPT-4 Turbo will also support image inputs and text-to-speech, and it still offers DALL-E 3 integration.
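For developers, that format request shows up as a parameter on the API call. The sketch below assumes the newer openai Python client and the preview model name gpt-4-1106-preview; check OpenAI’s documentation for the current model names and parameters.

```python
# Minimal sketch: ask GPT-4 Turbo (preview) to reply in JSON via the chat API.
# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                # GPT-4 Turbo preview model name
    response_format={"type": "json_object"},   # request valid JSON output
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three uses for a 128,000-token context window."},
    ],
)

print(response.choices[0].message.content)
```

Note that when requesting JSON output this way, the prompt itself should mention JSON, which is why the system message above spells it out.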

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf from Root/Unsplash

This wasn’t the only big reveal for OpenAI, which also introduced custom versions of ChatGPT, called GPTs, that anyone can create for a specific purpose without any coding knowledge. These GPTs can be built for personal or company use, and they can also be shared with others. OpenAI says GPTs are available today for ChatGPT Plus and enterprise users.

Finally, in light of persistent copyright concerns, OpenAI has joined Google and Microsoft in saying that it will take legal responsibility if its customers are sued for copyright infringement.

With a larger context window, a new copyright shield, and an improved ability to follow instructions, GPT-4 Turbo may prove to be both a blessing and a curse. ChatGPT is quite good at refusing to do things it shouldn’t, but it still has a dark side. The new version, while far more capable, may also come with the same shortcomings as other LLMs, except this time, it will be on steroids.
