OpenAI’s improved model can process approximately 20 pages of text at once
New GPT models enable developers to provide descriptions of programming functions

OpenAI announced updated versions of GPT-3.5-turbo and GPT-4, the latest advancements in its text-generation AI. One notable feature in both updates is function calling. This feature enables developers to provide descriptions of programming functions to GPT-3.5-turbo and GPT-4, allowing the models to respond with the name of a function and structured JSON arguments for invoking it.
Function calling has practical applications, such as facilitating the development of chatbots capable of answering questions through the utilisation of external tools. It also assists in converting natural language into database queries and extracting structured data from text.
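In practice the model does not run any code itself: it returns the name of a chosen function and its arguments as a JSON string, and the developer's application performs the actual call. A minimal sketch of that dispatch loop, where the `get_weather` function and the hard-coded model reply are illustrative stand-ins rather than real API output:

```python
import json

# A function the developer exposes to the model (illustrative).
def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup via an external tool.
    return f"Sunny in {city}"

# Registry mapping function names the model may pick to real callables.
FUNCTIONS = {"get_weather": get_weather}

# Shape of a function-calling reply: the model names a function
# and supplies its arguments as a JSON-encoded string.
model_reply = {
    "function_call": {
        "name": "get_weather",
        "arguments": '{"city": "London"}',
    }
}

def dispatch(reply: dict) -> str:
    """Parse the model's chosen function and arguments, then run it."""
    call = reply["function_call"]
    func = FUNCTIONS[call["name"]]
    kwargs = json.loads(call["arguments"])
    return func(**kwargs)

print(dispatch(model_reply))  # → Sunny in London
```

The same pattern underlies the chatbot and database-query use cases above: the model translates natural language into a structured call, and the application executes it.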
In addition to function calling, OpenAI is introducing a version of GPT-3.5-turbo with a significantly larger context window. The context window is measured in tokens, chunks of raw text, and represents the amount of text the model takes into account before generating further text.
Models with smaller context windows tend to “forget” the details of even recent conversations, resulting in off-topic responses, often with undesirable consequences. OpenAI’s expanded context window aims to mitigate these issues and enhance the model’s ability to maintain relevant and coherent discussions.
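A common workaround for a fixed window is to drop the oldest turns of a conversation until the remainder fits the budget, which is exactly why smaller windows "forget" earlier details. A rough sketch, using an assumed heuristic of about four characters per token (real tokenisation differs):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages whose combined token estimate fits
    the window; older messages are dropped first."""
    kept, used = [], 0
    for msg in reversed(messages):  # newest first
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["a" * 400, "b" * 400, "c" * 400]  # ~100 tokens each
trimmed = fit_to_window(history, 250)
print(len(trimmed))  # → 2  (the oldest message no longer fits)
```

A larger window simply raises `max_tokens`, letting more of the conversation survive the trim.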
The upgraded GPT-3.5-turbo has four times the context length of the original, allowing up to 16,000 tokens to be considered. However, this enhanced capability comes at a higher price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens.
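At those rates, the cost of a request scales linearly with tokens in each direction. For example, a call that sends a full 16,000-token prompt and receives a 1,000-token reply:

```python
INPUT_RATE = 0.003 / 1000   # dollars per input token at the 16k rate
OUTPUT_RATE = 0.004 / 1000  # dollars per output token at the 16k rate

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one API call at the quoted 16k-context rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A maximal 16,000-token prompt plus a 1,000-token completion:
print(round(request_cost(16_000, 1_000), 4))  # → 0.052
```

So filling the entire expanded window on every request costs roughly five cents per call before any output is counted.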
OpenAI notes that this improved model can process approximately 20 pages of text at once, although it falls short of the processing capacity demonstrated by Anthropic, an AI startup whose flagship model can handle hundreds of pages. It is also worth mentioning that OpenAI is currently conducting limited testing of a version of GPT-4 with an even larger 32,000-token context window.