To advance AI capabilities in the workplace, OpenAI announced a series of updates to its pioneering generative AI models, GPT-3.5 Turbo and GPT-4.
The enhancements include a new function calling capability, improved steerability, extended context for GPT-3.5 Turbo, and lower pricing. Together, they aim to give developers an expanded toolbox for building sophisticated, high-performing AI applications that can handle the complexities of modern work environments.
Applications Powered By OpenAI
Developers are not the only ones who will benefit from the latest improvements to OpenAI’s GPT models. Chances are, you have used a tool that implements AI advancements from OpenAI.
- Microsoft partnered with OpenAI to bring developers AI models and to enhance popular products like Bing and Office with generative AI.
- Snapchat launched its generative AI chatbot, My AI, using OpenAI GPT models. My AI’s latest update can send and interpret image snaps!
- Salesforce released the first generative AI CRM product, Einstein GPT, powered by what it describes as OpenAI's most advanced models.
- Morgan Stanley announced a partnership with OpenAI as one of the few wealth management companies with access to the latest GPT-4 model.
- HubSpot developed new tools, like ChatSpot.ai, based on OpenAI GPT-4.
- GitHub built Copilot on OpenAI Codex to give developers AI code suggestions, a move that ultimately led to a copyright lawsuit.
- Stripe incorporated OpenAI GPT technology to help understand customers and reduce fraud.
- GetResponse introduced an OpenAI GPT-powered email generator.
- Instacart created an AI chatbot to help consumers with their grocery shopping.
Ideally, users of these and other tools built on OpenAI technology should see improvements in generative AI performance thanks to the GPT-3.5 Turbo and GPT-4 updates.
Enhancements To GPT-3.5 Turbo And GPT-4
The latest updates OpenAI announced for the GPT-3.5 Turbo and GPT-4 models include a new function calling capability in the Chat Completions API, improved steerability, extended context for GPT-3.5 Turbo, and lower pricing.
Function Calling Capability
Based on developer feedback and feature requests, OpenAI gave developers the ability to describe functions to the updated models and have the AI produce a JSON object containing the arguments for those functions. This makes it more reliable to connect the models to external tools and APIs and to get structured data back from the model.
New function calling capabilities allow for a diverse array of applications, including the following:
- Creation of chatbots that answer questions by calling external tools.
- Conversion of natural language queries into function calls, API calls, or database queries.
- Extraction of structured data from text.

New API parameters in the Chat Completions endpoint let developers describe available functions to the model and, optionally, require it to call a specific one.
The introduction of function calling opens new possibilities for developers, who can now integrate the GPT models with other APIs and external tools more seamlessly.
For instance, a workplace app could use this feature to convert a user’s natural language query into a function call to a CRM or ERP system, making the application more user-friendly and efficient.
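A minimal sketch of that flow, using the openai Python library as it existed at the time of the announcement; the find_customer CRM lookup and the example prompt are illustrative assumptions, not a documented integration:

```python
import json
import openai  # openai-python v0.x, as available when the update shipped

# Hypothetical CRM lookup, standing in for whatever internal API an app exposes.
def find_customer(name: str) -> dict:
    return {"name": name, "status": "active", "open_deals": 2}

functions = [
    {
        "name": "find_customer",
        "description": "Look up a customer record in the CRM by name",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "Customer name"},
            },
            "required": ["name"],
        },
    }
]

messages = [{"role": "user", "content": "What's the status of the Acme Corp account?"}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus a JSON string of arguments.
    args = json.loads(message["function_call"]["arguments"])
    result = find_customer(**args)
    # Feed the result back so the model can compose a natural-language answer.
    messages.append(message)
    messages.append({"role": "function", "name": "find_customer", "content": json.dumps(result)})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```

In a production workplace app, the call to find_customer would typically sit behind the kind of user confirmation step OpenAI recommends before any impactful action.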
While OpenAI remains attentive to potential security issues associated with untrusted data, it suggests developers protect their applications by consuming information only from trusted tools and including user confirmation steps before performing impactful actions.
Developers can sign up for the waitlist to get access to GPT-4.
Model Improvements
The new GPT-4 and GPT-3.5 Turbo models incorporate improved steerability and extended context.
Developers can use the improved steerability to design AI applications that align more closely with the specific requirements of an organization or task, such as generating more targeted business reports or creating detailed, context-aware responses in customer service chatbots.
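In practice, steering happens largely through the system message, which OpenAI said the updated models follow more reliably. A short sketch; the persona and constraints are illustrative only:

```python
import openai  # openai-python v0.x

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        # The system message carries the steering instructions the model should honor.
        {
            "role": "system",
            "content": (
                "You are a customer-service assistant for a logistics company. "
                "Answer in at most three sentences, reference the relevant policy, "
                "and never promise a specific delivery date."
            ),
        },
        {"role": "user", "content": "My shipment is two days late. What are my options?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```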
The new GPT-3.5 Turbo-16k model offers four times the context length of the standard GPT-3.5 Turbo, supporting roughly 20 pages of text in a single request. This extended context capacity allows the AI to comprehend and generate responses for much larger bodies of text.
For example, in legal or academic workplaces, where documents tend to be lengthy, this feature could drastically improve the model’s ability to understand and summarize large amounts of text, making information extraction more efficient. Similarly, for project management applications, it could allow the AI to process and understand entire project plans in one go, aiding in generating more insightful project analytics and forecasts.
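A rough sketch of what that looks like in code; the contract.txt file is a hypothetical document assumed to fit within the roughly 16K-token window:

```python
import openai  # openai-python v0.x

# Hypothetical long document (up to ~20 pages of text).
with open("contract.txt") as f:
    contract_text = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # ~16K-token context window
    messages=[
        {"role": "system", "content": "You summarize legal documents for a project team."},
        {
            "role": "user",
            "content": "Summarize the key obligations and deadlines:\n\n" + contract_text,
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```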
OpenAI also announced the deprecation of earlier GPT-4 and GPT-3.5 Turbo versions, with older models remaining accessible until September 13th. Developers were assured of a smooth transition and encouraged to provide feedback to help refine the process.
Lower Pricing
Following improvements in system efficiencies, OpenAI is passing on cost savings to developers.
The price for using the popular embeddings model, text-embedding-ada-002, is reduced by 75%. Moreover, there’s a 25% cost reduction on input tokens for the GPT-3.5 Turbo model.
I also wanted to take a second to reiterate the 75% price drop on embeddings. This is actually pretty crazy. You used to be able to embed the whole internet for ~$50M, now it is down to ~$12.5M. https://t.co/fyhQVGPEi8
— Logan.GPT (@OfficialLoganK) June 13, 2023
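For a rough sense of what those rates mean in practice, here is a back-of-the-envelope sketch using the per-1K-token prices announced at the time ($0.0001 for text-embedding-ada-002, $0.0015 for GPT-3.5 Turbo input tokens); the corpus size and document length are made-up assumptions, and pricing is subject to change:

```python
# Back-of-the-envelope cost estimate using the announced per-1K-token prices.
EMBEDDING_PRICE_PER_1K = 0.0001    # text-embedding-ada-002, after the 75% cut
GPT35_INPUT_PRICE_PER_1K = 0.0015  # gpt-3.5-turbo input tokens, after the 25% cut

docs = 1_000_000      # hypothetical corpus of one million documents
tokens_per_doc = 500  # assumed average document length

embedding_cost = docs * tokens_per_doc / 1000 * EMBEDDING_PRICE_PER_1K
prompt_cost = docs * tokens_per_doc / 1000 * GPT35_INPUT_PRICE_PER_1K

print(f"Embedding the corpus: ${embedding_cost:,.2f}")          # ≈ $50.00
print(f"Sending it as GPT-3.5 Turbo prompts: ${prompt_cost:,.2f}")  # ≈ $750.00
```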
In conjunction with improved functionality, these price reductions should make it easier for developers to use and experiment with these models in their applications.
Continued Development Of GPT Models
OpenAI seems to be committed to continually improving its platform based on developer feedback. With the latest enhancements to its generative AI models, OpenAI offers new possibilities to developers for creating innovative and improved AI applications for the workplace.
The latest API updates and GPT models provide developers with more capabilities to create AI applications better suited to handling the complexity and specificity of tasks commonly found in workplace environments.
Featured image: iama_sing/Shutterstock