
Five-Minute Overview of OpenAI’s Developer Conference: GPT App Store and GPT-4 Turbo Unveiled

On Monday, local time, OpenAI, the leading company in the artificial intelligence industry, kicked off its inaugural developer conference. During the nearly 45-minute keynote address, OpenAI CEO Sam Altman showcased a series of product updates for global developers and ChatGPT users.

At the start of the conference, Altman briefly reviewed the company's progress over the past year, highlighting that GPT-4, which the company released in March this year, remains the most powerful large AI model in the world. Today, 2 million developers are using OpenAI's API (Application Programming Interface) to provide a variety of services worldwide, 92% of Fortune 500 companies are building services on OpenAI's products, and ChatGPT has reached 100 million weekly active users.

GPT-4 Turbo Model Unveiled

Next came the new product announcements, starting with the GPT-4 Turbo model.

In simple terms, GPT-4 Turbo improves on the widely known GPT-4 in six key aspects.

  1. AI can understand longer contextual conversation lengths. The standard GPT-4 model supports a maximum of 8,192 tokens, a previous upgrade increased it to a maximum of 32,000 tokens, and GPT-4 Turbo supports up to 128,000 tokens, equivalent to the text contained in a standard 300-page paper book. Altman also mentioned that the new model’s accuracy in handling long text contexts has improved.
  2. Providing developers with more control. The new model can be instructed to return valid JSON in a specific format. Developers can also obtain largely reproducible outputs for each request by setting a seed parameter and checking the system_fingerprint response field.
  3. GPT-4’s knowledge is up to September 2021, while GPT-4 Turbo’s knowledge is up to April 2023.
  4. The introduction of multimodal APIs. GPT-4 Turbo with visual input capabilities, the DALL·E 3 image-generation model, and a new text-to-speech (TTS) synthesis model are all integrated into the API. OpenAI also unveiled a new speech recognition model, Whisper V3, which will be available to developers soon.
  5. Following the opening of GPT-3.5 fine-tuning to global developers, OpenAI announced that it will offer active developers the opportunity to fine-tune GPT-4. Fine-tuning is a crucial step in building specialized AI applications for particular industries. OpenAI is also introducing a custom-model program to help organizations train custom GPT-4 models for specific domains. Altman noted that this won't be cheap initially.
  6. OpenAI has doubled the token rate limit for all GPT-4 users, and developers can apply for further rate increases.
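The developer controls described in point 2 map to two request parameters on the Chat Completions endpoint. The sketch below builds the request body itself rather than making a network call; the model name and seed value are illustrative, and the field names follow the API as announced:

```python
# Sketch of a Chat Completions request body using the new controls.
# "gpt-4-1106-preview" was the preview name for GPT-4 Turbo; treat it
# as illustrative here.
request_body = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "List three colors under the key 'colors'."},
    ],
    # JSON mode: forces the model to return syntactically valid JSON.
    "response_format": {"type": "json_object"},
    # Fixing the seed makes repeated requests largely reproducible; the
    # system_fingerprint field in the response identifies the backend
    # configuration the result depends on.
    "seed": 42,
}
```

A developer would POST this body (with an API key) to the Chat Completions endpoint and compare the returned system_fingerprint across requests to know whether reproducibility can be expected.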

Similar to Microsoft and Adobe, OpenAI has introduced a “copyright shield” mechanism. In cases where ChatGPT Enterprise Edition users and API users face copyright lawsuits, the company will defend and assume the compensation liability arising from them.

In terms of pricing, GPT-4 Turbo, despite being a leading large model in the industry, is priced significantly lower than GPT-4: input tokens cost one-third as much, and output tokens half as much. Concretely, 1,000 input tokens cost 1 cent and 1,000 output tokens cost 3 cents. The price of the GPT-3.5 Turbo 16K model has also been reduced.
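In concrete numbers, the quoted prices work out as follows (a back-of-the-envelope sketch; the GPT-4 prices are the $0.03/$0.06 per 1,000 tokens implied by the "one-third" and "half" comparison above, and the example token counts are arbitrary):

```python
# Per-1,000-token prices in US dollars.
GPT4_INPUT, GPT4_OUTPUT = 0.03, 0.06    # original GPT-4
TURBO_INPUT, TURBO_OUTPUT = 0.01, 0.03  # GPT-4 Turbo, as quoted above

def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost of one request given per-1K-token prices."""
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# Example: a request with 10,000 input tokens and 1,000 output tokens.
old = request_cost(10_000, 1_000, GPT4_INPUT, GPT4_OUTPUT)    # $0.36
new = request_cost(10_000, 1_000, TURBO_INPUT, TURBO_OUTPUT)  # $0.13
print(f"GPT-4: ${old:.2f}, GPT-4 Turbo: ${new:.2f}")
```

For input-heavy workloads like long-context retrieval, the saving is dominated by the one-third input price, so typical requests end up roughly a third of the old cost.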

During a break in the new product announcements, Microsoft CEO Satya Nadella made an appearance, praising OpenAI and emphasizing the depth of the partnership between the two companies.

ChatGPT Updates

Altman announced that despite today being a developer conference, OpenAI couldn’t resist making some updates to ChatGPT.

Firstly, starting today, ChatGPT will use the newly released GPT-4 Turbo model. Additionally, to spare users from having to choose a mode before each conversation, ChatGPT's product logic has been updated so that the model selects the appropriate capability based on the conversation itself.

The second important product presented at this event is GPTs. Users will be able to build their own GPT by providing custom instructions, extending the model's knowledge with their own material, and connecting external actions. They can then release their custom GPT to a wider global audience. Importantly, the entire process of building a custom GPT is guided through natural language dialogue.

Altman demonstrated how to build a GPT through a chat conversation. He instructed GPT Builder with, "Help entrepreneurs brainstorm business ideas and provide advice, then question why their company isn't growing fast enough."

Subsequently, ChatGPT quickly constructed a business consultancy GPT and even generated a logo.

Altman then uploaded a speech he had given about startups to the configuration page, adding extra knowledge to this GPT. With that, a first working custom GPT was complete. Users can keep such a GPT for their own use or release it publicly.

Speaking of public release, OpenAI also announced that it will launch the "GPT App Store" later this month. For the most popular GPTs, the company will share a portion of the revenue with their creators to promote the growth of the GPT application ecosystem.

Assistants API

Finally, for developers, there is the new Assistants API. An assistant is a purpose-built AI that follows specific instructions, leverages additional knowledge, and calls models and tools to perform tasks. The Assistants API provides built-in code interpretation, retrieval, and function calling, eliminating much of the heavy lifting that developers previously had to handle themselves.
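The flow the Assistants API introduces can be sketched as a sequence of request payloads (a rough sketch only; the field names follow the API as announced, while the model name, assistant name, and placeholder ID are illustrative):

```python
# Rough sketch of the Assistants API flow, shown as request payloads.

# 1. Define an assistant with instructions and built-in tools. The tools
#    announced at the conference are the code interpreter, retrieval over
#    uploaded files, and developer-defined function calls.
assistant = {
    "model": "gpt-4-1106-preview",  # illustrative model name
    "name": "data-analyst",
    "instructions": "Answer questions by analysing the uploaded files.",
    "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
}

# 2. Each conversation lives in a thread; the API stores the message
#    history, so the developer no longer manages context windows by hand.
thread_message = {"role": "user", "content": "What is the average order value?"}

# 3. A "run" executes the assistant against the thread, invoking tools
#    as needed, and its status is polled until it completes.
run = {"assistant_id": "<assistant-id>", "status": "queued"}
```

The division into assistant, thread, and run is what removes the heavy lifting described above: conversation state, file handling, and tool invocation all live server-side.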

OpenAI stated that this API has a wide range of use cases, including natural language-based data analysis applications, programming assistants, AI travel planners, voice-controlled DJs, intelligent visual canvases, and more.

As an example, Romain Huet, OpenAI's Head of Developer Experience, built an assistant that "knows everything about the developer conference," using Whisper for voice input.

Moreover, since this API can be connected to the internet, Romain also used voice commands on the spot to randomly select five audience members and add $500 to each of their OpenAI accounts.

As the final surprise of the event, Romain issued one more instruction to the assistant, crediting $500 to the account of everyone present.
