This is an archive article published on June 13, 2023

Sam Altman’s future plans for ChatGPT, OpenAI’s GPU crisis revealed in leaked blog post

Sam Altman recently attended a closed-door meeting in London and one of the participants took to their blog to reveal some confidential information.

Sam Altman, the CEO of OpenAI and one of the most influential figures in the tech world. (REUTERS/Issei Kato)

Sam Altman, co-founder and CEO of OpenAI, the company that introduced ChatGPT to the world, is hailed as the next big icon in tech. The 38-year-old is currently one of the busiest technocrats around and is seen at numerous symposiums and conferences. ChatGPT and its cutting-edge innovation have piqued the interest of millions worldwide, and hence Altman and any development related to OpenAI are closely scrutinised on the internet.

Amid all his public appearances, Altman reportedly attended a private meeting in London about two weeks ago. One of the attendees later shared a blog post recounting what Altman revealed there. Even though some reports claim that OpenAI has asked the individual to take down the blog post, here is what we know about the now-inactive blog and OpenAI's future plans for ChatGPT.

The closed-door meeting was attended by around 20 people. While the blog post has since been amended after its author was reportedly asked to take the information down, it still created a buzz in the AI community. Thanks to an archiving website, here are some revelations from the blog post.


What the blog revealed

According to the blog post written by Raza Habib, the co-founder of London-based startup Humanloop, Altman reportedly said that OpenAI’s inability to obtain enough graphics processing units (GPUs) – the chips that run AI applications – is one of the main reasons for the delay in the company’s short-term plans. This was also causing issues for developers relying on OpenAI’s services. The shortage has made it difficult for the company to let users push more data through the large language models (LLMs) that power software such as ChatGPT, and it has reportedly slowed down OpenAI’s existing services as well, the blog post said.

The blog post also revealed that Altman listed the things OpenAI has not been able to do owing to the chip shortage. One of them is expanding the context window, which determines how much data users can feed into the model in a single prompt. As of now, most GPT-4 users have access to 8,000 tokens, and OpenAI announced an increase to 32,000 tokens in March this year. However, only a few users have actually gained access to the larger window so far. The blog stated that the wider rollout of the 32,000-token context window will be further delayed.

In his blog post, London-based AI expert and entrepreneur Raza Habib said that he was part of the closed-door event attended by Altman along with around 20 developers. (Image: Humanloop)

OpenAI’s short-term roadmap

The blog post mentioned that Altman shared OpenAI’s provisional near-term roadmap for its API. For 2023, the company’s top priority was to offer a cheaper and faster GPT-4. “In general, OpenAI’s aim is to drive ‘the cost of intelligence’ down as far as possible and so they will work hard to continue to reduce the cost of the APIs over time,” read the post.

The company said that a context window as high as 1 million tokens is possible in the near future. The fine-tuning API will be extended to the latest models, but its exact form will be shaped by what developers need. The company also envisioned a future version of the API that will remember conversation history.


For 2024, OpenAI reportedly has plans to introduce multimodality. While this was demonstrated as part of the GPT-4 launch, it can only be extended to more users once the GPU shortage is mitigated.

Other revelations from the blog post suggest that GPT-5 is not going to be monumentally bigger than GPT-4; it will likely be a few times bigger. The GPU shortage is hampering OpenAI’s ability to scale its operations. ChatGPT plugins for enterprise customers will likely be delayed. 
