Last month, OpenAI unveiled GPT-4o and demoed a new advanced 'Voice Mode' for ChatGPT that can "understand and respond with emotions and non-verbal cues," saying it would be available to a small group of users in late June. However, in a post on X, the company said it is delaying the feature by a month because it is "improving the model's ability to detect and refuse certain content." It added that the new functionality won't launch until it meets certain internal safety and reliability checks. While OpenAI did not share an exact rollout timeline, the advanced Voice Mode will begin reaching a small group of ChatGPT Plus subscribers sometime in July, and it won't be available to all paid users until fall.

"We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about: We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch.…" — OpenAI (@OpenAI) June 25, 2024

The delay will not affect the rollout of the new video and screen-sharing capabilities the company demoed at its Spring event last month. As a quick recap, these include solving math problems just by looking at a picture, explaining various device settings, and more. OpenAI says these features will be available on both the mobile and macOS versions of the app. In last month's demo, OpenAI employees showed ChatGPT replying to queries almost instantaneously.

The advanced Voice Mode recently came under fire after Hollywood actress Scarlett Johansson accused OpenAI of using a voice that sounds 'eerily similar' to hers, following which CEO Sam Altman said the company would remove the 'Sky' voice from its products.