Another week, another OpenAI announcement. Just last week the company announced ChatGPT would get a major memory upgrade, and now CEO Sam Altman is hinting at more upgrades coming this week.
On X (formerly Twitter), Altman wrote last night, "We've got a lot of good stuff for you this coming week! Kicking it off tomorrow."
Well, tomorrow has arrived, and we're very excited to see what the world's leading AI company has up its sleeve.
We're not sure when to expect the first announcement, but we'll be live blogging throughout the next week as OpenAI showcases what it's been working on. Could we finally see the next major ChatGPT AI model?
Good afternoon everyone, TechRadar's Senior AI Writer, John-Anthony Disotto, here to take you through the next few hours in the lead-up to OpenAI's first announcement of the week.
Will we see something exciting today? Time will tell.
Let's get started by looking at what Sam Altman said on X yesterday. The OpenAI CEO hinted at a big week for the company, and it's all "kicking off" today!
"we've got a lot of good stuff for you this coming week! kicking it off tomorrow." – Sam Altman on X, April 13, 2025
One of the announcements I expect to see this week is GPT-4.1, the successor to GPT-4o. Just last week, a report from The Verge said the new AI model was imminent, and considering Altman's tweet, it very well could arrive today.
GPT-4.1 is expected to succeed GPT-4o, and while we're not sure exactly what it will be called, it could set a new standard for general-use AI models as OpenAI's competitors like Google Gemini and DeepSeek continue to catch up with, and sometimes surpass, ChatGPT.
ChatGPT was the most downloaded app in the world for March, surpassing Instagram and TikTok to take the crown.
That's an impressive feat for OpenAI's chatbot, which has become the go-to AI offering for most people. The recently released native 4o image generation has definitely helped increase the user count, as I've started to see more and more of my friends and family jump on the latest trends.
Whether that's creating a Studio Ghibli-esque image, an action figure of yourself, or turning your pet into a human, ChatGPT is thriving thanks to its image generation tools.
Speaking of the pet-to-human trend, I tried it earlier, and I was horrified by the results.
If you've not been on social media over the weekend, you may have missed thousands of people sharing images of what their dogs or cats would look like as humans.
This morning I decided to give it a go, and then I went even further and converted an image of myself into a dog. Let's just say this is one of my least favorite AI trends of 2025 so far, and I don't want to think about my French Bulldog as a human ever again!
When will we get an announcement today?
There's no information on when to expect OpenAI's announcement today, but based on previous announcements we should get something around 6 PM BST / 1 PM ET / 11 AM PT.
Your guess is as good as mine on whether we'll get daily announcements this week like the 12 Days of OpenAI event in December.
We'll be running this live blog over the next few days, so as soon as Altman and co. make an announcement you'll get it here. Stay tuned!
One hour to go?
Around an hour to go until the expected OpenAI announcement. What will it be?
Could we see GPT-4.1? Or will we see some new agentic AI capabilities that take OpenAI's offerings to a whole new level?
Last week's memory upgrade was a huge deal. Will today's announcement top that, or are we getting excited over a fairly minimal update? Stay tuned to find out!
Hello, Jacob Krol – TechRadar's US Managing Editor, News – stepping in here as we await whatever OpenAI has in store for us today.
As my colleague John-Anthony has explained thus far, CEO Sam Altman teased, "We've got a lot of good stuff for you" this week, and it's a waiting game now.
OpenAI drops GPT-4.1 in the API, which is purpose-built for developers
This one is for developers – that is, unless OpenAI has something else up its sleeve for later today. The AI giant has just unveiled GPT-4.1 in the API, a model purpose-built for developers. It's a family consisting of three models: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano.
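For developers curious what that looks like in practice, here's a minimal sketch (ours, not OpenAI's) of trying the new family through the official openai Python SDK. It assumes you have API access and an OPENAI_API_KEY set, and the model IDs are simply the names quoted at launch – check OpenAI's model list for the exact strings.

```python
# Illustrative sketch: call each of the newly announced GPT-4.1 models with the
# same prompt and compare the output. Assumes an OPENAI_API_KEY environment
# variable and API access; model IDs follow the names used in the announcement.
from openai import OpenAI

client = OpenAI()

models = ["gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano"]
prompt = "Write a one-line docstring for a function that merges two sorted lists."

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

The mini and nano variants are presumably the ones aimed at lower-latency, lower-cost use cases, which is why it's worth comparing all three side by side on your own workload.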
While Sam Altman is not on the live stream for this unveiling, the OpenAI team – Michelle Pokrass, Ishaan Singal, and Kevin Weil – is walking through the news. GPT-4.1 is specifically designed for coding, following instructions, and long-context understanding.
With instruction following, the focus is on getting the models to interpret a prompt exactly as intended. I've run into models missing the intent of a prompt with the standard GPT-4o mini in ChatGPT Plus, but judging by the evaluation metrics, GPT-4.1 in the API is much better at following instructions, since it's been trained specifically for it.
And the ideal result will be a much more straightforward experience, where you won't need to go back and forth as much to get the results you were after.
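To make that concrete, here's a hypothetical example of the kind of strict-format request where better instruction following should save you a follow-up turn: we ask for JSON only and parse the reply directly. The prompt and model ID are ours for illustration, not from OpenAI's demos.

```python
# Illustrative sketch: a strict-format prompt that a strong instruction-following
# model should satisfy on the first attempt. Assumes an OPENAI_API_KEY is set.
import json
from openai import OpenAI

client = OpenAI()

# Ask for machine-readable output only; a model that follows instructions well
# should not wrap the JSON in extra prose or code fences.
system_prompt = (
    "Reply with a JSON object only, in the form "
    '{"summary": "<one sentence>", "sentiment": "positive" | "neutral" | "negative"}. '
    "No extra prose, no code fences."
)

response = client.chat.completions.create(
    model="gpt-4.1",  # model ID as quoted at launch; confirm against OpenAI's model list
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Customer review: 'Setup took five minutes and the battery lasts all week.'"},
    ],
)

# If the model sticks to the requested format, this parses on the first try,
# with no clarifying turn needed.
result = json.loads(response.choices[0].message.content)
print(result["summary"], "|", result["sentiment"])
```

In theory, that's the difference the team is describing: fewer clarifying turns before you get output you can actually use.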
On the live stream, the OpenAI team walks through several demos that show how GPT-4.1 focuses on its three specialties – coding, following instructions, and long-context understanding – and that it's less degenerate and less verbose.
OpenAI says these are the fastest and cheapest models it has ever built, and if you have access to the API, all three are available right now.
Sam Altman, OpenAI's CEO, wasn't on the livestream today but still took to X (formerly Twitter) to discuss the updates. He later retweeted a few others, including one user who said GPT-4.1 has already helped with workflows.
Again, it's not available to the general consumer, but it is out in the API already, with major enhancements promised. So, while you might not encounter it directly, some websites or services you use might be employing it.
"GPT-4.1 (and -mini and -nano) are now available in the API! these models are great at coding, instruction following, and long context (1 million tokens). benchmarks are strong, but we focused on real-world utility, and developers seem very happy. GPT-4.1 family is API-only." – Sam Altman on X, April 14, 2025