OpenAI: The new update and why the best way to use large language models is not what you think

Latency. 

It’s a measure of how long it takes to get an output once you’ve given an input. It may sound technical, but it’s the thing that makes or breaks our interactions with new technologies like Artificial Intelligence. That’s why OpenAI’s Spring Update was, at its core, all about reducing the gap between action and reaction.

 

Hopps and Wilde, our heroes in the 2016 animated movie ‘Zootopia’, find their progress hindered by a helpful but slow-moving bureaucrat.

Image from Yahoo: Sloth Character inches its way to fame in Disney’s Zootopia

 

What was announced?

Led by CTO Mira Murati, OpenAI launched a new, faster model called GPT-4o. The ‘o’ stands for ‘omni’, reflecting both its fresh capabilities across more modalities - text, audio, images and video - and its wider access. Yes, they have gone big on widening access by providing their most advanced model to the public for free. They’ve also ramped up the language capabilities, making it proficient in 50 languages, which covers 97% of the global population. In short, they’ve narrowed the time between launch and access for a huge number of people.
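For the more technically minded, GPT-4o is also available outside the chat window, through OpenAI’s API. As a rough sketch only (the question and image URL below are placeholders of our own, not from the announcement), a single request can mix text and an image using the official Python library:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# One request to the new model, combining text and an image
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what's happening in this picture."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```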

The second thing they showcased, in rather impressive fashion, was the new conversational interface. Here, too, we saw a laser focus on reduced latency, with the gap between spoken instruction and response closed to levels that felt much more natural. Maybe this was the ‘magical’ aspect that Sam Altman was trailing? High-latency conversational interfaces have hampered the spread of voice assistants, and bringing that delay down is a smart move.

We saw several examples of quick-fire conversation, turn-taking, interruption, and even more dynamic language, with the model detecting sentiment from speech and facial expression and responding accordingly. From helping the presenter prepare for his talk to singing a lullaby about a robot and blending speech and vision to become a patient and capable maths tutor, GPT-4o was intuitive and fluid in a way that promises an easier path to working with AI. All of this is wrapped up in the consumer app, the web interface, and a desktop app which is set to become welded to a shortcut key for millions of users.

As with all such announcements, we’re expecting a phased roll-out over the coming weeks, but you can play with it right now. I’d encourage you to do so, especially if you haven’t used a GPT-4-grade model before, because the difference is striking.

 

Before you try it, you need to know…

The current batch of AI is fearsomely capable across many different domains. There is huge promise, but many people struggle to find the best way to safely incorporate it into their daily lives. This is partly down to a phenomenon Ethan Mollick calls the ‘Jagged Frontier’.

In short, the Jagged Frontier describes the fact that AI is great at some things and not so good at others. In their study at Boston Consulting Group, Mollick et al. found that when employees used GPT-4 for tasks within the model’s capabilities, it improved their productivity and the quality of their work. Used outside those capabilities, it made things worse. The tricky part is that it was unclear before the exercise which tasks sat inside or outside this invisible frontier, and that it varied from role to role. Making AI work for you is all about finding out where that frontier lies in your work, and this is where we can help.

A graph showing the Jagged Frontier of AI capabilities, from ‘Centaurs and Cyborgs on the Jagged Frontier’ by One Useful Thing, made using ChatGPT with Code Interpreter.

 

Understanding how AI can work for you

At Curistica, we’ve been delivering workshops on Generative AI and Prompt Engineering to dozens of clients since we launched. What we’ve learned from the hundreds of different problems and solutions we’ve jointly worked on is that people tend to think too big and too literally about how to use AI. This limits the opportunities being considered and hampers the value they might get from it. It’s not about going all-in and digitising your job; it’s about finding the smaller, repetitive tasks that absorb your time and attention, and using AI there.

This requires personal exploration and the sharing of solutions. You know your work best. You know the problems you face every day far better than I do, but I know how to help you optimise AI for you. Learning how to understand the problem, and how to use this incredible general-purpose tool at your disposal, is key to making progress. Companies like Moderna, which recently shared how they have helped employees develop GPTs for bespoke tasks across the company, are starting to see the benefit of this. We use this approach at Curistica for our own operations as well as for our clients, and have seen huge benefits in efficiency and output. It can work for anyone; you just have to know how, which is why OpenAI’s broadening of access is so exciting - and I know lots of you are interested, too.

 

The route to success

The exciting thing about tech is that it’s always growing and developing. Yesterday’s announcement, exciting as it is, will soon be superseded by the next one, at the ever-accelerating pace we’ve come to expect from AI. I have the bags under my eyes to prove it. Throughout it all, the ability to understand this tech and how it applies to your world will be key.

If you’d like to learn more, and reduce the gap between hearing about the latest developments and turning them into real-world impact for yourself, we’d love to hear from you. Our workshops give you the foundational knowledge about Generative AI you need to understand what these models can and cannot do. We’ll show you the best way to write prompts (instructions the model can understand and act on) and how to create reusable tools tailored to your needs. We’ll even help solve the problems you’re facing, turning that idea of yours into a reality.
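To make ‘reusable tools’ concrete, here is a minimal sketch of the idea (the task, wording and function name are our own illustrative choices, not a prescribed template): a carefully written prompt wrapped in a small Python function, so the same instructions can be applied to every new set of meeting notes.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def summarise_notes(notes: str) -> str:
    """A reusable 'tool': the same carefully written prompt, applied to each new set of notes."""
    prompt = (
        "You are a helpful assistant. Summarise the meeting notes below in three "
        "bullet points, then list any actions with an owner and a deadline.\n\n"
        f"Notes:\n{notes}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(summarise_notes("Discussed Q3 roadmap; Sam to draft the budget by Friday."))
```

Once a prompt like this works well, it can be reused every week without rewriting the instructions, which is exactly the kind of small, repetitive task where AI earns its keep.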

 

So, what are you waiting for?

Book your workshop to start optimising your work with AI today.

 