The parts around 8:18 and 9:56 really hit home. It's astonishing to see the shift from our traditional ways of using software to solve problems to these groundbreaking AI approaches. As a software developer, this makes me rethink everything I've learned and mastered over the years. There's a sense of insecurity, wondering how the skills I've spent years honing will adapt to this rapidly evolving AI landscape. The future of tech in the next 5 years looks incredibly different, and it's both thrilling and unnerving.
@annaczgli2983 1 year ago
Each new feature OpenAI adds is the death of 5 startups. Hilarious! Love it!
@chronicfantastic 1 year ago
Great recap, loved the vision demo.
@andrewlaery 1 year ago
This update from OpenAI has blown my mind. Just as we are starting to get used to a coding environment, all the major constraints get removed. The pace of change right now is incredible. What will tomorrow look like? Excited and nervous! Great vid Sam!
@mamisoa 1 year ago
You should add "from openai import OpenAI" at the beginning of the Colab so the basic GPT-4-Turbo completion paragraph works. Nice work and video!
@samwitteveenai 1 year ago
Yes, totally right. I got a bit aggressive deleting imports before I posted the final Colab. Thanks!!
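For readers following along, here is a minimal sketch of the new-style client call the thread is discussing. It assumes the `openai` v1.x Python package and an `OPENAI_API_KEY` environment variable; the model name and prompt are placeholders for illustration.

```python
import os

# The import that went missing from the Colab; guarded here so the
# sketch can still run where the openai package isn't installed.
try:
    from openai import OpenAI
except ImportError:
    OpenAI = None

# Payload for a basic GPT-4-Turbo chat completion.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello in one short sentence."},
]

# Only make the network call if the package and an API key are present.
if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
    )
    print(response.choices[0].message.content)
```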
@klammer75 1 year ago
Exciting times! Looking forward to some tutorials Sam showing off all these new features! Great stuff and keep up the amazing work🥳🤩🦾
@clray123 1 year ago
The JSON mode is not about increasing probability of tokens, it's about decreasing (to zero) the probability of unwanted tokens. And what is "unwanted" or "allowed" is decided based on the already generated context using a good ole grammar. A neat feature which llama.cpp has had built-in for a good while.
@Pure_Science_and_Technology 1 year ago
So, the tokens are used as usual, but the JSON format neatly packages the response?
@clray123 1 year ago
@Pure_Science_and_Technology For each token's generation, the syntax defined by the grammar restricts which tokens may be output at this position. So the model generates probabilities for all tokens in the vocabulary as usual, but the output filter does not sample from among the "top k highest-probability tokens" (or some such), but instead from the "top k highest-probability tokens allowed by the grammar at this point in the sequence". It is not limited to JSON: you could write a grammar which constrains the output to "yes" or "no", or a digit from 0 to 9, or only words in language X, etc. But it is a bit dishonest in that you are cherry-picking tokens that the model "by itself" may not have judged as very probable at all, so the (content) quality of output may suffer. Of course, to alleviate this problem it is usually combined with a prompt that also tells the model to generate the particular syntax. But in principle it works with any prompt, and even with a completely random (untrained) token generator (in which case you would get gibberish tokens that still conform to the prescribed syntax).
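The mechanism described above can be sketched in a few lines of plain Python. The toy vocabulary and probabilities are invented for illustration (real implementations, like llama.cpp's GBNF grammars, operate on logits over the full vocabulary and sample rather than picking greedily):

```python
# Toy next-token probabilities the model might produce (made up).
probs = {'"': 0.05, "{": 0.10, "hello": 0.60, "}": 0.05, ":": 0.20}

def constrained_pick(probs, allowed):
    """Zero out tokens the grammar forbids, renormalize, pick the best."""
    filtered = {t: p for t, p in probs.items() if t in allowed}
    total = sum(filtered.values())
    filtered = {t: p / total for t, p in filtered.items()}
    return max(filtered, key=filtered.get)

# Unconstrained, the model would emit "hello". A JSON grammar that only
# allows structural tokens at this position forces "{" instead.
allowed_by_grammar = {"{", '"'}
print(constrained_pick(probs, allowed_by_grammar))  # -> {
```

This also shows the "dishonesty" mentioned above: "{" had only a 10% probability by the model's own judgement, yet the filter makes it the certain winner.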
@JonNordland 1 year ago
By far the best coverage of the event. This is what's really important about it.
@sundarramanp3057 1 year ago
Eagerly waiting for the assistant video to release soon! Amazing content
@micbab-vg2mu 1 year ago
Great news - thank you for the update:)
@caiyu538 1 year ago
great lectures
@nufh 1 year ago
Just tried it on colab. Really cool.
@davidw8668 1 year ago
This is interesting news, thanks for the video. The newly introduced usage tiers that limit input tokens are rather complicated and, unfortunately, in practice restrict large-context use to heavy users.
@samwitteveenai 1 year ago
Yeah agree, but I have seen OAI moving in this direction for a while. They want big companies that spend big consistently.
@TommyJefferson1801 1 year ago
One question: what input length can I now fine-tune ChatGPT with? Previously it was just 4K. I read through their blog, but it's unclear whether the length has increased to 16K.
@alechill3573 1 year ago
00:00 OpenAI Dev Day introduced GPT-4-Turbo and other updates.
01:43 GPT-4-Turbo and new modalities
03:29 Code changes include model instantiation and message passing for completions.
05:07 GPT-4-Turbo allows for more up-to-date information and a JSON mode for better JSON responses
07:04 OpenAI Dev Day showcased GPT-4-Turbo and introduced tools for calling functions.
08:49 Using the GPT-4 Vision Preview model to describe an image in detail as a story
10:34 The OpenAI TTS system delivers realistic audio and users own the rights to the generated multimodal LFTs
12:20 OpenAI introduced developer tools for building low-code or no-code applications with GPT.
Crafted by Merlin AI.
@miteshpatel-u9c 10 months ago
Curious to know whether the OpenAI Assistants video is in the works!
@samwitteveenai 10 months ago
I started one but never got around to finishing it. I think they have added some new stuff, so I should take a look. One challenge was that, from memory, it couldn't be run easily in a Colab; it required a Docker instance. I normally try to give people a Colab where they can try things out easily.
@miteshpatel-u9c 10 months ago
The LangGraph videos you did are probably somewhat related. Can't we swap some of the agents there with OpenAIAssistantRunnable() from LC and go from there?
@jayhu6075 1 year ago
I've been away from your channel for a while due to various reasons. When I came back I was impressed by your update. I would like to see a follow-up about TTS. Recently, I saw an example where Whisper is used to convert voice input to text, and then the new TTS API converts the text to speech. Perhaps an example with a mobile phone that you speak into, displayed on a large screen, like the example at DevDay? Many thanks.
@ickorling7328 1 year ago
How do you access this as a subscriber to OpenAI?
@samwitteveenai 1 year ago
There are 2 ways: through code via platform.openai.com, and through ChatGPT Plus.
@harshkohli6757 1 year ago
Has anyone had this error recently? NotFoundError: Error code: 404 - {'error': {'message': 'This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}. This seems to happen with all chat models: gpt-3.5-turbo, gpt-4, and now gpt-4-1106-preview.
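A 404 like this usually means a chat model ended up on the legacy v1/completions route, often via an outdated client library. The routing rule can be sketched as below; note that endpoint_for is a hypothetical helper for illustration, and the prefix check is only a heuristic, not the API's real dispatch logic (for instance, gpt-3.5-turbo-instruct is served by v1/completions despite its name).

```python
# Chat-tuned models are served by /v1/chat/completions; legacy text
# models by /v1/completions. Hypothetical helper, heuristic only.
COMPLETIONS_ONLY = ("gpt-3.5-turbo-instruct",)
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def endpoint_for(model: str) -> str:
    """Guess the correct REST endpoint for a model name."""
    if model.startswith(COMPLETIONS_ONLY):
        return "/v1/completions"
    if model.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"

# The models named in the error all need the chat endpoint:
for m in ("gpt-3.5-turbo", "gpt-4", "gpt-4-1106-preview"):
    print(m, "->", endpoint_for(m))
```

If the client library is old enough that it only knows the completions route, upgrading it is usually the actual fix.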
@avi7278 1 year ago
In testing, it's all very underwhelming so far. Lots of weird limitations that you wouldn't expect. I asked it to port a component from Material UI to a different component library, gave it the code and all the relevant docs, and it kept completely changing the output on subsequent iterations. The 128k context seems to lose track of some of the data. Very odd behavior. You'd expect that with a previous iteration in context, asking it to make a single change wouldn't alter other parts of the component along with the requested change, but it'll just randomly remove some properties or methods, or change property names.
@clray123 1 year ago
Here you see the core feature of AI - you can't trust it to do work, and you also can't trust it to find which parts of its work you can't trust. But it's still entertaining.
@samwitteveenai 1 year ago
The 128k context certainly suffers from "Lost in the Middle", in my testing.
@paulocoronado2376 1 year ago
Do you think OpenAI just killed LangChain agents? I know LangChain covers a wide variety of models, but I tested the OpenAI Assistant and found it better than any other agent!
@samwitteveenai 1 year ago
I don't think they have killed agents, but they certainly have changed how they will work. I will try to talk about this in one of the upcoming vids.
@ZYLON22 1 year ago
Again thousands of small startups getting destroyed by their new launch of GPTs and the store. Really asking myself what we should develop now when they just copy & paste every good idea and integrate it into OpenAI??
@davidw8668 1 year ago
Solve specific user problems instead of doing the obvious high level stuff. They can't break into custom stuff easily.
@ZYLON22 1 year ago
@davidw8668 You're right, but I'm getting the feeling that they also want to go after specific problems by letting customers solve them without programming. Their API is a trap: they are analysing all the data and know exactly what we are building and what works well.
@clray123 1 year ago
The reason not to use OpenAI is freedom. If you don't care about freedom (like most people), you use OpenAI and Microsoft products. If freedom is worth something to you, then it is OpenAI/Microsoft who should be paying you to use their spying crap.
@samwitteveenai 1 year ago
I totally agree with @davidw8668: verticals with unique function calls and UI are the way to compete with this. Any horizontal play will attract the big guys. I saw recently that 2 of the big players have teams just sitting there, ready to clone whatever becomes a killer app from LLMs.
@henkhbit5748 1 year ago
Thanks for the update. I am not really impressed. With open-source models you can do all of this. Open source was the first with 128k tokens, and even 192k exists. That is why ClosedAI is lowering its prices. And who trusts ClosedAI with your internal documents for doing RAG? For privacy I go for open source.