This is brilliant. It’s tough to beat GPT-4. But if you make smaller, specialized models, I think it would be possible to beat GPT-4 on certain benchmarks. That’s what I hope the tech industry starts doing, especially since 90% of the time I use ChatGPT, it’s to write computer code.
@khangvutien2538 11 months ago
1. I love the relaxed but precise style of this presentation. 2. What we are learning here reminds me of my engineering thesis on PCA in 1975: to get significant eigenvectors, it is better to filter the data for meaningful samples 😅 or else there’s plenty of noise that wastes computation time. 3. Question: in view of the NYT’s lawsuit against Microsoft and OpenAI, how can you make sure that the synthetic textbook-quality content generated automatically by GPT-4 to train Phi-2 doesn’t contain litigious sentences?
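The commenter’s point about filtering before PCA can be illustrated with a small sketch. The data, the junk-sample fraction, and the norm-quantile filter below are all made up for the example; the idea is simply that dropping low-information samples before computing the covariance keeps the principal eigenvector close to the true signal direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 "meaningful" samples lying near a 1-D subspace spanned by [3, 1],
# plus 50 pure-noise samples (a made-up corruption model).
signal = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0]])
meaningful = signal + rng.normal(scale=0.1, size=(200, 2))
junk = rng.normal(scale=5.0, size=(50, 2))
data = np.vstack([meaningful, junk])

def top_eigvec(x):
    # Principal eigenvector of the sample covariance (first PCA direction).
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    return vecs[:, np.argmax(vals)]

# Hypothetical "meaningful sample" filter: keep the 80% of samples
# closest to the centroid, since the junk samples tend to be outliers.
norms = np.linalg.norm(data - data.mean(axis=0), axis=1)
filtered = data[norms < np.quantile(norms, 0.8)]

v_filtered = top_eigvec(filtered)
```

After filtering, `v_filtered` should align closely with the true direction `[3, 1] / sqrt(10)`; the same computation on the raw data spends effort on variance contributed by the noise samples, which was the commenter’s point about wasted computation.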
@que_93 a year ago
This is brilliant work and gives us so much to think about and build upon. I guess "size doesn't always matter" — pun intended. And I am glad that you have made Phi open-source. Thank you!
@ShubhamSinghYoutube a year ago
How do you ensure that the textbook-quality scoring by GPT-4 and GPT-3.5 is reliable?
@QuantumXdeveloper a year ago
SLMs are good for individual use, but why aren't you building a GPT-4-like LLM? Google just launched its GPT-4 killer, Gemini AI. I hope Microsoft will also come up with a multimodal language model.