I have spent the last two and a half days having o1 figure out how to implement an advanced ML coding project, combining 3 different types of nets scaled to my hardware, feeding each piece into 4o for it to implement, just round and round. It's almost finished and it's working: GPT-4o can write the code well enough to progress things, and when it falls down I hand it off to o1, which fixes everything in a minute. Working transformer and CNNs, training on audio and visual data, and I haven't written a line. That said, I am burnt out after more than 30 solid hours of focused work, but I've never had a code base this pretty 😆 well, maybe once or twice, but not at this scale! It really benefits from you having exactly what you want it to do nailed down; the less back and forth the better.
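(For anyone wondering what a setup like this might look like: the commenter doesn't share any code, so below is only a minimal PyTorch sketch of one plausible arrangement, with all class names, shapes, and hyperparameters made up. The idea is CNN encoders turning audio spectrograms and image frames into token sequences that a small transformer encoder fuses for classification.)

```python
# Hypothetical sketch, not the commenter's actual code: CNN encoders for audio
# (spectrograms) and images, fused by a transformer encoder. All names, input
# shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class CNNEncoder(nn.Module):
    """Small conv stack that turns a 2D input (spectrogram or image)
    into a sequence of feature vectors for the transformer."""
    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.conv(x)                     # (B, embed_dim, H', W')
        return feats.flatten(2).transpose(1, 2)  # (B, H'*W', embed_dim)


class AudioVisualModel(nn.Module):
    """Concatenate audio and visual token sequences, run a transformer
    encoder over them, and classify from the pooled representation."""
    def __init__(self, embed_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.audio_enc = CNNEncoder(in_channels=1, embed_dim=embed_dim)   # mono spectrogram
        self.visual_enc = CNNEncoder(in_channels=3, embed_dim=embed_dim)  # RGB frame
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, audio: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        tokens = torch.cat([self.audio_enc(audio), self.visual_enc(image)], dim=1)
        pooled = self.transformer(tokens).mean(dim=1)  # mean-pool over all tokens
        return self.head(pooled)


model = AudioVisualModel()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```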
@aiforculture 2 months ago
I really like the concept you outlined here about the allocation economy. Have you read the 'New Future of Work' report from Microsoft? It's a bit dense, but you might enjoy the number of conceptual rabbit holes it gives you to explore and extrapolate with these new tools. A section I found interesting there (which relates to the concept of the allocation economy) is about how curatorial skills might begin to become much more important than content creation skills. For example, a job comes to consist much less of writing 200 words of technical instructions for a product, and much more of reviewing 20 different top generations of those instructions and 'curating' the knowledge, essentially copy/pasting and amending to create a more optimal version.
@Nick-un2wv 1 month ago
Really good video
@DrBrianKeating 2 months ago
🎉
@etfacetimehome 2 months ago
Dan, I'd be curious to know when you think the dust actually settles in terms of advancement in these models. Sometimes I feel like we are just on the road to AGI, and once that happens, learning the models is pointless anyway. I don't mean to sound nihilistic, I enjoy learning and using the models currently; I'd just be curious to hear your thoughts on the end game.
@Nick-un2wv 1 month ago
What do you think we should prioritize instead of learning about models?
@etfacetimehome 1 month ago
@Nick-un2wv My question is more about this seemingly exponential curve we're on with the advancement of these models. OpenAI's goal is to have an AGI that replaces intellectual labor. Idk, the future is just really uncertain and it's hard to figure out what skill is actually going to be useful. You can spend a lot of time learning something about a model, and then upon the release of a new model that thing you learned is essentially pointless.
@glennmontague4310 2 months ago
'Chain of thought has been around for a long time' Yeah--like 17 months?
@jason_v12345 1 month ago
As an AI concept, yeah. As a general concept, it was referred to for centuries as a "train of thought." I don't know if "chain" of thought is an intentional play on words or a malapropism.