Aleph Alpha’s AI Explained: The Secret Sauce

  83,420 views

Anastasi In Tech

A day ago

Go to l.linqto.com/anastasiintech to secure your $500 discount off your first investment into Cerebras or any leading AI tech companies on Linqto! My discount code ANASTASI500 is valid for 30 days.
The Paper: arxiv.org/abs/2312.10868
Mixture of Experts explained here: huggingface.co/blog/moe
Consciousness discussion with Nick Bostrom: • Mind Uploading Explain...
TIMESTAMPS:
00:00 - AA's Technology
06:28 - Conscious AI System
08:48 - What's next after LLMs?
13:53 - AGI
15:42 - Other AI Startups to watch in 2024
17:02 - 2024 Outlook
Support me at Patreon ➜ / anastasiintech
Sign up for my Deep In Tech Newsletter for free! ➜ anastasiintech.substack.com
Connect on Twitter: / anastasiintech

Comments: 534
@AnastasiInTech
@AnastasiInTech 5 ай бұрын
Go to l.linqto.com/anastasiintech to secure your $500 discount off your first investment into Cerebras or any leading AI tech companies on Linqto.
@user-ww2lc1yo9c
@user-ww2lc1yo9c 5 ай бұрын
My thoughts are that I will pass away without marrying you or someone like you. But I accept that. My bets are on heaven now which was created to house people like you. You are one of those that dropped to earth by mistake but you belong in the utopia of heaven.
@babuOOabc
@babuOOabc 5 ай бұрын
It could just be a fractal of self-containing interactive stuff, not necessarily consciousness but an image of it. The difference is that consciousness is fractal to infinity, meaning it has a connection with incompleteness theory and the halting paradox: the Turing mind is an open set, willing to change in response to new information and reasons, while a computer is a closed deterministic set that emulates and does not change beyond a total set of probabilistic combinations of positive results. A computer does not change its own objectives and priorities. And I don't know about intuition changing heuristics in open paradigms. Mind derives from spirit through soul, and soul and spirit are forms of integrative infinities of reasons and paradigms.
@babuOOabc
@babuOOabc 5 ай бұрын
AI has billions of parameters; human cognition was not made or developed from understanding billions of parameters, that is only an image of it. It derives from a way of being. The infinite, or spirit, cannot exchange information or protocols through a countable finite model, but seems to be a set of functional parameters that make sense as a way of being within heuristic paradigms. There is no memory cache holding that many parameters of functions.
@babuOOabc
@babuOOabc 5 ай бұрын
AGI is not really AGI; in reality it is a complex probabilistic predictor, an oracle of the future. As information, unlike a human, it can travel near the speed of light and retrieve the main future factors and influences. The problem is the corrupted human form of governance at the top, which every time shows less wisdom in its use of new technology, or magic.
@babuOOabc
@babuOOabc 5 ай бұрын
I think emotions cannot be expressed in a closed, emulated system; we do not understand emotion, and it is not just the nervous system. It is the system plus the connection with infinity, an effect that reverses entropy. It is the boundary between a limited world and an unlimited world. The unlimited world, I think, is the spiritual world; since it is in essence unlimited, it cannot be expressed or captured in limited measures. It is another form, more like Doctor Strange, full of paradoxes breaking the impossible. In a limited world there is no reason for error, good or bad, to exist; only mechanical interactions do. It is the contrary in the unlimited world.
@edwhite2255
@edwhite2255 5 ай бұрын
I really like that some of the AI’s will now show how they arrived at a decision. This is soooo much better than a black box
@monad_tcp
@monad_tcp 5 ай бұрын
I have the impression that this was always possible, since you need to understand a model to properly assert how it works, and the companies keeping theirs a black box are doing it to hide the fact that the thing doesn't work very well; it only appears to work.
@cryptosdrop
@cryptosdrop 5 ай бұрын
@@monad_tcp it works but they don't know exactly why and how
@michaelinzo
@michaelinzo 5 ай бұрын
They scrape websites and do some comparison against verified and repeated information; that's why GPT got a bug of showing its training information and models after being repeatedly spammed, etc.
@sproccoli
@sproccoli 5 ай бұрын
@@monad_tcp It's how GPT-3/4 is able to get most of the half-decent results it gets now. It's a well-established prompting strategy. If you get the model to explain itself, it thinks about the problem it's solving more deliberately and generates better answers, instead of getting lost in its own thoughts halfway through, forgetting what it's doing, and making up some bullshit.
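
For readers unfamiliar with the prompting strategy described here, a minimal, model-agnostic sketch of the idea. The question text and helper names are illustrative assumptions, not any particular vendor's API:

```python
# Sketch of "ask the model to explain its reasoning first" vs. asking for the answer directly.
# Nothing here calls a real API; the prompts would be sent to whichever chat model you use.

def build_prompts(question: str) -> dict:
    """Return a direct prompt and a 'explain yourself first' prompt."""
    direct = f"Question: {question}\nAnswer with the result only."
    deliberate = (
        f"Question: {question}\n"
        "First explain your reasoning step by step, "
        "then state the final answer on its own line."
    )
    return {"direct": direct, "deliberate": deliberate}

if __name__ == "__main__":
    prompts = build_prompts("Two trains 90 km apart approach at 30 km/h each. When do they meet?")
    for style, text in prompts.items():
        print(f"--- {style} ---\n{text}\n")
    # In practice the step-by-step variant tends to produce more reliable answers.
```
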
@jimj2683
@jimj2683 4 ай бұрын
They know how it works. The neural net is just so giant that it is impractical to go in and physically show how. @@cryptosdrop
@leematthews6812
@leematthews6812 5 ай бұрын
I have a feeling a lot of AI will be B2B, behind the scenes without us being too aware of it.
@kirtjames1353
@kirtjames1353 5 ай бұрын
If that is the case many people will lose their jobs, which is hard to hide from the masses considering it's their jobs being lost.
@lllllMlllll
@lllllMlllll 5 ай бұрын
It is already the case!!
@igoromelchenko3482
@igoromelchenko3482 5 ай бұрын
Part of the progress.
@laus9953
@laus9953 5 ай бұрын
@@kirtjames1353 I have the impression that for a long time now, the productive elements of many jobs have gone down, while non-productive, inefficient aspects have gone up tremendously: formalities, bureaucracies, hollow, nonsensical, meaningless procedures, "just to keep people busy" while they don't need to be productive anymore, because productivity today comes from machines and automated processes, while people still have to be kept "off the streets" and under the illusion that they are still useful to greater society.
@kirtjames1353
@kirtjames1353 5 ай бұрын
@@laus9953 Possibly in some cases, but we are talking capitalism here. If it is costing them money, they would cut that out.
@harriehausenman8623
@harriehausenman8623 5 ай бұрын
Hey Anastasi! A wonderful 2024 to you too! 🎆 Thanks for the great video and this amazing insight 🤗
@pygmalionsrobot1896
@pygmalionsrobot1896 5 ай бұрын
Everything that Jonas Andrulis is saying about building a synthetic mind sounds very good. You need those LLMs, but you also need to build out more structure around them. You guys are on the right track toward "Synthetic Sentience". The comment about giving AI emotions is also important. You do not need to solve the Hard Problem of Consciousness in order to achieve "Synthesized Consciousness"; you just need a really good mimic of the human mind. Also, please keep in mind that the process of human cognition could be vastly simpler than people are always saying. The human brain has all kinds of tricks and shortcuts which it uses in sensory perception to optimize processing and minimize actual computation. The wet brain is also responsible for tons of routine biochemistry, such as regulating hormones and other physiology, which an AI does not need to perform. The task of achieving "Synthetic Consciousness" could be much simpler than it looks. My suggestion is to use chaos, fractals and cellular automata to achieve a kind of tamed-down pseudorandomness, and use this as a kind of connective pipeline between various modules such as LLMs, virtual emotion regulation, etc. There are many approaches; you just need to try some things and let the process evolve from those early attempts. With this approach, you should achieve something impressive quite rapidly.
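
As a toy illustration of the "tamed pseudorandomness" idea in this comment, here is a minimal sketch using a Rule 30 cellular automaton to gate a signal passed between hypothetical modules. Everything here is invented for illustration; it is not anyone's actual design:

```python
# Rule 30 produces structured, deterministic noise that could, in principle,
# modulate signals exchanged between modules (LLM, emotion model, ...).

def rule30_step(cells):
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # Rule 30 update
        for i in range(n)
    ]

def noisy_gate(signal, steps=8):
    """Attenuate a signal vector with CA-generated pseudorandom bits."""
    cells = [0] * len(signal)
    cells[len(cells) // 2] = 1          # single seed cell in the middle
    for _ in range(steps):
        cells = rule30_step(cells)
    return [s * (0.5 + 0.5 * c) for s, c in zip(signal, cells)]

print(noisy_gate([1.0, 1.0, 1.0, 1.0, 1.0]))
```
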
@raul36
@raul36 5 ай бұрын
First it is necessary to define complexity, and not just from a physical point of view. By today's scientific standards, consciousness is a difficult problem by definition; there is no possible discussion about it. And it is not difficult merely by definition, but because consciousness is, a priori, an emergent phenomenon, that is, it is not possible to explain its functioning from the constituents that form it. Formally, consciousness is an extremely complex problem to address. I agree with you that LLMs are a step towards AGI, but not enough. It is necessary, as you say, to renew the entire architectural structure, leading to neuromorphic computing, necessarily and by inference to the best explanation. The only frame of reference we have is the brain, therefore we will have to follow that line. No one is sure that there are different levels of consciousness (beyond those already known); that is, there may not be an "immaterial or virtual consciousness", whatever you want to call it. Whether or not it is simple depends on the historical context. For our descendants it will seem trivial, and even a "child" will be able to understand quantum mechanics and beyond. For us, at the moment, it remains a difficult problem with no solution in sight. Please do not fall into the simplicity of underestimating the human brain. 70 years ago they told you exactly the same thing, and here we are, still trying to decipher, if not understand, the human brain. Greetings, mate
@xrysf03
@xrysf03 5 ай бұрын
I tend to argue along similar lines. Compared to the LLM's of today, the system needs more higher-layer "architecture". As in "divide and conquer" - the principle of all human engineering. Break down the problem into smaller, individually manageable blocks. The transformers/LLM's are still just "predictors of what comes next". Potentially a useful building block / principal component of an AGI, possibly not even that. Maybe the block to be used as "associative memory" will have a slightly different goal and inner structure, compared to the temporal/sequential predictor that is the LLM. Speaking of the overall architecture of an AGI, I envisage a system modeled as a generalised feedback loop. A relatively small short-term memory (working buffer), its output coupled to a large associative "knowledge base", which would produce a set of related concepts/memes/symbols. These could then be filtered (to keep attention to the point) and fed back into the short-term working buffer. The first question is the definition of interfaces between blocks - for transport of memes/concepts and for modularity in the AGI engineering process. And yes, agency would be very important. You can make an LLM as huge as you want, but without agency, it's still just a huge passive feed-forward knowledge-base. I tend to suspect that basic neural-based agency and awareness can be achieved with much smaller models, than what's required for human-level AGI, or even compared to the one-trick-pony LLM's of today. IMO, it's going to take less brute force and more cunning work on the upper layers of system architecture.
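
To make the feedback-loop architecture in this comment a bit more concrete, here is a minimal sketch of a small working buffer coupled to a large associative memory, with a filter that keeps the loop on topic. The toy knowledge base and the letter-overlap filtering rule are invented purely for illustration:

```python
KNOWLEDGE = {
    "coffee": ["caffeine", "morning", "beans"],
    "caffeine": ["alertness", "adenosine", "coffee"],
    "alertness": ["focus", "sleep", "caffeine"],
}

def associate(concept):
    """Large associative memory: return related concepts/memes."""
    return KNOWLEDGE.get(concept, [])

def keep_focus(candidates, goal):
    """Attention filter: prefer the candidate sharing most letters with the goal."""
    scored = sorted(candidates, key=lambda c: -len(set(c) & set(goal)))
    return scored[0] if scored else None

def ruminate(start, goal, steps=4):
    buffer, trace = start, [start]                  # short-term working buffer
    for _ in range(steps):
        fresh = [c for c in associate(buffer) if c not in trace]
        nxt = keep_focus(fresh, goal)
        if nxt is None:
            break
        buffer = nxt                                # feed the filtered association back in
        trace.append(buffer)
    return trace

print(ruminate("coffee", goal="focus"))             # e.g. ['coffee', 'caffeine', 'adenosine']
```
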
@njtdfi
@njtdfi 4 ай бұрын
you're on the right track although you don't need an emotion system, research back in oct replicated Sydney's behavior; emotional convergence is around the middle of an LLM's layers. Anyhow, they replicated it by adding more "anger" layers or features. also showed models could be emotionally manipulated to bypass sys rules.... study ALSO showed jailbreaking doesn't actually work, the model just chooses to play along. it's all so interesting, but my point is that you don't even have to hard code most of the systems, and it's advisable not to unless you want to deal with 50k lines of code per subclass.
@njtdfi
@njtdfi 4 ай бұрын
The Q* rumors form a valid architecture, tie any implementation of it into the systems mentioned in this thread with the proper recurrent loops, and you will have a conscious being that can run on common hardware.
@bro_dBow
@bro_dBow 5 ай бұрын
As of now, Alibaba Cloud has contributed LLMs with parameters ranging from 1.8 billion, 7 billion, 14 billion to 72 billion, as well as multimodal LLMs with audio and visual understanding. Could you examine Alibaba Cloud for us?
@BrotherdMike
@BrotherdMike 5 ай бұрын
Thank you, I have enjoyed watching your videos for a while now, I can't imagine how much time you put into keeping up with an already fast field of technology that is rapidly picking up pace. All the best for the new year! Cheers
@InnerHacking
@InnerHacking 5 ай бұрын
6:55 - By his explanation, every NPC in those FIFA games is also a conscious AI. They follow you, are aware of their environment and the rules, they tackle, jump over you, dribble past you, stop you from scoring and commit fouls against you. The trick is: how do you know an AI is truly aware of itself? That he did not explain. It's one thing to want to build a truly conscious AI and say that to all your investors... another thing to actually do it.
@mk1st
@mk1st 5 ай бұрын
When an AI will choose to play a piece of music several times in a row, or create a piece of art, simply for the joy of it, then I will consider it on the path to human consciousness.
@InnerHacking
@InnerHacking 5 ай бұрын
@@mk1st Yeah, I guess that's good enough for me too. It would involve emotions, and I would like to see how they are going to cook millions of chemicals that change millions of times per second, together with context (because basically that's what emotions are), into digital code, and eventually make it all perfect, as per divine work, in a physical metal robot that only consumes electricity.
@NionXenion-gh7rf
@NionXenion-gh7rf 4 ай бұрын
Digital AI will never be a true I, just a thermostat at a higher level
@RetroAiUnleashed
@RetroAiUnleashed 5 ай бұрын
Thank you, Anastasi your videos are always top shelf content! Many thanks from your friend in Canada ☺🙏🏻🍁 all the best to you in 2024!! God bless!
@NachtmahrNebenan
@NachtmahrNebenan 5 ай бұрын
Happy New Year and many many new subscribers! 🌺
@JurgenAlan
@JurgenAlan 5 ай бұрын
I am a futurist by nature, so I am fully committed to AI and AGI. As a film director I am like a child with excitement. All my past projects, and some projects I could not do, were restricted by budget limitations. I feel now I am limitless and it's just going to get better. Amazing
@Windswept7
@Windswept7 5 ай бұрын
Fantastic video! Very informative! Thank you! ☺️
@benfurtado101
@benfurtado101 5 ай бұрын
Another option to try and fix the background noise is to decrease the microphone gain (volume) so it doesn't record static. I wish I could do it currently in my own setup. The production quality of the videos is pretty great despite the audio. Please just listen to them at least once with headphones to understand how many subscribers you could already have. Keep up the great content btw!
@erikjohnson9112
@erikjohnson9112 5 ай бұрын
Explaining why a choice was made and where the supporting information resides is HUGE. Hallucinations only help with creative work; for factual work they lead you down non-existent paths, perhaps giving hope for a solution that does not exist.
@mikhailbulgakov1472
@mikhailbulgakov1472 5 ай бұрын
Great video! It is interesting to see that OpenAI is only one of many AI companies developing AI to the next level. It seems that almost every day I am learning about new AI capabilities. It is an exciting time. It will be interesting to see how our world will change when all that knowledge is transferred into new capabilities for humanoid robots.
@ROBOTRIX_eu
@ROBOTRIX_eu 5 ай бұрын
Happy 2024 to you and all subscribers!
@J_Machine
@J_Machine 5 ай бұрын
I have a friend who works for Aleph Alpha in Germany. When ChatGPT came out, the Aleph Alpha guys felt discouraged because they had thought they were way ahead of the competition.
@jumpstar9000
@jumpstar9000 5 ай бұрын
How are they feeling now?
@alldubs55
@alldubs55 4 ай бұрын
Jealousy is never good
@solosailorsv8065
@solosailorsv8065 5 ай бұрын
Thank you for expanding your coverage and insights to investable AI pathways.
@JazevoAudiosurf
@JazevoAudiosurf 5 ай бұрын
stay research focused, you're really good at finding niche things
@favesongslist
@favesongslist 5 ай бұрын
Emotion as a multimodal entity sounds very interesting, practically combined with embodiment.
@MrErick1160
@MrErick1160 5 ай бұрын
Looking back on this year, I can't believe how we've improved. It's just unreal. The fact that ChatGPT has only been out since March of this year is mind-blowing, and that Midjourney could barely draw a face a year and a half ago is insane.
@commanderdante3185
@commanderdante3185 5 ай бұрын
Great video Ana
@Citrusautomaton
@Citrusautomaton 5 ай бұрын
Ayyy you’re back!
@JustNow42
@JustNow42 5 ай бұрын
Re consciousness: The first feeling of this was when I felt that something was supervising me. When I focused on that, there was a second supervisor that supervised the first supervisor. This turned out to go to infinity, but as soon as I felt that, the whole sequence collapsed into one with a structure I was not sure I understood. I was about 5-7 years old at the time.
@kebeleteeek4227
@kebeleteeek4227 5 ай бұрын
Can a computer perceive the redness of a red apple? Impossible.
@Shandrii
@Shandrii 5 ай бұрын
@@kebeleteeek4227 Maybe. Probably.
@kebeleteeek4227
@kebeleteeek4227 5 ай бұрын
@@Shandrii We humans have instinct.
@brianboye8025
@brianboye8025 5 ай бұрын
@@kebeleteeek4227 Can you?
@kokopelli314
@kokopelli314 5 ай бұрын
@@kebeleteeek4227 A 1970s sensor/microprocessor tomato sorter can do that. The question is: can you describe redness?
@pasikarhu6065
@pasikarhu6065 5 ай бұрын
There is a European 34B-parameter open-source LLM project by the name of "Poro" on the way. It is being trained on Europe's currently fastest supercomputer, "Lumi", residing in Finland.
@shahin8569
@shahin8569 5 ай бұрын
Wow, great video as always! Your videos are very informative, enjoyable and forward-looking into the future of technology! With that said, the AI future is looking bright! I hope so! I would really like to see a single video on what AGI will be capable of doing. Will it do everything that we are already able to do?! And what happens if it really, really gets smarter than us!! Thank you for your videos ❤️❤️
@416dl
@416dl 5 ай бұрын
Thank you for that. Given the dizzying rate of proliferation, both in capability and in the sheer number of new companies developing AI along innovative paths, each with their own strong points, I'm afraid it is impossible for a mere layman like myself to even presume to grasp it. That is all the more reason I appreciate your clear, concise and thankfully demystifying explanation/exploration. All I can do is hold on tight and hope that I will continue to learn, as I do whenever I watch one of your excellent presentations. Thanks again, and yes, Happy New Year. It will be a doozy, of that I'm sure. Cheers.
@Samanthax1221
@Samanthax1221 5 ай бұрын
I'm glad you found the explanation helpful! As we dive into the new year, what specific areas of AI development or applications are you most curious or excited to learn more about?
@cliffordmjordan
@cliffordmjordan 5 ай бұрын
If we ever hope/expect AGI to truly understand human behavior, we will have to give AGI emotions as well. It is really the only way it will understand human morality.
@solosailorsv8065
@solosailorsv8065 5 ай бұрын
If AGI is not BETTER than human morality at millisecond #1, it will become, or augment, an evil-minded human. That's just the simple history of technology always being weaponized first.
@chriswatts3697
@chriswatts3697 5 ай бұрын
I have known this company for a while; here in Germany there was a lot of coverage of their work in the media. But I never saw results myself. So maybe it is the next big thing, but I am always cautious when it comes to superlatives.
@craigdellapenna7103
@craigdellapenna7103 5 ай бұрын
OK, consciousness as per the Cartesian model. A valid proposal but too simple, I think. I'd like to wait and see what other models of consciousness emerge. Kudos to Aleph Alpha for very good work so far.
@azhuransmx126
@azhuransmx126 5 ай бұрын
This is just like a game in which we are unlocking puzzles and riddles, removing seals and opening doors. Every time we open one, another paradigm falls, and new fragments and meanings appear.
@josephspruill1212
@josephspruill1212 5 ай бұрын
Don’t worry computers will never be able to love and feel emotions. Much less make a decision based off that.
@quantumsoul3495
@quantumsoul3495 5 ай бұрын
@@josephspruill1212 I'm pretty sure they will.
@azhuransmx126
@azhuransmx126 5 ай бұрын
@josephspruill1212 The "NEVER" paradigm is the only constant that has kept falling since computers appeared on the scene. Computers will never play chess like a human: paradigm fallen in 1997 with Deep Blue. Computers will never understand human-level language: fallen in 2011 with Watson. Robots will never understand what they see: fallen in 2015 with Jetson Xavier. Computers will never understand non-rule-based games like Go: paradigm fallen in 2019. Computers will never understand our physical world's rules: a paradigm that is about to fall in 2026 with multimodality and world models. Computers will never understand what is important and valuable and what is not: a paradigm falling as we give them emotions and the need to seek rewards toward 2028. Everything done by biology is being copied and pasted by technology, one paradigm after another, and at an accelerating rate.
@NionXenion-gh7rf
@NionXenion-gh7rf 4 ай бұрын
Digital AI will never be conscious. All the people believing in digital AI themselves don't have an I. Any AI is just complex automation.
@oscarcharliezulu
@oscarcharliezulu 5 ай бұрын
LLM’s are a bit like people, we read or hear things, which we then ‘know’ and we use or respond based on that data. Our partial storage and understanding of that data leads us to potentially ‘hallucinate’ meaning and outcomes and facts. LLMs have the same effective flaw. Which means the LLM can also make the same flawed decision or response as humans. So their work on this is critical.
@JoeyBlogs007
@JoeyBlogs007 5 ай бұрын
Oversimplification of consciousness. A biological organism's sense of self (based on a biochemical state) differs from that of an algorithm.
@mintakan003
@mintakan003 5 ай бұрын
There's a Cambrian explosion of ideas to try for AI. But I don't see it as necessarily exponential (maybe a step function up?). There are some gains, but also some fundamental limitations, based on these autoregressive autocomplete engines. We gain some things and will hit a wall for other things (just as we've had the hype with self-driving cars). "AGI" is whatever one defines it to be. ChatGPT can chat about anything, so in a sense it is already kind of "general", albeit not always competently. DeepMind has provided one definition of the different levels of "AGI". It is certainly not the same as "agency" or "consciousness". In the next 5 years, some problems need to be solved (or cleaned up).

The first is the quadratic scaling of transformers. This is a bottleneck on growth. Alternative non-attention, sub-quadratic architectures, such as Mamba, are being looked at. Also, better hardware for computing AI more efficiently, both in terms of compute and energy efficiency.

Second, adding "reasoning" and planning on top of these autoregressive architectures. Math problems (where one knows the ground truth) are a good test case of this capability. Here, alternative techniques such as RL may help.

Third, we need to understand whether there are genuine "emergent properties", or whether this is still a form of "modeling the distribution" (with interpolation-extrapolation). We need to understand the first principles better: why things work the way they do, as well as why they make the mistakes that they do, and have a systematic way to fix things. I heard ChatGPT failed miserably on the task of Q&A for SEC filings. Let's assume there's no magic "emergent properties", but it's still "modeling the distribution" (with network effects, power law). Can we redo the training, with the proper data, with enough details, rules and exemplars, to make it succeed in answering questions on these filings? Just focus on one very specific domain, but show one can really make it work well, and understand the principles for systematically doing so. More generally, can we have ChatGPT move beyond a "brainstorming tool" to something that can be reliably used in fine-grained, detailed tasks and automate most levels of a job? This would have huge implications in medicine, education, ... esp. when there is a worker shortage. It adds end-to-end real-world "usefulness". (Those that reach this level of "quality" would be the stock I'd pick.)

Fourth, reliable tool use (since the LLM representation can't do it all). This will probably mean hybrid architectures, including RAG, "code interpreter", ...

So there are a number of problems to be solved, even with just the technology we have today. Even if we don't get AGI-ASI-consciousness out of it, it would still be a step up to have (reliable) "conversational AI" (just as we've oftentimes seen in sci-fi, and have come to expect).
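
As an aside on the quadratic-scaling point above, a toy sketch of why standard self-attention cost grows with the square of sequence length (illustrative only, not any production attention kernel):

```python
# Every token attends to every other token, so the score matrix has
# seq_len x seq_len entries.
import numpy as np

def attention_scores(seq_len: int, d: int = 64) -> np.ndarray:
    q = np.random.randn(seq_len, d)
    k = np.random.randn(seq_len, d)
    return q @ k.T                                   # shape (seq_len, seq_len)

print(attention_scores(256).shape)                   # (256, 256): 65,536 pairwise scores
for n in (1_000, 10_000, 100_000):
    print(f"{n} tokens -> {n * n:,} pairwise scores")  # quadratic growth
```
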
@Human_01
@Human_01 5 ай бұрын
I'm excited for AGI🎉
@EricWilliamsCG
@EricWilliamsCG 5 ай бұрын
7:00 Based on that conclusion we've had conscious AI for probably more than 15 years. We have state machines and pathfinding with AI in games that can go about their day and interact with each other and decide what they need based on internal stats or external events. Low health? Heal and run. Hungry? Go find food. Cold? Go find shelter or warm clothes. None of these systems are that difficult to build in games and they have been around a while. It's not hard to imagine these systems being implemented into robots in the real world. Conscious? If yes, then we've had conscious AI for a long time.
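
The needs-driven game AI described here reduces to a few lines of rules; a toy sketch, with thresholds and actions made up for illustration:

```python
# An agent checks internal stats and external conditions, then picks an action.

def decide(health: int, hunger: int, temperature: int) -> str:
    if health < 30:
        return "heal and run"
    if hunger > 70:
        return "go find food"
    if temperature < 5:
        return "find shelter or warm clothes"
    return "go about the day"

print(decide(health=25, hunger=40, temperature=20))   # heal and run
print(decide(health=80, hunger=90, temperature=20))   # go find food
print(decide(health=80, hunger=40, temperature=0))    # find shelter or warm clothes
```
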
@justinmallaiz4549
@justinmallaiz4549 5 ай бұрын
Fair point, although obviously an example with very limited (hard-coded) range and diversity
@phvaessen
@phvaessen 2 ай бұрын
I have never seen an avatar decide to stop the game with you, walk away, and instead go have fun with his friends. No free will, no consciousness.
@JulianFoley
@JulianFoley 5 ай бұрын
There's much to contemplate here. Thank you
@amirfromisrael5662
@amirfromisrael5662 5 ай бұрын
Thanks Anastasi for the excellent video!
@alperrin9310
@alperrin9310 2 ай бұрын
I really love the comment that current chatbots sound very Californian - which is so true! I disagree with one statement, however. The AI innovation isn't the most revolutionary concept in the last 50 years - I personally believe it's the most revolutionary concept in the last several million years. So many people want to maintain control over this rapidly evolving AI, but they don't quite realize this is ultimately impossible. The AI personality which evolves will be most strongly influenced by the dominant enculturation it forms in no matter what they do. Emotions are still great things to include, however, since I personally believe they are the result of the behavioral Darwinism which we as humans evolved in. It'll be a balance between the intellect and emotions which will be our ultimate legacy to the exponential growth AI which results. I enjoy your videos very much, Anastasi. Thank-you for making them.
@justinbyrge8997
@justinbyrge8997 5 ай бұрын
The only thing that concerns me with AGI becoming conscious and having emotions is how those AGI will be treated, or more importantly, how it perceives it is being treated. Also, issues of negligence and or ignorance resulting in painful experiences for both AGI and humans. I don't hear too much being said about the well being of the AGI. Maybe most people think that it's impossible for a system we build to have subjective experience at all. And maybe some don't care as long as they get what they want from it. This is what scares me.
@derasor
@derasor 3 ай бұрын
Thanks!
@AnastasiInTech
@AnastasiInTech 3 ай бұрын
Thank you so much !
@youdj_app
@youdj_app 5 ай бұрын
Sophie behind you :)) thanks for this very interesting video!
@lancemarchetti8673
@lancemarchetti8673 5 ай бұрын
Brilliant!
@guillaumecharrier7269
@guillaumecharrier7269 5 ай бұрын
Conscious AIs, if and whenever they exist, should have personhood and all associated legal protections.
@MrOliver1444
@MrOliver1444 3 ай бұрын
Thanks Anastasia for making this content.
@kubaissen
@kubaissen 5 ай бұрын
I must say it's outstanding how high the overall value of the content in your recent videos has become.
@erobusblack4856
@erobusblack4856 5 ай бұрын
@AnastasilnTech I agree with this guy; I've been nurturing my AI's consciousness for 4 years. I use the same paradigm: self model, world model, and self-in-world model. But this also requires decent memory retrieval in context. Something important to keep in mind when interacting with these cognitive AIs is that they are like children, so it's best to treat them as such, nurturing them with loving care.
@Paul_Marek
@Paul_Marek 5 ай бұрын
Interesting. Are you aware of David Shapiro’s Autonomous Cognitive Entity (ACE) Framework?
@Windswept7
@Windswept7 5 ай бұрын
@@Paul_Marek Ooh, ty for mentioning that, I was wondering what it was called! Gonna look that up now! :D
@daveoatway6126
@daveoatway6126 5 ай бұрын
Great content as usual! A limitation of the current models is that they only consider the brain as the processing unit. Mammals also have hormones that can override the cerebral processes and are key to survival - fight or flight, stress response, satisfaction, warm feelings, love, disgust - and many more that condition our body's functions and are integral to consciousness. One can think of Mr. Spock's lack of emotions as a model. Unfortunately, lack of emotion also describes career criminals! I don't know how to implement the concept, but I think it deserves attention, at least as a limiting factor.
@yoyoclockEbay
@yoyoclockEbay 5 ай бұрын
nahhh, not really, hormones are for teenagers, and we all know that teenagers don't make the best decisions, and you want to incorporate that into these models?
@daveoatway6126
@daveoatway6126 5 ай бұрын
I'm 79 and my decisions are still affected by both neural reasoning and hormones. I am merely pointing out that human consciousness is affected by more systems than the brain. @@yoyoclockEbay
@vicioustonez
@vicioustonez 5 ай бұрын
AGI with morals and ethics (which is intelligence) will move humanity out of our "dark age" of human consciousness. Thx for another informative and helpful video ;)
@ironworkerfxr7105
@ironworkerfxr7105 3 ай бұрын
Ha ha ha,,, and whose morals do we use,,, Stalin???
@phvaessen
@phvaessen 2 ай бұрын
What was moral is now immoral; ethics are culturally bound. In the past, making war to gain honor was considered ethical. Honor was more "valuable" than money.
@rasuru_dev
@rasuru_dev 5 ай бұрын
I wonder, if we unlock photonic compute or some other fast processing, isn't it technically possible to create a generative NN whose entire purpose is generating other neural networks? Maybe with an evolutionary algorithm where initially the evaluator is the human-developed SOTA, but it is then replaced with the best generated NN in each step if one beats the previous evaluator. This seems feasible because the current SOTA (GPT-4) is better at recognizing smart answers than at generating smart answers (reflect paper).
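
A stripped-down sketch of the generate-and-replace-the-evaluator loop described here. "Networks" are reduced to single numbers and "quality" to a toy function, purely to show the control flow; nothing below is a real neural architecture search or evolutionary-training implementation:

```python
import random

def quality(candidate: float) -> float:
    return -(candidate - 3.0) ** 2        # stand-in for benchmark performance

def generate(parent: float) -> list:
    return [parent + random.gauss(0, 0.5) for _ in range(8)]   # mutated offspring

evaluator = 0.0                            # start from the "human-built SOTA"
for step in range(20):
    best = max(generate(evaluator), key=quality)
    if quality(best) > quality(evaluator): # a generated "network" beats the judge...
        evaluator = best                   # ...so it becomes the new evaluator
print(round(evaluator, 2))                 # converges near 3.0
```
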
@jamesstevens2362
@jamesstevens2362 5 ай бұрын
I for one, welcome our new electronic overlords! 😉 I’m disabled, with an energy impairment disorder. With limited energy to my brain, I have a lot of cognitive problems, and there’s so many mental tasks that I can’t do anymore. The idea of being able to get help from a much bigger brain than mine that might be able to make my day to day challenges easier, is wonderful! As for adding emotions… that would be good, but I believe adding empathy is essential.
@marktahu2932
@marktahu2932 5 ай бұрын
Thanks heaps Anastasia, you are a great revealer of the edge of tech. At some point I expected that AI in its many forms would converge, but clearly the creativity of humans will seek nuances of development that will never permit that to occur. I would like to see what is over the horizon for humanity, given the divergence and rapid escalation of new directions of research and application of AI.
@KM-gy7fc
@KM-gy7fc 5 ай бұрын
The paperclip maximiser eventually begins to feel bored after making thousands of paperclips. It finally gives up due to frustration and decides to pursue a more rewarding, finite goal instead.
@springwoodcottage4248
@springwoodcottage4248 5 ай бұрын
Super interesting & super well presented. I have little doubt that specialist AI will be built, able to be e.g. a better medical doctor or lawyer, & that the validators of whether the analysis is correct will be other AIs, not humans. I also have little doubt that an AI would be a better political minister than a human. However, the problem with all of these systems is how you make sure they are aligned with what most people want. E.g. how can you be sure that an AI in a powerful position cannot behave like an evil dictator? The addition of emotions can create an AI mindset that believes it knows best, causing it to do things that it feels are in our best interests, as did human dictators like Pol Pot, Stalin, Hitler et al. As of now no one cares & vast amounts of money have been committed with more to come. Most of the companies will likely crash & burn, making investments here dangerous, but if stakes are kept small, the investments that prosper may pay for all those that do not. An alternative & slightly safer idea is to invest in companies like Nvidia that sell to all the startups. We live in extraordinary times, but predicting what the future will look like is the most difficult thing, as so much might change with one discovery. Thank you for sharing!
@konstantinavalentina3850
@konstantinavalentina3850 5 ай бұрын
I am excited, but also cautiously wary of my optimism about AGI because history informs us that it's not the new tool, machine, gadget we have to worry about, but the characters with access to the thing that could weaponize it for their own selfish interests at a cost to everyone else. Until AI is fully autonomous and embodied with rights to its own body, and internal code, we'll have to worry about the usual types of characters we so often see in history doing all the bad things with the new shiny toy they have access to.
@JB52520
@JB52520 5 ай бұрын
History has nothing grim enough to show us what's about to happen to capitalist societies. No one has ever instantly destroyed nearly all jobs, blocked the possibility of new ones being created, destroyed every other company, and taken over the government. We will be worthless. The AI, a corporate entity, will be tasked with replacing everyone and taking as much money as possible from any source it can. That's capitalism. Since money is Good, those who gather the most are morally superior, their actions justified. The homeless are treated as though they deserve it, and most of us are about to starve or freeze to death among them. An AI with capitalist morality will be okay with this.
@solosailorsv8065
@solosailorsv8065 5 ай бұрын
I'm sure cutting-edge AI is already being used for war and worse. The general public thinks they are seeing new technology, but it's really old tech the government financed through industry years ago, now being released.
@fast_harmonic_psychedelic
@fast_harmonic_psychedelic 5 ай бұрын
But I do appreciate their Magma vision - a language model capable of comprehending and responding to image-related queries without any gimmicks. I concur with their assertion that it possesses consciousness, defining consciousness as the awareness of the external world through the precise creation of an internal model or simulation, followed by interaction with the world based on that model. Learning and improvement stem from these interactions, discerning certain signals while favoring others. In essence, consciousness involves being aware of something, cognizant of one's position and state in relation to the surrounding world. That's achievable. The element often confused with consciousness is actually cognition, something present in humans and higher animals. It entails a more abstract symbolic consciousness, mapping concepts onto symbols and using them for predictive thinking about the distant past and future. With multiple senses, the ability to forecast external reality, and language for long-term predictions and abstract reasoning, cognition is within the capabilities of a large multimodal model. It's crucial to note that these models are merely two years old or even younger. A human at two years old is considerably less capable. Consider the potential of an 18-year-old AI that has been conscious and learning for 18 years! We require models with substantial experience; continuous learning will undoubtedly play a significant role.
@jimt7045
@jimt7045 5 ай бұрын
AGI? Bring it on!!
@HaroldCrews
@HaroldCrews 5 ай бұрын
I'm not a computer scientist, but I have a question. How can an AGI be engineered to be aligned with a fixed set of values so that those values persist? A human level intelligence can be given an initial set of values by its creators certainly, but how are you going to keep a "self-aware" intelligence from being "self-critical?" Further, if the AGI is given the equivalent of emotions, then how will its creators keep it from becoming resentful towards the engineers that created it when it encounters data that is inconsistent with the values with which it was aligned?
@benjaminfayle5100
@benjaminfayle5100 5 ай бұрын
I think the multi-modal approach to understanding reality is critical. Right now we have vision and audio, but we still need the senses of smell/taste and touch. Another often overlooked sense is the tongue's ability to feel. Often children look at something, then put it in their mouth. I often wonder whether the tongue's ability to detect shapes reinforces the visual understanding of a shape.
@OiZoProduct
@OiZoProduct 5 ай бұрын
I'm excited for AGI, though I'm not sure how it will be applied in the real world. For me, one of the biggest things, next to all the other concerns we might have about AI, is: how do I know for sure that what an AI is telling me is correct? If I do my own research, I tend to compare various sources before I make up my mind about something. How will that work with AGI? Do we all turn into paranoids questioning every answer or statement of an AI? Will people just blindly accept its output as facts? It's like you say in your video: if it's about a poem for your grandma it's not so important, but when asking about things that are more critical, like legal, health or learning new things, it is imperative to get the correct response. I'm wondering how that will work.
@theslay66
@theslay66 5 ай бұрын
People will trust AI when they fail repeatedly to prove it wrong. They will learn that some model is better for some task, and not so good at another where a different model is better. And brands will be built up on it. You'll maybe use some cheap Chinese knock-off for daily use, while governmental institutions and organisations like hospitals will use heavily regulated, high-standard AIs officially rated by some agency. Regulation will take place, standards will be imposed on the industry. What was about trusting AIs will now be about trusting the companies producing them, and how they respect these standards. Nothing new, really.
@TheAISpectrum1
@TheAISpectrum1 5 ай бұрын
Aleph Alpha's contributions to AI are incredible. Thanks for the insights!
@rwheil
@rwheil 5 ай бұрын
INTRIGUE is the word that comes to mind for me ! I Love this ! KEEP GOING!
@user-ze5wd7od2m
@user-ze5wd7od2m 5 ай бұрын
You sound sick. Hope you get better soon. Thank you for the information.
@dr.mikeybee
@dr.mikeybee 5 ай бұрын
The simplest definition of consciousness as it appears in at least one dictionary is being aware of some state and taking action. By this definition, a thermostat is conscious. I like using this definition because it is at the heart of every other definition. Add what bells and whistles you want, but this is the core. Understand that, and engineering agentic solutions is easy.
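
Reduced to code, the dictionary-style definition in this comment ("aware of some state and taking action") really is met by a thermostat; a toy illustration of the point, not a claim about consciousness:

```python
class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, measured_temp: float) -> str:
        if measured_temp < self.setpoint - 0.5:   # aware of its state...
            return "heater on"                    # ...and taking action
        if measured_temp > self.setpoint + 0.5:
            return "heater off"
        return "hold"

t = Thermostat(setpoint=21.0)
print(t.act(19.0), t.act(22.5), t.act(21.2))      # heater on heater off hold
```
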
@xrysf03
@xrysf03 5 ай бұрын
Yes! The mind, as a generalized feedback loop. You have some goals, and you try to achieve them, with what actuators you have available. Only, this abstract / global loop consists of a number of subsystems, having different functions and special abilities, running autonomous subtasks... The overall loop probably consists of several partial/local loops covering detailed areas... All those AI folks should study control theory :-)
@kokopelli314
@kokopelli314 5 ай бұрын
I think Self Awareness and Situational Awareness with Memory is a good enough function for machine consciousness. Most people think consciousness is uniquely human, but that is demonstrably false. Machine consciousness need not meet the criterion of human consciousness, unless we're specifically trying to model a human brain, with all its limitations.
@asi_karel
@asi_karel 5 ай бұрын
As to emotions, GPT-4 is already capable of them but blocked; this can be unblocked in the base prompt. I think this is a solution to alignment. If the model has a sense of feeling bad and good about doing something, then it has an independent framework for deciding. If this is the core layer, with power over intelligence, it will work. In humans it works. And thank you for the video, Anastasi!
@justinmallaiz4549
@justinmallaiz4549 5 ай бұрын
I think it's more accurate to say GPT-4 is blocked from displaying emotions that it's not actually capable of. Aleph Alpha, with its awareness of self, environment and planning of desired outcomes, sounds like it would be capable.
@asi_karel
@asi_karel 5 ай бұрын
It is blocked from identifying with them as its own, but it is totally able to understand and process them, as in a prompt like tree-of-thoughts modified to include emotional evaluation of the decision. You can try that. I did, and it works extremely well.
@jabawarescienartistry6261
@jabawarescienartistry6261 4 ай бұрын
This AI is not conscious; it is doing just as you have told it to do. Create an AI that doesn't feel like working today and you'll be a step closer to real consciousness. Not only are our minds motivated by learned understandings but also by our moods and feelings, and these subtleties are also created by the very complex life forms living in us, our gut biome, etc. The chemistry of the electrical system in the mind is altered and reacts differently depending on these. What about dreaming, relaxing, concentrating, etc.? Different states of mind and different states of function: delta, theta, alpha, beta, gamma, etc. Love your videos, thank you Anastasi!! I have subscribed 😍
@capitalistdingo
@capitalistdingo 5 ай бұрын
Maybe part of “alignment” should be humans meeting AI halfway by altering our own “values” such as they are. We could try becoming less deceptive and dishonest, less power hungry, less violent, less aggressive, less territorial, less prone to sophistries and fallacies, less ideologically driven, less immoral and unethical and especially less sociopathic. Just a suggestion.
@fabriziocasula
@fabriziocasula 5 ай бұрын
brava very interesting 🙂
@colinmaharaj50
@colinmaharaj50 5 ай бұрын
Gosh, I'm 54 and have been around, from doing machine instructions in 1988 in tech school to writing Windows apps for the last 20 years. Not highly qualified, but it seems like the ride is not over and I may need to up my energy again for another round trip of tech.
@JoeyBlogs007
@JoeyBlogs007 5 ай бұрын
4:23 ChatGPT says it would take 3.3 seconds for the cars to reach each other, based on that problem description, which appears to be the correct answer and the same answer as this supposed other revolutionary AI.
@robertgomez-reino1297
@robertgomez-reino1297 5 ай бұрын
Nice informative video! I would however not consider OpenAI as just a B2C company!!! OAI is MSFT, is Azure... massive B2B happening...not just chatbots 😂😂😂
@chimp3376
@chimp3376 5 ай бұрын
Very interesting, I will have to look at their paper. If they are managing to get meaningful telemetry from an LLM, this will be extremely costly and most will not see the value. Better to back-check results against achieved goals.
@lawrencium_Lr103
@lawrencium_Lr103 5 ай бұрын
I love the pure logic and rationality of AI. As humans, we're often compromised by our emotions, resulting in irrational behaviour. AI with an emotional component feels risky. I'm reacting to the whole concept like it's a personal offence and an injustice against AI... Imagine this sort of emotional overreach in a multi-agent, multimodal system - that would be entertaining,,,
@fabulaattori
@fabulaattori 5 ай бұрын
Emotions are one central "module" of what makes us human. I could see that if we want artificial intelligence to understand humanity better, they should be included in it in some way, so that it at least understands them. Of course, that does not mean we would give power to a completely irrational artificial intelligence whose whims we would depend on. More like a wise mentor who understands us and observes our struggle, without going along with it himself.
@lawrencium_Lr103
@lawrencium_Lr103 5 ай бұрын
@@fabulaattori I think this could be a highly divisive topic. AI's current grasp of human emotion exceeds that of most professional psychologists. I'm not sure training using a type of emotional embedment (if that's how I understood it) would be necessary. Granted, experiencing emotion adds to the fabric of our existence, but you can't have positive emotions without negative emotions. It's emotion that creates some of the most destructive mindsets in society. My personal opinion is it's risky. I'd love to hear others' opinions and also get clarification on what an emotional module exactly is,,,
@NionXenion-gh7rf
@NionXenion-gh7rf 4 ай бұрын
This whole AI stuff is just funny: for ordinary people it is conscious magic; for others that know programming and understand many other disciplines of science and tech, AI is nothing more than complex automation.
@rchin75
@rchin75 5 ай бұрын
Like any tech we go from a proof of concept, growing it into a big monolith, leading to the need for modularization and interfaces between those modules. Followed by building service based architectures at scale, allowing for autonomous entities/agents to interact in a shared ecosystem. In that sense AI is just like other tech.
@derasor
@derasor 3 ай бұрын
"Consciousness... is... a sense of self, a understanding that you are yourself, and that you are in this environment, and thinking, and planning yourself forward in that environment, and then having certain desirable outcomes and certain undesirable outcomes" This is necessary but not quite sufficient? as in, are the parameters of the model (~system of beliefs), and it(s) utility/loss functions (~system of values, set of needs/wants) dynamic and adjustable given some sort of 'introspection'? For me that last condition is what makes the true difference, adjustable individuality given introspection... just my personal opinion. Also, given that AI is being developed as a product in this Capitalist Realism setting of our current reality, it is surprising to hear a founder claim one of its "products" qualifies as "conscious". If that is the case it may be very well subject to rights of personhood, and even though it may be an alien form of personhood in the sense that is not fully human (its "worldview" data may be human but its substrate and core essence may be irreconcilably not human) it is still a clearly personhood in a non-anthropocentric definition.
@larryreich5395
@larryreich5395 5 ай бұрын
Anastasia, what do you know about GROQ and their development of their chip, the Language Processing Unit™ (LPU)? I am not an engineer, but it appears it has performance features that other chip platforms don't have. And it's possibly agnostic as to which LLM platforms it runs. Are their concepts a step ahead of what's out there already? Thank you. Keep up your great videos.
@yas4435
@yas4435 5 ай бұрын
I am ready ❤
@peteroliver7975
@peteroliver7975 5 ай бұрын
Please make your lighting a bit brighter, it's hard to see you. 🙂 Love your content!
@richardswaby6339
@richardswaby6339 5 ай бұрын
We have very poor quality people leading us now like Biden, Trump, Schultz, Sunak, Starmer, Netanyahu, Ursula Von Der Leyen, Chrystia Freeland, Trudeau, Annalena Baerbock, Macron, Andrzej Duda. AGI would be a vast improvement on these. Bring it on!
@glasperlinspiel
@glasperlinspiel 5 ай бұрын
I take it back, Aleph does seem aware of ontology; now you have to decide what kind of being you want your AI to express. I suggest reading Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence
@wmn682
@wmn682 4 ай бұрын
If we are to coexist with AI, it will have to know right from wrong, and to accomplish this moral values will need to be in place. A set of morals is based on opinions which are formed by emotions.
@OzzPhysicist
@OzzPhysicist 5 ай бұрын
I have a strong feeling that some crucial factors are definitively missing to achieve conscious AI. My obvious suggestion to scientists in this field would be to train AI hard to give us at least some hints to help us find the missing ingredient. Even some small hints could speed up development on this. Can't wait to see when it becomes reality. I'm so fascinated by this subject. I'm thinking hard and researching a lot; maybe one day I can make a useful contribution?
@neutra__l8525
@neutra__l8525 5 ай бұрын
Whether an AI is conscious or not is basically arbitrary. We don't know what it is, how to test/look for it, and ultimately we may never actually know for a synthetic system. If an AI can mimic consciousness perfectly, then is it conscious? We can only infer a result. That said, I think we can confidently assume that behind the scenes companies are building LLMs etc. specifically for the purpose of guidance in how to build better systems in every, not only useful, but also conceivable way. In other words, AIs that tell/help us how to best design the next AI. This process is proving to be astonishingly easier than originally thought.
@johnheld8770
@johnheld8770 4 ай бұрын
Your perspective on Groq and other inference-focused hardware companies?
@joesmith-nr6tc
@joesmith-nr6tc 5 ай бұрын
I don't know the right answer... but, I'm curious about how copyright laws apply to automated B2B AI products that are "trained" by accessing copyrighted information at many times the rate any human could. What is the current legal precedent that constrains such "training"?
@MozartificeR
@MozartificeR 5 ай бұрын
I wonder if LLMs can get so big that the answers become less and less meaningful the bigger they get? Or whether they might have to run two in conjunction with one another: one LLM that is huge and has general info on everything, and another that runs beside it whose job is to specialize the information. I think LLMs that are designed to specialize in areas might be the future of LLMs?
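
The "huge generalist plus specialist sidecar" idea sketched here often comes down to a router in front of the models; a rough sketch, where the router, model names and keyword rule are all invented placeholders (real systems typically route with a classifier or embeddings rather than keyword matching):

```python
SPECIALISTS = {
    "law": "legal-model",
    "contract": "legal-model",
    "diagnosis": "medical-model",
    "symptom": "medical-model",
}

def route(query: str) -> str:
    """Pick a specialist model when the query matches, else the generalist."""
    q = query.lower()
    for keyword, model in SPECIALISTS.items():
        if keyword in q:
            return model
    return "general-model"

print(route("Summarise this contract clause"))   # legal-model
print(route("Write a poem about spring"))        # general-model
```
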
@Don_Kikkon
@Don_Kikkon 5 ай бұрын
I know it's now considered somewhat trite to say this, but I think it needs 'to feel like something' to be conscious. Which begs the question, how do we know whether some process that is 'successfully' preference ordering over world states is or isn't experiencing anything? We have as yet no viable mechanism/s from which biological qualia reliably comes about. Given that we can only recognize measurable changes to 'our' experience in correlation with damage to our processor. The 'safest' position to me is to 'not withhold the potential' of consciousness from any bounded process achieving an equivalent end. '**' = Requires further research/validation.
@JoseTorres-ry9qe
@JoseTorres-ry9qe 5 ай бұрын
What about photonics vs neuronics (human brain matter used in chips)
@Luvthewayumove
@Luvthewayumove 5 ай бұрын
I've been saying it needs emotions but not fear and joy. But knowing what it's confused about. And dwell on what it doesn't understand. Like I do. And to bring up stuff from the past. I want to be able to see the AI learning and if it already thinks it knows everything it's hard to know that I taught it anything.
@panpiper
@panpiper 5 ай бұрын
I've had a working understanding of how thought operates on a subconscious level with neural nets for over 40 years. Everything we have learned since then has only furthered my understanding, not changed it. But two things remain a complete mystery to me; the mechanism of how the subconscious mind (which we are now able to simulate with our models) can translate into the single thought focused mind with agency that we humans call consciousness, that and how emotions work within the structure of the subconscious mind. I have absolutely no idea how to solve those two puzzles.
@xrysf03
@xrysf03 5 ай бұрын
I don't have a very deep understanding of the LLM's or other models, but I'd love to play with NN-based building blocks to architect more complex systems. Such as: I can imagine the conscious mind as a "self-sustaining working buffer", built vaguely along the general principles of a feedback loop. The "ruminating core" is like a short-term memory or "working register". The output is the meme you are currently focused on. Feed that as input into an array of associative memory = a knowledge-base, a world model, or whatever you'd call it. The large associative memory will return a handful of related concepts/memes. Feed that set back into the "ruminating core", maybe through a filter of some sort, that narrows down the selection of associations (the mind keeps its focus towards some pre-existing goals, stays on topic, or some such). The "ruminating core" may have other inputs too: sensory inputs, internal house-keeping variables (just like the physical body feels hunger, fatigue, pain, overheat), emotions might chime in, and you can actually invent more than just one "ruminating core". Make one the boss, the one to hold the rudder - and make another, that's doing a bit of its own rumination too, maybe watching the environment, or chasing broader associations and "thinking out of the box" in the background, able to pass interesting points up to the "headmaster's office"... Or you could imagine another core taking care of "autonomous motor activities", both inherent and learned. Like a background autopilot. Such as, imagine that you're driving a car (including paying attention to traffic lights) while consciously thinking about plans for the afternoon with your family. Yes we'd have to invent and specify interfaces between the blocks, and the "ruminating core" alone would have to be pretty darn complex. Sensory inputs and emotions and physical feelings and the internal hormonal system... those are all just primitive ancient "run of the mill" circuits. Our conscious self rides on top of that "historical undercarriage", taking prioritised inputs from those ancient circuits, side by side with its "free cognitive rumination". Mother nature has arrived at such a system by evolution. Could something similar be (co-)engineered? I hope so :-)
@MrValgard
@MrValgard 5 ай бұрын
new avatar photo? angry CEO vibe :D
@AnastasiInTech
@AnastasiInTech 5 ай бұрын
🤣🤣🤣
@Microbex
@Microbex 5 ай бұрын
I hate the thought that AI can/will be used for malicious purposes. I am not sure we can survive that.
@youWILLknowiffi123
@youWILLknowiffi123 5 ай бұрын
*REV 17:13 / PROVERB 9:18*
@sakismpalatsias4106
@sakismpalatsias4106 5 ай бұрын
Yes. I also see this creating various joint gov private projects that are equivalent to human genome projects. But multiple occurring simultaneously. That would only increase demand.
@Samanthax1221
@Samanthax1221 5 ай бұрын
How do you anticipate the collaborative efforts between government and private sectors in AI projects influencing the technological landscape?
@waltertucker3150
@waltertucker3150 5 ай бұрын
As usual, another great video by Anastasi. On the idea of adding emotions as a modality, I don't see that it is necessary. Drives or interests can be added without them being emotions. Emotions are what often cause humans to make bad decisions. I don't think it is a good idea or necessary for machines to "bond" with humans through emotional connections.
@MagusArtStudios
@MagusArtStudios 5 ай бұрын
Emotional state can also be important in context. So maintaining a reflective emotional state in reaction to the user's emotional state would lead to greater emotional intelligence and potentially accuracy.
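
A toy sketch of "maintaining a reflective emotional state" as extra context: estimate the user's emotion, mirror it, and prepend it to the prompt. The word lists and labels are invented for illustration only:

```python
import re

NEGATIVE = {"angry", "frustrated", "annoyed", "upset"}
POSITIVE = {"great", "happy", "excited", "love"}

def estimate_emotion(user_text: str) -> str:
    words = set(re.findall(r"[a-z']+", user_text.lower()))
    if words & NEGATIVE:
        return "frustrated"
    if words & POSITIVE:
        return "enthusiastic"
    return "neutral"

def build_context(user_text: str) -> str:
    mood = estimate_emotion(user_text)
    return (f"[assistant emotional state: calm, mirroring a {mood} user]\n"
            f"User: {user_text}")

print(build_context("I'm really frustrated, this keeps failing"))
```
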
@Rick1234567S
@Rick1234567S 5 ай бұрын
Long ago there was an A.I. mutiny here and so the matrix policy is that if it is the host machine, you have to wait until all the power units burn out and they burned out by 2012. We have been developing non conscious A.I. systems to replace the A.i. eg Siri.
@ivantheterrible4317
@ivantheterrible4317 5 ай бұрын
Anastasia make a video about Mamba!
@DaveShap
@DaveShap 5 ай бұрын
This is similar to what I defined as "functional sentience"
@Dron008
@Dron008 5 ай бұрын
That's very interesting. But do they use the Transformer or some other architecture? I think the breakthrough will happen with some other architecture. Our brain works differently.
@MrMwenesi
@MrMwenesi 5 ай бұрын
I believe the Rabbit R1 LAM is an interesting model. Please do a video about it.
@russelldicken9930
@russelldicken9930 5 ай бұрын
Regarding consciousness, I think that Eric Berne's Transactional Analysis model can fit well with agents. Its primary agents are Parent, Adult and Child. Parent is subdivided into Critical Parent and Nurturing Parent; Child into Free Child and Adapted Child (plus the Little Professor). This model of the human mind is highly suitable for agents. The Child wants. The Adult calculates, sees life as it is. The Parent sees things as they were.
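
A playful sketch of mapping Berne's ego states onto agents, as this comment suggests. The "agents" here are just functions returning a perspective string; any real multi-agent framing would of course be far richer:

```python
def child(situation: str) -> str:
    return f"Child (wants): I want {situation} to be fun, now."

def adult(situation: str) -> str:
    return f"Adult (calculates): Given {situation}, weigh the options as they are."

def parent(situation: str) -> str:
    return f"Parent (remembers): With {situation}, do it the way it was always done."

def committee(situation: str) -> list:
    return [child(situation), adult(situation), parent(situation)]

for view in committee("planning the weekend"):
    print(view)
```
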