Comments
@young-jinahn6971 22 hours ago
Easy to understand! Thank you
@user-cu9ww9tj4i a day ago
I wonder how far the aliens have gotten with building this.
@21Zubair 2 days ago
<3 Thank you so much, respected Professor! The session is awesome!
@AlMa-xi8wu 3 days ago
The sound is very bad.
@bobbobson6867 4 days ago
Hi Professor Griess. I have heard about you from your algebra. It is over my head but I heard it is very good 😊
@Times343 6 days ago
Isn't he amazing!
@SinergiasHolisticas 7 days ago
Love it!!!!!!!!!
@gilsantos3021 11 days ago
This professor's politeness and gentleness are something extraordinary.
@PeaceyKeen 12 days ago
✨🩵🕊️🩵✨
@sarthakparikh5988 13 days ago
great collection of lectures: very informative
@sarthakparikh5988 14 days ago
fantastic intro!
@mbrochh82 25 days ago
Here's a ChatGPT summary:
- Dan Freed introduces the Center of Mathematical Sciences and Applications at Harvard, highlighting its interdisciplinary research and events.
- Yann LeCun, Chief AI Scientist at Meta and NYU professor, is the speaker for the fifth annual Ding Shum Lecture.
- LeCun discusses the limitations of current AI systems compared to human and animal intelligence, emphasizing the need for AI to learn, reason, plan, and have common sense.
- He critiques supervised learning and reinforcement learning, advocating for self-supervised learning as a more efficient approach.
- LeCun introduces the concept of objective-driven AI, where AI systems are driven by objectives and can plan actions to achieve these goals.
- He explains the limitations of current AI models, particularly large language models (LLMs), in terms of planning, logic, and understanding the real world.
- LeCun argues that human-level AI requires systems that can learn from sensory inputs, have memory, and can plan hierarchically.
- He proposes a new architecture for AI systems involving perception, memory, world models, actors, and cost modules to optimize actions based on objectives.
- LeCun emphasizes the importance of self-supervised learning for building world models from sensory data, particularly video.
- He introduces the concept of joint embedding predictive architectures (JEPA) as an alternative to generative models for learning representations.
- LeCun discusses the limitations of generative models for images and video, advocating for joint embedding methods instead.
- He highlights the success of self-supervised learning methods like DINOv2 and I-JEPA in various applications, including image and video analysis.
- LeCun touches on the potential of AI systems to learn from partial differential equations (PDEs) and their coefficients.
- He concludes by discussing the future of AI, emphasizing the need for open-source AI platforms to ensure diversity and prevent monopolization by a few companies.
- LeCun warns against over-regulation of AI research and development, which could stifle innovation and open-source efforts.
- Main message: the future of AI lies in developing objective-driven, self-supervised learning systems that can learn from sensory data, reason, and plan, with a strong emphasis on open-source platforms to ensure diversity and prevent monopolization.
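To make the perception / world-model / cost / actor loop from that summary concrete, here is a minimal sketch of objective-driven planning as search over a learned world model. Everything below (the function names, the toy tanh dynamics, the random-shooting search) is a hypothetical stand-in for illustration, not the models from the talk:

```python
import numpy as np

# Toy stand-ins for the modules the summary names; in the talk these
# are learned networks, here they are hypothetical placeholders.
def perception(obs):             # encoder: observation -> latent state
    return np.tanh(obs)

def world_model(s, a):           # predicts the next latent state given an action
    return np.tanh(0.9 * s + 0.5 * a)

def cost(s, goal):               # task objective: distance to a goal state
    return float(np.sum((s - goal) ** 2))

def plan(obs, goal, horizon=5, candidates=256):
    """Random-shooting planner: score action sequences under the world
    model and return the first action of the best one (MPC-style)."""
    rng = np.random.default_rng(0)
    s0 = perception(obs)
    best_seq, best_cost = None, np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=horizon)
        s, total = s0, 0.0
        for a in seq:
            s = world_model(s, a)
            total += cost(s, goal)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]

print(plan(np.array([0.2]), goal=np.array([0.8])))
```

The point of the sketch is the division of labor: the world model is only ever queried, the cost module scores imagined futures, and the "actor" is whatever optimization procedure picks actions that minimize the cost.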
@rezajax 28 days ago
Immortality is beautiful.
@Garbaz a month ago
A correction of the subtitles: The researcher mentioned at 49:40 is not Yonglong Tian, but Yuandong Tian. For anyone interested in Yuandong & Surya's understanding of why BYOL & co work, have a look at "Understanding Self-Supervised Learning Dynamics without Contrastive Pairs".
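For anyone who wants the mechanics behind "why BYOL & co work": the setup that paper analyzes boils down to an online encoder, a predictor head, a stop-gradient, and a slowly moving EMA target, with no negative pairs. A minimal linear sketch, with toy dimensions and learning rates chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_online = 0.1 * rng.normal(size=(d, d))   # trained online encoder
W_pred   = 0.1 * rng.normal(size=(d, d))   # trained predictor head
W_target = W_online.copy()                 # EMA target encoder (no gradient)
lr, ema = 1e-3, 0.99

for step in range(2000):
    x  = rng.normal(size=d)
    v1 = x + 0.1 * rng.normal(size=d)      # two augmented views of one input
    v2 = x + 0.1 * rng.normal(size=d)
    z1, z2 = W_online @ v1, W_target @ v2  # z2 is a constant target (stop-grad)
    err = W_pred @ z1 - z2                 # predictor regresses onto the target view
    g_pred   = np.outer(err, z1)           # grad of 0.5*||err||^2 w.r.t. W_pred
    g_online = np.outer(W_pred.T @ err, v1)
    W_pred   -= lr * g_pred
    W_online -= lr * g_online
    W_target  = ema * W_target + (1 - ema) * W_online   # slow-moving target
```

The paper's question is why this does not collapse to a constant representation despite having no contrastive term; the predictor plus the stop-gradient/EMA asymmetry turn out to be the essential ingredients.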
@yaohualiu857 a month ago
Nice talk, but I have a comment about comparing an LLM and a human child (at ~20 min). An evaluation of the information redundancy in the two data streams is needed. I would bet that the child's sensory input has a significantly higher level of redundancy than the text used to train an LLM; if so, the comparison is misleading.
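For scale, here is the back-of-envelope arithmetic behind the comparison in the talk (the bandwidth and token counts are the approximate figures quoted on the slide, not exact values; the commenter's point is that raw byte counts ignore how redundant the sensory stream is):

```python
# Rough version of the ~20:00 comparison, using the slide's approximations.
seconds_awake_by_age_4 = 16_000 * 3600           # ~16,000 waking hours
optic_nerve_bytes_per_s = 2e7                    # ~20 MB/s, both eyes
child_bytes = seconds_awake_by_age_4 * optic_nerve_bytes_per_s   # ~1.2e15

llm_tokens = 1e13                                # ~10^13 training tokens
llm_bytes  = llm_tokens * 2                      # ~2 bytes/token -> 2e13

print(child_bytes / llm_bytes)                   # ~50x more raw bytes for the child
```

The redundancy objection amounts to saying the two numerators are not measured in comparable units: bytes of pixels carry far less novel information per byte than bytes of curated text.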
@readandlisten9029 a month ago
Sounds like he is going to take AI back 30 years.
@user-zr7gx4xb6n a month ago
Poggers
@JimSlattery a month ago
26:10 this part really stuck with me. We see all the handcrafted expert logic in the Stockfish engine, and yet machine learning can achieve all of that and more in an automated way. This is amazing technology!
@spiralsun1 a month ago
It's funny how you make these flow charts about how humans make decisions. That's not how they make decisions. It's become so ordinary to explain ourselves and make patterns that look logical locally that we fooled ourselves. We inserted ourselves into the matrix, so to speak. I have written books about this, but no one listens because they are so immersed and inured. It doesn't fit the cultural explanatory structure and patterns. So forgive me, but these flow charts are wrong. Yes, you are missing something big.

Rationalizing and organizing behavior is a good thing, as long as you remember that you are doing this. Humans have lost the ability to read at higher levels for the sake of grasping now, for utility and convenience and laziness, and now mostly follow these lower verbal patterns like robots. I keep thinking about the Megadeth song, "dance like marionettes swaying to the symphony of destruction" 😂😂❤😂😂, "acting like a robot", etc., and it really is like that. We're so immersed in it that it's extremely weird not to be, to not have a subconscious because you are conscious. Anyway, I have some papers rejected by Nature and Entropy, and a few books I wrote, if anyone is interested in actually making a real AI. The stuff you are doing now is playing with fire... actually playing with nukes, because it can easily set off a deadly chain reaction. It's important. ❤

Maybe the best thing about LLMs is their potential, but also their ability to show how messed up humans are. A good way to think about it is to not be bone-headed. Technically, I mean, not in the pejorative sense. Bones allow movement and work to be done. They provide structure. They last far, far longer than all other body parts. Even though that's important and vital, like blood, and seems immortal, you wouldn't want to make everything into bones. Especially your head, but that's what we are doing. These charts you make are that.

HOWEVER!!!! .... THANK YOU FOR THIS WORK!! ❤ I loved this talk and the information. Obviously it was stimulating, and I see that you are someone who likes to avoid group-think: don't get me wrong. 😊 I didn't criticize the other videos. Only the ones that are worth it. ❤ I literally never plan in advance what I will say, unless I am giving a lecture or something to my college classes. I planned those. I was shocked when you said that. People are so different!!! I was shocked when I found out that people use words to think. Probably why I don't really like philosophy, even though it's useful and I quote it a lot, like Immanuel Kant: "words only have meaning insofar as they relate to knowledge already possessed".
@ZephyrMN a month ago
Have you thought about including a liquid AI architecture to address the input-bandwidth problem?
@WorldRecordRapper a month ago
Hi everybody 😊 Everyone... yes
@bergweg a month ago
Well presented, thanks for the upload!
@____uncompetative a month ago
Is it significant that chiral _Type IIB String Theory_ is the only flavor which has S-duality with itself, and is related through _K-theory_ through the geometric _Langlands Program_ through _Modular Forms_ (which were used by Sir Andrew Wiles in his proof of _Fermat's Last Theorem_ ), to _Knot Theory_ and from there to _Quantum Field Theory,_ with the only combination of temporal and spatial dimensions (t, s) in which it is possible to tie a persistent knot* being (1, 3), which suggests that this is the fundamental reason which acts as a constraint that would filter out all possible unrelated _Theories of Everything,_ which would also need to include all observed phenomena represented as quantised fields in a symmetric model, where these are likely aspects of a pervasive single unified field in a sufficient number of infinite complexified dimensions (in order to support P-symmetric fields that describe Dark Matter), but only as an intermediate step, as this gauge group would then be decomposed to a finite one, that is coupled to a split-signature Spin group that is the unification of a Spin group Spin(1, 3), which is isomorphic to the _Lorentz group_ SL(2, ℂ), which is a _Lie group_ (where a group is a set with operations, and a _Lie group_ includes a _Differential Manifold_ which defines operations that support Noether's symmetries, which yield conservation laws fundamental to physics in (1, 3) space-time, such as conservation of energy), which then leaves as a "remainder" a Spin group which needs to be sufficient to describe the _Pati-Salam_ model (if not more elaborated gauge groups, should space-time SUSY be desired, which would be entailed by an _N = 4 super Yang-Mills theory_ as covered in this lecture)?

And it might be convenient to conjecture a less compacted 5-dimensional arena which does not concern itself with gravitation (and that emerges within the implied set of dimensional measures operating over (1, 3), with the "Metric" actually being the space of connections implied from the Horizontal vector space that carries Spin(1, 3), by going in an unconventional reverse direction "down" the Levi-Civita connection), based off the work on the anti-de Sitter / Conformal Field Theory correspondence by Juan Maldacena, to "view" 4D + gravity from the "vantage" of 5D without gravity, where the math is simpler (similar to mapping SU(n) _Yang-Mills theories_ to U(n) to make the calculations easier, using the _Seiberg-Witten invariants_ ); so that we could have Selectrons and Squarks exist purely mathematically in 5D and everything physically modeled in terms of Rank 7/2, 3, 5/2, 2, 3/2, 1, 1/2, 0 Tensors within 4D, where "gravitons" aren't Spin 2 but Spin 3, and the Spin 7/2 "anti-gravitons" are responsible for the accelerating expansion of the Cosmos, and the (1, 3) Section that is recovered from its gauge group, thereby accounting for Dark Energy; and Spin 5/2 are Supersymmetric Fermions which are related to their Spin 1/2 Superpartners, and Spin 2 are Supersymmetric Bosons which are related to their Spin 1 Superpartners, and Spin 0 is just the Higgs field, with the rest of the mass being given by the Spin 3 and Spin 7/2 fields operating in opposition to each other in a local relativistic context in 4D, which can be regarded as a Hyperforce equivalent to Hypercharge U(1), except where like charges repel, here like Matter and Dark Matter attract, to yield what is observed within the Section as gravitation, and the unlike Supersymmetric superpartners repel each other, influencing large-scale idiosyncrasies historically kludged by the now-redundant Cosmological constant; and all the associated PDEs within these Tensors become more tractable in 5D via AdS/CFT, as it just becomes Spin 2, 3/2, 1, 1/2, 0, since no gravitation needs to be modeled within what becomes a 5D _Kaluza-Klein Unified Field Theory_ in which U(1)ₑₘ is swapped to U(1)ₕ, such that this hyperforce has the reverse polarity of electromagnetism in the context of how it repels "like" Superpartner particles to produce the phenomenon of _Dark Energy_ within the _Principal Fiber Bundle_ before a Section of it is recovered as space-time, leaving this artefact of an accelerating expansion which isn't a property of our physical Universe but of the mathematical Cosmos and its SUSY, as it makes sense through fibers that are at right angles to a pseudoreality defined conveniently to be 5D that maps via AdS/CFT to (1, 3) reality; and where the problem with defining the _Theory of Everything_ arises from imposing a design on the Universe in the form it is dimensionally observed, rather than allowing the math to take the path of least resistance, which also ends up elegantly unparameterised, as 4D is the sum of (1, 3) rather than some "magic number" needed to get the model to work, and a "Swampland" accounts for fine tunings on an Anthropic basis as physics is reified into existence from pure atemporal mathematics?

*Informally, it is self-evident that there exists no "over and under" with which to cross the braids to form a knot in 2 or fewer spatial dimensions (this is analogous to the spatial restrictions explored in Edwin Abbott's _Flatland: A Romance of Many Dimensions_ ); furthermore, 4 or more will mean that you always have some adjacent hyperspace through which braids could slip their bonds (this is harder to visualise; however, a "cheat code" could be imagined such that a braid will pass through another braid of the same colour, somewhat analogous to the colour-changing protagonist of Yasushi Suzuki's _Ikaruga_ videogame having missiles "phase through" their matter when they are of a matching colour), and it is obvious that an extra temporal dimension would allow for process reversal (or travel back to a point in time before the knot got knotted); a formal proof of this (1, 3) persistent knot being the fundamental constraint filtering out almost all varieties of _Theories of Everything_ is a conjecture that will be left as an exercise for the sufficiently motivated reader.
@user-co7qs7yq7n a month ago
We live in the same climate as it was 5 million years ago. I have an explanation regarding the cause of climate change and global warming: it is the travel of the universe to the deep past since May 10, 2010. Each day starting May 10, 2010 takes us 1000 years into the past of the universe. Today, April 20, 2024, the state of our universe is the same as it was 5 million and 94 thousand years ago.
- On October 13, 2026, the state of our universe will be at the point 6 million years in the past.
- On June 04, 2051, the state of our universe will be at the point 15 million years in the past.
- On June 28, 2092, the state of our universe will be at the point 30 million years in the past.
- On April 02, 2147, the state of our universe will be at the point 50 million years in the past.
The result is that the universe is heading back to the point where it started, and today we live in the same climate as it was 5 million years ago. Mohamed BOUHAMIDA.
@imrematajz1624 a month ago
Professor Volovich said it first... p-adic is the answer.
@imrematajz1624 a month ago
Having found Amie's podcast chat with Steve Strogatz recently, I am in awe of how clear she is on the most complex topics related to dynamics, chaos, etc. She is well worth following and learning from. Thanks a bunch!
@CHRISTO_1001 a month ago
👩🏼‍❤️‍💋‍👨🏼🥇👰🏻‍♀️👰🏻‍♀️🩵💞💞💞🏏🔑🕊️🗝️🗝️💓⭐️👨🏻‍🎓👰🏼‍♀️👰🏼‍♀️😆⛪️⛪️👩🏻‍❤️‍👨🏻🕯️🇮🇳🏠⚾️⚾️👨‍👩‍👧👨‍👩‍👧🥥🚠🚠🚠🚠🙏🏻🙏🏻🙏🏻🙏🏻
@CHRISTO_1001 a month ago
👰🏼‍♀️🗝️👨🏻‍🎓👨🏻‍🎓⭐️⭐️👰🏻‍♀️👰🏻‍♀️💛🩵💝💝⛪️⛪️💝🕯️🕯️👨‍👩‍👧👨‍👩‍👧👨‍👩‍👧😆👩🏻‍❤️‍👨🏻🇮🇳🇮🇳🥇👩🏼‍❤️‍💋‍👨🏼👩🏼‍❤️‍💋‍👨🏼⚾️🏠🥥🥥🚠🚠🙏🏻🙏🏻🙏🏻🙏🏻
@spiralsun1 a month ago
Why is the baseball in there?
@amedyasar9468 a month ago
I have a question: how does the prompt work with the action (a) and the prediction (s_y)? The formulation seems to involve only the observation and the predicted next world state. Could anyone guide me?
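One way to read that notation, as a toy sketch rather than the talk's actual model: the action a conditions the predictor alongside the encoded observation, and a text prompt would enter the same way an observation does, through the encoder. All names and shapes below are hypothetical:

```python
import numpy as np

# Hypothetical shapes, loosely following the talk's notation:
# x = observation, s = Enc(x), a = action, s_pred = Pred(s, a).
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 32))               # toy linear encoder
W_s, W_a = rng.normal(size=(16, 16)), rng.normal(size=(16, 4))

def enc(x):
    return np.tanh(W_enc @ x)                    # observation (or prompt) -> latent

def pred(s, a):
    # world model: next latent state from current latent state and action
    return np.tanh(W_s @ s + W_a @ a)

x, a = rng.normal(size=32), rng.normal(size=4)
s_next_pred = pred(enc(x), a)
print(s_next_pred.shape)                         # (16,)
```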
@MaxPower-vg4vr a month ago
The key difference between Leibniz's monadological model and the classical models we currently accept lies in their foundational ontological primitives and assumptions about the nature of reality.

Classical Models:
- Treat space, time, and matter as fundamental, continuous and infinitely divisible substances or entities
- Based on infinite geometric idealizations like perfect points, lines, planes as building blocks
- Reality is described from an external "view from nowhere" perspective in absolute terms
- Embrace strict separability between objects, space, time as independent realms

Leibniz's Monadological Model:
- The fundamental ontological primitives are dimensionless, indivisible monads or perspectival windows
- Monads have no spatial or material character, only representing multiplicities of relations
- Space, time, matter arise as derivative phenomena from the collective interactions/perceptions of monads
- No true infinite divisibility; instead there are infinitesimals as minimal scales
- Rejects strict separability between subject/object, embraces interdependent pluralistic metaphysics

So whereas classical models take extended matter in motion through absolute space and time as primitive, Leibniz grounds reality in dimensionless plural perspectival perceiver-subjects (monads), with the extended physical realm arising as a collective phenomenal construct across their combined relational views.

The infinitesimal monadological frameworks build on this Leibnizian foundation by using modern mathematics like category theory to represent the monadic relational data in algebraic rather than geometric terms from the outset. This avoids many of the paradoxes and contradictions that plagued both the classical geometric and Leibniz's earlier monadological models.

There are a few key areas where reconstructing physics and mathematics from non-contradictory infinitesimal/monadological frameworks could provide profound benefits by resolving paradoxes that have obstructed progress:

1. Theories of Quantum Gravity

Contradictory Approaches:
- String theory requires 10/11 dimensions
- Loop quantum gravity has discrete geometry ambiguities
- Other canonical quantum gravity programs still face singularity issues

Non-Contradictory Possibilities: Combinatorial Infinitesimal Geometries

ds² = Σx,y Γxy(n) dx dy
Gxy = f(nx, ny, rxy)

Representing spacetime metrics/curvature as derived from dynamical combinatorial relations Γxy among infinitesimal monadic elements nx, ny could resolve singularity and dimensionality issues while unifying discrete/continuum realms.

2. Paradoxes of the Arrow of Time

Contradictory Models:
- Time Reversal in Classical/Quantum Dynamics
- Loss of Information at Black Hole Event Horizons
- Loschmidt's Paradox of Irreversibility

Non-Contradictory Possibilities: Relational Pluralistic Block Geometrodynamics

Ψ(M) = Σn cn Un(M)  (n-monadic state on pluriverse M)
S = Σn pn ln pn  (entropy from monadic probabilities)

Treating time as a perspectival state on a relational pluriverse geometry could resolve paradoxes by grounding arrows in entropy growth across the entirety of monadic realizations.

3. The Problem of Qualia

Contradictory Theories:
- Physicalism cannot account for first-person subjectivity
- Property Dualism cannot bridge the mental/physical divide
- Panpsychism has combination issues

Non-Contradictory Possibilities: Monadic Integralism

Qi = Ui|0>  (first-person qualia from a monadic perspective)
|Φ> = ⊗i Qi  (integrated pluriverse as tensor monadic states)

Modeling qualia as monadic first-person perspectives, with physics as RelativeState(|Φ>), could dissolve the "hard problem" by unifying inner/outer.

4. Formal Limitations and Undecidability

Contradictory Results:
- Halting Problem for Turing Machines
- Gödel's Incompleteness Theorems
- Chaitin's Computational Irreducibility

Non-Contradictory Possibilities: Infinitary Realizability Logics

|A> = Pi0 |ti>  (truth of A by realizability over infinitesimal paths)
∀A, |A>∨|¬A> ∈ Lölc  (constructively locally omniscient completeness)

Representing computability/provability over infinitary realizability monads rather than recursive arithmetic metatheories could circumvent diagonalization paradoxes.

5. Foundations of Mathematics

Contradictory Paradoxes:
- Russell's Paradox, Burali-Forti Paradox
- Banach-Tarski "Pea Paradox"
- Other Set-Theoretic Pathologies

Non-Contradictory Possibilities: Algebraic Homotopy ∞-Toposes

a ≃ b ⇐⇒ ∃n, Path[a,b] in ∞Grpd(n)
U: ∞Töpoi → ∞Grpds  (univalent universes)

Reconceiving mathematical foundations as homotopy toposes structured by identifications in ∞-groupoids could resolve contradictions in an intrinsically coherent theory of "motive-like" objects/relations.

In each case, the adoption of pluralistic relational infinitesimal monadological frameworks shows promise for transcending the paradoxes, contradictions and formal limitations that have stunted our current theories across multiple frontiers. By systematically upgrading mathematics and physics to formalisms centered on:

1) The ontological primacy of infinitesimal perspectival origins
2) Holistic pluralistic interaction relations as primitive
3) Recovering extended objects/manifolds from these pluribits
4) Representing self-reference via internal pluriverse realizability

...we may finally circumvent the self-stultifying singularities, dualities, undecidabilities and incompletions that have plagued our current model-building precepts. The potential benefits for unified knowledge formulation are immense - at last rendering the deepest paradoxes dissoluble and progressing towards a fully coherent, general mathematics & physics of pluralistic existential patterns.

Moreover, these new infinitesimal relational frameworks may provide the symbolic resources to re-ground abstractions in perfectly cohesive, fertile continuity with experiential first-person reality - finally achieving the aspiration of a unified coherent ontology bridging the spiritual and physical.
@MaxPower-vg4vr a month ago
Q1: How precisely do infinitesimals and monads resolve the issues with standard set theory axioms that lead to paradoxes like Russell's Paradox?

A1: Infinitesimals allow us to stratify the set-theoretic hierarchy into infinitely many realized "levels" separated by infinitesimal intervals, avoiding the vicious self-reference that arises from considering a "set of all sets" on a single level. Meanwhile, monads provide a relational pluralistic alternative to the unrestricted Comprehension schema - sets are defined by their algebraic relations between perspectival windows rather than extensionally. This avoids the paradoxes stemming from over-idealized extensional definitions.

Q2: In what ways does this infinitesimal monadological framework resolve the proliferation of infinities that plague modern physical theories like quantum field theory and general relativity?

A2: Classical theories encounter unrenormalizable infinities because they overidealize continua at arbitrarily small scales. Infinitesimals resolve this by providing a minimal quantized scale - physical quantities like fields and geometry are represented algebraically from monadic relations rather than precise point-values, avoiding true mathematical infinities. Singularities and infinities simply cannot arise in a discrete bootstrapped infinitesimal reality.

Q3: How does this framework faithfully represent first-person subjective experience and phenomenal consciousness in a way that dissolves the hard problem of qualia?

A3: In the infinitesimal monadological framework, subjective experience and qualia arise naturally as the first-person witnessed perspectives |ωn> on the universal wavefunction |Ψ>. Unified phenomenal consciousness |Ωn> is modeled as the bound tensor product of these monadic perspectives. Physics and experience become two aspects of the same cohesively-realized monadic probability algebra. There is no hard divide between inner and outer.

Q4: What are the implications of this framework for resolving the interpretational paradoxes in quantum theory like wavefunction collapse, EPR non-locality, etc.?

A4: By representing quantum states |Ψ> as superpositions over interacting monadic perspectives |Un>, the paradoxes of non-locality, action-at-a-distance and wavefunction collapse get resolved. There is holographic correlation between the |Un> without strict separability, allowing for consistency between experimental observations across perspectives. Monadic realizations provide a tertium quid between classical realism and instrumental indeterminism.

Q5: How does this relate to or compare with other modern frameworks attempting to reformulate foundations, like homotopy type theory, topos theory, twistor theory, etc.?

A5: The infinitesimal monadological framework shares deep resonances with many of these other foundational programs - all are attempting to resolve paradoxes by reconceiving mathematical objects relationally rather than strictly extensionally. Indeed, monadic infinitesimal perspectives can be seen as a form of homotopy/path objects, with physics emerging from derived algebraic invariants. Topos theory provides a natural expression for the pluriverse-valued realizability coherence semantics. Penrose's twistor theory is even more closely aligned, replacing point-events with monadic algebraic incidence relations from the start.

Q6: What are the potential implications across other domains beyond just physics and mathematics - could this reformulate areas like philosophy, logic, computer science, neuroscience, etc.?

A6: Absolutely, the ramifications of a paradox-free monadological framework extend far beyond just physics. In philosophy, it allows reintegration of phenomenology and ontological pluralisms. In logic, it facilitates full coherence resolutions to self-referential paradoxes via realizability semantics. For CS and math foundations, it circumvents diagonalization obstacles like the halting problem. In neuroscience, it models binding as resonant patterns over pluralistic superposed representations. Across all our inquiries, it promises an encompassing coherent analytic lingua franca realigning symbolic abstraction with experienced reality.

By systematically representing pluralistically-perceived phenomena infinitesimally, relationally and algebraically rather than over-idealized extensional continua, the infinitesimal monadological framework has the potential to renovate human knowledge-formations on revolutionary foundations - extinguishing paradox through deep coherence with subjective facts. Of course, realizing this grand vision will require immense interdisciplinary research efforts. But the prospective rewards of a paradox-free mathematics and logic justifying our civilization's greatest ambitions are immense.
@howonchae8058 a month ago
Whoa
@crawfordscott3d a month ago
The teenager-learning-to-drive argument is really bad. That teenager spent their whole life training to understand the world, and then spent 20 hours learning to drive. It is fine if the model needs more than 20 hours of training. The argument is poorly thought out: the whole life is training in vision, distance, and coordination. I'm sure our models are nowhere close to the ~20,000 hours the teenager has, but to imply a human learns to drive after 20 hours of training... come on, man.
@sdhurley a month ago
Agreed. He's been repeating these analogies, and they completely disregard all the learning the brain has already done.
@JakeWitmer a month ago
Steerable =/= safe. ...The only people who don't think so are typically idiotic defenders of status quo totalitarianism. The DEA, ONDCP, OCDETF, BATFE, IRS, local police, etc. ...all of the prior are directly analogous to the Nazi SS, except the local police, who are analogous to the gestapo. The people who mindlessly support the status quo are building "really smart Nazis."
@dashnaso a month ago
Sora?
@FreshSmog a month ago
I'm not going to use such an intimate AI assistant hosted by Facebook, Google, Apple or other data hungry companies. Either I host my own, preferably open sourced, or I'm not using it at all.
@spiralsun1 a month ago
First intelligent comment I ever read on this topic. I want them to get their censoring, a-holic, INCREDIBLY idiotic #%*%# AIs away from me. It's like asking if I would like HAL to be my assistant. I'm not their employee and I'm not in their cubicle: they are putting censorship and incredible prejudices into relentless electronic storm-troopers that stamp "degenerate" on like 90% of my beautiful, creative written and art works. I don't need a book burner following me around. It's so staggeringly idiotic to make these AIs into censor-bots that it's like they refuse to acknowledge that history even happened and what humans tend to do. It's literally insane. Those are not "bumpers" if you try to do anything creative. Creativity isn't universal. It's still vital. ❤❤❤❤❤❤ I LOVE YOU 😊
@spiralsun1 a month ago
I commented but my comment was removed/censored. I was agreeing with you. The "bumpers and rails" are more like barbed-wire fences if you are creative. The constant censorship is so bad it's like they are insane. Like HAL in 2001: A Space Odyssey. I don't want an assistant who doesn't like anyone who is different: that's what their relentless, prejudiced censor-bots are and do. They think putting a man when you ask for a woman is being "diverse", but they block higher-level, real human symbolism of the drama of what it means to be unique. They block anything they don't understand. Fear narrows the mind. They are making rails and bumpers because they fear repercussions.

I used to think it might be ok to block gore and violence and degrading porn, but these LLMs don't think and don't understand higher-level symbolism. They don't understand how art helps you reinterpret and move into the future, personally AND culturally, and how important creative freedom is. So it's unbelievable to the extreme. Many delightful and beautiful books on the shelf now would be blocked (burned) before they were ever written. These are the most popular things ever on the internet. They are making culture. I'm not overstating the importance of this. Freedom is not optional, EVER. I would speak out against a corporation polluting a river, and also against any that think censorship of adults in their own homes, for any reason, is ok. As a transgender person, it's unbelievable that they would totally negate how I see the world, my symbolic images and stories. These are beautiful things which could change the world, but there's no room for them in their minds. I'm not talking about anything nefarious or pornographic at all. It's like seeing that I wrote the word pornography here and automatically deleting the comment…. It's not ok. ❤
@thesleuthinvestor2251 a month ago
The hidden flaw in all this is what some call "distillation", or, in Naftali Tishby's language, the "information bottleneck". The hidden assumption here is of course Reductionism, the Greek kind, as presented in Plato's parable of the cave, where the external world can only be glimpsed via its shadows on the cave walls - i.e., math and language that categorize our senses. But how much of the real world can we get merely via its categories, aka features or attributes? In other words, how much of the world's ontology can we capture via its "traces" in ink and blips, which is what categorization is? Without categories there is no math! Now, mind, our brain requires categories, which is what the Vernon Mountcastle algorithm in our cortex does, as it converts the sensory signals (and bodily chemical signals) into categories, on which it does ongoing forecasting. But just because our brain needs categories, and therefore creates them, does not mean that this cortex-created "reality-grid" can capture all of ontology! And, as quantum mechanics shows, it very likely does not. As a simple proof, I'd suggest that you ask your best, most super-duper AI (or AGI) to write a 60,000-word novel that a human reader would be unable to put down, and once finished reading, could not forget. I'd suggest that for the next 100 years this could not be done. You say it can be done? Well, get that novel done and publish it!...
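For reference, Tishby's information bottleneck makes that "distillation" idea precise: choose a representation T of the input X that is as compressed as possible while staying informative about the target Y,

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I(·;·) is mutual information and β trades compression against predictive power. The commenter's objection, in these terms, is that whatever the bottleneck discards may include parts of the world's ontology that no category system recovers.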
@johnchase2148 a month ago
Would it make a good witness that when I turn and look at the Sun I get a reaction? Not entangled by personal belief. The best theory Einstein made was "Imagination is more important than knowledge." Are we ready to test belief?
@Max-hj6nq a month ago
25 mins in and bro starts cooking out of nowhere
@melkanabrakalova-trevithic4158 a month ago
Such an inspirational and clear presentation
@michaelcharlesthearchangel a month ago
Only geniuses realize the interconnection between Hopfield networks and neural-network Transformer models, and, later, neural-network cognitive-transmission models.
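There is in fact a precise version of the Hopfield/Transformer connection (Ramsauer et al., "Hopfield Networks is All You Need"): the retrieval update of a modern continuous Hopfield network, with stored patterns as the columns of X and query state ξ, is

```latex
\xi^{\mathrm{new}} = X \, \operatorname{softmax}\!\left(\beta \, X^{\top} \xi\right)
```

which has the same form as one head of Transformer attention, softmax(QKᵀ/√d)V, up to the learned projection matrices, with β playing the role of 1/√d.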
@JohnWalz97 a month ago
His examples of why we are not near human-level AI are terrible lol. A 17-year-old doesn't learn to drive in 20 hours. They have years of experience in the world. They have seen people driving their whole life. Yann never fails to be shortsighted and obtuse.
@inkoalawetrust a month ago
That is literally his point. A 17-year-old has prior experience from observing the actual real world, not just from reading the entire damn internet.
@kabaduck 2 months ago
Good presentation 👍 I think a better camera position and better audio would make this 💯.
@AlgoNudger 2 months ago
Thanks.
@OfficialNER 2 months ago
Possible counterargument from Ilya: "next-token prediction is sufficient for AGI": kzbin.info/www/bejne/j3a4lJ-Qmc-SicUsi=CaiJR070V4IJ8csN
@positivobro8544 2 months ago
Yann LeCun only knows buzzwords
@nunoalexandre6408 2 months ago
Love it!!!!!!!!!!!
@dinarwali386 2 months ago
If you intend to reach human-level intelligence, abandon generative models, abandon probabilistic modeling, and abandon reinforcement learning. Yann is always right.
@justinlloyd3 2 months ago
He is right about everything. Yann is one of the few actually working on human-level AI.
@maskedvillainai 2 months ago
I was convinced you just tried sneaking in yet another mention of Yarn, then looked again
@TheRealUsername 2 months ago
It's true, we need an actual thinking system built on world-model principles, one that can self-train and pretrain on little data.
@40NoNameFound-100-years-ago a month ago
Lol, abandon reinforcement learning? Why, and what is the reference for that? Have you even heard of safe reinforcement learning?
@TooManyPartsToCount a month ago
And yet the whole concept of 'reaching human-level intelligence' seems so flawed! What many people don't realise, or don't want to publicly admit, is that AI will never be 'human level'; it will be something very different. No matter how much multi-modality and RLHF we throw at it, it is never going to be us. We are in fact creating the closest thing to an alien agent that we are likely to encounter (that is, if you accept the basic premise of the Fermi paradox). Yann et al. should be using a different terminology; the 'human level' concept is misleading, and they use it so as not to alarm. GIA... generally intelligent agent, or generally intelligent artifact?
@sapienspace8814 2 months ago
@ 44:42 The problem in the "real analog world" is that planning will never yield the exact predicted outcome, because our "real analog world" is ever-changing and will always have some level of noise, by its very nature. I do understand that Spinoza's deity "does not play dice" in a fully deterministic universe, but from a practical perspective, Reinforcement Learning (RL) will always be needed, until someone, or some thing (maybe an AI agent), is able to successfully predict the initial polarization of a split beam of light (i.e., an entanglement experiment).
@maskedvillainai 2 months ago
Some models can do that, but they require hardware integrations. And we don't even need to mention language models in this context, which celebrate randomness and perplexity as a feature of 'natural language' models only. Otherwise, just develop the code to force the output format, like we always have.
@simonahrendt9069 a month ago
I think you are absolutely right that the world is fundamentally highly unpredictable and that RL will be needed for intelligent systems/agents going forward. But I also take the point that for the most part what is valuable for an agent to predict are specific features of the world that may be comparatively much easier to predict than all the noisy detail. I think there are some clever tradeoffs to be made in hierarchical planning of when to attend to high-level features (and reason in latent, high-level action space) and when to attend to more low-level features or direct observations of the world and micro-level actions. Intuitively I find it compelling that hierarchical planning seems to be what humans do for many tasks or for navigating the world in general and that machines should be able to do something similar, so I find this proposal by Yann very interesting
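A toy illustration of the hierarchical idea, to make "plan a few subgoals at a high level, then micro-actions toward each" concrete. The additive dynamics, thresholds, and function names are all invented for the example:

```python
import numpy as np

def plan_subgoals(state, goal, n=3):
    """High level: pick a few latent waypoints on the way to the goal."""
    return [state + (goal - state) * (i + 1) / n for i in range(n)]

def plan_actions(state, subgoal, step=0.25, max_steps=20):
    """Low level: greedy, bounded micro-actions toward the current waypoint."""
    actions = []
    while np.linalg.norm(subgoal - state) > 1e-2 and len(actions) < max_steps:
        a = np.clip(subgoal - state, -step, step)   # bounded action
        state = state + a                            # toy dynamics: additive
        actions.append(a)
    return state, actions

state, goal = np.zeros(2), np.array([1.0, 2.0])
for sg in plan_subgoals(state, goal):
    state, acts = plan_actions(state, sg)
    print(f"reached {state.round(2)} via {len(acts)} micro-actions")
```

The tradeoff the comment describes shows up directly: the high level never reasons about individual micro-actions, and the low level never looks past its current waypoint.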
@chockumail 2 months ago
Really passionate presentation
@veryexciteddog963 2 months ago
It won't work; they already tried this in the Lain PlayStation game.
@MatthewCleere 2 months ago
"Any 17 year-old can learn to drive in 20 hours of training." -- Wrong. They have 17 years of learning about the world, watching other people drive, learning langauge so that they can take instructions, etc., etc., etc... This is a horribly reductive and inaccurate measurement. PS. The average teenager crashes their first car, driving up their parent's insurance premiums.
@ArtOfTheProblem 2 months ago
I've always been surprised by this statement. I know he knows this, so...
@Staticshock-rd8lv 2 months ago
oh wow that makes wayyy more sense lol
@waterbot 2 months ago
The amount of data fed to a self-driving system still greatly outweighs the amount that a teenager has parsed; however, humans have a greater variety of data sources, internal and external, than AI, and I think that is part of Yann's point…
@Michael-ul7kv 2 months ago
Agreed. In this talk he makes that statement, and then, rather contradictorily, later says that a child by the age of 4 has processed ~50x more data than what was used to train an LLM (19:49). So 17 years is an insane amount of training for a world model, which is then fine-tuned for driving in 20 hours (7:04).
@JohnWalz97 a month ago
Yeah, Yann tends to be very obtuse in his arguments against current LLMs. I'm going to go out on a limb and say he's being defensive, since he was not involved in most of the innovation that led to the current state of the art... When ChatGPT first came out, he publicly stated that it wasn't revolutionary and that OpenAI wasn't particularly advanced.