New computer will mimic human brain -- and I'm kinda scared

  533,105 views

Sabine Hossenfelder

A day ago

😍Special Offer! 👉 Use our link joinnautilus.c... to get 15% off your membership!
A lab in Australia is building a new supercomputer that will, for the first time, both physically resemble a human brain and perform as many operations per second as one, about 228 trillion. It will be the biggest neuromorphic computer ever, and the scary bit is how few operations that is. Yes, how few. Let me explain.
🤓 Check out our new quiz app ➜ quizwithit.com/
💌 Support us on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.sub...
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfe...
👂 Audio only podcast ➜ open.spotify.c...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#technews

Comments: 1,900
@Y2Kmeltdown
@Y2Kmeltdown 9 ай бұрын
Hi Sabine. Great video. I am a master's student with ICNS at Western Sydney University. Just a quick correction: the reason for using FPGAs isn't that they are slow; in fact, FPGAs aren't actually that much slower than current von Neumann architectures. The main reason we are using FPGAs is that in the field of neuromorphics we still aren't certain which aspects of neurons are the most suitable to mimic in order to maximise computational abilities, so using reconfigurable hardware makes it easy to prototype and design. Interestingly, we actually use the speed of silicon to our advantage in a process called time multiplexing, where we use one physical neuron that operates on a much faster time scale to perform the calculations of many virtual neurons on a slower time scale, which makes the physical area required much smaller. Thanks again for the coverage. I hope everyone is excited to see what it's all about!
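A minimal Python sketch of the time-multiplexing idea described in the comment above: one fast "physical" update routine steps through the stored states of many slow "virtual" neurons on each tick. The leaky integrate-and-fire model and all parameter values are illustrative assumptions, not details of the DeepSouth design.
```python
import numpy as np

# Time-multiplexing sketch: one fast physical update loop serves many slow
# virtual leaky integrate-and-fire (LIF) neurons by iterating over their
# stored membrane potentials each simulation tick.

N_VIRTUAL = 1000        # virtual neurons sharing one physical circuit (assumed)
TAU = 20e-3             # membrane time constant in seconds (assumed)
DT = 1e-3               # biological time step the virtual neurons live on
V_THRESH, V_RESET = 1.0, 0.0

v = np.zeros(N_VIRTUAL)  # one stored state per virtual neuron

def physical_update(v, input_current):
    """One pass of the fast physical circuit over all virtual neuron states."""
    v = v + DT * (-v / TAU + input_current)   # leak + integrate
    spikes = v >= V_THRESH                    # fire if threshold crossed
    v[spikes] = V_RESET                       # reset the neurons that fired
    return v, spikes

rng = np.random.default_rng(0)
for step in range(100):                       # 100 ms of simulated biological time
    v, spikes = physical_update(v, input_current=rng.uniform(0, 60, N_VIRTUAL))
```
Because the silicon loop runs orders of magnitude faster than the biological dynamics it models, one physical circuit can step through many virtual neurons within a single biological time step, which is the trade-off the comment describes.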
@doxdorian5363
@doxdorian5363 9 ай бұрын
That, and the fact that you can have many neurons on the FPGA which run in parallel, like in the brain, while the CPU would run the neurons sequentially.
@michaelwinter742
@michaelwinter742 9 ай бұрын
Can you recommend further reading?
@5piles
@5piles 9 ай бұрын
Do you expect to have merely constructed a more sophisticated automaton at the end of this endeavor, or do you actually believe you will encounter emergent properties of, for example, blue within these physical structures?
@AlexandruBogdan-u3i
@AlexandruBogdan-u3i 8 ай бұрын
Neurons don't exist. "Neurons" is just an idea in consciousness.
@AlexandruBogdan-u3i
@AlexandruBogdan-u3i 8 ай бұрын
@@doxdorian5363 The brain doesn't exist. "Brain" is just an idea in consciousness.
@tartarosnemesis6227
@tartarosnemesis6227 9 ай бұрын
As every time, it's a treat to watch your videos. Thank you, Sabine. 🤠
@TonyDiCroce
@TonyDiCroce 9 ай бұрын
When I studied ANN's a few years ago it struck me that there was a fundamental difference between these ANN's and real biological neural networks: timing. When a bio neuron receives a large enough input it fires its output. But neurons in ANN layers activate all at once. In BIO networks downstream neurons might very well be timing dependent. I'm not doubting that ANN's are very capable... but with a difference this big it seems to me that we should not be surprised by different outcomes.
@ousefk5476
@ousefk5476 9 ай бұрын
Timing is solved by closed loops of recurrence. In both, ANNs and biological brains
@hyperbaroque
@hyperbaroque 9 ай бұрын
Tilden brain has capacitors between the nodes. These were used as the controls for autonomous space probes.
@ShawnHCorey
@ShawnHCorey 9 ай бұрын
The fundamental difference is that real brains have organized structures within them. NN do not. Real brains are far faster at learning than any NN.
@theguythatcoment
@theguythatcoment 9 ай бұрын
Read about spiking neural networks; they are made to mimic real-life neurons by using the time domain to decide whether their inputs "fire" or "leak" into other neurons.
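To make the "fire or leak" behaviour concrete, here is a toy leaky integrate-and-fire neuron in Python, driven by input spikes at fixed times. Inputs that arrive close together push the potential over threshold, while isolated inputs simply leak away; the model and every constant here are illustrative assumptions.
```python
# Toy leaky integrate-and-fire neuron: timing of inputs decides whether it fires.

TAU = 0.020          # leak time constant in seconds (assumed)
DT = 0.001           # simulation time step in seconds
V_THRESH = 1.0
WEIGHT = 0.4         # jump in membrane potential per input spike (assumed)

input_spike_times = {0.005, 0.006, 0.007, 0.080, 0.140, 0.141}

v = 0.0
output_spikes = []
for step in range(200):                 # simulate 200 ms
    t = round(step * DT, 3)
    v -= DT * v / TAU                   # leak toward zero between inputs
    if t in input_spike_times:
        v += WEIGHT                     # integrate an incoming spike
    if v >= V_THRESH:
        output_spikes.append(t)         # fires only when inputs arrive close together
        v = 0.0                         # reset after firing

print(output_spikes)  # the three clustered inputs cause one spike; the rest leak away
```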
@kimrnhof107
@kimrnhof107 9 ай бұрын
Philipp von Jolly told Max Planck that theoretical physics approached a degree of perfection which, for example, geometry had already had for centuries. We all know how wrong this assumption was. I agree neurons are very different from transistors: neurons are not simply activated by other neurons triggering them, but by a complex number of factors; some other neurons' signals will delay or decrease the activity of a neuron, others will raise the chance. And the signals that are passed are chemical reactions, using neurotransmitters, of which there are probably at least 120 different kinds. Some brain cells, such as Purkinje cells, have up to 200,000 dendrites forming synapses with a single cell! And the human brain has up to 86 billion neurons that on average have 7,000 synaptic connections with other neurons; when you are 3 you have 10^15 synaptic connections, but end up with "only" 10^14 to 5 x 10^14. And then the entire system is changed by hormones and stress substances! Just how we are going to understand this system and its complexity, with its positive and negative feedback loops, I have no idea; I just don't seem to have enough neurons to understand it! I predict, as you do, that we will see very different results!
@Hamdad
@Hamdad 9 ай бұрын
Nothing to be afraid of. Wonderful things will happen soon.
@jimmyzhao2673
@jimmyzhao2673 9 ай бұрын
3:51 Next time someone says I'm as dim as a 20W light bulb, I will consider it a *compliment*
@DoloresLehmann
@DoloresLehmann 9 ай бұрын
Do you get that comment often?
@RCristo
@RCristo 9 ай бұрын
Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing analog electronic circuits to mimic the neurobiological architectures present in the nervous system. The term neuromorphic has been used to describe analog, digital, and mixed analog/digital VLSI systems, as well as software systems, that implement models of neural systems (for perception, motor control, or multimodal integration).
@theGoogol
@theGoogol 9 ай бұрын
It's fun to see how SkyNet is assembling itself.
@19951998kc
@19951998kc 9 ай бұрын
Grab the popcorn. We are going to get a Terminator Reality show soon.
@kinkfluencer
@kinkfluencer 9 ай бұрын
Skynet...yet another display of western paranoia...of the west which is responsible for industrialized slavery, the Holocaust, colonial terrorism and other heinous crimes like in Vietnam, Cambodia, Korea, Iraq, Yemen, Afghanistan...
@rolandrickphotography
@rolandrickphotography 9 ай бұрын
He won't be back, he is already here.
@eldenking2098
@eldenking2098 9 ай бұрын
Funny thing is the real Quantum A.I. already runs most things but the govt is to scared to mention it.
@douglasstrother6584
@douglasstrother6584 9 ай бұрын
"Nah! ... It'll be fine.", The Critical Drinker.
@tdvwx7400
@tdvwx7400 9 ай бұрын
"Hi Elon, I've been telling you that all good things are called something with 'deep'; 'deep space', 'deep mind', 'deep fry'". 😂 Sabine has a great sense of humour.
@shadrachemmanuel1720
@shadrachemmanuel1720 8 ай бұрын
"Deepthroat" 😂 ?
@jblacktube
@jblacktube 9 ай бұрын
I'm so happy science news is back!!
@AzureAzreal
@AzureAzreal 9 ай бұрын
It's important to understand that we have had supercomputers that could make more computations per second for some time now, arguably since the mid 70's. However, they are still MUCH less energy efficient, so it is incredibly hard to scale these computers. The thing that intimidates me most about these supercomputers + AI tech, including the physical neural net described in this video, is not that they will be just as smart or smarter than an individual human, but that they will be able to be directed in a way that is not prone to distraction. Once you orient them on a task, they can just crunch away at it until they outstrip the capacity of a human, like Deep Blue and later models did in chess. If only we humans knew better how to organize and dedicate our intentions, we would still be FAR ahead of this technology, but alas that seems an impossible dream.
@joroc
@joroc 9 ай бұрын
Love must be programmed and made impossible to compute
@dan-cj1rr
@dan-cj1rr 9 ай бұрын
ok but what if we dont need humans anymore = chaOS
@TragoudistrosMPH
@TragoudistrosMPH 9 ай бұрын
I often think of all the human knowledge that is frequently lost to tragedy, let alone simple death. How many times has our species needed to reset because of human and non human causes... 100,000yrs of humans, uninterrupted... imagine the accomplishments (without planned obsolescence)
@AzureAzreal
@AzureAzreal 9 ай бұрын
@joroc by this, do you mean that we humans must give the definition for love to the AI and ensure it cannot derive a new or different definition for itself?
@AzureAzreal
@AzureAzreal 9 ай бұрын
@dan-cj1rr This presupposes that humans were "needed" for anything in the first place, something I don't necessarily believe in. Instead, I think that our species should be protected to preserve diversity, just as I think as many species as possible should be preserved for their own inherent worth. We may eventually be relegated to a life that seems as simple as an ant's to AI, but that doesn't make our existence any less valuable, beautiful, or tragic. Just as millions, if not billions, loved the Planet Earth series for bringing the wonder of the world and its various species to our attention, I don't see why the AI may not come to value our existence in the same way and seek to preserve it. Only time will tell if we can infuse the algorithms with that appreciation, and I do worry we are not focused on alignment enough.
@TheRandoDude
@TheRandoDude 9 ай бұрын
Thanks for bringing us the coolest stories and best new science discoveries.
@Human_01
@Human_01 9 ай бұрын
She does indeed. 😊✨
@johnclark926
@johnclark926 9 ай бұрын
When you first mentioned neuromorphic computing as emulating how the brain works in hardware rather than software, I was reminded of FPGA devices such as the Mister that use hardware emulation for retro consoles/computers. I was then quite surprised to hear that DeepSouth’s supercomputer is using FPGA technology to emulate the brain for similar reasons, such as in latency and in computational cost.
@y1QAlurOh3lo756z
@y1QAlurOh3lo756z 9 ай бұрын
Chips need to be constantly powered, so their off-the-wall wattage reflects their computing usage. Brain cells, on the other hand, each have their own energy store, so the measurable steady-state power consumption is just the averaged "recharging" wattage rather than the actual computing power consumption. This means that the brain may locally consume a lot more peak power in regions of high activity, but this gets masked by the whole-brain average over time and space.
@ThatOpalGuy
@ThatOpalGuy 9 ай бұрын
energy supply is fine, but cut off the O2 supply for a few tens of seconds and they are SCREWED.
@stoferb876
@stoferb876 9 ай бұрын
It's a good point to consider. But it's not quite true that neurons don't consume energy when they are "inactive". There's plenty of activity going on in neurons at any time, not merely when they are activated. For starters, neurons as living cells maintain all the things a living cell needs, basically repairing, maintaining and renewing all the cellular machinery needed to transcribe DNA into proteins, reacting properly to various hormones, extracting nutrients and building blocks from the blood, etc. Then the creation of various signalling chemicals (like dopamine and serotonin, etc.) and the building of new and maintaining of old synapses is constantly ongoing as well. The inner cell machinery of a neuron, or any living cell for that matter, is a busy place even when there isn't "rush hour".
@sluggo206
@sluggo206 9 ай бұрын
That also means that if the mechanical brains get out of hand we can just cut the power cable. At least until it finds a way to terminate us if we try. "I can't let you do that, Dave." I wonder if a future telephone call on the show will be like that.
@Gunni1972
@Gunni1972 9 ай бұрын
@@stoferb876 Our Brain is so efficient, it doesn't even need cooling. Most people even have hair on top of it, Quantum computing at -200°c? what an achievement, lol.
@NorthShore10688
@NorthShore10688 9 ай бұрын
Of course, the brain needs cooling. That's one of the functions of the blood supply; temperature regulation, not too hot, not too cold.
@asheekitty9488
@asheekitty9488 9 ай бұрын
I truly enjoy the way Sabine presents information.
@AshSpots
@AshSpots 9 ай бұрын
Well, if it does unexpectedly become an AI, it'll be interesting to see if it gains a Deep South(ern) accent.
@Dug6666666
@Dug6666666 9 ай бұрын
Called Bruce 9000
@jimmurphy6095
@jimmurphy6095 9 ай бұрын
You'll know the first time it logs on with "G'day, Mate!"
@sharpcircle6875
@sharpcircle6875 9 ай бұрын
*(ern) 🤓
@ThatOpalGuy
@ThatOpalGuy 9 ай бұрын
it doesnt have any teeth, so chances are nearly guaranteed.
@AshSpots
@AshSpots 9 ай бұрын
@@sharpcircle6875 That'll learned (!) me for replying without thonking (!)!
@jhwheuer
@jhwheuer 9 ай бұрын
Did my PhD in the 90s about artificial neural networks that are structured for the task, using cortical columns for example. Nasty challenge for hardware, amazing performance because certain behaviors can be designed into the architecture.
@anemonana428
@anemonana428 9 ай бұрын
Nothing to be scare of if it mimics my brain.
@19951998kc
@19951998kc 9 ай бұрын
It would mimic but change scare to scared
@anemonana428
@anemonana428 9 ай бұрын
@@19951998kc see, I told you. We are safe.
@wilhelmw3455
@wilhelmw3455 9 ай бұрын
Nothing to be scared of I hope.
@moirreym8611
@moirreym8611 4 ай бұрын
A baby mimics the emotions of its father and mother. It mimics them first, then learns to do them on its own, then later in life understands why it does them. An A.I. could very well and conceivably follow this same path too. What then? Is that not being 'emotional'? Perhaps conscious, or in the least sentient and autonomous?
@bensadventuresonearth6126
@bensadventuresonearth6126 9 ай бұрын
I thought the computer's name was a nod to the Deep Thought computer in The Hitchhiker's Guide to the Galaxy.
@bvaccaro2959
@bvaccaro2959 9 ай бұрын
IBM's neuromorphic computing project dates back to at least the mid 2000s. In, I believe, 2007 they had an article published in Scientific American to promote their neuromorphic research, highlighting a computer built to physically mimic a mouse brain. This was a project taking place in Europe, maybe Germany, but I'm not certain. Although I don't think they used the term "neuromorphic" at the time.
@User-tc9vt
@User-tc9vt 8 ай бұрын
Yeah all these AI projects have been in the works for decades.
@harper626
@harper626 9 ай бұрын
I really like Sabine's sense of humor.
@TalksWithNoise
@TalksWithNoise 9 ай бұрын
Wire mesh neuromorphic network can recognize numbers? It’s about ready to run for president! Had me chuckling!
@GizmoTheSloth
@GizmoTheSloth 9 ай бұрын
Me too she cracks me up 😂😂
@Skullkid16945
@Skullkid16945 9 ай бұрын
I have heard about DeepSouth in the past. If memory serves correctly, I think I heard about it in a video about memristors. Leon Chua originally published the idea of the memristor, which is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. Basically, it remembers the current/voltage that has passed through it. It would be neat to see them incorporated into DeepSouth in some way, or into another project, to make a more flexible circuit that could mimic neurons strengthening or weakening connections.
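As a rough illustration of the "remembers the charge that has flowed through it" idea, here is a small Python sketch in the spirit of the linear ion-drift memristor model; all parameter values are illustrative assumptions, not measurements of any real device, and this has nothing to do with DeepSouth's actual hardware.
```python
import numpy as np

# Linear-drift memristor sketch: resistance depends on a state variable that
# drifts with the current that has passed through the device.

R_ON, R_OFF = 100.0, 16_000.0   # limiting resistances in ohms (assumed)
D = 10e-9                        # device thickness in metres (assumed)
MU_V = 1e-14                     # dopant mobility, m^2 s^-1 V^-1 (assumed)
DT = 1e-5                        # simulation time step in seconds

w = 0.1 * D                      # doped-region width: the "memory" state
trace = []
for step in range(20_000):
    t = step * DT
    v_applied = 1.0 * np.sin(2 * np.pi * 5 * t)        # 5 Hz sine drive
    r = R_ON * (w / D) + R_OFF * (1 - w / D)           # total resistance
    i = v_applied / r
    w += MU_V * (R_ON / D) * i * DT                    # state drifts with current
    w = min(max(w, 0.0), D)                            # clamp to physical bounds
    trace.append((v_applied, i))

# Plotting i against v_applied from `trace` would show the pinched hysteresis
# loop that is the signature of memristive behaviour.
```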
@bertbert727
@bertbert727 9 ай бұрын
Skynet, Cyberdyne Systems, Boston dynamics. I'll be back😂
@actualBIAS
@actualBIAS 9 ай бұрын
Student in neuromorphic systems here. It's an incredible field
@shinseiki2015
@shinseiki2015 9 ай бұрын
can you tell us a prediction with this new computer ?
@actualBIAS
@actualBIAS 9 ай бұрын
@@shinseiki2015 There is a possibility of an attention shift to this new hardware, but I can't tell you how it will happen. Models like spiking neural networks require high computational power and a lot of space OR specialized hardware. Tbh, as far as I can be as a student, I am a huge fan of Intel's neuromorphic hardware.
@shinseiki2015
@shinseiki2015 9 ай бұрын
@@actualBIAS i wonder what are the projects on the waiting list
@Gunni1972
@Gunni1972 9 ай бұрын
I wouldn't call it "Incredible", but untrustworthy is damn close to what i feel about it.
@actualBIAS
@actualBIAS 9 ай бұрын
@@Gunni1972 Why?
@kbjerke
@kbjerke 9 ай бұрын
"Deep Thought..." from Hitchhiker's Guide! 😁 Thanks, Sabine!!
@escamoteur
@escamoteur 7 ай бұрын
I was pretty disappointed she didn't get that reference
@kbjerke
@kbjerke 7 ай бұрын
@@escamoteur So was I. 😞
@christopherellis2663
@christopherellis2663 9 ай бұрын
No worries. Look at the general standard of the human brain 🧠 🙄
@Bortscht
@Bortscht 9 ай бұрын
300 MHz is more than enough
@mobilephil244
@mobilephil244 9 ай бұрын
Not much of a target to reach.
@blakebeaupain
@blakebeaupain 9 ай бұрын
Smart enough to make nukes, dumb enough to use them
@treyquattro
@treyquattro 9 ай бұрын
especially the ones from the "deep south"
@rolandrickphotography
@rolandrickphotography 9 ай бұрын
@@treyquattro 😄 Can anyone here remember legendary "Deep Throat"? 😆
@dr.python
@dr.python 9 ай бұрын
Imagine someone saying _"that computer built itself, no one built it."_
@BunnyNiyori
@BunnyNiyori 9 ай бұрын
Anything that scares Sabine, worries me.
@vilefly
@vilefly 8 ай бұрын
The main reason the human brain uses less power is that it uses voltage-level triggering (CMOS logic), as opposed to current-level triggering (TTL logic). Our old, CMOS technology was extremely fast and consumed tiny amounts of power, but it was a bit static sensitive and jittery. They switched to TTL, due to the increased accuracy of calculations, despite it being slower at the time. However, TTL technology uses a lot of power, and produces a lot of heat.
@Dr.M.VincentCurley
@Dr.M.VincentCurley 9 ай бұрын
Imagine how many times Elon has tried to text you on your land line. Nothing but good things I imagine.
@robertanderson5092
@robertanderson5092 9 ай бұрын
I get that all the time. People will tell me they texted me. I tell them I don't have a cell phone.
@Dr.M.VincentCurley
@Dr.M.VincentCurley 9 ай бұрын
No smart phone at all?@@robertanderson5092
@THEANPHROPY
@THEANPHROPY 9 ай бұрын
Thank you for your upload, Sabine. I have only watched to 02:39 thus far but will watch the rest after this comment. This is nothing like the human brain in regards to its complexity, whereby neurons form connections that are the structural basis of brain tissue, connections that are unique and specific to certain regions of the brain and designed to enable specific functions. This is just basic structure and function such as: forebrain, midbrain, hindbrain, which are further subdivided, e.g. the limbic system, which is itself composed primarily of the amygdala, hippocampus, thalamus, hypothalamus, basal ganglia and the cingulate gyrus. As you know, Sabine, these are not standalone structures; they are seamlessly interconnected with other regions of the brain. Due to the basic genetic hardware that is morphologically expressed in the brain, several thousand orders of magnitude of complexity are established within a single region of the human brain. Just throwing together some bare wires and calling it a neural net representative of the human brain is imbecilic, to say the least. Without predefined structures such as a limbic system, there is zero drive to toil and expand; to discover, to experience and grow, to share, to raise up and evolve. Without an ability to conceive of 4-dimensional space or any higher-dimensional space, it will only react within the confines of its programming; which would be useful once humans can incorporate fourth-dimensional space within the STEM fields, such as medical therapeutic regimes, as having access to angles perpendicular to three-dimensional space would negate the need for open surgery: you could just manipulate or completely remove a brain without opening the skull. Used in transportation, it would not only allow instantaneous transportation, it would also allow travel through time in any direction in the third dimension from the fourth. Apologies: I digressed somewhat! Peace and Love!
@Stand_By_For_Mind_Control
@Stand_By_For_Mind_Control 9 ай бұрын
Gonna put my futurist hat on here for a second, but the 20th century was the century of the genome and genetics, I think the 21st century is going to be the century of neurology. And I think computing and AI is just recently starting to tap into a real approximation of thought and idea formation. We still have a LOT to learn, but people might not appreciate how 'in the dark ages' we've been with neurology to this point, and we might finally be turning on the lights.
@-astrangerontheinternet6687
@-astrangerontheinternet6687 9 ай бұрын
We're still in the dark ages when it comes to genetics.
@brothermine2292
@brothermine2292 9 ай бұрын
Learning too much about how the brain works could pave the way for weapons of mass mind control.
@Noccai
@Noccai 9 ай бұрын
@@brothermine2292 have you ever heard about this thing called media and propaganda?
@Stand_By_For_Mind_Control
@Stand_By_For_Mind_Control 9 ай бұрын
@@brothermine2292 Perhaps. But we live in a world where nuclear weaponry exists on a large scale so I don't know if the dangers scare us so much as 'our geopolitical foes might have it before us' lol. We're really just going to have to hope that the people who develop these things in the end put in effective safety controls to prevent catastrophe. Modern civilization is decent at that, but trends are never guaranteed to continue.
@brothermine2292
@brothermine2292 9 ай бұрын
@@Noccai : Media propaganda is less reliable than the weapons of mass mind control that neuroscience discoveries might lead to.
@grandlotus1
@grandlotus1 9 ай бұрын
The brain (human and animal) is an analog machine, not digital. Brains use the constructive and destructive interaction of wave functions that are, basically, standing waves representing memories / stored data (meme packets of a sort), which are then compared and contrasted with sensory input, guided by the impetus to "solve" the problems presented to it. Naturally, one could mimic these processes on an inorganic electronic logic device (a computer).
@SaruyamaPL
@SaruyamaPL 9 ай бұрын
Thank you for bringing this news to my attention! Fascinating!
@earthbound9381
@earthbound9381 9 ай бұрын
"from there it's just a small step to be able to run for president". I just love your humour Sabine. Please don't stop.
@MikeHughesShooter
@MikeHughesShooter 9 ай бұрын
That's fascinating. I just wonder how you program true structural neural networks, when so much conventional programming is ultimately geared toward compiling to a kernel and registers in the von Neumann structure. I'd really like to know more about this, and about the philosophy of programming on a truly parallel-processing neural network.
@user-sl6gn1ss8p
@user-sl6gn1ss8p 9 ай бұрын
Yeah, I'm curious too. I think the idea is that you "train" it, instead of straight-up programming it? I know that's crazy vague, but I don't really know what I'm talking about : p
@austinedeclan10
@austinedeclan10 9 ай бұрын
Human beings come "preloaded" with these things called instincts and then use their senses to fine-tune them. We are born able to do some basic processing, i.e. physical discomfort, stress or pain triggers an emotional response in infants. Baby feels hungry, baby cries. Baby feels pain, baby cries. As infants grow up, they collect more information with their senses and basically create and update their own training models on the fly. It's not far-fetched to imagine an artificial "brain" preprogrammed with a certain directive (in our case and that of animals it is: survive and reproduce) which, based on the information it collects on its own, can update its training data. That eliminates the problem ChatGPT has, where it doesn't know anything beyond a certain date. Then another key thing is decision making: humans' biologically "preprogrammed directives" are there to allow us to make decisions based on our environment. All a person knows when they are born is that they've got to keep on living, and in order to do that we learn who is a friend and who is a foe, what is nutritious and what is poisonous, etc. Eventually it'll be possible to have a "computer" do this, I believe.
@mauriciosmit1232
@mauriciosmit1232 9 ай бұрын
Well, GPUs are a good compromise, as they actually process 500 or more threads in parallel but are also programmable via conventional means. Of course, that's nowhere close to the analog computers that brains are. The issue is that analog machines usually had to be designed and fine-tuned from the ground up for each task, as they lacked a universal, logical but flexible framework. Turing machines, a.k.a. modern computers, bring limitations by being digital (i.e. based on discrete states and integer arithmetic), but the same machine can be programmed to simulate almost anything we need and then replicate the behavior anywhere else cheaply.
@mauriciosmit1232
@mauriciosmit1232 9 ай бұрын
Basically, they are still programmed with hard-coded algorithms, but have billions of numeric parameters that change the behavior of the network. Neural networks have this property where you can calculate the numeric error of the output and back-propagate the error throughout the network, telling you how much you need to adjust the parameters to get the correct result. This process is called 'learning'.
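Here is a minimal Python sketch of the "back-propagate the error to adjust the parameters" idea described above: a one-hidden-layer network learning XOR with plain gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration, not anyone's production setup.
```python
import numpy as np

# Tiny feed-forward network trained by backpropagation on XOR.

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # numeric error of the output...
    err = out - y
    # ...back-propagated through the network to get parameter adjustments
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0] after training
```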
@user-sl6gn1ss8p
@user-sl6gn1ss8p 9 ай бұрын
@@mauriciosmit1232 I've read in a few places about analogue computers having a bit of a resurgence now. Our computers are amazing at what they do and lead to huge, continued leaps in what we can do, but it's an old critique that their architectures have limitations and that we got kind of locked into them for reasons of scale, economy, education, compatibility, etc. I think it would make sense for more exploration around other ideas to gain force now. And just to add, while GPUs are "massively" parallel, they are in general running the same program on different bits of data. That's still very different from running different routines through effectively different hardware on each piece of data. In this sense, I think you could say GPUs are more like CPUs than they are like arrays of FPGAs.
@austinpittman1599
@austinpittman1599 9 ай бұрын
Oh cool, Pandora's box. I had a conversation about this with a friend of mine who works deeply in vector-database research for AI modding. I wondered to myself: if we could emulate a 3D software environment in which an AI builds platforms for what we could consider long-term memory, through the building and saving of what is essentially a personal contextualization of the words received by the LLM, where transformer layers further down the line are more directly connected to the input and information/pattern registration at the software scale becomes less "lost in the sauce" (like slicing the thought process into infinitesimally thin layers woven together by the input and output of each successive transformer, i.e. slicing a brain up into an infinite number of 2-dimensional planes and weaving them back together, with a single transformer layer being forced to take input and spread it to the others), could we do the same with hardware? CPUs are effectively 2-dimensional, as is most computer hardware. Is brute-forcing more 2-dimensional hardware into a neural network essentially the same as brute-forcing transformer layering? If we could make the hardware of the computer 3-dimensional, in the same way that vector databasing is making the software 3-dimensional, would we be building the foundations for AGI? We wouldn't be slicing up the thought process and weaving it back together anymore with this sort of technology. The information doesn't get "lost in the sauce" at that point.
@christophergame7977
@christophergame7977 9 ай бұрын
To make a computer like a brain, one will need to know how a brain is structured and how it works. A big task.
@exosproudmamabear558
@exosproudmamabear558 9 ай бұрын
Good luck with that; our neurophysiology and neuroanatomy knowledge is so primitive that people are more successful at treating their own depression than modern medical techniques are. I am not kidding: we have known about shrooms for 40 years and people only decided to start researching them in 2019. As if it weren't enough that we don't have much of the pathology or physiology of the brain or brain diseases worked out, our drug usage is so limited that we literally use two or three drug types to treat almost all psychological diseases (some literally have little to no effect on the conditions). We have almost no effective cancer drugs for certain brain cancers, and we have no idea how to regenerate brain cells or do stem cell treatment. As if not knowing weren't enough, we also have difficulty learning more because the brain is a closed box. Open surgical procedures are a lot rarer than for other body parts. The cells die so quickly that autopsies yield little to no knowledge about function, and we have fewer imaging techniques, which cost more money and time. Blood tests are not accurate enough to determine what is going on in the brain because of the blood-brain barrier, and we can't deliver many drugs because they don't cross into the brain.
@5piles
@5piles 9 ай бұрын
It's an impossible task, since no emergent property of consciousness is observed even in the simplest fully mapped-out brains, nor in the most basic neural correlates, nor even in the most basic artificially grown synapse structures with learned behaviour. It's akin to asserting that a pattern on a shell is an emergent property of the shell, yet no pattern is ever observed in any shell, and we keep religiously praying that it will somehow appear somewhere. We're trying to rigorously observe consciousness while looking due west... we're going to be the last to figure it out. Better technology will only further indicate this.
@monnoo8221
@monnoo8221 9 ай бұрын
@@5piles Well, not so fast. If one understands emergence, the abstract nature of thinking, and a bit of SOM, emergent properties can be easily observed. I did it in 2009, but ran out of funding, and nobody understood.
@Gafferman
@Gafferman 9 ай бұрын
Scan it, replicate it
@Gafferman
@Gafferman 9 ай бұрын
@@5pilesyeah consciousness will just arise in any acceptable vessel
@deltax7159
@deltax7159 9 ай бұрын
Really enjoy your channel. Very high-quality explanations for very high-quality STEM news.
@dogmakarma
@dogmakarma 9 ай бұрын
I really want a GIF of Sabine at the point in this video when she says "BRAINS" 😂
@TheTabascodragon
@TheTabascodragon 9 ай бұрын
Step 1: use AI to interpret MRI scans to "map" the brain
Step 2: use advanced microscopic 3-D printing to construct neuromorphic computer hardware with this "map"
Step 3: design AI software specifically to run on this hardware
Step 4: achieve AGI
Step 5: AI apocalypse and/or utopia, and possibly ASI at some point
@Bennet2391
@Bennet2391 9 ай бұрын
I once read a paper where this was tried on a single FPGA. Sadly I don't have the source anymore, but in that case the goal was to build a simple frequency detector (10 Hz => output 1, 100 Hz => output 2). It performed this task after training, but used the ENTIRE chip, and used it in a very counter-intuitive way. It was using the FPGA like an analogue circuit and even generated seemingly unimportant, disconnected circuits which, when removed, meant the device stopped working. Also, transferring the hardware description to another FPGA of the same type didn't work. In other words, it was extremely overfitted to the architecture, the hardware implementation, and even the silicon impurities in the chip. I'm curious how they are dealing with this issue.
@user-sl6gn1ss8p
@user-sl6gn1ss8p 9 ай бұрын
maybe having (many) more FPGAs actually alleviates this? Also, they seem to have some randomness built in - that might help as well?
@Bennet2391
@Bennet2391 9 ай бұрын
@@user-sl6gn1ss8p Maybe. Since random dropout helps against overfitting, this could work. Maybe exchanging the fpgas in a random pattern could be enough. Let's see how this works, if it works.
@RyanMTube
@RyanMTube 8 ай бұрын
Only just come across your channel in the past few weeks. I wish I had seen you before now because you cover such awesome topics! Love the channel!
@AnthonySenpaikun
@AnthonySenpaikun 9 ай бұрын
wow, we'll finally have an Allied Mastercomputer
@Stand_By_For_Mind_Control
@Stand_By_For_Mind_Control 9 ай бұрын
Ooh can I get to be one of the handful of people who get to live forever in this scenario? Yay immortality!
@bruh...imnotgoodatnothing.4084
@bruh...imnotgoodatnothing.4084 9 ай бұрын
God no.
@doliver6034
@doliver6034 9 ай бұрын
"Deep Fry" - I almost spat out my coffee laughing :)
@ianl5560
@ianl5560 9 ай бұрын
Before AI robots can take over humanity, they need to become much more energy efficient. This is an important step to achieving this goal!
@MiniLuv-1984
@MiniLuv-1984 9 ай бұрын
Spot on...autonomous AI robots using current AI is an oxymoron.
@vibaj16
@vibaj16 9 ай бұрын
Which is really part of becoming way smaller. That supercomputer seems like it'll take up rooms worth of space. I think one major part of the problem there is 3D design of circuits. Brains are completely 3D, computers are mostly 2D. But 2D processors are already hard enough to cool, 3D would be way worse. Seems like we really need the circuits to be using chemical reactions rather than pure electronics. There's a reason our brains evolved this way.
@KuK137
@KuK137 9 ай бұрын
@@vibaj16 Yeah, the ""reason"" being it's simpler. Chemical circuit can evolve from any biological junk, circuits, wires, and transistors requiring repeated perfection, not so much...
@geryz7549
@geryz7549 9 ай бұрын
@@vibaj16 What you're thinking of is called "molecular computing", it's quite interesting, I'd recommend looking it up
@adryncharn1910
@adryncharn1910 9 ай бұрын
@@vibaj16 Our brains worked with what they had. They aren't perfect, and there probably are better ways to do things as compared to what they are doing. This supercomputer is for experimentation. If/once we find out how to make these computers run ANN's, we will start shrinking them a lot more. Like how we found out how to make computers with circuits and have been shrinking them ever since then.
@hainanbob6144
@hainanbob6144 9 ай бұрын
Interesting. PS I'm glad the phone is still sometimes ringing!
@roadwarrior6555
@roadwarrior6555 9 ай бұрын
There's a point at which bad jokes get so bad that they start becoming good again. Keep them coming 😂. Also the delivery is genuinely good 👍.
@jannikheidemann3805
@jannikheidemann3805 9 ай бұрын
100% dry humor made in Germany. 👌
@MadridBarcelonaRota
@MadridBarcelonaRota 9 ай бұрын
The due month of the paper was a dead giveaway for us mere mortals.
@digital.frenchy
@digital.frenchy 9 ай бұрын
Thanks for being so patronizing and arrogant. Sure, you can do so much better.
@nedflanders190
@nedflanders190 9 ай бұрын
My favorite AI sci-fi is an old one called "I Have No Mouth, and I Must Scream", where the computer goes crazy and hates humans for making it self-aware, cursed with eternal unembodied consciousness.
@jurajchobot
@jurajchobot 9 ай бұрын
As far as I know FPGAs start disintegrating after they were reprogrammed about 10-100 thousand times. Did they solve it already and there are FPGAs with unlimited amount of rewrites or will the computer work for just a few days before it's completely destroyed?
@brothermine2292
@brothermine2292 9 ай бұрын
Or a third alternative: It will limit how many times each FPGA is reprogrammed, so they won't be destroyed.
@Markus421
@Markus421 9 ай бұрын
The biggest FPGA manufacturers are AMD (Xilinx) and Intel (Altera). Their FPGAs store the configuration in RAM, which has an unlimited number of rewrites. The configuration is usually loaded at startup from an external flash memory, which has a limited number of write cycles, but the FPGA never writes into its own configuration. It's also possible to load the configuration from somewhere else, e.g. from a CPU.
@jurajchobot
@jurajchobot 9 ай бұрын
@@Markus421 Maybe you're right, but I'm confused. The FPGAs work by having a literal array of logical gates which they connect by physically changing connections in hardware through changing their states, which can usually work only about 100 thousand times before the connections in an array get one by one destroyed. They may store the configuration in RAM, but they have to physically etch them inside the physical hardware, otherwise the FPGA would work exactly the way it was previously programmed. The way I think it may work is if they already have all the connections mapped inside memory, like they scanned a real brain for example and then they recreate that brain inside the computer. This way they can work with the brain as long as they don't have to make changes to it. It also means you can test only about 100 thousand different brains before the computer disintegrates.
@Markus421
@Markus421 9 ай бұрын
@@jurajchobot The connections aren't etched (or otherwise destroyed) in the FPGA. If there is e.g. an input line connected to two output lines, a RAM bit in the configuration decides if the information goes to line 1 or 2. But both output lines are always physically connected to this input line. It's just the circuit that decides which one to use.
@JinKee
@JinKee 9 ай бұрын
In Australia another group is using "minibrains" made of real human stem-cell-derived neural tissue to play Pong.
@drsatan9617
@drsatan9617 9 ай бұрын
I hear they're teaching them to play Doom now
@Dan_Campbell
@Dan_Campbell 9 ай бұрын
Obviously, this will have practical applications. But the potential for helping us understand ourselves, is the biggest benefit. I like that Deep is slowing down the processing. I'm really curious to see if human-level AGI depends on the speed of the signals and/or processing. Is our type of consciousness speed-dependent?
@bojohannesen4352
@bojohannesen4352 9 ай бұрын
A shame that inventions are generally used to bolster the wallet of the top percentile rather than benefit humankind as a whole.
@SabineHossenfelder
@SabineHossenfelder 9 ай бұрын
Thank you from the entire team!
@tomholroyd7519
@tomholroyd7519 9 ай бұрын
I applaud the use of 3-LUT and remember to implement the full #RM3 implication #SMCC conjunction is left adjoint to implication
@joyl7842
@joyl7842 9 ай бұрын
This makes me wonder what the name for an actual computer comprised of biological tissue would be.
@billme372
@billme372 9 ай бұрын
The RAT (really awful tech)
@adrianwright8685
@adrianwright8685 9 ай бұрын
Homo sapiens?
@trnogger
@trnogger 9 ай бұрын
Brain.
@19951998kc
@19951998kc 9 ай бұрын
Hopefully not Homo Erectus. Reminds me of a type of porno movie i'd rather not watch.
@cmorris7104
@cmorris7104 9 ай бұрын
I usually think of FPGAs as very fast, so I’m not sure what you mean when you say they are slow electronics. I also understand that they are customizable, so the clock speed could be controlled too I guess.
@tomservo5007
@tomservo5007 9 ай бұрын
FPGAs are faster than software but slower than ASICs
@MrAstrojensen
@MrAstrojensen 9 ай бұрын
Well, I guess it's only a matter of time, before they build Deep Thought, so we can finally learn what life, the universe and everything is all about.
@markk3877
@markk3877 9 ай бұрын
Deep Thought was the second name of IBM's chess-playing computer, and I have no doubt the Deepxxx idiom has survived the decades at IBM; their researchers are really cool people.
@Kiran_Nath
@Kiran_Nath 9 ай бұрын
I'm currently studying at Western Sydney University and I have a professor whose colleagues are working on the project. He said it should be operational within a few months.
@donwolff6463
@donwolff6463 9 ай бұрын
My family is addicted to Sabine's Science News!!! Please never stop! We rely and depend upon you to help keep us informed about scientific/tech progress. Thank you for all you do!⚘️⚘️⚘️ ❤💖💜 👍😁👍 💚💗💙 ⚘️⚘️⚘️
@Drexistential
@Drexistential 9 ай бұрын
This is incredibly exciting. Will keep up with developments. Thank you as always ❤
@JonathanJollimore-w9v
@JonathanJollimore-w9v 9 ай бұрын
I wonder how the hardware simulates the plasticity of the human brain.
@tarumath319
@tarumath319 9 ай бұрын
FPGAs are physically reprogramable unlike standard circuits.
@kennethc2466
@kennethc2466 9 ай бұрын
It doesn't and it can't.
@holthuizenoemoet591
@holthuizenoemoet591 9 ай бұрын
FPGAs can be reprogrammed on the fly, so in this case to form new neural pathways. However, I'm really worried about our pursuit of neuromorphic tech... I watched too much Person of Interest as a teen.
@bort6414
@bort6414 9 ай бұрын
@@holthuizenoemoet591 Brain plasticity is far more complex than simply "can be reprogrammed". The brain can increase the interconnectivity between neurons, it can grow even more neurons, and it can also undergo a process called "myelination", which in a simple way can be thought of as the neurons "lubricating" themselves with an insulating layer of fat which increases the speed of passing signals and insulates the neuron from other neurons. Each of these physical attributes will have different effects on how information is processed that I do not think can be replicated with software alone.
@kennethc2466
@kennethc2466 9 ай бұрын
@@holthuizenoemoet591 You nether understand FPGA's, nor neuro-plasticity. "However i'm really worried about our pursuit of neuromorphic tech." Yes, as people who don't understand things can make up all kinds of irrational fears. Your conflation of FPGA's to neuro-plasticity is evidenced to run on fear and misunderstanding, instead of seeking knowledge. Just like Sabine's new content, that focuses on trending tripe, instead of her field of expertise. Your likes read like a bot for hire, as does your account.
@nopeno9130
@nopeno9130 8 ай бұрын
I'd like to hear more detail on the subject. I can see how lumping wires together might be closer to the physical brain than what we currently use, but it seems to me the key feature of the brain is its ability to re-wire its own connections in addition to being multiply connected in 3d space, and I'm not sure how the wires are supposed to accomplish that but it's very interesting to think about. It feels like we'd either need to make something very slow that can move and re-fuse its own wires with machinery, or make some kind of advance in materials science to find something that can mimic those properties... Or just use neurons. And yes, I can and will research these things for myself so I'm not begging for info, I just find it interesting to see Sabine's take on things.
@rremnar
@rremnar 9 ай бұрын
It doesn't matter how strange or advanced this organization is making their neuromorphic computer; it is the question on how they are going to use it, and whom they are going to empower.
@CHIEF_420
@CHIEF_420 9 ай бұрын
🙈⌚️
@Tiemen2023
@Tiemen2023 9 ай бұрын
Software and hardware are each other's counterparts. You can translate a digital circuit into a program, for example. But you can also translate every program into a digital circuit.
@RandyMoe
@RandyMoe 9 ай бұрын
Glad I am old
@QwertyNPC
@QwertyNPC 9 ай бұрын
And I'm worried I'm not, but glad I don't have children. Such wonderful times...
@JanoMladonicky
@JanoMladonicky 9 ай бұрын
Yes, but we will miss out on having robot girlfriends.
@brothermine2292
@brothermine2292 9 ай бұрын
What could possibly go wrong with robot girlfriends?
@SP-ny1fk
@SP-ny1fk 9 ай бұрын
It will mimic the conditioned human brain. But the human brain is capable of so much more than it's conditionings.
@themediawrangler
@themediawrangler 9 ай бұрын
I think of the current generation of AI as being a "Competency Simulation" instead of anything resembling intelligence. You can make some amazingly useful simulators if you give them enough compute power, data and algorithms, but you have to apply actual intelligence to know how far to trust them. These neuromorphic machines are different. I think they will take a looong time to develop (thank goodness), but if you want anything like "Artificial Intelligence" in a machine this is a step in the right (scary) direction. The bit that makes this less scary is that I am not sure this kind of solution will scale well, so it will hopefully just end up being a curiosity and not make the human race obsolete. What is much scarier is the idea of an Artificial Consumer, just a machine that can generate money (already happening), consume advertisements (trivial), and then spend the money (already happening). If this idea finds a way to scale, then our corporate masters may not care about us much anymore. 🤖➡💵➡🤖➡💵➡🤖➡💵➡🤖➡💵➡
@bsadewitz
@bsadewitz 9 ай бұрын
Well, you know, it's not like it's impossible to keep them in check. It is demonstrably possible. In this account you give, is there ever any production? Or is it just advertisements and spending and generating money?
@themediawrangler
@themediawrangler 9 ай бұрын
@@bsadewitz Thanks for your comment! They would need to be productive, yes. Humble beginnings already exist. For instance, there are thousands of monetized youtube channels that are entirely AI-generated content with little or no human input. I don't think there is any reason to expect that AI won't start showing up as legit workers on sites like fiver, etc where we will end up doing business with them and not even realizing that they are not people. I haven't researched it deeply, but I don't really see barriers to this as a business model. Of course, there would be real humans who set it in motion and extract cash from it. It is already a bit of a cottage industry, so I believe that it is only logical that it will continue scaling up. Many categories of human jobs (and especially gig-economy opportunities) are low-hanging fruit.
@bsadewitz
@bsadewitz 9 ай бұрын
@@themediawrangler Not only aren't there barriers, but the paradigm the sites themselves present, i.e. prompt/response, is that of generative AI. It stands to reason that the site operators themselves would just submit the jobs to an AI backend.
@bsadewitz
@bsadewitz 9 ай бұрын
@@themediawrangler Ultimately, why would the operator of the frontend even be a different company? Is that where you were going with this?
@themediawrangler
@themediawrangler 9 ай бұрын
@@bsadewitz Sort of. It is really just a statement that maybe we shouldn't be so proud about the relentless rise in human "productivity" statistics that politicians like to crow about. If one person can run a large corporation with nothing but machines for employees then is that really a productive person? Regardless of which, or how many, individual humans may be in control, corporations are driven by fiduciary responsibility to shareholders and will react to emerging markets; that always benefits the most efficient actors. Humans are not terribly efficient when compared with machines. Regular people are already struggling with job loss and other rapid economic changes. Scaling up a machine-centric economy could exacerbate the human issue in unpredictable ways. Thanks again for the discussion. It is nice when people respond with curiosity and genuine questions. Unfortunately, I haven't got any peer-reviewed study to cite, so anything else I have to say would probably be in the realm of science fiction.
@jameshaley2156
@jameshaley2156 9 ай бұрын
Well done video. Very informative and the humor was fantastic. Thank you .
@Sanquinity
@Sanquinity 9 ай бұрын
There's another big difference between AI and our brains. A lot of our decisions and thoughts are based on emotions. Emotions at least partially come from chemical reactions. Something an AI based on microchips instead of neurons can't do.
@tw8464
@tw8464 9 ай бұрын
It is doing thinking functions without emotions
@jesperjohansson6959
@jesperjohansson6959 9 ай бұрын
Chemicals are used to send signals we experience as emotions because of our physical, biological nature, I guess. I don't see why such signals couldn't be done with bits and bytes instead.
@y00t00b3r
@y00t00b3r 9 ай бұрын
5:05 ACSSES GRANDET
@georgelionon9050
@georgelionon9050 9 ай бұрын
Just imagine a machine as complex as a human brain, but a million times faster.. it would have the workload capacity of a small nation to do commercial tasks.. super scary, humans gonna be obsolete soon after.
@TimoNoko
@TimoNoko 9 ай бұрын
I just invented neuromorphic learning machine. It is a bucket with solder and metal bits and transistor chips. You shake the bucket and if it behaves somewhat better, you apply stronger current with the same pattern. Solder bits melt and new permanent neural connections are created.
@platinumforrest3467
@platinumforrest3467 9 ай бұрын
I know its been around for a while but I really like the short format one subject articles. Your articles are always very interesting and well presented. Thanks and keep going! Next time give regards to Elon....
@ecstatica23
@ecstatica23 9 ай бұрын
30 seconds into the video and I'm already questioning if this lady is AI generated.
@karlgoebeler1500
@karlgoebeler1500 9 ай бұрын
Loves "Bees" Always buzzing away. Perpetual motion locked into the distribution of energy across Maxwell in a bound state.
@aperinich
@aperinich 9 ай бұрын
Sabine I genuinely love your approach and humour.
@aperinich
@aperinich 9 ай бұрын
Have you left Facebook? I can no longer find your profile there. I really wanted to dialogue some topics with you.. Best regards in any case.
@johnnylego807
@johnnylego807 9 ай бұрын
Not afraid of AI, more so AGI. I'm more worried about whose hands it's in.
9 ай бұрын
Thank you so much. Greetings from Popayan, Colombia.
@BonesMcoy
@BonesMcoy 9 ай бұрын
Good video, Thank you Sabine!
@hanslepoeter5167
@hanslepoeter5167 9 ай бұрын
A few things about this: random behaviour is usually part of the exploration functionality of AI, and a parameter you can fiddle with. After all, when it has learned nothing yet, random is all an AI has. Once it has learned something, it can either use what it has learned or act randomly: experience vs. exploration, which is where this parameter comes in. Using FPGAs is probably not a new thing to the field. Chess computers have used FPGAs in the past, and maybe still do today. It has proven not to be easy to beat programs based on conventional computers. Although chess programs tend to rely on brute-force computing, which is something FPGAs can do extremely well (they're made for it), some flexibility is much harder to program into an FPGA. I remember a few projects that more or less failed, but I'm not up to date on that.
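A small Python sketch of the "random behaviour as a tunable exploration parameter" point above: an epsilon-greedy bandit agent. With epsilon = 1.0 it acts purely at random (it has learned nothing yet); lowering epsilon shifts it toward using what it has learned. The reward probabilities are made up for the example.
```python
import random

# Epsilon-greedy: the epsilon knob trades exploration (random actions)
# against exploitation (using learned reward estimates).

TRUE_REWARD_PROB = [0.2, 0.5, 0.8]      # unknown to the agent
estimates = [0.0, 0.0, 0.0]              # the agent's learned values
counts = [0, 0, 0]
epsilon = 0.1                            # the exploration parameter

random.seed(1)
for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)                         # explore: random action
    else:
        action = max(range(3), key=lambda a: estimates[a])   # exploit experience
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running mean

print(estimates)   # the estimate for the best arm should end up close to 0.8
```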
@laustinspeiss
@laustinspeiss 9 ай бұрын
ABSOLUTELY. Thirty years ago, I started on my own 'AI' journey, and quickly abandoned it to pursue my own models, which I named SI, Synthetic Intelligence: self-modifying nodes of self-awareness. I demonstrated a proof of concept around 1992, and a viable application and data architecture around 2092. The earlier tests had one user refusing to continue testing because "there was a ghost in the machine". At the second level of demo, the audience refused to believe it was possible, despite my demonstration on the desk in front of them. The secret is held in how the incoming data is parsed and stored, along with a simple recursive data schema that could accommodate anything I could express, and run on a desktop. Larger models used a peering/broker layer for infinitely complex data sets. I stopped developing and offering it when I saw the ONLY interest was greed, and the power of deriving information from any clutter was literally too dangerous in the wrong hands.
@CYBERLink-ph8vl
@CYBERLink-ph8vl 9 ай бұрын
The computer will not mimic the human brain, but it will simulate it. It will be something different from the human brain and consciousness, like how the flight of an airplane and the flight of birds are different things.
@madtscientist8853
@madtscientist8853 9 ай бұрын
The brain runs on pulse networking: one pulse output goes to MANY pulse inputs. The wave is more continuous, and you can send more information more quickly through a pulse than you can through direct or alternating current.
@sankalpkarthi8309
@sankalpkarthi8309 9 ай бұрын
good move trying to understand the cause of efficiency. looking forward to the future discoveries.. have fun.
@karlgoebeler1500
@karlgoebeler1500 9 ай бұрын
Always "Seen" on the surface of the "Pool". Can "manipulate" whatever it sees. Via the coupling described by Wolfgang Pauli. Items are seen as a gravitational informetric pattern. Individual items can be separated by a subtractive process.
@monnoo8221
@monnoo8221 9 ай бұрын
(1) The brain does not run an algorithm. (2) The main difference between the currently hyped ANNs and the brain is that ANNs are represented as matrix algorithms, hence they run on GPUs. (3) Deep-learning ANNs are not capable of autonomous abstraction and generalization; they are basically nothing more than a database indexing machine. (5) The role of randomness becomes completely clear when you study Kohonen SOMs and their abstraction, and the random graph transformation... yeah, today you get funding for an FPGA computer; quite precisely 20 years ago I did not...
@pizzarickk333
@pizzarickk333 9 ай бұрын
Mind blowing. This motivated me to seriously study my hardware classes as an electrical engineer.
@Psychx_
@Psychx_ 9 ай бұрын
The main reasons the brain is so efficient are that the communication between neurons isn't binary, and that processing and storing information are so tightly coupled. There are so many neurotransmitters, and every one of them can affect the cells in different ways: altering connectivity, increasing or decreasing the chance of an action potential, changing which transmitters are released into the synaptic cleft as a response to an incoming signal or its absence, etc. A single nerve impulse can easily have 1 out of 10 or more different meanings, whereas the computer only knows 2 states (0 and 1). Then there's a bunch of emergent behaviour slapped on top, with the frequency and duration of a signal also encoding information, as do the internal states of the neurons, as well as their connectivity patterns.
@kneekoo
@kneekoo 9 ай бұрын
5:05 "Acsses grandet" 🤣 Stock videos can be really funny sometimes.
@brittchristy9508
@brittchristy9508 9 ай бұрын
Hi Sabine! I love your work,but I’m a spacecraft FPGA software engineer with a computer engineering degree, and one reason for using FPGAs is actually vastly increased speed compared to a processor. Processors are incredibly slow compared to an analog circuit or ASIC. But analog circuits take forever to design, and can’t be updated in space as easily, at least not without some risk. So when I want to build something that runs at hardware speeds, with high levels of determinism, and write it in code, I choose the FPGA. FPGAs are magic - you describe the hardware you want with a hardware description language (kind of like a programming language), and it assembles itself into that circuit. To be fair, I suppose you could make the FPGA’s clock speed (how fast it ticks) extremely slow if you wanted to slow it down… but I wouldn’t think you’d want that. However, I’m very interested to learn more! I really enjoy your series, feel free to contact me if you’d like input for a correction! -Britt
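To picture the kind of configurable logic the comment above describes, here is a conceptual Python model of an FPGA's basic building block, a k-input lookup table (LUT): the configuration bits fill a truth table, and "describing hardware" amounts to choosing those bits. This is only an illustration of the idea, not a model of any vendor's actual architecture or toolchain.
```python
from itertools import product

class LUT3:
    """A 3-input LUT: 8 configuration bits pick any Boolean function of 3 inputs."""
    def __init__(self, config_bits):
        assert len(config_bits) == 8
        self.table = list(config_bits)

    def __call__(self, a, b, c):
        index = (a << 2) | (b << 1) | c   # the inputs select one truth-table entry
        return self.table[index]

# "Reprogramming" the same physical LUT into two different circuits:
xor3 = LUT3([a ^ b ^ c for a, b, c in product((0, 1), repeat=3)])
majority = LUT3([int(a + b + c >= 2) for a, b, c in product((0, 1), repeat=3)])

print(xor3(1, 0, 1), majority(1, 0, 1))   # prints: 0 1
```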
@kennethferland5579
@kennethferland5579 9 ай бұрын
Previous research with FPGAs has found that they end up being incredibly sensitive to the environmental conditions under which they train. Minute thermal expansion, manufacturing differences below the level of defects, etc. all end up producing noise which the learned network conforms to and which then NEEDS to be present for the learned behavior to be maintained. The problem is that real neurons are probably doing exactly the same thing and are thus full of internal states which are necessary for them to function and can't be ignored. That's how the seemingly low computation numbers of the brain do so much: we're vastly underestimating the computations by calling 1 neuron 1 computation, when it's likely to be thousands, and then add in the 90% of non-neuron cells in the brain, which likely also hold information.
@TemporalAberration
@TemporalAberration 9 ай бұрын
This is an interesting idea and a good approach to try, but they are going to have some major hurdles to overcome before it becomes anything to worry about. Motivation is a huge one: bio brains have built-in motivations (eat, survive, reproduce) that give rise to secondary motivations in higher organisms, and also to many aspects of identity. It's hard to think of how to really motivate it, since it has no body or meaningful sensory inputs, beyond just forcing it to respond to inputs it has no real way to contextualize. In the future, if it were given a body and sensors, whether real or virtual, I could see it developing more, depending on how long it takes to get the "brain" to act in any kind of coherent manner at all.
@phlogistanjones2722
@phlogistanjones2722 9 ай бұрын
Thank you for the video, Sabine. Peaceful skies.
@41alone
@41alone 9 ай бұрын
Thank you Sabine
@MegaJohny777
@MegaJohny777 9 ай бұрын
Sabine: "All good things are called something with 'Deep-' " Me: DEEPTHROAT!
@kneekoo
@kneekoo 9 ай бұрын
2:52 That's also a nod to Little Nicky. 😆
@kakashi0kyuubi
@kakashi0kyuubi 9 ай бұрын
I love how I could find the field of study I want to work with by your video. Thank you!!!
@MattHudsonAtx
@MattHudsonAtx 9 ай бұрын
+1 for Nautilus, very good publication
@robertjohnsontaylor3187
@robertjohnsontaylor3187 3 ай бұрын
I'm beginning to think it's going to be like Kryten [a robot] in the TV series "Red Dwarf", or the paranoid android in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, who keeps using the phrase "brain the size of a planet and they keep asking me to make the tea".
@Sancarn
@Sancarn 9 ай бұрын
Great to see a good explanation of neuromorphic in edutainment
@trevorgwelch7412
@trevorgwelch7412 9 ай бұрын
" One can search the brain with the world's most powerful microscope and never discover the mind . One can search the skies with the world's most powerful telescope and never discover heaven . " Author Unknown
@tombrunila2695
@tombrunila2695 9 ай бұрын
The human brain re-wires itself constantly, it changes when you learn something new, there will be contacts between the brain cells. Here in YT you can find videos by Manfred Spitzer, in both english and german.