Awesome Lecture(r) ! Would have loved to see the rest
@HoboGardenerBen 11 days ago
Just learned about this after looking into brain organoid computing. So glad that there is a lot of quality information available about this exciting technology. I'm afraid and excited at the same time. Quite the time to be alive :)
@pygmalionsrobot1896 1 year ago
At 52:57 there is some discussion of attractor dynamics ... this is pure gold. Thank you for mentioning this. I'm not an expert on neurons, but I know a bit about attractors ... very interesting subject! Great discussion :)
@junaidrahman7789 1 year ago
Great session! Looking forward to the second lecture
@HiPEAC 1 year ago
Glad you enjoyed it! Please note that videos are only available for the first lecture. You can learn more about Yulia's work here: sandamirskaya.eu/ The next ACACES summer school will take place in July 2024 - further information here: www.hipeac.net/events/#/acaces/
@judhajitdutta6591 1 year ago
@@HiPEAC My request: please make some provision to access these videos - all five in the series. Almost all the lectures are best in class.
@HiPEAC 1 year ago
Hi @@judhajitdutta6591, unfortunately the other classes weren't filmed - we decided just to film the first lecture for reasons including budget. You might want to consider applying for the ACACES summer school in future - it's a great experience and grants are available
@HiPEAC 1 year ago
@@judhajitdutta6591 Also, bear in mind that the lectures from ACACES 2020 are (almost) all available, as this was an online version of the summer school where the lectures were delivered by Zoom. There are different lecturers and topics from the ACACES 2023 summer school, but still with ACACES levels of excellence. You can find them here: bit.ly/ACACES20_videos
@mjollnirboy 10 months ago
I was searching for the third lecture and then realised it's not available. Now it's confirmed 😢
@xrysf03 6 months ago
Both analog encoding of signals (the classic continuous ANNs: the perceptron and its offspring) and the spiking style have their merits. To use an analogy from electronics, spiking has a strong element of "Schmitt trigger" to it: threshold detection and suppression of "noise" (insignificant fluctuations in the analog input). In a way, spiking distills important events out of a sea of noisy continuous signals and helps prevent the noise from accumulating along the signal path. It is a filtering mechanism. The flip side of the same coin is that you lose "resolution" in the transferred signal and introduce a processing delay. Different "applications" may favour one optimization or the other.

Knowing a bit of control systems theory, the debate about "how to encode a continuous scale using spiking signals (pulses) and make it quick" has some obvious answers. If you are limited by the maximum firing rate of a spiking neuron, your option is just to use a continuous analog value. That is "instantaneous". Exactly what response time "instantaneous" means is subject to further analysis of your transmitter and receiver (two neurons?): what the propagation time is from some "inner current state" of the TX party to its physical output voltage (down to some precision spec), and what the propagation time is in the RX party from the input voltage to its "relevant inner current state". With a continuous analog signal, other circumstances being equal, it should be easier/faster/cheaper to work with the analog value directly than to use pulses (spikes) and either wait for a spike before you act or count/integrate them over time.

Sandamirskaya actually mentioned earlier in her lecture that spikes are neatly transferred as digital events. If the spikes are relatively rare, and you have a digital communication network of some sort with nearly infinite bandwidth (compared to the spike sequences), it's a no-brainer to transport spikes as digital messages. It can be a very flexible arrangement of topology, minimizing fixed physical wiring. But if you start to struggle with the spike rates, i.e. your communication bandwidth becomes clogged or message-forwarding capacity becomes a bottleneck, there are possible savings from reducing the number of individual events to transport - such as transporting fast continuous signals as streams of analog values rather than rate-encoding them as pulse trains. Each analog byte can carry information equivalent to a couple dozen spikes (or timeslots in "time to pulse" precise timing). Still bandwidth-intensive, and not so easy to route across some topology... Luckily, fast closed-loop control tends to be characteristic of relatively simple and bounded tasks (motor functions / motion control), so the fast signal paths potentially need not reach very far or be overly flexible in topology. See the example with the single "looming detector neuron" in the grasshopper brain.

To me it all really boils down to: we need more research into the "architecture" of our ANNs. A biological brain consists of various specialized "centres" / function blocks / layers. One size of neuron definitely does not fit all purposes and tasks. And from a more macro perspective, the block schematic of an "autonomous brain" should resemble something like a car engine or a modern computer (as a very superficial analogy), rather than a flat babble predictor/generator.

Those analog schematics are beautiful. If anyone is interested in getting some insight into what these do, check out the free PDF book by the late Hans Camenzind, Designing Analog Chips.
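The Schmitt-trigger and rate-coding points above can be made concrete with a toy sketch (my own illustration, not a model from the lecture): a leaky integrate-and-fire encoder turns an analog signal into spikes, small noise never reaches threshold so it is filtered out, and the receiver recovers the analog value only as a spike rate over a window - i.e. with reduced resolution and a delay. All constants here are arbitrary choices for the demo.

```python
import numpy as np

def lif_encode(signal, dt=1e-3, tau=0.02, threshold=1.0):
    """Toy leaky integrate-and-fire encoder: analog input -> spike train.

    The membrane potential leaks toward zero while integrating the input;
    crossing `threshold` emits a spike and resets. Sub-threshold noise
    never fires - a crude Schmitt-trigger-like filter.
    """
    v = 0.0
    spikes = np.zeros_like(signal)
    for i, x in enumerate(signal):
        v += dt * (-v / tau + x)   # leaky integration
        if v >= threshold:
            spikes[i] = 1.0        # emit a spike ...
            v = 0.0                # ... and reset
    return spikes

t = np.arange(0.0, 1.0, 1e-3)
clean = 80.0 * (t > 0.5)                       # step input at t = 0.5 s
noisy = clean + 5.0 * np.random.randn(t.size)  # small noise, filtered out
spikes = lif_encode(noisy)

# The receiver can only read the value back as a rate over a window:
# resolution and latency are traded for noise immunity.
rate_before = spikes[:500].sum() / 0.5  # ~0 Hz: noise alone never fires
rate_after = spikes[500:].sum() / 0.5   # tens of Hz: encodes the step
```

The integration constant `tau` sets the trade-off directly: a longer `tau` averages out more noise but lengthens the delay before the first spike reports the step.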
@Basicguy1798 1 month ago
Isn't there a playlist on neuromorphic computing?
@HiPEAC 1 month ago
Not yet (on HiPEAC TV at least) - we've added neuromorphic videos to the 'Machine learning and artificial intelligence' playlist kzbin.info/aero/PLUU79oBORyMiL8Mc2oMbH9wL1-7S9Fx53, which is a bit clumsy. Shall we create a playlist just for neuromorphic?
By the way, if you're interested in neuromorphic research, check out publications by the speakers at the NEUROCOM workshop at HiPEAC 2025: www.hipeac.net/2025/barcelona/#/program/sessions/8160/ also research such as the NimbleAI project: www.hipeac.net/magazine/7167.pdf#page=22
1:13:04 - so is a sense of causality indicated in the brain by synapse magnitude? And is that then how the brain interprets entropy, and therefore time?
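One common reading of "causality via synapse magnitude" is spike-timing-dependent plasticity (STDP): a synapse is strengthened when the presynaptic spike precedes the postsynaptic one (pre "predicted" post) and weakened otherwise. A toy pairwise sketch follows - my own illustration of the textbook rule, not necessarily the model discussed in the lecture; the amplitudes and time constant are arbitrary.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.10, a_minus=0.12, tau=0.02):
    """Toy pairwise STDP rule: weight change for one pre/post spike pair.

    Pre before post (causal order) -> potentiation, decaying with the
    time gap; post before pre (acausal order) -> depression.
    """
    dt = t_post - t_pre
    if dt > 0:                              # pre before post: strengthen
        return a_plus * math.exp(-dt / tau)
    if dt < 0:                              # post before pre: weaken
        return -a_minus * math.exp(dt / tau)
    return 0.0

dw_causal = stdp_dw(0.000, 0.005)   # pre fires 5 ms before post: dw > 0
dw_acausal = stdp_dw(0.005, 0.000)  # pre fires 5 ms after post:  dw < 0
```

So on this view the synapse magnitude ends up encoding a statistical "pre tends to cause post" relationship, which is at least suggestive for the question about time; whether the brain interprets entropy that way is a much bigger claim.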
@Dr.Z.Moravcik-inventor-of-AGI 7 months ago
A neuron is not a brain. It is more like a transistor - a hardware element that brain hardware is built upon. The brain has no idea of time. Try to sleep and count how much time has passed afterwards.
@user-bx8ce9be2m 7 months ago
Fantastic lecture. I would pay to access the entire course.
@HiPEAC 7 months ago
Thanks for your feedback - Yulia's ACACES 2023 course was fantastic. Unfortunately, to keep costs down, only the first lecture of each course was filmed last year. In future we are considering filming full courses, but it would only be a selection of the full programme - stay tuned to HiPEAC TV for more information. And of course you can get access to ACACES lectures by attending the summer school itself, highly recommended: www.hipeac.net/events/#/acaces/
@TheApoorvagni 4 months ago
@@HiPEAC It would have made sense to at least have some kind of recording. The traction it gets would have offset the cost of recording any day. It's just unfortunate that we don't have the recordings, since July 2024 has already passed and I can't go back in time to attend the summer course now.
@HiPEAC 4 months ago
@@TheApoorvagni completely understand your frustration. Last year was the first time we tried recording live sessions from the summer school, and we recorded as many courses as we could, but just the first day. For the 2024 edition of the summer school, we recorded fewer courses but we recorded every session of these courses, and these videos will be available soon. Please bear in mind that HiPEAC is a publicly funded project with a limited budget, and it is not always possible to do everything - but we do take your feedback into account.
@anearthian894 2 months ago
Fields only get heated up after they get exhausted. Look at this: the potential here is so great and no one watches, meanwhile there are millions of views on some deep learning algorithm. The next AI breakthrough will most probably come through a hybrid architecture containing neuromorphic systems.
@HoboGardenerBen 11 days ago
Yup. I wonder about photonic computing as well. I doubt that staying within the CMOS structure will be the best way in the end - just a necessary step along the way.
@981van 4 months ago
Where is the rest?!
@HiPEAC 4 months ago
Only the first lecture was recorded for the 2023 summer school. A selection of full courses from the 2024 summer school will be made available in due course, although these will be by different lecturers and on different topics.
@nazeeraharmanza 3 months ago
lecture 2?
@HiPEAC 3 months ago
See discussion above - for ACACES 2023, just the first lecture of each course was recorded.
@delgaldo2 7 months ago
I think the percentages are wrong. The brain consumes around 20% of the body's power while making up only around 2% of its weight.
@Dr.Z.Moravcik-inventor-of-AGI 7 months ago
Third comment. 40:07 - how can you model a neuron when you don't know what it is doing? It is like modelling a car when you don't know what a motor is. A typical example for this Western civilization controlled by Americans. 😁
@zdenekburian1366 4 days ago
This is a general problem across the sciences today: idealistically fudging with magical math formulas that are disconnected from mechanical materiality, and continuously feeding them with new constants, zeroes, virtual particles, statistics, and quantum scams to make them match a reality they cannot understand.