How Swarms Solve Impossible Problems

41,803 views

b001

1 day ago

To further enhance your computer science knowledge, go to brilliant.org/... to start your 30-day free trial and get 20% off an annual premium subscription.
⭐ Join my Patreon: / b001io
💬 Discord: / discord
🐦 Follow me on Twitter: / b001io
🔗 More links: linktr.ee/b001io

Comments: 68
@matveyshishov 2 months ago
Hey, I've been working on this for several years now, one of my favorite research areas, which I hope to see applied in practice really soon. Glad to see your interest!
@TheDevildragon95 2 months ago
Hi, are you working on the problem itself or on swarm algorithms?
@jeffreychandler8418 1 month ago
I'm curious about your general experience with and takes on these.
@davidetl8241 2 months ago
crazy good content and animations. thank you
@BohonChina 2 months ago
This is the so-called ant colony optimization (ACO) or particle swarm optimization (PSO) from computational intelligence. Artificial neural networks and deep learning also belong to computational intelligence, but ACO and PSO are not popular anymore.
@perspective2209 2 months ago
It's a really interesting topic, and another great example of swarm intelligence is slime mold, which was used to model an efficient layout for Tokyo's rail network.
@mrguiltyfool 2 months ago
Is there any reason why it is not popular anymore?
@w花b 2 months ago
@mrguiltyfool Probably like most techniques that aren't popular in their respective fields
@Entropy67 2 months ago
@mrguiltyfool It's worse than other methods; we can approximate basically any behaviour with enough data using more modern (and relatively efficient) techniques (i.e. backpropagation). This one relies on the granularity of the swarm (inefficient). It's like using apples to explain addition: we can do it without the apples now, lol. Well, that's somewhat of an unfair comparison; there are still use cases for it (lack of data, or when a swarm is a natural abstraction for the problem), but it's not popular.
@hypophalangial 2 months ago
Training neural networks involves solving an optimization problem. Early neural network researchers chose gradient descent to solve this optimization problem, and everyone since has continued to use gradient descent because it's very simple and it gets the job done. The optimization problem in neural networks has many local minima that are all roughly as good as each other, so there hasn't been any reason to adopt more complicated or robust optimization techniques like swarms. Neural networks don't need to find the best solution; any local minimum will do.
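For anyone who wants a concrete picture of the "any local minimum will do" point, here is a minimal gradient-descent sketch in Python; the toy loss function, starting point, and learning rate are made up for illustration and are not from the video:

```python
import numpy as np

def loss(w):
    # Toy non-convex loss: many local minima of roughly similar depth
    return np.sin(3 * w) + 0.1 * w ** 2

def grad(w, eps=1e-6):
    # Central-difference numerical gradient; fine for a 1-D illustration
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

w = 4.0      # arbitrary starting point
lr = 0.05    # learning rate
for _ in range(500):
    w -= lr * grad(w)   # plain gradient-descent step

print(f"settled at w = {w:.3f}, loss(w) = {loss(w):.3f}")  # some local minimum
```

Starting from a different w generally lands in a different local minimum, which is exactly why plain gradient descent is "good enough" here.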
@JJ-fr2ki 2 months ago
Excellent. As an old philosopher of this business, I can say you managed to dodge every common conceptual error in a field that swarms with misunderstandings.
@juanmacias5922 2 months ago
Such an awesome concept, thanks for sharing!
@Sjoerd-gk3wr 2 months ago
Great video. (Kinda just commenting to boost your videos in the algorithm, because these videos deserve more views)
@hamadrehman853 1 month ago
Without a doubt one of the best channels on YouTube. This is premium-quality content.
@LeonardNemoy 1 month ago
are you kidding me? I was hoping for some real world examples, instead I got a tedious logical explanation with no soul. SO LAME. THANK YOU FOR YOUR HARD WORK.
@anon_y_mousse 2 months ago
That was a much more interesting way to relate the problem than I've seen done before. I also like the nature shots as a lead up. Really breaks the tedium of an otherwise boring subject.
@a3r797 2 months ago
How does this only have 2000 views? This is such a high quality video.
@demonslime 1 month ago
This sounds like doing gradient descent multiple times just with extra steps
@airbound1779 1 month ago
When I sign up to Brilliant I’ll use your link, you’ve earned it
@kingki1953 1 month ago
You explain better than my lecturer. Thanks 🎉
@wanfuse 1 month ago
The reason it is so interesting is not that it is better than Adam, GD, etc.; it is interesting because it massively parallelizes the search with "low energy" expenditure. There are much more efficient algorithms for high-dimensional spaces though, far better than Adam or GD.
@AG-ur1lj 1 month ago
Really hoping this video fully addresses its title, cuz I spent all the time learning to implement this sh** and I’m struggling to find applications other than bragging about how 1337 I am
@Djellowman 1 month ago
The inertia + memory vectors make no sense. Not only would they cancel each other out, they also won't make an agent revisit the original area. They just make agents slower to converge on the global best position.
@babsNumber2 1 month ago
He mentions that those vectors can have different weights. So you can tweak the algorithm to favor either the inertia, best social score or the memory. So there are versions of the algorithm where the inertia and memory vectors don't cancel out.
@blu_skyu 1 month ago
They only cancel out on the first step away from the personal best. If the particle has travelled away since then, the inertia and memory vectors can have different angles too.
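For readers following this thread, the standard particle swarm velocity update being discussed looks roughly like the sketch below; the inertia weight and the cognitive/social coefficients (w, c1, c2) are typical textbook defaults, not values taken from the video:

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One position/velocity update for a single particle.

    x, v         : current position and velocity (NumPy arrays)
    pbest, gbest : this particle's best-seen position and the swarm's best
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    inertia = w * v                   # keep moving in the current direction
    memory  = c1 * r1 * (pbest - x)   # pull back toward the particle's own best
    social  = c2 * r2 * (gbest - x)   # pull toward the swarm's best-known point
    v_new = inertia + memory + social
    return x + v_new, v_new
```

As the replies above note, the inertia and memory terms only oppose each other exactly when the particle has just moved in a straight line away from its personal best; in general they point in different directions, and tuning w, c1, and c2 changes which pull dominates.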
@Titouan_Jaussan 2 months ago
Still waiting to know what color theme he is using, it looks incredible
@b001 2 months ago
Synthwave’84, no glow
@Titouan_Jaussan 2 months ago
@b001 Thank you so much!!! And congrats on the video btw, such a great topic and great animations, keep going!!
@talkingbirb2808 1 month ago
Somehow it reminded me of grid search, random search and Bayesian search
@manfyegoh 2 months ago
Sounds somewhat similar to a KNN calculation.
@DanielPham831 1 month ago
Hi, what did you use to make the video, or the animations in it?
@popel_ 2 months ago
BOOL FINALLY DROPPED!
@sabbirhossan3499 2 months ago
Great video, it makes hard things simple!
@StevenHokins 2 months ago
Cool video, thank you ❤
@4thpdespanolo 1 month ago
Swarm optimization is unfortunately not feasible for very large search spaces
@aracoixo3288 1 month ago
Swarm School
@PowerGumby 1 month ago
can swarms solve the problem of odd perfect numbers? (OPNs)
@andrestorres7343 1 month ago
How does this method compare to something like a genetic algorithm? Under what assumptions would this outperform (converge faster than) a genetic algorithm?
@Yours--Truly 1 month ago
Being the closest to the green squares in the given examples also means being the farthest away from them. Was that intentional? 😂
@roguelegend4945 1 month ago
oh i get it, pascals triangle numbers represent 2/3 = 6666.... but it also represents 1/2 of a circumference, but it also represents a whole number = one= 1 universe... yeah i know this is beyond crazy math scientists, but it is accurate...
@luke.perkin.inventor 1 month ago
Does this really scale? Rather than 3 warehouses in 2D, what if it was W warehouses in N dimensions? Like 100 in 100? It seems like there are a lot of arbitrary choices in the fitness function, or is there theoretical grounding?
@jeffreychandler8418 1 month ago
From what I've gathered from my limited experience, these swarm algorithms can be amazing for complex optimization problems (so rather than finding just the minimum, it's finding minimums, maximums, midpoints, etc.); however, their scaling is pretty poor. Backpropagation is just insanely efficient, while this basically calculates pairwise distances, then uses those to create vectors, then has a memory component, plus a global memory component, for multiple points. The multiples multiply quickly. As for the fitness function: you must define the actual optimization more explicitly than in most ML applications, which makes it theory-based. Weighting the vectors is similar to the learning rate in gradient descent. There's no one-size-fits-all answer, but there are rules of thumb that are generally good.
@luke.perkin.inventor 1 month ago
@jeffreychandler8418 Thanks for explaining. I looked a little more into it too, and even the trade-offs involved in nearest-neighbour search are quite nuanced: figuring out, for a given problem, how much to invest in precomputing a graph/tree/reduced-dimensionality approximation first, versus just doing N comparisons every step for each particle.
@jeffreychandler8418 1 month ago
@luke.perkin.inventor That is the fun part of optimization; it is an endless rabbit hole of odd nuances. For example, I've worked on computing nearest neighbors for prediction and used a lot of little tricks to avoid expensive pairwise calculations, sorts, etc.
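As a rough illustration of the trade-off mentioned in this thread (brute-force distance checks versus precomputing a spatial index), here is a small sketch; it assumes SciPy is available, and the point counts and dimensionality are arbitrary:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((5000, 10))    # e.g. 5000 particles in a 10-D search space
queries = rng.random((100, 10))    # positions we want nearest neighbours for

# Option 1: brute force -- N distance computations per query, no setup cost
dists = np.linalg.norm(points[None, :, :] - queries[:, None, :], axis=-1)
nearest_brute = dists.argmin(axis=1)

# Option 2: build a KD-tree once, then answer queries against it
# (the advantage shrinks as dimensionality grows)
tree = cKDTree(points)
_, nearest_tree = tree.query(queries, k=1)

assert np.array_equal(nearest_brute, nearest_tree)
```

Which option wins depends on how many queries you make per rebuild of the index, which is exactly the nuance being discussed above.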
@rajeshpoddar5763 2 months ago
What background music did you use?
@42svb58 1 month ago
The logic is flawed by oversimplification! The principle does not consider the evolutionary and biological factors shaping this behavior. We still do not understand this behavior well enough to apply policy optimization toward AI/ML.
@hackerbrinelam5381 2 months ago
5:30 - 5:48 I wondered: could this run on a neural network?
@lancemarchetti8673 21 days ago
Wow
@silpheedTandy 2 months ago
Please make the background music at least half as loud in future videos, or even quieter. I want to watch the video, but it's too draining to try to hear (and understand/process) your narration from underneath that background music, so I quit watching.
@b001 2 months ago
Noted. After all these years I’m still learning and struggling to find the right audio levels, and video ambience. Thanks for the feedback!
@iamtraditi4075 2 months ago
Fwiw, I personally didn’t mind this level of background audio
@user-bf3uy5ve9k 2 months ago
@iamtraditi4075 I did find it quite distracting, probably not as much as OP though.
@lukurra 1 month ago
@b001 A swarm of watchers nudging you towards an answer! I suggest looking into adding a bit of sidechain compression. It would make the music move aside in response to your voice, giving the narration more presence and focus while leaving the ambiance untouched.
@rosettaexchangeengine141 1 month ago
I agree. However, it is a difficult problem for the creator since it is so dependent on the listener's ears. It is bizarre that after 19 years of YouTube they still don't allow uploading multiple audio tracks so the listener can adjust the background music themselves.
@DemetriusSteans 1 month ago
Enough AIs and you can generate a realistic chunk of a three-dimensional object in a simulation.
@Extner4 2 months ago
first!
@TheMaxKids 2 months ago
second!