Prof LARISA SOLDATOVA - Automating Science

  4,060 views

Machine Learning Street Talk

1 day ago

Support us! / mlst
MLST Discord: / discord
Interview with Larisa starts at [00:16:48]
Prof. Larisa Soldatova joined Goldsmiths in November 2017 as director of the online Masters in Data Science programme.
Larisa is an internationally recognized expert in AI, particularly in discovery science, reasoning, knowledge representation, and semantic technologies. At Goldsmiths she leads the EPSRC-funded ACTION on Cancer project, which aims to develop an AI system to assist in recommending personalized cancer treatments.
Larisa is also working on the Robot Scientists project, which investigates what processes of scientific discovery can be automated and how robotic and human scientists can work together.
The Robot Scientist Adam was the first machine to make an autonomous scientific discovery.
Larisa is involved in a number of international projects on the development of semantic standards, e.g. the Machine Learning Schema, the Robotics Task Ontology Standard, and the EXACT standard for laboratory protocols.
www.gold.ac.uk/computing/peop...
Filmed 31st May at Creative Machine, Oxford University - www.creativemachine.io/
TOC:
[00:00:00] Automating scientific progress
[00:02:45] Knowledge in AI systems
[00:08:51] Abduction in science
[00:14:31] Human cyborgs
[00:16:48] Main Interview start
[00:21:19] Knowledge and science
[00:32:25] CYC project
[00:35:53] Neurosymbolic

Comments: 45
@itslogannye 6 months ago
Hey y’all, just wanted to say I came across MLST a few months back and it’s quickly become one of my favorite channels. Love the combination of technical and philosophical depth that you guys explore, and the guests are incredible. Much love from Carnegie Mellon
@ebenezeragbozo 6 months ago
I'm from Ghana and did my PhD in Russia. A very tough process. One of the essential things every postgrad needed to pass was (and is) the History and Philosophy of Science exam. It helps you appreciate whichever program you're focusing on.
@CreativeMachine-px9gi 6 months ago
Great presentation of Larisa's wonderful work (and of her collaborators); many important insights provided; thank you for this combination of interview, talk and background.
@retrofuturism 6 months ago
We should merge Eureka's cutting-edge reward functions and simulations with Robot Scientists' autonomous capabilities to supercharge scientific research.
@ArchonExMachina 6 months ago
Fascinating, loved to listen.
@luke2642 6 months ago
I really enjoyed this too; you're on a roll with great guests, Tim! It irks me that you say a couple of times that the purpose of science is explanatory, without even a nod to predictive! I think the two are inseparable; one without the other is deeply unsatisfactory! Plus, combining them would work sympathetically with your preference for more objective, less anthropocentric measures. I greatly enjoyed Prof. Larisa schooling you on what a chair is... She clearly builds highly explainable systems that process clear, crisp, clean logic, modulated for uncertain environments, making falsifiable predictions. The perfect scientist and the perfect engineer: if you can build it, you understand it!
@MachineLearningStreetTalk 6 months ago
Thank you Luke!!
@stretch8390 6 months ago
It's possibly a bit semantic, but I think explanatory implies predictive but not the other way around. What is it about 'predictive' you think is distinct from 'explanatory' in the sciences?
@luke2642 6 months ago
@@stretch8390 I think it's semantically important, although they are so intimately linked! It's the difference between the model and its output? The explanatory bit is the understanding, the logic, the constituents, their relationships... and the model makes falsifiable predictions, given a specific input scenario. It's the difference between explaining that gravity makes the tides, and the Fourier analysis predicting tomorrow's tides. Or perhaps an explanation is more a human thing, the simplest causal representation of some observations, but predictive gives the new observations. Or maybe I just like to squeeze falsifiability in, everywhere I can? An explanation is only as good as its predictions!
@stretch8390 6 months ago
Fascinating! Didn't know anyone was working on this topic. Frankly, I did not know this was a topic, but am loving it.
@u2b83 6 months ago
Amazing intro and super useful summary of the priors and their role in shaping the activities in the field through time. And finally, a nice compact distinction between data and knowledge, where knowledge is the set of functions that map groups of data to other groups of data within the data set.
@fuzzmeister 6 months ago
Fantastic 👏👏👏👏👏 Thank you 😊
@justtoleavecomments3755 2 months ago
Why isn't this episode on Spotify?
@richardnunziata3221 6 months ago
It seems to me that auto-coding can benefit from using a standard mathematical algorithm language for generating pre-code, which would raise different issues than systems that produce the target language directly.
@CodexPermutatio 6 months ago
Interesting guest. Automating scientific progress, even partially, is the most realistic way in which the so-called "Singularity" could be reached.
@NicholasWilliams-kd3eb 6 months ago
She's 100% right; great conversation. The proportional relationships between values in different parsing contexts ((D*bias)/(J*bias)/(P*bias)=?) or ((J*bias)/(M*bias)/(D*bias)=?) (parallel multi-direction parsing space of relational values (proportions /over proportions /over proportions) of parsing capability that conserve objective function, cross-referential predictive power) relativities to concurrent optimization trajectory. Mutation instillation degree = (frequency of reward activation), while (priority decreases or increases to push for strategic diffusion based on the equilibrium reward functions that receive the least attention).
@u2b83 6 months ago
Abduction might just be coming up with high-level priors for the given problem space. "Ordinary science" / incremental progress then follows, trying to sample these distributions. A paradigm shift is then the realization that you were sampling the wrong solution distribution lol.
@Californiansurfer 6 months ago
❤ I work in the automation industry on PLCs. We have been using networking, and each mechanical device has an IP address; Cat-5 wire changed everything. We have so many applications, but what is the goal? We are looking for the process, and automation is what I call a loop. A loop is what we humans do. We wake up, we eat, we work, we shit, and we do it again and again; that is the loop. The heuristics is the hands-on experience. I just learned of Enrico Fermi. He was the hands-on guy who discovered fission. I love what I do, I get my hands dirty, but I love it when they call me Fermi at work. Chicano surfer, Downey, California. 😢😢😢
@Kazekoge101 6 months ago
Wonder what the Professor thinks about companies like Polymathic AI
@richardnunziata3221 6 months ago
Judea Pearl: "Aside from being impressed, I have had to reconsider my proof that one cannot get any answer to any causal or counterfactual query from observational studies. What I didn’t take into account is the possibility that the text in the training database would itself contain causal information. The programs can simply cite information from the text without experiencing any of the underlying data."
@_ARCATEC_ 6 months ago
💓
@u2b83 6 months ago
Encoding high-level knowledge into the system introduces brittleness, because high-level knowledge doesn't lie on the path of least resistance in training; otherwise the system would have quickly learned it by following the gradient, or so I suspect. Furthermore, even if the training process finds a model that incorporates high-level knowledge, it's probably unstable and requires some kind of averaging strategy to protect the model's pathways from pruning. I've seen this as a running-average strategy in the MUNIT GAN. Personally, I suspect you really need a separate, dedicated fitness function to maintain the high-level knowledge, maybe something as simple as a JEPA constraint, or maybe a separate fitness function for different resolutions of reconstruction. I'm going to call this rigmarole "knowledge maintenance" lol
@NicholasWilliams-kd3eb 6 months ago
Your last post makes a good point. Separate fitness functions are key for differential parsing capability; this is why pretrained reward activations can relay the equilibrium state and instill the mutations based on automated bias mutation and trial-merit (which facilitates a steeper learning gradient, like a slingshot, and is robust in larger dimensions). Remember, all networks do is cross-referential value transformations proportional to adaptive behavior. These models learn relativities' proportional conversion units (biases) that conserve equilibrium. Example: (a) this guy, over (b) lives here, over (c) at this time, over (d) previous integrated context = (a*bias/b*bias/c*bias/d*bias) = your proportional weight to the parsing context and the proportionalities you test. In dynamic environments where you have multiple parallel parsing contexts, it's better to have as many (good-behavior) modulation strategies as possible, and even let it learn to transform its own param space and test its own mutations (while preserving alignment).
@XOPOIIIO 6 months ago
What is better than listening to something that makes your brain struggle? Listening to something that also makes your ears struggle.
@kensho123456 6 months ago
This lady is the Sabine Hossenfelder of AI
@kensho123456 6 months ago
@@deenagold7136 I used to be stupid but I overcame it....I suggest you do the same.
@simonmasters3295 6 months ago
​@@kensho123456Nah. Fishing clears your mind, permitting sparse representation. Play nice - it takes all types.
@kensho123456 6 months ago
@@simonmasters3295I don't have a clue what you're talking about.
@simonmasters3295 6 months ago
@@kensho123456 "Sparse Representation" is how the human neocortex appears to categorise things. "Play nice" because I felt you were hyper-critical. Clearer now?
@kensho123456 6 months ago
@@simonmasters3295 Oh I can read the words but you seem to assume that everyone is American and understands your vernacular which is not actually the case. "nice" relative to WHAT / Hyper relative to WHAT (your standards ?). You seem to think you can give voice to your internal thoughts and then demand the agreement of others on your subjectivities.
@kensho123456 6 months ago
UNGRACIOUS, ME? No, I have spent the last 50 years watching you lot hoodwink the public... it's time you got your comeuppance. Hubert Dreyfus will be watching and smiling...