It's not often I listen to a podcast twice, and rarer still three times. A wonderful example of how mathematics helps in understanding a wide variety of things, from physics to biology, language, evolution, or even improving clarity of thought. Sometime in the future this is going to be seen as a golden age of mathematics. Eye-opening. Thank you Sean and Tai for the conversation and explanations. The world is a better place for what you do.
@samig9032 · 3 years ago
Mrs. Bradley’s enthusiasm for her work is so obvious just listening to her speak. Great listen!
@thewiseturtle · 3 years ago
YES! Finally! Someone else exploring how entropy relates to everything, by using geometry/topology and language! This has been the focus of my work for about a decade, and, I believe, even inspired Stephen Wolfram to start work on this level of thinking, where you allow for all combinations to happen and be equally probable, and thus generate a model of reality that continually expands while simultaneously contracting. This is what we see as natural selection (gravity, matter, stability, contraction) and random mutation (electromagnetism, energy, change, expansion) which, combined, produce a family tree of all possible paths through space~time.
@TJ-hs1qm · 3 years ago
lol I wasn't expecting to hear about Monoids and Category Theory just 10 min in. I learned this kind of stuff through Haskell and Scala, so basically Functional Programming. Awesome :)
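The monoid structure this commenter recognizes from functional programming can be sketched in a few lines (an illustrative example, not something from the episode): a set equipped with an associative binary operation and an identity element, which is exactly what Haskell's `Monoid` type class captures.

```python
# A minimal sketch of a monoid: a carrier set, an associative binary
# operation, and an identity element. Strings under concatenation and
# integers under addition are both monoids.

def mconcat(op, identity, xs):
    """Fold a sequence using a monoid's operation and identity."""
    result = identity
    for x in xs:
        result = op(result, x)
    return result

# Strings: operation = concatenation, identity = ""
print(mconcat(lambda a, b: a + b, "", ["cat", "egory"]))  # category
# Integers: operation = addition, identity = 0
print(mconcat(lambda a, b: a + b, 0, [1, 2, 3]))          # 6
```

The point of the abstraction is that the same `mconcat` works for any monoid, because associativity and the identity law guarantee a unique answer regardless of how the fold is grouped.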
@alvarorodriguez1592 · 3 years ago
Great interview. Not only was the topic very interesting and delivered with joy and charisma, but the hype was kept to a refreshing minimum, acknowledging the limitations of applying an awesome math concept to a domain, language, that is not necessarily described by current math. Kudos and thanks to both :-)
@user-wu8yq1rb9t · 3 years ago
Hello Dear Professor Carroll (one of my beloved physicists). Thanks a ton for all of your efforts.
@Xx_Eric_was_Here_xX · 3 years ago
one of my favorite podcast episodes ever, the knowledge and enthusiasm are palpable
@JacobCanote · 3 years ago
Wow. A joy to hear. Thanks for walking us through the insights gleaned from the paper.
@robdin81 · 3 years ago
This is for me probably the most difficult episode to understand so far, and that has nothing to do with the explanations given by Ms. Bradley or Sean Carroll, for that matter. It's just that difficult areas of math are connected here, which makes it even harder to follow. A very interesting episode though, and I think I'll listen to it a couple more times so I'll be able to understand everything.
@scott1285 · 5 days ago
This is one of the most enlightening things I've ever heard.
@Dth091 · 2 years ago
This was fantastic. The relationship between boundaries and entropy makes me think of the holographic principle; that the maximal information contained in a region of space is proportional to its surface area, and entropy could be thought of as a measure of unknown information so it seems pretty connected! Also how the surface measure of an object is the derivative of its volume measure!
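The last observation in this comment, that surface area is the derivative of volume, is easy to verify numerically for a sphere (a quick illustrative check, not from the episode): d/dr (4/3 πr³) = 4πr².

```python
# Numeric check that a sphere's surface area is the derivative of its
# volume with respect to the radius: d/dr (4/3 * pi * r^3) = 4 * pi * r^2.
import math

def volume(r):
    return 4.0 / 3.0 * math.pi * r**3

def surface(r):
    return 4.0 * math.pi * r**2

r, h = 2.0, 1e-6
# Central finite difference approximates the derivative of the volume.
numeric = (volume(r + h) - volume(r - h)) / (2 * h)
print(numeric, surface(r))  # both ~50.265 (= 16*pi)
```

The same relationship holds for any "onion-layered" shape grown outward at unit speed, which is one intuition for why boundaries keep showing up next to bulk quantities.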
@paxdriver · 3 years ago
Omg this has been one of my favorite episodes!
@manfredkrifka8400 · 2 years ago
Interesting podcast! But just for the record, the algebraic perspective on language has been established for a long time. For example, the Polish logician Kazimierz Ajdukiewicz developed this concept in 1935, in what later became Categorial Grammar. Basically, words are assigned categories that tell you precisely in which neighbourhood they occur. Around 1970, the American logician and philosopher Richard Montague provided a very general mathematical framework in his article "Universal Grammar". Basically, Montague sketched a way to describe syntax in algebraic terms, semantics in algebraic terms (referring to models in intensional logic), and a homomorphic mapping from syntax structure to semantic interpretation. It describes how we humans can form and understand sentences that have never been uttered before. This became a great research program called "formal semantics" -- Professor Barbara Partee would be an excellent guest to talk about that! I should also mention that grammars with probabilistic rules were introduced several decades ago.
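The categorial-grammar idea this commenter describes can be sketched in a toy parser (a hypothetical mini-lexicon for illustration; the category notation follows the common result/argument convention, e.g. S\NP takes an NP on its left to yield a sentence S):

```python
# Toy Ajdukiewicz-style categorial grammar. A functor category is a
# tuple (result, slash, argument): "/" looks for its argument on the
# right, "\" on the left. A word string is a sentence iff its
# categories reduce to "S".

LEXICON = {
    "Mary":   "NP",
    "sleeps": ("S", "\\", "NP"),                  # S\NP: intransitive verb
    "sees":   (("S", "\\", "NP"), "/", "NP"),     # (S\NP)/NP: transitive verb
}

def reduce_pair(left, right):
    """Backward application: X, Y\\X -> Y.  Forward: Y/X, X -> Y."""
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def parses_as_S(words):
    cats = [LEXICON[w] for w in words]
    # Greedy reduction is enough for this tiny lexicon.
    while len(cats) > 1:
        for i in range(len(cats) - 1):
            r = reduce_pair(cats[i], cats[i + 1])
            if r is not None:
                cats[i:i + 2] = [r]
                break
        else:
            return False
    return cats[0] == "S"

print(parses_as_S(["Mary", "sleeps"]))        # True
print(parses_as_S(["sleeps", "Mary"]))        # False: NP expected on the left
```

A real categorial parser would search over all reduction orders rather than reduce greedily, but the core mechanism, function application over category types, is exactly this.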
@DudokX · 3 years ago
Ohh, coarse-graining! Now I know why Sean is interested in this!
@cloudrouju526 · 3 years ago
You know, what has been ringing in my ears after this were lots of "you know" and "I don't know". I don't know.
@DeclanMBrennan · 3 years ago
That was a fascinating chat that also triggered some nostalgia, because Tai-Danae Bradley was a writer/presenter on the great "PBS Infinite Series", which sadly turned out to be all too finite.
@grawl69 · 3 years ago
Fantastic interview, thank you.
@LearnedSome · 3 years ago
A pleasant surprise after the clickbaity addition of the word Entropy at the end of the title, however apt. :)
@flexeos · 3 years ago
A philosopher worth reading on the philosophical side of all this is Alain Badiou. In his book "Being and Event" he redefines Being and explores the parts and the whole using set theory. He also has a strong definition of Event as something that could not have been predicted by analysis of the existing; so, in your terms, as part of a probability distribution with high entropy.
@AaronParks · 3 years ago
hey it's the PBS Infinite lady!
@robhollander1844 · 3 years ago
Would Yoneda suffice? Using Quine's example: "renate" and "cordate" refer to the same set of individuals in the actual world, so it's possible that they might just happen to always coincide in actual use, in which case the Yoneda lemma might not distinguish their meaning. The lemma might distinguish them in all possible uses in all possible worlds, but to identify those worlds one already must know the meaning of the words, in which case the lemma would be unnecessary. Granted it's an extreme example, but its point is that meaning must be more than mere actual use. What that more is can be a difficult question, harder than the so-called "hard problem" of consciousness.
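The worry in this comment, that two words whose actual contexts happen to coincide become indistinguishable, can be made concrete with a toy distributional sketch (a hypothetical mini-corpus, not from the episode or the paper):

```python
# Represent each word by the bag of words seen next to it in a corpus.
# In this toy corpus "cat" and "dog" occur in identical contexts, so a
# purely use-based representation cannot tell them apart -- the
# renate/cordate problem in miniature.
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(word, window=1):
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            ctx.update(corpus[max(0, i - window):i])   # left neighbours
            ctx.update(corpus[i + 1:i + 1 + window])   # right neighbours
    return ctx

print(context_vector("cat"))   # Counter({'the': 1, 'sat': 1})
print(context_vector("dog"))   # identical -> same representation
```

A larger corpus usually breaks such ties, but the comment's point stands: identity of *actual* contexts is weaker than identity of meaning.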
@fs5775 · 3 years ago
In her application to language, she's just talking about corpus linguistics & descriptive linguistics
@thewiseturtle · 3 years ago
I think the novel understanding here, which she may or may not yet be aware of, is how the topology (squishy geometry) means that language is multidimensional, and fractal, such that every single word can "contain" an infinite number of other words within it, as the location on the map is expanded to become its own map that can be described in infinite detail, like zooming into the trees from the forest, and the cells from the trees, and the atoms from the cells, and so on. This is why natural language is so messy and complex and impossible to pin down. But this evolutionary family tree of all possible relationships that the Pascal's triangle of simplices describes allows us to at least make an attempt to quantify a whole language with an impressive level of usefulness. I believe that at least one electronic 20 questions game used this to categorize all nouns, so as to seriously limit the number of questions needed to sort out something close to the answer.
@fs5775 · 3 years ago
@@thewiseturtle language being fractal, recursive, or existing as a complex adaptive system is not a new idea ...
@HarryNicNicholas · 3 years ago
"putting an elephant on a cardboard plane" - I just saw a poem written by AI about this... synchronicity. Damn, can't find it. It was in a tweet. "Pull the house down" lol.
@marianmusic7221 · 3 years ago
@Sean Carroll Hello, Mr. Carroll. Thanks for making YouTube a more interesting place and bringing the beauty of science closer to us. Here is a question regarding gravity and the way it affects everything. It is said that a strong gravitational field slows down the clocks, and even the thoughts, of a person in that field. Very precise clocks were mounted on planes flying at high altitude around the planet; in that experiment the altitude and speed of the planes were taken into consideration, and it was shown that gravity and speed have an effect on time.

I wonder if we could make the following experiment. By my understanding, the "movement" of the electrons in the atoms of our brains and bodies is what gives us, besides the sense of time passing, our thoughts, our ability to think, the speed at which we think, and our perception of life. In the experiment involving clocks on airplanes, the behavior of whole clocks was analyzed. Can we make the same experiments involving electrons only? Is there a property of electrons that can be measured using today's technology? Can we put some electrons on airplanes and measure their properties while flying at high altitude? I know the physicists would tell me: "We cannot say that the electrons are moving. We can only say what the probability is of finding an electron at a certain position inside the atom." But maybe there is a property of the electron that, when running that experiment, we can notice slowing down. Hint: electricity is also a result of "moving" electrons. Can we make precise measurements of the properties of electricity at high altitude? Maybe by analyzing the properties of electricity at high altitude and comparing the results with those of the same experiment made here on Earth, we can find some differences. And those differences can tell us something about the "movement" of the electrons and how gravity affects it. I am talking about a device consisting of an electricity source, a wire, and a consumer. Thanks!
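The size of the effect this question is about can be estimated with the standard weak-field formulas (illustrative numbers for a typical airliner, not values from the episode): the gravitational shift at altitude h is roughly g·h/c², and the special-relativistic slowing at speed v is roughly v²/(2c²).

```python
# Back-of-the-envelope size of the time-dilation effects for a clock
# (or any physical process, electrons included) on an airliner.
g = 9.81        # m/s^2, surface gravity
c = 2.998e8     # m/s, speed of light
h = 10_000.0    # m, assumed cruise altitude
v = 250.0       # m/s, assumed cruise speed

gravitational = g * h / c**2       # clock at altitude runs FASTER
kinematic = -v**2 / (2 * c**2)     # moving clock runs SLOWER
net = gravitational + kinematic
print(f"net fractional shift: {net:.2e}")
# ~ +7.4e-13, i.e. roughly 27 nanoseconds gained over a 10-hour flight
```

This is the order of magnitude the Hafele-Keating-style airplane experiments actually measured, and it applies to every physical process aboard, so any electron-based measurement would need parts-in-10^13 precision to see it.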
@tonytanner3048 · 3 years ago
Interesting. Does that mean a Student t distribution can be seen as an infinite-entropy space?
@robhollander9821 · 3 years ago
It must be all the *possible* contexts in which a word appears that might render its semantic information. An arcane scientific neologism, for example, found in only one or two or just a handful of occurrences, will be underdetermined by looking at its actual uses (unless the language has a word for every possible meaning, which English demonstrably does not). But to designate what constitutes all possible uses/contexts of a word (beyond the mere actual uses and contexts) requires first already knowing its meaning, so it seems the Yoneda lemma doesn't help any. We still need a semantic relation between word forms and their semantic content. How do humans figure out this relationship? For one thing, we know more than the local context of a word. We know a great deal about the larger context of the word's use as well -- does it appear in a scientific journal about math or a personal email about lunch at an exotic restaurant, e.g.? Given enough context, we don't need multiple contexts to figure out the likely meaning of an unfamiliar word. Fascinating discussion, engaging, rich ideas. I want to read the paper!
@manfredullrich483 · 3 years ago
She already lost me when her die rolled a 4 with a probability of more than 20% -- and when they talked about the probability matrices, they never mentioned what the different axes of these matrices are. Plus, are these only 2-dimensional matrices, or do they have more dimensions? I would assume for a 6-sided die it's just the identity matrix
[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1],
but I don't see more information here than saying the chance for each number is 1/6, assuming the die is fair. And I might not even see that at first glance.
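For the fair-die case this comment is puzzling over, the distribution and its Shannon entropy are easy to write down directly (a minimal sketch of the kind of object discussed around this point, not the episode's own notation); the commenter is right that the identity matrix adds nothing beyond the uniform probability vector:

```python
# A fair six-sided die as a probability distribution, and its Shannon
# entropy in bits. The uniform distribution maximizes entropy, at
# exactly log2(6) bits.
import math

p = [1 / 6] * 6                                    # fair die
entropy = -sum(q * math.log2(q) for q in p)
print(entropy, math.log2(6))                       # both ~2.585 bits

# A loaded die (4 comes up 20% of the time) has strictly lower entropy.
loaded = [0.16, 0.16, 0.16, 0.20, 0.16, 0.16]
print(-sum(q * math.log2(q) for q in loaded))      # slightly below 2.585
```

Any deviation from uniformity, like a 4 with probability above 1/6, lowers the entropy, which is why a "more than 20%" four signals an unfair (or merely illustrative) die.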
@manfredullrich483 · 3 years ago
But later it gets easier, because she can actually explain abstract concepts in a relatable way.
@AndreAmorim-AA · 3 years ago
Elements
@aprylvanryn5898 · 3 years ago
Bradley is so much smarter than I am
@fs5775 · 3 years ago
who cares? it's not a competition. focus on the knowledge, not the personal comparison