Socratic Learning
2:54
8 years ago
PaleoDeepDive
3:37
11 years ago
Comments
@t.j.mcnamara4379 4 months ago
Omfg baby! I miss your face
@lukebechtel 1 year ago
Very cool.
@mrdbourke 1 year ago
Wow! What a fantastic overview! I'm building my own ML system right now and slowly stumbled upon my own Overton-like system. Thank you for sharing.
@PrachuryyaKaushik 2 years ago
Thank you very much.
@kskfernando2945 2 years ago
Slow downnnnah 😐
@imranq9241 2 years ago
Why is hyperbolic space so much better for tree-like data, and how do you know how tree-like one's data is?
@hubsiii5969 3 years ago
Great talk and amazing work! What exactly is the difference between the projection and the logarithmic map? Both map from the manifold to the tangent space? And how would these projections be computed in the Poincare disk? Thanks :)
@billbao7091 3 years ago
Awesome resource! One question: what is the difference between Hazy and existing knowledge bases/graphs like ConceptNet, Yago, DBpedia, OpenCyc/NextKB, WordNet, and so on?
@ProbablyTurtles 3 years ago
What a great presentation! Around the 7-minute mark the author explains how, for binary trees, the number of nodes grows exponentially with depth, but in Euclidean space the number of points you can fit grows only polynomially with the radius. My disconnect in understanding this is about relating a "binary tree" to "Euclidean space". Is this "Euclidean space" referring to the adjacency matrix created for the nodes in the graph? If so, how does hyperbolic geometry offer an improvement? Thank you in advance. This is such an interesting topic.
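A rough numerical sketch of the mismatch the comment is asking about (my own illustration, not taken from the talk): a full binary tree gains nodes exponentially with depth, a Euclidean disk gains area only polynomially with its radius, while a hyperbolic disk gains area exponentially, which is the usual argument for why tree-like data embeds with low distortion in hyperbolic space.

```python
import math

# Rough illustration (not from the talk): node count of a full binary tree of
# depth r grows exponentially, the area of a Euclidean disk of radius r grows
# only polynomially (~ r^2), and the area of a hyperbolic disk (curvature -1)
# grows exponentially, 2*pi*(cosh(r) - 1), roughly keeping pace with the tree.
for r in (2, 4, 8, 16):
    tree_nodes = 2 ** (r + 1) - 1                   # nodes in a full binary tree of depth r
    euclid_area = math.pi * r ** 2                  # area of a Euclidean disk of radius r
    hyper_area = 2 * math.pi * (math.cosh(r) - 1)   # area of a hyperbolic disk of radius r
    print(f"r={r:2d}  tree nodes={tree_nodes:>6d}  "
          f"Euclidean area={euclid_area:8.1f}  hyperbolic area={hyper_area:12.1f}")
```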
@julessci2716 3 years ago
Nice ideas
@stephennfernandes 3 years ago
Wow, this is a really amazing tool. Great work, entire team!
@SamiHaija 3 years ago
Fantastic talk! I learned a ton. Thank you for sharing! May I ask: you defined "g" (the Riemannian metric tensor) on slide 10, and on slide 15 you use g inverse. Does that invert g? The first g takes 2 vectors and maps them to a scalar, while the inverse now takes a vector. How can g^-1 be computed? -- I am sorry if my question is a novice one... g(u, v) = <u, v>_L ... And also, in general, is the g function invertible? I.e., if you fix "a" and "b" in "g(u, a) == b", must there be at most one value of "u" that satisfies the equality?
@prikarsartam 2 years ago
g is represented as a matrix whose (i, j) entry is e_i(p) . e_j(p), the inner product of the i-th and j-th local basis vectors at the point p. That matrix is positive definite (so its determinant is positive and it is invertible), and it is its matrix inverse that is being used here.
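To make that concrete, here is a small sketch (my own illustration, not code from the talk) using the Poincaré disk, where the metric is conformal: g_x = lambda_x^2 * I with lambda_x = 2 / (1 - ||x||^2). Because g_x is positive definite it is invertible, and g^{-1} is exactly the matrix inverse, the object that typically rescales Euclidean gradients into Riemannian ones.

```python
import numpy as np

# Illustrative sketch, not code from the talk: on the Poincare disk the metric is
# conformal, g_x = lambda_x^2 * I with lambda_x = 2 / (1 - ||x||^2). Since g_x is
# positive definite it is invertible, and g_x^{-1} is the plain matrix inverse.
def poincare_metric(x):
    lam = 2.0 / (1.0 - np.dot(x, x))    # conformal factor at x (requires ||x|| < 1)
    return lam ** 2 * np.eye(len(x))

x = np.array([0.3, 0.4])
g = poincare_metric(x)
g_inv = np.linalg.inv(g)                # here simply (1 / lam^2) * I
u, v = np.array([1.0, 0.0]), np.array([0.5, 0.5])
print("g(u, v) =", u @ g @ v)           # g as a bilinear form: two vectors in, one scalar out
print("g^{-1} =\n", g_inv)
```

As for the last part of the question above: what is invertible is the map v -> g(v, .) from tangent vectors to covectors (equivalently, the matrix of g). The scalar equation g(u, a) = b with a and b fixed does not pin down u in more than one dimension, since it is a single linear equation whose solutions form a whole hyperplane.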
@RoDrop 4 years ago
Love it! Thanks for sharing
@albertotono5282 4 years ago
Awesome work.
@kenubab 4 years ago
Hello 8a!
@haoyin3366 4 years ago
Thanks for your clear explanation. I really enjoyed this video. As for the scaling, I am a little bit confused about how scaling solves the problem of z_i = 0. Why is using the effective rank important? Could you please explain a little more? Thanks a lot.
@SaimonThapa 10 months ago
The term "scaling" refers to the scaling of the theorem itself, not to solving the problem of z_i = 0. Scaling of the theorem refers to how the performance or behavior of the result changes as certain parameters vary.

The issue with z_i = 0 is that it implies the labeling function associated with z_i has an accuracy of 0.5, meaning it is essentially guessing randomly. This is problematic because it provides no useful information for training the model. The scaling of the theorem doesn't directly solve the z_i = 0 problem; instead, it helps in understanding how the theorem's guarantees change as parameters such as z_i vary. By understanding this scaling behavior, we can assess the impact of cases where z_i = 0 on the overall performance of the weakly supervised learning approach, and potentially devise strategies to mitigate their effects.

The effective rank provides insight into the complexity of the sample data: it indicates how much information is contained in the covariance matrix relative to its size. A higher effective rank suggests that the data is less noisy and contains more meaningful information relative to its size; in that case the theorem may scale favorably, meaning the model's performance improves as more weakly supervised data is provided. In structured learning tasks, where relationships between data points are important (e.g., in graph-based models), accounting for the complexity of the data through the effective rank helps in choosing appropriate learning rates, optimizing the learning process, and achieving better performance.
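As a concrete illustration of the effective-rank idea (my own sketch; the theorem in the talk may use a related but different definition), one common choice is trace(Sigma) divided by the largest eigenvalue of Sigma: it never exceeds the true rank and stays small when a few directions carry most of the variance.

```python
import numpy as np

# My own illustration; the talk may define effective rank differently. One common
# notion is trace(Sigma) / lambda_max(Sigma): it never exceeds the true rank and
# stays small when a few directions carry most of the variance, i.e. when the
# data is "effectively" low-dimensional.
def effective_rank(sigma):
    eigvals = np.linalg.eigvalsh(sigma)     # eigenvalues of the symmetric covariance matrix
    return eigvals.sum() / eigvals.max()

rng = np.random.default_rng(0)
scales = np.array([5.0] + [0.5] * 9)        # one dominant direction, nine weak ones
X = rng.normal(size=(1000, 10)) * scales
sigma = np.cov(X, rowvar=False)
print("matrix rank:    ", np.linalg.matrix_rank(sigma))     # full rank: 10
print("effective rank: ", round(effective_rank(sigma), 2))  # close to 1
```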
@amiltonwong 4 years ago
Thanks for the great tutorial. Are the slides available for download?
@mikemihay 4 years ago
Thank you for this awesome explanation!
@ОлександрЛапшин-ч5д 6 years ago
Is the UI for Babble open-sourced anywhere?
@Bradenjh 6 years ago
Unfortunately not. The code underwent a big refactor for public release and the UI wasn't kept in sync. You can find the code, with tutorials, at: github.com/HazyResearch/babble
@ShunZheng-ot7zx 7 years ago
Great! 👍👍👍
@jiliu3619 8 years ago
Awesome :-)
@woodyprescott 9 years ago
It would be great if you could publish this video without the music in the background.
@noushervanijaz 10 years ago
The background music is just irritating and gets in the way of understanding...
@jimkirkland5838 11 years ago
So how does it deal with improving stratigraphic and temporal resolution? Who are your experts? I've never heard of them.
@shananpeters1958 11 years ago
In the same way it deals with improving and changing taxonomic opinion and classification: by the year of the "opinion" expressed in the source. The "experts" are mostly me and the contributors to the Paleobiology Database (paleobiodb.org), but this number will be growing soon....
@JonTennant 11 years ago
This looks great! Couple of questions though: are there any legal issues associated with data mining such a huge corpus of journal articles? And how are the relative 'accuracies' of PDD and the PaleobioDB ranked? I mean, PDD might have found 15k taxon names in their sample, but I bet most of them didn't have occurrence data, which is probably what the 1.1k from the PalaeobioDB represents.
@MarkoBosscher 11 years ago
The data can't be copyrighted, only the articles. There's probably a bigger problem with non-machine readable articles (Nature?)
@JonTennant 11 years ago
Marko Bosscher Yep, but just how many articles does this include?
@MarkoBosscher 11 years ago
Jon Tennant I believe that, depending on the arrangements made by PDD, they may have access to all of Elsevier's catalogue except for papers published under a restrictive OA license.