Interview with Aude Billard (18:14, 4 months ago)
Interview with Bruce Davie (22:57, 4 months ago)
Interview with Fernando Pereira (34:21)
Interview with Gordon Brebner (22:35)
Interview with Lincoln Wallen (26:44)
Interview with Mark Jerrum (23:46, 4 months ago)
Interview with Masashi Sugiyama (11:01)
Interview with Philippa Gardner (29:45)
Interview with Albert Cohen (17:03, 4 months ago)
Interview with Peter Dayan (20:21, 4 months ago)
Comments
@EzraCheney (1 year ago)
Trivial contrivance
@saqibwarriach (2 years ago)
A very informative session.
@Erkocaktan (3 years ago)
👍
@bennri (3 years ago)
16:36 Add one for Tomotopy.
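[Editor's note: Tomotopy is a Python topic-modelling library built around Gibbs sampling. As a hedged illustration only, here is a minimal LDA run using its documented API; the toy corpus and the number of topics are placeholders, not anything shown in the talk.]

```python
# Minimal LDA run with Tomotopy; corpus and settings are illustrative only.
import tomotopy as tp

docs = [
    "health care insurance cost afford",
    "election vote campaign candidate policy",
]

mdl = tp.LDAModel(k=2)            # k = number of topics (assumed for this toy example)
for doc in docs:
    mdl.add_doc(doc.split())      # add_doc expects a list of tokens
mdl.train(200)                    # run 200 Gibbs sampling iterations

for topic_id in range(mdl.k):
    print(topic_id, mdl.get_topic_words(topic_id, top_n=5))
```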
@Consistent_studying (4 years ago)
Very beautiful! Thanks for sharing.
@Jack-lg9mq (5 years ago)
Does anyone know where his other talk is, the one that describes how to perform inference? 16:12
@HarpreetKaur-qq8rx (5 years ago)
Hello Professor, can LDA be used to categorize documents into strict categories? Your video suggests otherwise, but I wanted to confirm.
@manishthapliyal6372 (5 years ago)
I think you should use a hard clustering algorithm like k-means or hierarchical clustering for strict topics, because topic modelling is a soft clustering approach.
@HarpreetKaur-qq8rx (5 years ago)
Thank you, Manish, for the reply, but could you elaborate on what is meant by soft and hard techniques?
@guoguozheng32 (3 years ago)
@HarpreetKaur-qq8rx To my understanding, hard clustering assigns each document in the corpus to exactly one topic, and every word in that document is assumed to come from that single topic. Soft clustering instead gives each document its own probability distribution over all the topics, so a document exhibits several topics rather than just one; each word is drawn from one topic, but different words in the same document may come from different topics.
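[Editor's note: to make the distinction concrete, a small hedged sketch using scikit-learn on an invented toy corpus. k-means gives each document exactly one label (hard), while LDA returns a probability distribution over topics for each document (soft).]

```python
# Hard vs. soft clustering on a toy corpus (illustrative values only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "health care insurance cost afford",
    "election vote campaign candidate",
    "health insurance election campaign",   # mixes both themes
]
X = CountVectorizer().fit_transform(docs)

# Hard: exactly one cluster label per document.
hard_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("k-means labels:", hard_labels)

# Soft: a topic distribution per document (each row sums to 1).
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print("LDA doc-topic proportions:\n", lda.transform(X))
```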
@kapilkumar2650 (6 years ago)
If LDA captures any semantic meaning, I think it is because of the Gibbs sampling step, which tries to push a word into a topic that its neighbouring words are already in. In a broader sense, what Gibbs sampling is doing is P(selected word | each topic) x P(neighbouring words | each topic).
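[Editor's note: the intuition above is usually written as the collapsed Gibbs sampling update for LDA, in its standard form (e.g. Griffiths and Steyvers, 2004); this is not necessarily what is presented in the talk. The superscript -i means counts computed with the current token removed, V is the vocabulary size, K the number of topics, and alpha, beta the Dirichlet hyperparameters.]

```latex
P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \;\propto\;
\underbrace{\frac{n^{-i}_{k,\,w_i} + \beta}{n^{-i}_{k,\,\cdot} + V\beta}}_{\text{topic } k\text{'s affinity for word } w_i}
\times
\underbrace{\frac{n^{-i}_{d,\,k} + \alpha}{n^{-i}_{d,\,\cdot} + K\alpha}}_{\text{document } d\text{'s use of topic } k}
```

Here n_{k,w_i} counts how often word w_i is assigned to topic k across the corpus, and n_{d,k} counts how many tokens in document d are assigned to topic k; the neighbouring words influence the assignment only through these counts.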
@jagadeeshakanihal (6 years ago)
Link to the PDF of the presentation: www.cs.columbia.edu/~blei/talks/Blei_User_Behavior.pdf
@phpn99 (7 years ago)
I think the weakness of LDA is that it conflates semantics with words. Meaning arises via the relations between words, which totally escape LDA's analysis. All LDA is good for is estimating the word proximity between documents; it is effectively incapable of extracting precise topics from documents, only generic topics.
@RPDBY (7 years ago)
It's good enough if you have to deal with hundreds of documents containing thousands of words each.
@phpn99 (7 years ago)
Sure, but what is it good at? What is the semantic value of the (let's call it Cartesian) distance between two LDA signatures? I know what I'm talking about: I worked for a couple of years on an LDA-based classification project, and the semantic value of the topics extracted from the documents was too general to be truly useful. I think Blei et al. have found an interesting statistical method and a cool idea, but what they fail to express in this entire approach is precisely in what way their metric, and the methods by which they choose words, yield any meaningful insight into the analysed texts. I find this whole thing very superficial. Without connecting your word net to some semantic ontology, you are doing nothing but an arbitrary match; arbitrary in the sense that meaning in language occurs in more complex ways than through individual nouns, verbs and adjectives.
@aahirip737 (7 years ago)
I'm a noob at this, a few weeks into NLP, and I'm trying to solve a use case and hitting exactly this issue. Ultimately LDA just gives me a bunch of topic ids with words that don't mean anything together. I read that I have to name the topics myself! And I landed here looking for a 'solution'. Hmm, I'm not the only one. Meanwhile I found something interesting (don't know if it's worth anything): ieeexplore.ieee.org/document/6405699/. It introduces the term 'concept' between topic and word. Could not find any implementations as yet.
@RPDBY (7 years ago)
Pritish N, I applied LDA to public speeches and was able to compare the results to manual results (i.e. people read the speeches and identified the main topics), and LDA performed rather well, discovering 12 out of 15 distinct topics. For instance, the health care topic had words such as health, care, afford, insurance and cost at the top, so you won't confuse it with anything else. I also got a few topics that are hard to interpret, but it gave me the main topics I needed across 6,000 documents. I should mention that in addition to stopwords I had to exclude about 30 other words that were frequent but uninformative, such as year, state, always, because, etc. These will depend on your area, of course, but they pollute the results, and the exclusion helped a lot.
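[Editor's note: a hedged sketch of that workflow in gensim. The toy speeches and the extra stopword list are placeholders; the commenter worked with roughly 15 topics over about 6,000 speeches.]

```python
# LDA after removing standard stopwords plus a hand-picked list of frequent
# but uninformative words, as the commenter describes.
from gensim import corpora, models
from gensim.parsing.preprocessing import STOPWORDS

extra_stopwords = {"year", "state", "always", "because"}  # domain-specific examples from the comment

speeches = [  # placeholder corpus; a real run would use ~6,000 speeches
    "we must make health care and insurance affordable for every family this year",
    "the state of our economy depends on jobs trade and fair taxes",
    "health insurance costs keep rising and care is harder to afford",
]

tokenized = [
    [w for w in doc.lower().split() if w not in STOPWORDS and w not in extra_stopwords]
    for doc in speeches
]
dictionary = corpora.Dictionary(tokenized)
bow = [dictionary.doc2bow(doc) for doc in tokenized]

# num_topics=2 only because the toy corpus is tiny; ~15 suits a real speech corpus.
lda = models.LdaModel(bow, id2word=dictionary, num_topics=2, passes=10, random_state=0)
for topic_id, words in lda.show_topics(num_topics=-1, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])
```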
@phpn99 (7 years ago)
The problem is that the whole concept of a "topic" is grossly inflated. It has very shallow semantic value; a topic is a broad and ambiguous category.
@RPDBY (7 years ago)
How do I make the graph from 2:30 in R?
@DILLIPKUMARSAHOOIITM (7 years ago)
Beautiful lecture on topic modeling. Thanks to Prof. Blei and the University of Edinburgh for making this lecture available.
@tianqilingchi1666 (7 years ago)
Ha, robot psychology.
@soulkhaya (7 years ago)
Awesome!! Could you make a video about the Design Informatics MSc, please? For example, what do students learn and what kinds of jobs do they go on to do? I'm really interested in that programme, but I failed to find useful information about DI :( Thanks 〰
@edinschoolofinformatics (7 years ago)
Hi Shiya, please find an Introduction to Design Informatics here: kzbin.info/www/bejne/qWrHnI2phdmWbJI Hope this gives you the information you need!
@soulkhaya (7 years ago)
Thanks!! I'll check it out!! I have another question: is the DI MSc a STEM subject? I'm not sure what fields the MSc in DI mainly focuses on (human factors? design principles? CS? AI?), because I found that the compulsory courses are not about tech at all, yet this programme is at the School of Informatics, right? I know students who major in DI have option courses at Informatics, but they can only select 40 credits; compared with the required courses at ECA, that seems too little to me if this programme is a STEM subject. I'm really a little confused about this, so I'd super appreciate it if you could help me figure it out, but it's also okay if you don't have time to reply 😆 Anyway, thank you for your lovely reply again ~
@edinschoolofinformatics (7 years ago)
Hi Shiya, thank you for this! Did you have the opportunity to look at the Design Informatics website? www.designinformatics.org/postgraduate There is plenty of information there that will point you to the right places (such as the degree description: www.ed.ac.uk/studying/postgraduate/degrees/index.php?r=site/view&id=803) and the right people to contact (please see the bottom of the page).
@soulkhaya (7 years ago)
Yay, I've seen those before, and I want more information about the programme. Anyway, thanks a lot for your time and patience ~
@madinarysakova6134 (7 years ago)
Nazgul, I am proud of you!!!!!!