The DataTopics Podcast
0:26
1 year ago
Comments
@bledardo 3 months ago
Hi, great presentation, thank you
@ThisElonMuskEZEL 5 months ago
Raise the flags! 🇹🇷🇹🇷🇹🇷🇹🇷🇹🇷
@pokepress 6 months ago
Compensating the creators of the training data might sound like a good idea, but it breaks down in a few key areas:
- Generative models can and will be created, modified, and used locally by individuals and small groups, not just large companies, making enforcement difficult or impossible.
- If you go so far as to ascribe authorship rights to generated works, it creates issues for licensing and copyright expiration. Additionally, the creator of the first instance of that element has likely been dead for over 70 years, which suggests the element should be in the public domain.
- If the same element of a generated work is tied back to thousands (or more) of individuals, you can argue that element isn't substantial enough to warrant copyright protection due to a lack of uniqueness.
@doomse150 7 months ago
Given that her movies are publicly available, I think it's very likely that they just went data scraping after she denied their offer.
@TheRogueNinja1 8 months ago
Shitty Dutch accent
@jsoenen 9 months ago
Just checking whether Murilo keeps his word on answering the comments 🙃
@murilo-cunha 8 months ago
Hehehe I do my best 😅
@MrMahoark 9 months ago
this is not real time :(
@LukasValatka 10 months ago
Interesting thought process on whether we should care about writing dialect-agnostic SQL :) Maybe just embrace dialects and gain performance, clarity, and usability; "you're not migrating every day". Agreed.
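(Related aside: if a migration ever does happen, a tool like sqlglot can transpile between dialects; a minimal sketch using an example from sqlglot's README, offered as a suggestion rather than anything from the talk:)

import sqlglot

# transpile a DuckDB-flavoured expression to Spark SQL; sqlglot rewrites the
# dialect-specific function call for you
print(sqlglot.transpile("SELECT EPOCH_MS(1618088028295)", read="duckdb", write="spark")[0])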
@Lelouchvv 1 year ago
Thanks… 13:14 I have a question: why do you use precision, recall, etc. (metrics for classification)? And how does the model calculate them, since the output is not a discrete value? I am a newbie.
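(For anyone with the same question: the continuous scores are thresholded into 0/1 predictions before computing classification metrics. A minimal sketch, assuming scikit-learn and an illustrative 0.5 cutoff:)

import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0])             # actual interactions
y_prob = np.array([0.9, 0.4, 0.35, 0.8, 0.1])  # continuous model outputs
y_pred = (y_prob >= 0.5).astype(int)           # threshold into discrete labels
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))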
@anatolyalekseev101 1 year ago
I think the lecturer misunderstands how pruning works. For it to be effective, there must be an opportunity to continue or skip evaluation mid-trial; in the boosting or trees case, that's partial_fit. If you don't utilize it, it doesn't matter that you receive a ShouldPrune signal from the pruner: you are not exploiting it anyway, as you have already finished training and scoring. That's why "pruning" gave the author the same runtime as no pruning. Btw, the Optuna docs suffer from initializing and partitioning data within the objective function. I don't understand why everyone copies that without any thinking.
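(To illustrate the point: a minimal sketch of pruning that actually exploits the signal, with scikit-learn's SGDClassifier standing in for any model with partial_fit; all data and names here are illustrative, not the lecturer's code.)

import optuna
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# initialize and split the data once, outside the objective
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def objective(trial):
    alpha = trial.suggest_float("alpha", 1e-5, 1e-1, log=True)
    clf = SGDClassifier(alpha=alpha, random_state=0)
    for step in range(20):
        clf.partial_fit(X_tr, y_tr, classes=[0, 1])  # train incrementally
        trial.report(clf.score(X_val, y_val), step)  # report intermediate score
        if trial.should_prune():                     # stop a hopeless trial early
            raise optuna.TrialPruned()
    return clf.score(X_val, y_val)

study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)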
@okpanda_ 1 year ago
I know this comment is 2 years late, but does this work if I deploy my app to the internet, so it will access the user's mic?
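(If the app captures audio server-side, e.g. with speech_recognition's Microphone, a deployed instance would open the server's mic, not the visitor's. Capturing the browser mic needs something like the third-party streamlit-webrtc package; a rough sketch under that assumption, not part of this demo:)

import streamlit as st
from streamlit_webrtc import WebRtcMode, webrtc_streamer

# open the visitor's microphone in their browser and stream audio to the server
ctx = webrtc_streamer(
    key="mic",
    mode=WebRtcMode.SENDONLY,
    audio_receiver_size=256,
    media_stream_constraints={"audio": True, "video": False},
)
if ctx.audio_receiver:
    frames = ctx.audio_receiver.get_frames(timeout=1)  # av.AudioFrame objects
    st.write(f"Received {len(frames)} audio frames")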
@nahianbeentefiruj4269 1 year ago
Can't run the code for TensorFlow versions greater than 1.15.0. How can I resolve this?
@meg7617 2 years ago
Very informative. Thanks Frederick, it's a great presentation.
@spicytuna08 2 years ago
Thanks. When you run a test, the results do not look good. For those with 'interaction' equal to 1, the prediction should be close to 1, but this is not the case.
@spicytuna08 2 years ago
Thanks. When you test, you are using data from training. I am referring to this line: long_test = wide_to_long( ). The parameter should be data['test']. Please correct me if I am wrong.
@spicytuna08 2 years ago
Thanks. I see a problem with calling make_tf_dataset() just once for training. This function returns a batch of size 512 in tensor form, and you are using this data just once for training. I think you need to put this in a loop, or make the batch size bigger. Am I missing something in my understanding?
@ArmanAli-ww7ml 2 years ago
Do we need historical data to train our model in reinforcement learning?
@dataroots 2 years ago
Hi Arman, you can start building your reinforcement learning algorithm from scratch; this is called online training, and the model will learn as it sees more examples and becomes better (hence not a great model at the beginning). Or, if you have historical data, you can use offline learning to get a first model before using online training to improve it :)
@ArmanAli-ww7ml 2 years ago
@@dataroots So if I have historical state data, I can use it to train an RL agent? And this would be called offline training?
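(A toy illustration of that offline-then-online idea with tabular Q-learning; everything here is made up for the example, not from the video:)

import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
ACTIONS = [0, 1]
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def q_update(s, a, r, s_next):
    # one Q-learning backup; the same rule serves both phases
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# offline phase: learn from logged (state, action, reward, next_state) tuples
history = [(0, 1, 1.0, 1), (1, 0, 0.0, 0), (0, 0, 0.5, 0)]
for s, a, r, s_next in history:
    q_update(s, a, r, s_next)

# online phase: keep improving from live interaction (stand-in environment here)
s = 0
for _ in range(100):
    a = random.choice(ACTIONS)  # epsilon-greedy in practice
    r, s_next = random.random(), random.choice([0, 1])
    q_update(s, a, r, s_next)
    s = s_next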
@efesencan8079 2 years ago
My second question is: if interactions were not encoded as binary but as the actual ratings (explicit feedback rather than implicit feedback), does your provided code still produce meaningful ncf_predictions?
@murilo-cunha 2 years ago
I believe it should (it's been a while). The only thing you want to modify is to normalize the actual ratings to between 0 and 1.
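(A tiny sketch of that normalization, assuming pandas and made-up column names:)

import pandas as pd

df = pd.DataFrame({"user_id": [1, 1, 2], "item_id": [10, 20, 10], "rating": [2, 5, 4]})
r = df["rating"]
df["label"] = (r - r.min()) / (r.max() - r.min())  # min-max scale into [0, 1]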
@efesencan8079 2 years ago
I did not really understand what these ncf_predictions mean. Does a higher ncf_prediction value for a specific (user_id, item_id) pair mean the item should be recommended to the user? Then, during the recommendation phase, should I recommend the item_id with the highest ncf value to each user?
@murilo-cunha 2 years ago
Yes, the items with the highest predicted values that the user has not already seen/bought should be recommended. The ncf_predictions are basically the model's "guess" of whether you'd buy/watch it yourself (and we approximate "watched" = "liked").
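(A small sketch of that recommendation step in numpy; the matrices are illustrative stand-ins for the model's outputs:)

import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((3, 6))        # ncf predictions: (n_users, n_items)
seen = rng.integers(0, 2, (3, 6))  # 1 where the user already interacted
masked = np.where(seen.astype(bool), -np.inf, scores)  # never re-recommend
top_k = np.argsort(-masked, axis=1)[:, :2]  # 2 best unseen items per user
print(top_k)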
@efesencan8079 2 years ago
@@murilo-cunha Thank you for the answers. Do you also have any recommendations for reducing the training time of the NCF model? I currently have 138k users and 1470 items, and it takes days to finish the training process.
@murilo-cunha 2 years ago
@@efesencan8079 Hmm, nothing in particular for this. You can always reduce the model size (layers, embedding size, etc.), scale your training up (get a more powerful machine, GPUs, etc.), or scale out (distributed training with SparkML or something). It's a bit hard to say without more specific info. Hope this helps!
@chetouanethiziri3831 2 years ago
Hi @Efe Sencan, can you give me the link to your dataset please? I am having trouble finding one; I am also working on social media users.
@eprohoda 2 years ago
dataroots- well.
@ferdyanggara4440 2 years ago
very good insight!
@dataroots 2 years ago
Thanks for watching!
@demohub 2 years ago
Good demonstration. Thanks for sharing
@dataroots 2 years ago
Thanks for watching!
@pierangelocalanna7727 2 years ago
Hi Vitale and everyone, great presentation so far, thanks for sharing this with me. Have you guys worked with Object Tracking models?
@vitalesparacello6454 2 years ago
Hello Pierangelo, thanks for the comment! Unfortunately I've never worked with these models, but I think it's a very interesting topic. For example, I saw that a company in London has implemented a tracking system to manage the queue of people ordering in a pub. Do you know of other notable implementations?
@faouzibraza2635 2 years ago
Nice guys ! You look like rock stars !
@farhanuddinfazaluddinkazi7198 2 years ago
Thank you, loved the explanation. You covered quite a lot in very little time, and very clearly too.
@dataroots 2 years ago
Glad you liked it
@gabrielfernandez4334 2 years ago
Great content, thanks guys!
@hetvipatel4894 2 years ago
Hi! Do you have any code related to a real-time interview app with Streamlit and Python?
@dataroots 2 years ago
Code for this demo: github.com/datarootsio/rootslab-streamlit-demo. It's about voice transfer with a lightweight Streamlit application, not real-time interviewing.
@senantiasa 2 years ago
For me, this was the clearest explanation..!!
@hatmous1734 3 years ago
👍🏼
@thecheeseking2757 3 years ago
Sounds like a song from the Half-Life soundtrack.
@keresztes813 3 years ago
I see what you mean
@bobbyiliev_ 3 years ago
Great episode! Really enjoyed it!
@dataroots 2 years ago
Glad you enjoyed it!
@riteshpathak7895 3 years ago
The starting 12 minutes have no content, please remove them.
@riteshpathak7895 3 years ago
thanks for sharing
@vasyay5307 3 years ago
Very useful and cool. Thank you! In our installation we use S3 <----> DataSync <---> EFS mounted in ECS.
@fatmadehbi2946 3 years ago
Hello, thanks for this video. If you can, please send me the code.
@murilo-cunha 3 years ago
There are some links in the description. For google colab: colab.research.google.com/github/murilo-cunha/inteligencia-superficial/blob/master/_notebooks/2020-09-11-neural_collaborative_filter.ipynb
@fatmadehbi2946 3 years ago
@@murilo-cunha thanks a lot
@arvindchavan9759 3 years ago
Thanks.. I have a question on the user ID information... is it possible to provide user-related information as input to the model?
@murilo-cunha 3 years ago
Yes, you can. But then you are moving towards a more hybrid approach (as opposed to the collaborative filtering approach in the video).
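(A rough sketch of what such a hybrid model could look like in Keras; the sizes, names, and architecture are illustrative, not the video's code:)

import tensorflow as tf
from tensorflow.keras import layers

N_USERS, N_ITEMS, EMB, N_FEATS = 1000, 500, 8, 4

user_id = tf.keras.Input(shape=(1,), name="user_id")
item_id = tf.keras.Input(shape=(1,), name="item_id")
user_feats = tf.keras.Input(shape=(N_FEATS,), name="user_feats")  # e.g. age, country

u = layers.Flatten()(layers.Embedding(N_USERS, EMB)(user_id))
i = layers.Flatten()(layers.Embedding(N_ITEMS, EMB)(item_id))
x = layers.Concatenate()([u, i, user_feats])  # id embeddings + side information
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model([user_id, item_id, user_feats], out)
model.compile(optimizer="adam", loss="binary_crossentropy")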
@matthewdaw6485 3 years ago
Man, this sound quality is terrible. I can only understand maybe 20% of what you're saying. Even Google's captions think you're speaking German half of the time. Very disappointed.
@nemanjaradojkovic1224 3 years ago
Howdy, cowboys!
@dragon3602010 3 years ago
Awesome, I like it 😊 I have a question: is there a way to attach a debugger and then try some code on the fly without rerunning everything, like in a notebook, in Python? Thanks
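(One generic answer, unrelated to this demo: Python's built-in breakpoint() drops you into pdb, where you can inspect state and run code on the fly before resuming:)

def transform(x):
    breakpoint()  # pauses here; at the (Pdb) prompt try: p x, x = 5, continue
    return x * 2

print(transform(21))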
@itmazter 3 years ago
Nicely explained video. I can't find this path on GitHub, can you please help me with the source link? source = "datarootsio/ecs-airflow/aws"
@ivombi2643 3 years ago
Very good presentation.
@thomasdekelver74 3 years ago
Can you share the code you showed during the demo to create the pipeline?
@Bawaromerali 3 years ago
Thanks, but the problem with these kinds of videos is that you are talking to an expert who already knows all these things; someone who does not know them will not understand anything! I hope future videos are more detailed and slowly explain each step instead of only reading the slides!
@jagicyooo2007 3 years ago
No one can hold your hand through everything; you need to do some research on your own to get a feel for the context of this domain. I'd suggest you do that first and then come back to re-watch the video.
@Bawaromerali 3 years ago
@@jagicyooo2007 Thanks for the reply. I learned and already built a recommender system, and I realized these kinds of videos are a waste of time! People should learn how to implement it, not just watch short videos and highlights.
4 years ago
Where can I find the code shown in this demo/livestream? Could you share it with us?
@dataroots 4 years ago
github.com/datarootsio/rootslab-streamlit-demo
@retr0435 3 years ago
@@dataroots Thanks
@banji007 4 years ago
How can I convert speech to text using Streamlit?
@PrasunChakrabortyPC 4 years ago
@Anirban Banerjee try this code:

import streamlit as st
import speech_recognition as sr

def takecomand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        st.write("answer please....")
        audio = r.listen(source)
    try:
        text = r.recognize_google(audio)
        st.write("You said :", text)
    except (sr.UnknownValueError, sr.RequestError):
        st.write("Please say again ..")
        text = ""  # ensure text is defined even when recognition fails
    return text

if st.button("Click me"):
    takecomand()
@banji007 4 years ago
@@PrasunChakrabortyPC thanks so much
@noeld.8363 4 years ago
Do you modify the similarity distance by giving a pairwise constraint to the model? What do you modify exactly when you give this new constraint? (this is more a question about semi-supervised learning and not interactive clustering)
@lykmeraid1 4 years ago
Samuel, Samuel,
does whatever Samuel does.
Can he swing from a web?
No he can't, he's just a DevOps guy.
Watch ouuuut, here comes Samuel!