@pokepress · 6 months ago
Compensating the creators of the training data might sound like a good idea, but it breaks down in a few key areas:
- Generative models can and will be created, modified, and used locally by individuals and small groups, not just large companies, making enforcement difficult or impossible.
- If you go so far as to ascribe authorship rights to generated works, it creates issues for licensing and copyright expiration. Additionally, the creator of the first instance of a given element has likely been dead for over 70 years, which suggests the element should be in the public domain.
- If the same element of a generated work is tied back to thousands (or more) of individuals, you can argue that element isn't substantial enough to warrant copyright protection due to a lack of uniqueness.
@doomse150 · 7 months ago
Given that her movies are publicly available, I think it's very likely that they just went data scraping after she denied their offer.
@TheRogueNinja1 · 8 months ago
Shitty Dutch accent
@jsoenen · 9 months ago
Just checking whether Murilo keeps his word on answering the comments 🙃
@murilo-cunha · 8 months ago
Hehehe I do my best 😅
@MrMahoark · 9 months ago
This is not real time :(
@LukasValatka · 10 months ago
Interesting thought process on whether we should care about writing dialect-agnostic SQL :) Maybe just embrace dialects and gain performance, clarity, and usability. "You're not migrating every day" - agreed.
@Lelouchvv · 1 year ago
Thanks… at 13:14 I have a question: why do you use precision, recall, etc. (metrics for classification)? And how does the model calculate those, since the output is not a discrete value? I am a newbie.
@anatolyalekseev101 · 1 year ago
I think the lecturer misunderstands how pruning works. For it to be effective, there should be an opportunity to continue or skip evaluation mid-training. In the boosting or tree case, that's partial_fit. If you don't utilize it, it doesn't matter that you receive a ShouldPrune signal from the pruner: you are not exploiting it anyway, as you have already finished training and scoring. That's why with "pruning" the author had the same runtime as without it. Btw, the Optuna docs suffer from initializing and partitioning data within the objective function. I don't understand why everyone copies that without thinking.
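A minimal sketch of what this comment describes, reporting an intermediate score after each incremental fit so the pruner's signal can actually cut a trial short. The model, data split, and step schedule are illustrative assumptions (warm_start is used here as the analogue of the partial_fit idea), not the lecture's code:

import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Load and split once, outside the objective, so trials don't redo the work.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-3, 1.0, log=True)
    clf = GradientBoostingClassifier(learning_rate=lr, warm_start=True)
    for step in range(1, 11):
        clf.n_estimators = step * 10  # grow the ensemble incrementally
        clf.fit(X_train, y_train)
        score = clf.score(X_valid, y_valid)
        trial.report(score, step)  # give the pruner an intermediate value...
        if trial.should_prune():   # ...and actually act on its signal
            raise optuna.TrialPruned()
    return score

study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)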
@okpanda_ · 1 year ago
I know this comment is 2 years late, but does this work if I deploy my app to the internet? Will it access the user's mic?
@nahianbeentefiruj4269 · 1 year ago
Can't run the code with TensorFlow versions greater than 1.15.0. How can I resolve this?
@meg7617 · 2 years ago
Very informative. Thanks Frederick, it's a great presentation.
@spicytuna08 · 2 years ago
Thanks. When you run a test, the results do not look good. For those with 'interaction' equal to 1, the prediction should be close to 1, but this is not the case.
@spicytuna08 · 2 years ago
Thanks. When you test, you are using data from training. I am referring to this line: long_test = wide_to_long( ). The parameter should be data['test']. Please correct me if I am wrong.
@spicytuna08 · 2 years ago
Thanks. I see a problem with calling make_tf_dataset() just once for training. This function returns a batch of size 512 as tensors, and you are using this data just once for training. I think you need to put this into a loop, or make the batch size bigger. Am I missing something?
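For context, a minimal sketch of how this concern usually resolves itself, assuming make_tf_dataset() returns a tf.data.Dataset as in typical Keras pipelines (the data and model here are stand-ins, not the video's code):

import numpy as np
import tensorflow as tf

# Dummy data standing in for the (user, item) -> interaction pairs.
X = np.random.rand(2048, 8).astype("float32")
y = np.random.randint(0, 2, size=(2048, 1)).astype("float32")

def make_tf_dataset(features, labels, batch_size=512):
    # .batch() does not drop data: the dataset yields ceil(2048 / 512) = 4
    # batches per pass, and model.fit re-iterates it every epoch.
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.shuffle(buffer_size=len(features)).batch(batch_size)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(make_tf_dataset(X, y), epochs=3)  # no manual batch loop needed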
@ArmanAli-ww7ml · 2 years ago
Do we need historical data to train our model in reinforcement learning?
@dataroots · 2 years ago
Hi Arman, you can start building your reinforcement learning algorithm from scratch: this is called online training, and the model will learn as it sees more examples and becomes better (hence not a great model at the beginning). Or, if you have historical data, you can use offline learning to get a first model before using online training to improve it :)
@ArmanAli-ww7ml · 2 years ago
@@dataroots So if I have historical state data, I can use it to train an RL agent? That would be called offline training?
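A minimal sketch of that offline-then-online pattern, assuming a toy tabular Q-learning agent (the logged transitions, environment stand-in, and hyperparameters are all illustrative):

import numpy as np

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.9
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, r, s_next):
    # Standard Q-learning temporal-difference update.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

# Offline phase: replay logged (state, action, reward, next_state) history.
logged = [(0, 1, 1.0, 2), (2, 3, 0.0, 5), (5, 0, 1.0, 9)]
for s, a, r, s_next in logged:
    q_update(s, a, r, s_next)

# Online phase: keep improving from fresh interactions (epsilon-greedy).
rng = np.random.default_rng(0)
s = 0
for _ in range(100):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next, r = int(rng.integers(N_STATES)), float(rng.random())  # stand-in for env.step(a)
    q_update(s, a, r, s_next)
    s = s_next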
@efesencan8079 · 2 years ago
My second question is: if interactions were not encoded as binary but as the actual ratings (explicit feedback rather than implicit feedback), does your provided code still produce meaningful ncf_predictions?
@murilo-cunha · 2 years ago
I believe it should (it's been a while). The only thing you want to modify is to normalize the actual ratings between 0 and 1.
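A minimal sketch of that normalization, assuming 1-5 star ratings in a pandas column (the column names are illustrative):

import pandas as pd

df = pd.DataFrame({"user": [1, 1, 2], "item": [10, 20, 10], "rating": [4, 5, 2]})

# Min-max scale 1-5 star ratings into [0, 1] so they can replace the binary
# interaction targets that the sigmoid output expects.
df["target"] = (df["rating"] - 1) / (5 - 1)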
@efesencan8079 · 2 years ago
I did not really understand what these ncf_predictions mean. Does a higher ncf_prediction value for a specific (user_id, item_id) pair mean the item should be recommended to the user? Then, during the recommendation phase, should I recommend the item_ids with the highest ncf values to that user?
@murilo-cunha · 2 years ago
Yes, the items with the highest predicted values that the user has not already seen/bought should be recommended. ncf_predictions is basically the model's "guess" of whether you'd buy/watch something by yourself (and we approximate "watched" = "liked").
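A minimal sketch of that recommendation step for one user, assuming a vector of scores and a set of already-consumed items (all names illustrative):

import numpy as np

# One predicted score per item for a given user (e.g. from model.predict).
ncf_predictions = np.array([0.91, 0.15, 0.78, 0.60, 0.33])
seen_items = {0, 3}  # items the user already watched/bought

scores = ncf_predictions.copy()
scores[list(seen_items)] = -np.inf    # mask out items already consumed
top_k = np.argsort(scores)[::-1][:3]  # highest remaining scores first
print(top_k)  # -> [2 4 1]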
@efesencan8079 · 2 years ago
@@murilo-cunha Thank you for the answers. Do you also have any recommendations to reduce the training time of the NCF model? I currently have 138k users and 1,470 items, and it takes days to finish the training process.
@murilo-cunha · 2 years ago
@@efesencan8079 Hmm, nothing in particular for this. You can always reduce the model size (layers, embedding size, etc.), scale your training up (get a more powerful machine, GPUs, etc.), or scale out (distributed training with SparkML or something). It's a bit hard to say without more specific info. Hope this helps!
@chetouanethiziri3831 · 2 years ago
Hi @Efe Sencan, can you give me the link to your dataset please? I am having trouble finding one; I am also working on social media users.
@eprohoda · 2 years ago
dataroots- well.
@ferdyanggara4440 · 2 years ago
Very good insight!
@dataroots · 2 years ago
Thanks for watching!
@demohub · 2 years ago
Good demonstration. Thanks for sharing.
@dataroots · 2 years ago
Thanks for watching!
@pierangelocalanna7727 · 2 years ago
Hi Vitale and everyone, great presentation so far, thanks for sharing this with me. Have you guys worked with Object Tracking models?
@vitalesparacello6454 · 2 years ago
Hello Pierangelo, thanks for the comment! Unfortunately I've never worked with these models, but I think it's a very interesting topic. For example, I saw that a company in London has implemented a tracking system to keep track of the queue of people ordering in a pub. Do you know of other notable implementations?
@faouzibraza2635 · 2 years ago
Nice, guys! You look like rock stars!
@farhanuddinfazaluddinkazi7198 · 2 years ago
Thank you, loved the explanation; you covered quite a lot in very little time, and very clearly too.
@dataroots · 2 years ago
Glad you liked it
@gabrielfernandez4334 · 2 years ago
Great content, thanks guys!
@hetvipatel4894 · 2 years ago
Hi! Do you have any code for a real-time interview app with Streamlit and Python?
@dataroots · 2 years ago
The code for this demo (github.com/datarootsio/rootslab-streamlit-demo) is about voice transfer with a lightweight Streamlit application, not real-time interviewing.
@senantiasa · 2 years ago
For me, this was the clearest explanation..!!
@hatmous1734 · 3 years ago
👍🏼
@thecheeseking2757 · 3 years ago
Sounds like a Half-Life song from the soundtrack.
@keresztes813 · 3 years ago
I see what you mean
@bobbyiliev_ · 3 years ago
Great episode! Really enjoyed it!
@dataroots · 2 years ago
Glad you enjoyed it!
@riteshpathak7895 · 3 years ago
The first 12 minutes have no content; please remove them.
@riteshpathak7895 · 3 years ago
Thanks for sharing.
@vasyay5307 · 3 years ago
Very useful and cool. Thank you! In our setup we use S3 <--> DataSync <--> EFS mounted in ECS.
@fatmadehbi2946 · 3 years ago
Hello, thanks for this video. Could you please send me the code?
@murilo-cunha · 3 years ago
There are some links in the description. For Google Colab: colab.research.google.com/github/murilo-cunha/inteligencia-superficial/blob/master/_notebooks/2020-09-11-neural_collaborative_filter.ipynb
@fatmadehbi2946 · 3 years ago
@@murilo-cunha thanks a lot
@arvindchavan9759 · 3 years ago
Thanks. I have a question about the user ID information: is it possible to provide user-related information as input to the model?
@murilo-cunha · 3 years ago
Yes, you can. But then you are moving towards a more hybrid approach (as opposed to the collaborative filtering approach in the video).
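A minimal sketch of what such a hybrid model could look like in Keras, concatenating a user-feature vector with the learned embeddings (layer sizes, feature counts, and names are illustrative assumptions, not the video's architecture):

import tensorflow as tf

n_users, n_items, n_user_feats = 1000, 500, 8

user_id = tf.keras.Input(shape=(1,), name="user_id")
item_id = tf.keras.Input(shape=(1,), name="item_id")
user_feats = tf.keras.Input(shape=(n_user_feats,), name="user_features")  # e.g. demographics

u = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_users, 16)(user_id))
i = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_items, 16)(item_id))

# Side information joins the collaborative signal here, making the model hybrid.
x = tf.keras.layers.Concatenate()([u, i, user_feats])
x = tf.keras.layers.Dense(32, activation="relu")(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model([user_id, item_id, user_feats], out)
model.compile(optimizer="adam", loss="binary_crossentropy")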
@matthewdaw6485 · 3 years ago
Man, this sound quality is terrible. I can only understand maybe 20% of what you're saying. Even Google's captions think you're speaking German half of the time. Very disappointed.
@nemanjaradojkovic1224 · 3 years ago
Howdy, cowboys!
@dragon3602010 · 3 years ago
Awesome, I like it 😊 I have a question: is there a way to attach a debugger and then try some code on the fly, without rerunning everything, like in a Python notebook? Thanks
@itmazter · 3 years ago
Nicely explained video. I can't find this path on GitHub. Can you please help me with the source link? source = "datarootsio/ecs-airflow/aws"
@ivombi2643 · 3 years ago
Very good presentation.
@thomasdekelver74 · 3 years ago
Can you share the code you showed during the demo to create the pipeline?
@Bawaromerali · 3 years ago
Thanks, but the problem with these kinds of videos is that you are talking to an expert who already knows all these things; someone who does not know them will not understand anything! I hope future videos will be more detailed and explain each step slowly, not just read the slides!
@jagicyooo2007 · 3 years ago
No one can hold your hand through everything; you need to do some research on your own to get a feel for this domain. I'd suggest you do that first and then come back to re-watch the video.
@Bawaromerali · 3 years ago
@@jagicyooo2007 Thanks for the reply. I learned and already built a recommender system, and I realized these kinds of videos are a waste of time! People should learn how to implement it, not just watch short videos and highlights.
4 years ago
Where can I find the code shown in this demo/livestream? Could you share it with us?
@dataroots · 4 years ago
github.com/datarootsio/rootslab-streamlit-demo
@retr0435 · 3 years ago
@@dataroots Thanks
@banji007 · 4 years ago
How can I convert speech to text using Streamlit?
@PrasunChakrabortyPC · 4 years ago
@Anirban Banerjee try this code:

import streamlit as st
import speech_recognition as sr

def takecomand():
    r = sr.Recognizer()
    # Note: sr.Microphone() records on the machine running the app, and
    # recognize_google() needs internet access.
    with sr.Microphone() as source:
        st.write("answer please....")
        audio = r.listen(source)
    try:
        text = r.recognize_google(audio)
        st.write("You said :", text)
        return text  # returned inside try, so text is always defined
    except Exception:
        st.write("Please say again ..")
        return None

if st.button("Click me"):
    takecomand()
@banji0074 жыл бұрын
@@PrasunChakrabortyPC thanks so much
@noeld.8363 · 4 years ago
Do you modify the similarity distance by giving a pairwise constraint to the model? What do you modify exactly when you give this new constraint? (this is more a question about semi-supervised learning and not interactive clustering)
@lykmeraid1 · 4 years ago
Samuel, Samuel, does whatever Samuel does. Can he swing from a web? No he can't, he's just a DevOps guy. Watch ouuuut, here comes Samuel!