Comments
@nmanikiran 10 hours ago
Yes, please make a separate video
@azam_izm 1 day ago
good
@Glitcheverywhere 1 day ago
You should change the title of the video.
@raoufayadi1583 2 days ago
If you can’t explain it to a kid, you didn’t get it in the first place. This is by far the best practical explanation of DRL; keep up the good work!
@MuchlisNopq 2 days ago
Great 🎉
@saivarshith8554 3 days ago
Thank you, very helpful
@cturyasiima 3 days ago
Great video, appreciate you man👏
@aadhuff5691 5 days ago
simple
@CheikhPensaba 14 days ago
w vid
@readmore8974 15 days ago
Hmm, pretty cool idea
@grapplerk2635 24 days ago
How can I use something like that, but for creative-style text generation?
@TheOpenSourceChannel 23 days ago
Stable Diffusion excels at generating images from text prompts; it isn't designed for creative text generation. LLMs could be an option for you (explore libraries like TensorFlow or PyTorch to interact with them), as they are trained specifically on text data and can generate creative text formats.
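As a starting point, here is a minimal sketch of text generation with the Hugging Face transformers pipeline (the gpt2 model, prompt, and sampling settings are placeholder choices for illustration):
```
from transformers import pipeline

# Load a small, general-purpose language model for text generation.
generator = pipeline('text-generation', model='gpt2')

# Sample a short creative continuation of a prompt.
result = generator(
    'Once upon a time in a city of glass,',
    max_new_tokens=50,
    do_sample=True,    # sample instead of greedy decoding for more varied output
    temperature=0.9,   # higher temperature -> more creative, less predictable text
)
print(result[0]['generated_text'])
```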
@TheOpenSourceChannel 25 days ago
Source code: github.com/VatsalBhesaniya/Customer-Support-Bot
@godangle2141 25 days ago
Thank you! I successfully did it. I had some issues accessing the model from Hugging Face and then with the API, but overall I resolved them and integrated the Llama model in Google Colab. Thank you ❤
@emreylmaz3330 26 days ago
Thank you
@mrwhite5198 1 month ago
Please share the source code.
@TheOpenSourceChannel 25 days ago
Please check the description or pinned comment.
@sakshisakhare9150 1 month ago
Can I get the source code?
@TheOpenSourceChannel 25 days ago
Please check the description or pinned comment for the source code.
@shahbazbhai8525 1 month ago
This is not working in a Flutter web project, using the Windows operating system.
@TheOpenSourceChannel 23 days ago
path_provider does not support the web platform. Web browsers have no standard concept of a local filesystem path and provide no built-in API for accessing local file paths the way native file systems do. If you want to pick files from a web app, you should explore the file_picker library.
@remyb9495 1 month ago
What is it with everyone who has a complex flowchart calling it AI? Skyrim doesn't have AI; Skyrim has a bag of quest modifiers it puts together to make it SEEM like it has more quests, but there's no real intelligence behind it, and I bet the programmers who made it would laugh at the idea...
@joguns8257 1 month ago
Superb illustration👍
@grapplerk2635 1 month ago
Hi man, can we create an AI that plays MMO games? I have different ideas, but lack the technical skills...
@josejaimecome 1 month ago
Good starting point for web development with a framework that allows a lot of customization. I see this tutorial as particularly useful for anyone starting to use Flask!
@wesoypiese1995 1 month ago
Since when are basic algorithms "AI"?
@brahimferjani3147 1 month ago
plt.subplot(2,3,1) must be plt.subplot(1,3,1)
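For context, plt.subplot(rows, cols, index) places a plot at position index on a rows-by-cols grid, so (1, 3, 1) is the first cell in a single row of three. A minimal sketch with placeholder data:
```
import matplotlib.pyplot as plt

# One row, three columns: three plots side by side.
plt.subplot(1, 3, 1)
plt.plot([1, 2, 3])

plt.subplot(1, 3, 2)
plt.plot([3, 2, 1])

plt.subplot(1, 3, 3)
plt.plot([2, 2, 2])

plt.show()
```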
@MathGuy-Tlime 2 months ago
Wow
@benkay157 2 months ago
Wow, this is amazing. I now understand exactly how RAG operates thanks to your excellent explanation.
@yesitstrue9809 2 months ago
Why not just add a quick SDK and then a few more AI providers, not just Gemini, like what easybeam ai is doing?
@DevangBadiyani 2 months ago
I completed the 100th like. Felt nice.
@TheOpenSourceChannel 2 months ago
Thank you! I am feeling nice too.
@pjmaas106 2 months ago
Wonderful video. Thank you
@InfoSecGSO 2 months ago
As someone who works on OO Python projects/codebases intermittently, this video is absolutely one of the best concise, easy-to-understand explanations and demos of OOP.
@Hoaxre 2 months ago
Amazing description brother!
@yuryitikhonoff9631 2 months ago
Thanks for such a detailed explanation
@Cukito4 2 months ago
Hello, why use separate files and not keep all the information in the main .py file? Also, how does the open function know where the file is stored on the hard drive?
@TheOpenSourceChannel 2 months ago
Hi @Cukito4, you can keep everything in the main.py file, but separate files keep your code organized and reusable. When you call open('example.txt', 'r') with just a filename, Python looks for the file in the current working directory; to point at an exact location on the hard drive, pass a full path.
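For illustration, a minimal sketch of relative vs. absolute paths (the file names and the absolute path are placeholders):
```
import os

# The current working directory: where Python resolves bare filenames.
print(os.getcwd())

# Relative path: looked up inside the current working directory.
with open('example.txt', 'r') as f:
    print(f.read())

# Absolute path: one exact location on the disk (hypothetical path).
with open('/home/user/notes/example.txt', 'r') as f:
    print(f.read())
```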
@seefinish_ 2 months ago
Cool
@Qamarvisitor6564 2 months ago
How do I make a website for weapons?
@suriyakumar2376 2 months ago
The explanation in the video is amazing, keep up the good work 👍👍
@Lasvegasnowman1 3 months ago
The calculations for this must be immense.
@delbertholsworth3334 3 months ago
No. Mario Kart CHEATS. There's a difference.
@0.4sks19 3 months ago
Where can I find your documentation/code?
@souravbarua3991 3 months ago
Thank you for sharing. Looking forward to learning more about this technique.
@abdulsalamaliyu2563 3 months ago
Please, how do you get the numeric representation of the data that you use to predict churn?
@TheOpenSourceChannel 3 months ago
Please check the video at 4:30. There are two steps to get the numeric representation of the data. Label encoding converts categorical data into numerical labels; for example, Female becomes 0 and Male becomes 1. One-hot encoding creates a new binary feature for each category to represent its independent effect; for example, France becomes [1, 0, 0], Spain becomes [0, 0, 1], and Germany becomes [0, 1, 0].
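A minimal sketch of both steps with recent scikit-learn (the column names and sample rows are made up for illustration, not taken from the video's dataset):
```
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Hypothetical sample of the categorical columns discussed above.
df = pd.DataFrame({
    'Gender': ['Female', 'Male', 'Female'],
    'Geography': ['France', 'Spain', 'Germany'],
})

# Label encoding: each category becomes an integer (Female -> 0, Male -> 1).
df['Gender'] = LabelEncoder().fit_transform(df['Gender'])

# One-hot encoding: one binary column per country.
ohe = OneHotEncoder(sparse_output=False)
geo = ohe.fit_transform(df[['Geography']])
geo_df = pd.DataFrame(geo, columns=ohe.get_feature_names_out(['Geography']))

df = pd.concat([df.drop(columns='Geography'), geo_df], axis=1)
print(df)
```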
@srishrachamalla9607 3 months ago
Hey, I have a question: should I use ML or DL for these kinds of problem statements?
@TheOpenSourceChannel 3 months ago
If you have a smaller dataset, you should start with an ML model. It is beginner friendly, as you can use established ML algorithms. If you have a large, complex dataset and want to learn complex features and relationships from your data, then DL might be worth exploring, though it requires more computational resources and is more complex to implement.
@yousefhajeb 3 months ago
great video
@방향-o7z 3 months ago
Perfect! 👍👍
@방향-o7z 3 months ago
1:12 initialize
@방향-o7z 3 months ago
1:27
@방향-o7z 3 months ago
1:37
@TheMaxKids 4 months ago
Absolutely the best video on the topic I've seen yet. Thank you.
@sjinja4321 4 months ago
TLDR
@RACM27MD 4 months ago
A few tips to run this as of the 5th of August 2024 with Llama 3.1 8B Instruct:
Next to pip install transformers, add an upgrade of transformers:
```
!pip install transformers torch accelerate bitsandbytes
!pip install --upgrade transformers
```
This is the full import section:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig, pipeline
from huggingface_hub import login
```
Hugging Face login, model id, and config:
```
login(token=hf_token)

model_id = 'meta-llama/Meta-Llama-3.1-8B-Instruct'

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 8.0}  # Adjust the factor as needed

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map='auto')
```
Text generator:
```
text_generator = pipeline(
    'text-generation',
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
)
```
Everything else can stay the same. Also, go to Runtime -> Change runtime type and select the GPU option. And don't forget to ask for access to Llama on Hugging Face; it won't work if you're not approved.
@TheOpenSourceChannel 4 months ago
Thank you for sharing!
@EgidijaM. 4 months ago
@RACM27MD - you saved me! Thanks a lot for such helpful notes!
@mdtsai4973 4 months ago
Life saver. Thank you for your good notes.
@Rengoku1yu 4 months ago
On the page it tells me that I am already authorized, but when I go to run the code it tells me that I am not authorized. I tried a test in VS Code, but it tells me the same thing. I need help, please.
@emreylmaz3330 26 days ago
thx man
@lester3340 4 months ago
I think you missed defining the access token here:
```
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=accessToken
)
```
@souravmallikrocks 5 months ago
What techniques are available for measuring coherence, relevance, and fluency in the output refinement of the generator steps?
@TheOpenSourceChannel 5 months ago
To evaluate coherence, there are metrics like BLEU and ROUGE. They compare the generated text to reference texts for consistency and logical flow, returning a score between 0 and 1, where a higher score indicates better performance. Relevance can be assessed through semantic similarity measures like cosine similarity over BERT embeddings, which checks that the generated content is pertinent to the given context. For fluency, there are perplexity and human evaluation metrics that judge the smoothness and grammatical correctness of the generated text; these essentially assess the language model's output quality.
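A minimal sketch of two of these measurements, assuming nltk and sentence-transformers are installed (the reference and generated sentences are placeholders):
```
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sentence_transformers import SentenceTransformer, util

reference = 'The cat sat on the mat.'
generated = 'A cat was sitting on the mat.'

# BLEU: n-gram overlap between the candidate and a reference (0 to 1).
bleu = sentence_bleu(
    [reference.split()],    # list of tokenized reference texts
    generated.split(),      # tokenized candidate
    smoothing_function=SmoothingFunction().method1,
)
print(f'BLEU: {bleu:.3f}')

# Relevance: cosine similarity between sentence embeddings (closer to 1 = more similar).
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode([reference, generated])
print(f'Cosine similarity: {util.cos_sim(embeddings[0], embeddings[1]).item():.3f}')
```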
@hasnainazam3746 5 months ago
Could you please share the Colab link?
@SebasrianAquino 5 months ago
Interesting