If you can't explain it to a kid, you didn't get it in the first place. This is by far the best practical explanation of DRL. Keep up the good work!
@MuchlisNopq 2 days ago
Great 🎉
@saivarshith8554 3 days ago
Thank you, very helpful
@cturyasiima 3 days ago
Great video, appreciate you man👏
@aadhuff5691 5 days ago
simple
@CheikhPensaba 14 days ago
w vid
@readmore8974 15 days ago
Hmm, pretty cool idea
@grapplerk2635 24 days ago
How can I use something like that but for creative-style text generation?
@TheOpenSourceChannel 23 days ago
Stable Diffusion excels at generating images from text prompts, but it is not designed for creative text generation. Trying LLMs could be an option for you, as they are specifically trained on text data and can generate creative text formats (explore libraries like TensorFlow or PyTorch to interact with LLMs).
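For a concrete starting point, here is a minimal sketch using the Hugging Face transformers pipeline (the model name is just an example; any open text-generation model on the Hub would work):
```
# Minimal sketch: creative text generation with a Hugging Face pipeline.
# 'gpt2' is only an example model; swap in any text-generation model.
from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')

prompt = 'Write the opening line of a fantasy story:'
# Sampling with a higher temperature makes the output more varied/"creative".
result = generator(prompt, max_new_tokens=50, do_sample=True, temperature=0.9)
print(result[0]['generated_text'])
```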
Thank you! I successfully did it. I had some issues related to accessing the model from Hugging Face and then with the API, but I resolved them all and integrated the Llama model in Google Colab. Thank you ❤
@emreylmaz3330 26 days ago
Thank you
@mrwhite5198 1 month ago
Please share the source code
@TheOpenSourceChannel 25 days ago
Please check the description or pinned comment.
@sakshisakhare9150 1 month ago
Can I get the source code?
@TheOpenSourceChannel 25 days ago
Please check the description or pinned comment for the source code.
@shahbazbhai8525 1 month ago
This is not working in a Flutter web project, using the Windows operating system.
@TheOpenSourceChannel 23 days ago
path_provider does not support the web platform. Web browsers do not have a standard concept of a local filesystem path and do not provide a built-in API to access local file paths the way native file systems do. If you want to pick files from a web app, you should explore the file_picker library.
@remyb9495 1 month ago
What is it with everyone who has a complex flowchart calling it AI? Skyrim doesn't have AI; Skyrim has a bag of quest modifiers it puts together to make it SEEM like it has more quests, but there's no real intelligence behind it, and I bet the programmers who made it would laugh at the idea...
@joguns8257 1 month ago
Superb illustration👍
@grapplerk2635 1 month ago
Hi man, can we create an AI that plays MMO games? I have different ideas, but lack the technical skills...
@josejaimecome 1 month ago
A good start for web development with a framework that allows a lot of customization. I particularly see this tutorial as useful for anyone starting to use Flask!
@wesoypiese1995 1 month ago
Since when are basic algorithms "AI"?
@brahimferjani3147 1 month ago
plt.subplot(2,3,1) must be plt.subplot(1,3,1)
@MathGuy-Tlime 2 months ago
Wow
@benkay157 2 months ago
Wow, this is amazing. I now understand exactly how RAG operates thanks to your excellent explanation.
@yesitstrue9809 2 months ago
Why not just add a quick SDK and then a few more AI providers, not just Gemini, like what easybeam ai is doing?
@DevangBadiyani 2 months ago
I completed the 100th like. Felt nice.
@TheOpenSourceChannel 2 months ago
Thank you! I am feeling nice too.
@pjmaas106 2 months ago
Wonderful video. Thank you
@InfoSecGSO 2 months ago
As someone who works on OO Python projects/code-bases intermittently, this video is absolutely one of the best concise, easy-to-understand explanations and demos of OOP.
@Hoaxre 2 months ago
Amazing description brother!
@yuryitikhonoff9631 2 months ago
Thanks for such a detailed explanation
@Cukito4 2 months ago
Hello, why use separate files and not keep all the information in the main .py file? Also, how does the open() function know where the file is stored on the hard drive?
@TheOpenSourceChannel 2 months ago
Hi @Cukito4, you can keep all the information in the main.py file, but separate files keep your code organized and reusable. When you use open('example.txt', 'r') with a full path, you're telling Python the exact location of the file. If you don't specify a path, Python looks for the file in the current working directory.
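Here is a minimal sketch of the difference (example.txt and the absolute path are placeholder names):
```
import os

# Relative path: Python looks in the current working directory.
with open('example.txt', 'r') as f:
    print(f.read())

# Absolute path: Python uses the exact location you give it.
with open('/home/user/notes/example.txt', 'r') as f:
    print(f.read())

# Check which directory a relative path is resolved against:
print(os.getcwd())
```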
@seefinish_ 2 months ago
Cool
@Qamarvisitor6564 2 months ago
How do I make a website for weapons?
@suriyakumar2376 2 months ago
The explanation in the video is amazing, keep up the good work 👍👍
@Lasvegasnowman1 3 months ago
The calculations for this must be immense.
@delbertholsworth3334 3 months ago
No. Mario Kart CHEATS. There's a difference.
@0.4sks19 3 months ago
Where can I find your documentation code?
@souravbarua3991 3 months ago
Thank you for sharing. Looking forward to knowing more about this technique.
@abdulsalamaliyu2563 3 months ago
Please, how do you get the numeric representation of the data that you use to predict churn?
@TheOpenSourceChannel 3 months ago
Please check the video at 4:30. There are two steps to get the numeric representation of the data. Label Encoding converts categorical data into numerical labels; for example, Female becomes 0 and Male becomes 1. One-Hot Encoding creates a new binary feature for each category to represent its independent effect; for example, France becomes [1, 0, 0], Spain becomes [0, 0, 1], and Germany becomes [0, 1, 0].
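For anyone who wants to try these two steps, here is a minimal sketch with pandas and scikit-learn (the column names are assumptions based on a typical churn dataset, not necessarily the exact ones in the video):
```
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    'Gender': ['Female', 'Male', 'Female'],
    'Geography': ['France', 'Spain', 'Germany'],
})

# Label Encoding: Female -> 0, Male -> 1
df['Gender'] = LabelEncoder().fit_transform(df['Gender'])

# One-Hot Encoding: one binary column per country
# (columns come out in alphabetical order: France, Germany, Spain)
df = pd.get_dummies(df, columns=['Geography'])
print(df)
```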
@srishrachamalla9607 3 months ago
Hey, I have a question: should I use ML or DL for these kinds of problem statements?
@TheOpenSourceChannel 3 months ago
If you have a smaller dataset, start with an ML model. It is beginner-friendly, as you can use established ML algorithms. If you have a large, complex dataset and want to learn complex features and relationships from your data, then DL might be worth exploring, but keep in mind that DL requires more computational resources and is more complex to implement.
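As a concrete example of the "start with ML" advice, here is a minimal scikit-learn baseline (synthetic data stands in for a real dataset):
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a small tabular dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# A simple, established ML algorithm as a baseline
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f'Baseline accuracy: {model.score(X_test, y_test):.2f}')
```
If a baseline like this is already good enough, you may not need DL at all.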
@yousefhajeb 3 months ago
great video
@방향-o7z 3 months ago
Perfect! 👍👍
@방향-o7z 3 months ago
1:12 initialize
@방향-o7z 3 months ago
1:27
@방향-o7z 3 months ago
1:37
@TheMaxKids 4 months ago
Absolutely the best video on the topic I've seen yet. Thank you.
@sjinja4321 4 months ago
TLDR
@RACM27MD 4 months ago
A few tips to run this as of the 5th of August 2024 with Llama 3.1 8B Instruct:

Next to pip install transformers, add upgrade transformers:
```
!pip install transformers torch accelerate bitsandbytes
!pip install --upgrade transformers
```
This is the full import section:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig, pipeline
from huggingface_hub import login
```
Hugging Face login, model_id and config:
```
login(token=hf_token)

model_id = 'meta-llama/Meta-Llama-3.1-8B-Instruct'

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 8.0}  # Adjust the factor as needed

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map='auto')
```
Text generator:
```
text_generator = pipeline(
    'text-generation',
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
)
```
Everything else can stay the same.

Also, go to Runtime -> Change runtime type and select the GPU option. And don't forget to ask for access to Llama on Hugging Face. It won't work if you're not approved.
@TheOpenSourceChannel 4 months ago
Thank you for sharing!
@EgidijaM. 4 months ago
@RACM27MD - you saved me! Thanks a lot for such helpful notes!
@mdtsai4973 4 months ago
Life saver. Thank you for your good notes.
@Rengoku1yu 4 months ago
On the page it tells me that I am already authorized, but when I go to run the code it tells me that I am not authorized. I tried a test in VS Code, but it tells me the same thing. I need help, please.
@emreylmaz3330 26 days ago
thx man
@lester3340 4 months ago
I think you missed defining the access token here:
```
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=accessToken
)
```
@souravmallikrocks 5 months ago
What techniques are available for measuring coherence, relevance, and fluency when refining the generator's output?
@TheOpenSourceChannel 5 months ago
To evaluate coherence, there are metrics like BLEU and ROUGE. These metrics compare the generated text to reference texts for consistency and logical flow. They return a score between 0 and 1, where a higher score indicates better performance. Relevance can be assessed through semantic similarity measures like cosine similarity with BERT embeddings, which checks that the generated content is pertinent to the given context. For fluency, there are perplexity and human evaluation metrics that judge the smoothness and grammatical correctness of the generated text; these assess the language model's output quality.
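Here is a minimal sketch of those metrics in code, assuming the nltk, rouge-score, and sentence-transformers packages are installed (the example sentences are placeholders):
```
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

reference = 'The cat sat on the mat.'
generated = 'A cat was sitting on the mat.'

# BLEU: n-gram overlap with the reference (0 to 1, higher is better);
# smoothing avoids zero scores on short sentences.
bleu = sentence_bleu([reference.split()], generated.split(),
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-L: longest-common-subsequence overlap with the reference
scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=True)
rouge_l = scorer.score(reference, generated)['rougeL'].fmeasure

# Relevance: cosine similarity between sentence embeddings
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode([reference, generated])
relevance = util.cos_sim(embeddings[0], embeddings[1]).item()

print(f'BLEU: {bleu:.3f}  ROUGE-L: {rouge_l:.3f}  relevance: {relevance:.3f}')
```
For perplexity you would typically score the generated text with a language model, and human evaluation is usually done with rating rubrics rather than a library.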