Thank you for this, I was able to reproduce your results for the face recognition code. My settings were the default "Contrastive Search" settings except I lowered the temperature to 0.95. In addition, my code called the same xml file as yours did, but I had to move the haarcascade_frontalface_default.xml file into the same folder as the .py file. Thanks for the video, it was really interesting and it helps a lot when trying to figure this stuff out.
@NerdyRodent 1 year ago
Yup. "Contrastive Search Is What You Need For Neural Text Generation" ftw! The Table 9, Appendix A settings are good too (beam_width, alpha, decoding_len = 4, 0.6, 512).
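In the Hugging Face transformers API those settings map roughly onto the contrastive search arguments of generate(): top_k plays the role of beam_width and penalty_alpha is alpha. A minimal sketch, with the checkpoint name purely as an example:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Example checkpoint only; any causal LM from the Hub works the same way
model_name = "Salesforce/codegen-2B-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "# Python3 code that detects a face in an image\nimport cv2\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    top_k=4,             # beam_width = 4
    penalty_alpha=0.6,   # alpha = 0.6
    max_new_tokens=512,  # decoding_len = 512
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))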
@AaronALAI 1 year ago
@@NerdyRodent WOW! I used your settings suggestion (Where does Table 9, Appendix A come from?) and used this as the input:

# Python3 code that detects a face in an image
import cv2
import numpy as np
import os

# Load the image
img = cv2.imread('test.jpg')

and it gave me the remainder perfectly!

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Load the cascade
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 4)

# Draw a rectangle around every found face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

# Display the result
cv2.imshow('img', img)
cv2.waitKey()
@amj2048 1 year ago
Thank you so much for doing this, Nerdy Rodent! This is such awesome information!
@NerdyRodent 1 year ago
You are so welcome!
@4.0.4 1 year ago
LoRAs for text models will be groundbreaking! Can you imagine the possibilities? Wow! Think of what LoRAs can do for Stable Diffusion already 🤯
@NerdyRodent 1 year ago
I can’t wait!
@AC-zv3fx 1 year ago
LoRA was originally created for fine-tuning text models, as a fun fact 😊
@4.0.4 1 year ago
@@thepaddedcell8140 LoRAs can do styles, concepts and characters, and you can combine them. Plus they train very fast.
@ArielTavori 1 year ago
@The Padded Cell depends. Try the "FotoAssisted" model. The LoRA I extracted from my DreamBooth looks better in it than the DreamBooth itself, and I've seen it do the same with other LoRA files and even TI embeddings... As with any of these things, you do still have to experiment with settings to get the best results.
@ArielTavori 1 year ago
@The Padded Cell also, my LoRA folder is already approaching 80 GB (x2 since I have a backup). Imagine if those were all DreamBooth checkpoints! Lol
@facelessvoice 1 year ago
Thank you very much for these kinds of videos, testing different models, etc.
@NerdyRodent 1 year ago
You are welcome!
@musumo1908 1 year ago
Another fine rodent release! Yay... to open source... for the peoples! Nerdy Rodent, is that your real voice? Or a naturally soothing voice-over?
@NerdyRodent 1 year ago
That one is real, however… (watch this space)
@kenmillionx 1 year ago
So great to watch😊😊
@NerdyRodent 1 year ago
😄
@shaiknaseer1490 1 year ago
I'm getting this error: "Can't determine model type from model name. Please specify it manually using --model_type argument". I can't load CodeGen. What should I do?
@MariusBLid 1 year ago
Very cool! I tried the multi model but was not able to make it work. It just started writing weird stuff. You said you needed to edit some code to make it work? What is the difference between multi and mono by the way? Awesome video!
@flethacker 1 year ago
Can you please make a video or leave a comment on what kind of home server I'd need to run this? Or can it run on a laptop?
@4.0.4 1 year ago
Anything with an Nvidia GPU; the more VRAM, the better. With some luck, AMD GPUs or M1 Macs. The "money isn't an issue" option is an RTX 4090, but an RTX 3060 12GB is a great value option. If you're made of money, one or more Nvidia A100s.
@NerdyRodent 1 year ago
It’s in the install video, but people have been running things on the Raspberry Pi 😆 For the most part a low-end PC will do; even something with a 4 GB GPU can do the job!
@oktayalpkabac 1 year ago
@@NerdyRodent Can you say for which models? Do you have any recommendations for conversational models?
@NerdyRodent 1 year ago
@@oktayalpkabac Loads of them do conversations. Any of the GPTs would do nicely!
@Player-unknown93 7 days ago
I have Ollama on an old netbook; it works fine for the 8B.
@pashute12 1 year ago
Could you please test the three (GPT-J, CodeGen and SantaCoder) with the following: "This is the code. When I run it I get an error of so-and-so. How do I fix that?" Also, did you ever compare Sourcegraph Cody AI and Codium?
@nathanbanks2354 1 year ago
That fill-in-the-middle sounds useful! I'd love it if SantaCoder could write my docstrings/comments for me...
@NerdyRodent 1 year ago
Right?! 😀
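For reference, SantaCoder's fill-in-the-middle mode works by wrapping the prompt in sentinel tokens so the model generates the missing middle (for example, a docstring between a function signature and its body). A minimal sketch assuming the Hugging Face transformers API; the <fim-prefix>/<fim-suffix>/<fim-middle> token names are taken from the bigcode/santacoder model card, so double-check them against the card you are using:

from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Everything before <fim-suffix> is the prefix, everything after it is the suffix;
# the model generates what belongs in the middle after <fim-middle>.
prompt = '<fim-prefix>def add(a, b):\n    """<fim-suffix>"""\n    return a + b\n<fim-middle>'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))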
@banzai316 1 year ago
But, but...definitely, geeks can't be replaced
@NerdyRodent 1 year ago
One day… who knows! 😎
@Vyviel 1 year ago
Is it possible to increase the prompt size to greater than 2048? Say 30,000+?
@guzu672 1 year ago
Nothing seems to work for me. How do I get the correct "Generation Parameters Preset"? I don't have NR-CodeGen-v3 in the list.
@NerdyRodent 1 year ago
Start with a contrastive search preset 😉
@swannschilling474 1 year ago
Brilliant!! 😮
@baptiste6436 1 year ago
Waiting for the moment we can feed many files or an entire codebase to these kinds of bots.
@nathanbanks2354 1 year ago
The LoRA support may be able to do this (15:20). I was thinking of fine-tuning the bot with whatever library I happen to be working on, but haven't run anything yet.
@SuenoXIII. 1 year ago
NGL you sound like the guy from the Rust update videos! :D
@tiagotiagot 1 year ago
07:15 I think that first one assumed a different version of Python. If I remember correctly, before Python 3, you didn't need to put the arguments for print inside parentheses.
@NerdyRodent 1 year ago
Poor GPT-J is old ;)
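That is exactly the change in question: in Python 2, print was a statement, so older models trained on plenty of Python 2 code will happily emit the old form; in Python 3 it is a function and the parentheses are required.

# Python 2 style (a statement; this is a SyntaxError in Python 3):
#   print "Hello, world"

# Python 3 style (print is a function):
print("Hello, world")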
@XaYaZaZa 1 year ago
Awesome!! Thanks!! I was getting frustrated with GPT-3 not doing simple things like making a fart poem about my daughter. How dare it refuse when we're both sitting here asking it for it! Needless to say, most of my time is spent arguing with it because it's being stupid - and programmed intentionally that way...
@NerdyRodent 1 year ago
GPT Neo is great at fart related stuff 😉
@saintkamus14 1 year ago
When is the Alpaca, or at least LLaMA, guide coming?
@NerdyRodent 1 year ago
Waiting for my download ☹️
@rashedulkabir6227 1 year ago
They removed Alpaca, right?
@Mind_of_a_fool 1 year ago
It will sure be interesting to see how far it'll go in a couple of years. Will we suddenly go "oh yeah, we've got full AI now", or will we even notice? Only time will tell.
@hipjoeroflmto4764 1 year ago
Does it have the problem GPT has where it stops typing the script?
@NerdyRodent 1 year ago
As shown, you can just continue or edit as you like!
@hipjoeroflmto4764 1 year ago
@@NerdyRodent nice :)
@qolathewfangarm 1 year ago
Could you please show us how to train a model using our own code and documentation? That's the most important thing I'm wondering about. ChatGPT was not able to provide any correct answers in my field of work; it gave incorrect answers 90% of the time, so I had to give up on it. So I want to train a new model much more heavily on my field of work.
@NerdyRodent 1 year ago
Playing with that now… just need a dataset 😆
@TheMemesofDestruction 1 year ago
More Nerdy Magic! ^.^
@riggitywrckd4325 1 year ago
Very cool, can't wait to try one of these models out. Do you think one of the LLaMA models would code well? A 3090 will run the 13B version of the model nicely in 8-bit. They are talking about it on the issues pages of oobabooga. A 3090 can run a 30B if you have at least 64 GB of RAM as well (they have 4-bit working). Can't wait to see what everyone does with these tools.
@NerdyRodent 1 year ago
Maybe. Still waiting for my Llama download!
@riggitywrckd4325 1 year ago
@@NerdyRodent I hope you get one. I may jump the line... for science ;-)
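For anyone wanting to try that, this is roughly what 8-bit loading looks like with the transformers + bitsandbytes stack. A minimal sketch, not a tested recipe: the model path is a placeholder, and the accelerate and bitsandbytes packages need to be installed for device_map="auto" and load_in_8bit to work.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/your-13b-model"  # placeholder; point this at whatever weights you have
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # let accelerate place layers on the available GPU(s)
    load_in_8bit=True,   # quantise weights to 8-bit via bitsandbytes
)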
@baptiste6436 1 year ago
Would you say CodeGen is better than GPT-4 at coding?
@nathanbanks2354 1 year ago
Probably not...but being able to run it yourself is great!
@aaronversiontwo4995 1 year ago
You know what would be ACTUALLY groundbreaking? An AI that could install PyTorch and CUDA and all kinds of other cool stuff, make environments, and install requirements WITHOUT any errors or things going wrong, haha.
@NerdyRodent 1 year ago
pip3 install torch torchvision torchaudio - there, I did it! XD
@aaronversiontwo4995 1 year ago
@@NerdyRodent "Torch not compiled with CUDA enabled" I give up lol
@NerdyRodent 1 year ago
@@aaronversiontwo4995 You will need an Nvidia GPU, of course 😉 PyTorch.org has the full grid of options for Linux with pip or conda, etc.
@aaronversiontwo4995 1 year ago
@@NerdyRodent Trust me, I know all this. I just bought an RTX 3060 even. I even ran the CUDA installer and added the cuDNN DLLs. Added everything to my PATH. I ran the conda command to install it in my environment. I used the pip way to install it in the folder. I literally tried everything. nvcc shows CUDA is all good. I have looked up guides. I have tried downgrading. I have literally tried everything. I don't know why it just won't play nice. When I try to do it myself it always screws up, but then something like the automatic1111 repo that does it all for you works every time. Go figure.
@NerdyRodent 1 year ago
@@aaronversiontwo4995 Ah, Microsoft Windows. Best to just conda install everything there!
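If you are stuck at "Torch not compiled with CUDA enabled", a quick way to see which build of torch is actually installed is:

import torch

print(torch.__version__)          # a CUDA wheel usually ends in something like +cu118; CPU-only ends in +cpu
print(torch.cuda.is_available())  # True only when a CUDA build can see a working Nvidia driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))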
@Starius2 1 year ago
If it can't code, can I teach it to code??
@NerdyRodent 1 year ago
For science!
@Starius2 1 year ago
@@NerdyRodent I'm serious though. In Stable Diffusion you can "teach" that AI what you want with Automatic1111; I was wondering if I could do the same with these AIs? Like give them documentation and such.
@NerdyRodent 1 year ago
I'm serious too - it's called "fine-tuning" :)
@Starius2 1 year ago
@@NerdyRodent WAIT REALLY?? I can teach it programming?? Teach me to teach it! I would love you forever and serve you!
@NerdyRodent 1 year ago
@@Starius2 lol 😆 For the most part you need pretty beefy hardware, but people have been doing things like the Alpaca LoRA for a bit of fine-tuning. Take a look at github.com/tloen/alpaca-lora for a good fine-tuning option which will run on a 4090.
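To give a flavour of what that LoRA fine-tuning looks like with the Hugging Face peft library: the base model, target module name and hyperparameters below are illustrative assumptions, not settings from the video or from alpaca-lora.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")  # example base model

lora_config = LoraConfig(
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor
    lora_dropout=0.05,
    target_modules=["qkv_proj"],  # attention projection names are model-specific; check your model
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# Train with the usual Trainer / training loop, then save just the adapter:
# model.save_pretrained("my-code-lora")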
@andrecook4268 1 year ago
Wow.
@NerdyRodent 1 year ago
😀
@zyxyuv1650 1 year ago
People are not making enough videos about how OpenAI is ClosedAI; it's kind of insulting to our human intelligence to even call it Open.
@FRareDom 1 year ago
amazing tech
@hipjoeroflmto4764 1 year ago
👀👀👀👀👀
@SovereignVis 1 year ago
Sounds interesting. I would like to see how well this could be used for coding games. I have just about all the art skills needed to make a game, but I don't yet know enough about coding to make a working game. I'm slowly learning, but it would be interesting to see if something like this could help me both code and learn faster.
@NerdyRodent 1 year ago
It could probably be an aid to learning. Certainly in fixing code 😉
@SovereignVis 1 year ago
@@NerdyRodent I tried doing a search on Hugging Face, and it doesn't look like there is one for GDScript. 🤔
@Player-unknown93 7 days ago
Yeah, I walked into Roblox Studio like "move out of my way", and as soon as I walked in I realised it's mostly all code. The AI helps, but holy shit it's hard as hell. Good luck.