I Ran ChatGPT on a Raspberry Pi Locally!

146,893 views

Data Slayer

9 months ago

Full Tutorial Instructions Here: / quick-start-step-by-st...
Product Links (some are affiliate links)
- Raspberry Pi 5 👉 amzn.to/48Qgy4O
GitHub Repo: git clone github.com/antimatter15/alpac...
Model Weights: huggingface.co/Sosaka/Alpaca-...
Here are the instructions to run a ChatGPT-like model locally on your device:
1. Download the zip file corresponding to your operating system from the latest release (github.com/antimatter15/alpac....). On Windows, download `alpaca-win.zip`; on Mac (both Intel and ARM), download `alpaca-mac.zip`; and on Linux (x64), download `alpaca-linux.zip`.
2. Download ggml-alpaca-7b-q4.bin (huggingface.co/Sosaka/Alpaca-...) and place it in the same folder as the `chat` executable in the zip file.
3. Once you've downloaded the model weights and placed them in the same directory as the `chat` or `chat.exe` executable, run `./chat` in the terminal (for macOS and Linux) or `.\Release\chat.exe` (for Windows).
4. You can now type to the AI in the terminal and it will reply.
If you prefer building from source, follow these instructions:
For macOS and Linux:
1. Clone the repository using `git clone github.com/antimatter15/alpac...`
2. Navigate to the cloned repository using `cd alpaca.cpp`.
3. Run `make chat`.
4. Run `./chat` in the terminal.
For Windows:
1. Download the weights via any of the links in "Get started" above, and save the file as `ggml-alpaca-7b-q4.bin` in the main Alpaca directory.
2. In the terminal window, run `.\Release\chat.exe`.
3. You can now type to the AI in the terminal and it will reply.
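On macOS or Linux (including a Pi), the from-source steps above collapse into a short script. This is a sketch, not an official installer: the repository URL is reconstructed from the truncated link in the description, and the weights download is left as a manual step.

```shell
#!/bin/sh
# Sketch of the build-from-source path described above.
# Assumes git, make, and a C/C++ toolchain are installed; the repo URL
# is reconstructed from the truncated link in the description.
set -e
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
# Download ggml-alpaca-7b-q4.bin (~4 GB) from the Hugging Face link
# above and place it in this directory, then start the REPL:
./chat
```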
As part of Meta’s commitment to open science, today we are publicly releasing LLaMA (Large Language Model Meta AI), a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
We are making LLaMA available in several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model, in keeping with our approach to Responsible AI practices.
Over the last year, large language models - natural language processing (NLP) systems with billions of parameters - have shown new capabilities to generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more. They are one of the clearest cases of the substantial potential benefits AI can offer at scale to billions of people.
Smaller models trained on more tokens - which are pieces of words - are easier to retrain and fine-tune for specific potential product use cases. We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens.
Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text. To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.
There is still more research that needs to be done to address the risks of bias, toxic comments, and hallucinations in large language models. Like other models, LLaMA shares these challenges. As a foundation model, LLaMA is designed to be versatile and can be applied to many different use cases, versus a fine-tuned model that is designed for a specific task. By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems in large language models. We also provide in the paper a set of evaluations on benchmarks evaluating model biases and toxicity to show the model’s limitations and to support further research in this crucial area.
To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world. People interested in applying for access can find the link to the application in our research paper.
We believe that the entire AI community - academic researchers, civil society, policymakers, and industry - must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular. We look forward to seeing what the community can learn - and eventually build - using LLaMA.

Comments: 167
@aaronjennings8385 · 6 months ago
Quantization, in plain English, is a process of representing something in a simplified or discrete form. It involves reducing the complexity or precision of something to make it easier to work with or understand.

Think of it like taking a detailed painting and converting it into a pixelated image. Instead of having many different shades and colors, the pixelated image uses a limited number of colors or pixels to represent the overall image. This simplification makes it easier to store, transmit, or process the image.

In the context of data or numbers, quantization involves reducing the number of possible values or levels that can be used to represent a measurement or a quantity. For example, instead of representing a measurement with infinite decimal places, quantization rounds it to a specific level of precision, such as rounding a decimal to the nearest whole number or a certain number of decimal places.

Quantization is commonly used in various fields, including digital signal processing, image and video compression, and data storage. It allows for more efficient use of resources, faster computations, and simpler representations, while still preserving the essential information or characteristics of the original data.
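The rounding idea in the comment above can be sketched in a few lines of shell. This is a toy illustration of snapping values to a fixed grid, not how 4-bit model weights are actually packed:

```shell
# Toy quantizer: snap a value to the nearest multiple of a step size.
# 4-bit quantization of model weights similarly limits each value to
# one of 16 discrete levels.
quantize() {
  awk -v x="$1" -v step="$2" \
    'BEGIN { printf "%g\n", int(x / step + (x < 0 ? -0.5 : 0.5)) * step }'
}

quantize 3.14159 0.25   # 3.14159 snaps to 3.25
quantize -1.9 0.5       # -1.9 snaps to -2
```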
@user-tw2nw2up7g · 5 months ago
AI wrote this
@3DComputing · 6 months ago
Thank you, I knew it was small; I didn't realise just how small. 10/10, short, sweet, concise.
@DiscontinuedRBASIDOWK · 9 months ago
Great video, a little misleading to call it ChatGPT considering the power of ChatGPT compared to this much smaller model but still a great video. Well done.
@DataSlayerMedia · 8 months ago
Fair enough! But Llama is competitive with GPT 3.5!
@zappy9880 · 8 months ago
@DataSlayerMedia lol no it's not. You used a 7B model, and on top of that you used Llama 1, not Llama 2. Right now the only model comparable to GPT-3.5 is Falcon 180B, and even then it still falls behind GPT in terms of coding capabilities.
@stevewall7044 · 6 months ago
@zappy9880 Dolphin 7B 2.2 is pretty good.
@abhiprojectz2995 · 6 months ago
@zappy9880 Obviously he did that for lots of views, don't you understand?
6 months ago
Guy just using a little clickbait to kickstart his channel. No shame in that... well, maybe just a smidge.
@rodrigo_plp · 5 months ago
So true. Many models have heavy requirement to run, like 16 GB of RAM, but depending on your use case you can get away with a lot less. I got surprising results using a vector database and Llama 2 even with 8 GB of RAM and 4 CPUs. In Supawiki (disclosure: built by me) I am using a bit more than that, and the results are impressive. Exciting stuff indeed.
@fredygerman_ · 5 months ago
did you run that without a GPU?
@rodrigo_plp · 5 months ago
@fredygerman_ Yes, without GPUs. Ollama can run entirely on CPUs. It uses all you've got and it's a bit slow, but it works.
@phambinhan17 · 3 months ago
@fredygerman_ You can run Llama 2 without a GPU by using llama.cpp
@StephenBrown88 · 5 months ago
Your passion for this stuff is magnetic!
@Diogenes20111 · 4 months ago
Thank you so much for this educational video!
@wood6454 · 5 months ago
That is impressive. I'm able to run 7B Q6 models on my old PC with an RX 580, and small language models like Phi-2 run faster than I can read. I believe the future of LLMs is gonna be local instead of cloud, due to privacy, as you said.
@MorrWorm8 · 9 months ago
Yo, I took a Pi 4 (8 GB) with an Argon 40 case that has the M.2, put in a 1 TB SSD, added Ubuntu, and I love it! Loads fast, responsive. I have my M1 MacBook, my Mac mini, and an older HP running Windows. Now I have a Linux desktop. I know I could have run a VM; I enjoy bare metal. Great video. Liked & subscribed.
@UncleDavid · 8 months ago
telling us ur life story is crazy
@georgeshafik3281 · 8 months ago
Great video; simple steps to follow, and everything worked the first time. It was slow, and I used hardware identical to yours. Really interested in using a larger Llama model with an Nvidia Jetson 👍
@whatisrokosbasilisk80 · 4 months ago
Use smaller models then
@agustinbmed · 5 months ago
I'm wondering if you can use it to train on, say, your files stored on an HDD and let it do its GPT part on them? Like ask it if you have a document that has X or Y content.
@user-oh3de3pq2l · 4 months ago
Would this run more cleanly on a stronger device with lots more RAM available, or is it more limited by the base model? for example, if I ran your pirate speak example with the same setup on, say a dual xeon server with 256 GB of RAM and ssd raid, would it have a chance to actually perform more properly?
@gn7026 · 8 months ago
I'm eagerly waiting to see the video on running the model on the Jetson Nano!
@Atmatan_Kabbaher · 7 months ago
Would've been where I started, personally..
@scottbruce5376 · 6 months ago
I worked through this on an ESXi 8 Ubuntu VM and had no problems. What's the next step? A web interface? I have a Docker container that connects to an LLM online; would I set up an API next to connect to it?
@gasmonkey1000 · 5 months ago
Silly question, but would a similar method also work with other GPTs, like GPT-4chan? Thanks and God bless ya
@Renetegro · 6 months ago
2040: i created a new universe using phone parts
@chrisarmstrong2721 · 6 months ago
Fantastic! When do you think it will also be able to do images, like the latest update where GPT-4 now natively pulls from DALL·E?
@onghiem · 6 months ago
can you integrate USB Coral AI accelerator to make this RPI faster, or could you run this on a PI cluster?
@davidtorrens8647 · 3 months ago
Yes please, me too, I'd like to know that.
@OblivifrekTV · 7 months ago
Wonder if it would work better with a Raspberry Pi Cluster
@legend_6483 · 8 months ago
Nice tutorial, it works perfectly on my Pi
@DataSlayerMedia · 8 months ago
Any interesting conversations? 👀
@legend_6483 · 8 months ago
@DataSlayerMedia lol, not really, since the CPU was on fire and the speed was very slow, but I liked the concept
@rawsomeone1 · 8 months ago
@@legend_6483 🤣😂😅
@jacquesb5248 · 4 months ago
pondering training my own model
@NicksonNg · 6 months ago
I tried running it on a Pi 5, and it's still not very usable even though there's a performance boost
@DarthCrumbie · 6 months ago
Would it be possible to use this or similar setup to replace Google Home or Amazon Echo? Ever since the story broke about the person that got their amazon account suspended by an amazon delivery driver I've wanted to find a way to isolate my smart home from the internet.
@DataSlayerMedia · 6 months ago
Should be! Look into ESPHome; there's a whole community around this.
@getzybaggins · 6 months ago
At the end, do you mean VRAM or virtualized memory? Awesome video, trying this later
@khangvutien2538 · 4 months ago
Great video, thanks. Is it on purpose that you use the word “ChatGPT” in the title? I don't know whether “ChatGPT” is trademarked, but it seems to me that you didn't use the OpenAI LLM, but LLaMA 2?
@DontTreadOnMyLiberty · 29 days ago
Could this be used to be trained to search a local pdf library? I have seen people make cyberdecks with Wikipedia and other preparedness related PDF documents. It would be incredible to not have to read a whole document, but rather put a question into a chat box and it search for specific information from said PDF libraries!
@YannMetalhead · 8 months ago
Great video.
@alexiscolonfpv3534 · a month ago
Hi, I have an error on my Pi:
main: seed = 1712111644
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
Segmentation fault
Can you help? Thanks
@sqribe · 7 months ago
Anyone else getting a segmentation fault (core dumped) error? (I am running the RasPi 4, 4 GB.) Everything compiled without error, but when I run ./chat I get the segmentation fault (core dumped).
@mrguiltyfool · 6 months ago
How do you get TTS to read out the chatbot output?
@sh4dyweather11 · 4 months ago
I'm wanting to do this with a Pi 5 and add a screen/camera and give him some wheels so he can journey with me. You should try it as well, maybe on one of those racing drones; that would be sickkk.
@swingtag1041 · 5 months ago
Love the background music. What is it?
@szmonszmon · 6 months ago
So, you don't know what you ran? "I ran ChatGPT". I skipped some parts of the video to see what you really have, and I saw Llama and Alpaca. I was really curious where you found the ChatGPT source code... No, Llama, Alpaca, and the others are not the same as ChatGPT. They don't understand languages other than English, and they have issues with code languages other than Python. So in some circumstances they are similar to ChatGPT, but only in certain use cases...
@Tofu3435 · 6 months ago
Of course; you can't run GPT-3 locally, only smaller models
@aaronjennings8385 · 6 months ago
Still useful?
@poogle9368 · 6 months ago
I know, it's dumb, because it's still an impressive feat that a generative AI can run on that cheap a computer, so why lie? Well, we know why.
@TheRealUsername · 5 months ago
But better models like Mistral 7B give almost the same performance as ChatGPT at the same efficiency as Llama or Alpaca; it's difficult to see the difference from ChatGPT when you run an efficient, highly ranked model from the Hugging Face leaderboard.
@blacksailstudio · 5 months ago
I can assure you that there are local LLMs you can find on Hugging Face that understand other languages, including German
@abzs5811 · 3 months ago
He’s living lavish 🤙🏽
@polloman15 · 4 months ago
Am I the only one who feels a little nostalgia realizing that the world I grew up in is already gone? When my parents were young, they had these room-sized computers. My mother used to be a typewriter secretary. My father used to be a mechanic back when cars were carbureted, around high school. When I was a child, maybe 4 years old, I remember my father had a thiccc IBM laptop from work. Our first digital camera had only 256 MB of memory. Today we're running AI models on a computer a little bigger than a wallet. I can only imagine what is waiting for us in a couple of years. Life's good :)
@matteoricci9129 · 8 months ago
Fish baiting to the max
@Mauroplcr · 6 months ago
Hi, nice tuto, but it doesn't work; I only have 4 GB of RAM:
llama_model_load: ggml ctx size = 6065.34 MB
Segmentation fault (core dumped)
@gerardniks3636 · 4 months ago
Only 50 GB of VRAM for GPT-4? Where am I going wrong? I'm making a smaller model and I'm already at 1 TB of RAM.
@user-vq5gq6bv2d · 2 months ago
Thank you so much, it's working. I tried all kinds of things to get this output, but no repo made it this easy and working properly. The prompting was very smooth, and the token rate was very low... I don't know why.
@user-be5zi4xq7n · 9 months ago
Subbed. That was great.
@DataSlayerMedia · 8 months ago
Welcome aboard!
@BogdanOlteanu-profile · 5 months ago
I don't know what's scarier: the fact that if you give the community a finger they will take the whole hand and do their best to optimize what others couldn't, running an AI model on a board like a Raspberry Pi, or the fact that it's possible at all :D
@thatguy1306 · 5 months ago
How did you get sound with the text output of the program?
@Muffiz_ · 2 months ago
It was edited in; SSH doesn't transfer sound
@nihilsaboo6142 · 8 months ago
Will the performance be boosted with a Google Coral Accelerator?
@brianbecking1 · 6 months ago
I would also love to know this.
@roryleitner1532 · a month ago
How can I train a chat ai on a specific very large body of text from a person from the past to bring them back to life? What would the possibilities of that chat be like?
@_iseeyou_luca7529 · 8 months ago
Does it work on 32-bit systems?
@armisis · a month ago
Hmmm, we need a way to cluster this in a parallel-processing Raspberry Pi 4 and 5 cluster.
@indieartsmidwest4042 · 6 months ago
I'm so close but ran into a segmentation fault while trying to run the program🤷‍♂
@queerzard · 6 months ago
If I download an LLM and run it offline on my RasPi, does that mean I have most of the world's knowledge packed into 4 GB, always accessible?
@DataSlayerMedia · 6 months ago
Yes, you would have the broad strokes with probably some inaccuracies. But this isn’t really remarkable considering Wikipedia (the text) is ~10 Gigabytes.
@Matt-es1wn · 5 months ago
You won't even need a computer, you won't even need electricity
@drealph90 · 7 months ago
That's a dick move, trying to charge money for the text version of the tutorial
@zekeriyaatilgan521 · 6 months ago
Does it support different languages? Or is it just English?
@Jim_the_Hermit · 4 months ago
I'll wait for voice recognition Chat GPT on a chip
@OriginalAceXD · 27 days ago
Now my question is: can I run it on an Asus Tinker Board RK3288?
@oglothenerd · a month ago
OMG!!! Facebook actually did something good for open source!?
@dalivanwyngarden3204 · 3 months ago
The download link in your bio is not working anymore unfortunately. Can you provide a new one?
@jan5504 · 6 months ago
Nice, been planning to build my personal AI for Linux commands so I don't have to memorize all of them.
@7reflection7 · 3 months ago
How well can it handle python coding?
@ScottWinterringer · 4 months ago
Hmm, I wonder how some of TheBloke's models would do
@xevilstar · 4 months ago
Did you know that you can install the system directly on the SSD and boot from USB? I use NVMe disks on my Pi puppies: no SD card needed :)
@thisisthanish · 5 days ago
The thing is, this guy is running this on a Raspberry Pi 4, and the new Raspberry Pi 5 is 2.5× faster. Just think how fast the AI would be.
@1234kdy · 3 months ago
Could you make one on a ZimaBoard with a GPU in its PCIe port, maybe using a GPT build from Hugging Face? Or better?
@alexanderyang126 · 5 months ago
Hello Elon, I think this project could be a useful tool for families, like a mini Wikipedia. Would it be possible to add an audio function based on the work that has already been done? I mean, speak directly to the Pi and have it give the answer back.
@JinKee · 6 months ago
make it pass the butter
@slightlyarrogant · 5 months ago
You could build it on a Coral dev board; it would probably be faster, and cheap as well
@darthwater999 · 4 months ago
>trusting google
@olafschermann1592 · 4 months ago
Awesome
@mandelafoggie9359 · 3 months ago
If the LLM could connect to the internet, it might get better responses.
@aimademerich · 5 months ago
Phenomenal
@Blooper1980 · 7 months ago
Sssoooooo... Not GPT!
@JLXT7 · 7 months ago
Can i use this in a Pi cluster?
@JarppaGuru · 6 months ago
Yes, if it's built for a cluster; it's not magic. Will the Raspberry Pi desktop work on a cluster? No, because it's not built for a cluster. Think of 100 RPis: running a desktop, it's faster than your desktop PC only if you make a Python script and make it work on the cluster; then that script works on the cluster. All the beta testing is making the script work; when it's done you already know the answer, and you don't need the script anymore hahaha
@pawelhorna4801 · 6 months ago
ARR array! XD
@godofdream9112 · 4 months ago
now we are talking..
@tschmidhuber · 4 months ago
And where exactly is now ChatGPT running on your Raspberry Pi?
@WebTable · 4 months ago
Looks like it needs more learning. Lots of answers seem to be wrong, and relationships between values are also wrong. It listed Iceland's total area as 30,759 sq km (12,286 sq mi). First, I think Icelanders would be sad to hear that they lost about 70,000 sq km, but the conversion between sq km and sq mi is not even right (off by a little). The problem gets much worse with Greenland, whose total area is apparently 836,319 sq mi (217,090 sq km). The actual area of Greenland is about 2.1 million sq km, but even with an incorrect answer, the sq km value is smaller than the sq mi value. Apparently not trained well in D&D, but it did give me an accurate synopsis of the MacGyver TV show.
@MisiSzucs · 28 days ago
Good video, but the title is misleading. A Llama-1 7B at only 4 bits is really far from ChatGPT. ChatGPT has 175B parameters, compared to 7B parameters at 4 bits. I would say ChatGPT possibly outperforms this local LLM by 200-500% in every task.
@vanhetgoor · 5 months ago
"Pis" is not the plural of "Pi". A wise American philosopher once said: "Don't eat the yellow snow!" Keep that in mind.
@OVERLOARD949494 · 2 days ago
50 GB of VRAM? I have 60.
@makoado6010 · 4 months ago
Compared to OpenAI it's like a Ford Model T compared to a Tesla Plaid.
@Joooooooooooosh · 5 months ago
Except you didn't.
@matteoricci9129 · 8 months ago
And you are cutting waiting time
@shiftednrifted · 6 months ago
Did you run ChatGPT? Or did you run one of them broke-ass local LLMs that lose the thread of conversations almost immediately, run out of tokens way too fast to be useful for most workloads, take forever to infer even on the highest-end consumer hardware, and otherwise don't even slightly compare to ChatGPT? It's cool you ran it on a Raspberry Pi, but it is NOT comparable.
@ldandco · 4 months ago
Clickbait announcement: that's not ChatGPT
@clipbastler · 7 days ago
The single most significant innovation in history is certainly NOT the internet. AI very much aligned with contemporary human overconfidence...
@chnebleluzern · a month ago
will it finally tell you how to build illegal stuff
@Atmatan_Kabbaher · 7 months ago
Cute proof but ultimately rather meaningless. Everyone knew it was only a matter of time until the models were quantized for lower end hardware. The real miracle is in maintaining benchmark performance and speeds on low end hardware. Good luck.
@whatisrokosbasilisk80 · 4 months ago
How will I know what you're saying unless you write every word on the screen!
@pablocarballeira4443 · 3 months ago
You are running on a Pi with 8 GB; try to run on 2 GB and no luck
@W00ge · 5 months ago
Can you please stop randomly flashing words on the screen? We can hear you
@hand-eye4517 · 3 months ago
It's not very accurate to say "Ubuntu is more tried and true" when Ubuntu is just riced Debian with sketchy Canonical flower touches
@anthonyrussano · 9 months ago
nice try but no
@stevewall7044 · 6 months ago
???
@hexisXz · 2 months ago
Bro what ☠️
@BankFrederick-gk3nj · 4 months ago
brooooi
@matteoricci9129 · 8 months ago
Dude, you are using Alpaca, not ChatGPT. There is a difference.
@SlinkyGangStudios · 6 months ago
The title isn’t just misleading, it’s a complete lie to bring in viewers. Just be honest 🤷‍♂️ (and before you start with some bs: No LLM you can run locally on a Pi is comparable to GPT)
@_specialneeds · 18 days ago
Elon? Elon is a criminal.
@ric5643 · 6 months ago
Clickbait title...
@foursixnine · a month ago
I couldn’t help it but lol at calling Ubuntu “tried and true” linux distribution. While it has its merits, Ubuntu is a Debian downstream.
@supercurioTube · 3 months ago
Thumb down for clickbait with an obvious lie in the current title: "I Ran ChatGPT on a Raspberry Pi Locally!"
@dawidblachowski · 5 months ago
You did not run ChatGPT on a Raspberry Pi locally. You ran its clone. Precision matters.
@309electronics5 · 5 months ago
If you are really that pressed... It also is far more limited
@pythonner3644 · 6 months ago
Clickbait.
@Cyber_Gas · 5 months ago
Fedora is better than ubuntu
@corey_deroche · a month ago
Clickbait, Chat GPT does not run locally and a Raspberry Pi is not even close to being capable of supporting it if it could.
@solveit1304 · 5 months ago
Clickbait!