5 LLM Security Threats- The Future of Hacking?

  Рет қаралды 12,555

All About AI

All About AI

Күн бұрын

Пікірлер: 12
@dameanvil
@dameanvil Жыл бұрын
00:28 🛡 Prompt Injection Attack: A technique for large language models (LLMs) allowing attackers to manipulate model output via carefully crafted prompts, potentially accessing sensitive data or executing unauthorized functions. 01:39 🌐 Prompt Injection Example: Demonstrates injecting hidden instructions into web content, manipulating the model's output when interacting with scraped data. 03:42 🖼 Image-based Prompt Injection: Embedding instructions within an image, prompting the model to generate specific responses when processing visual content. 04:47 🔍 Hidden Instructions in Images: Obscuring prompts within images, exploiting the model's response to generate unexpected links or content. 06:22 📰 Prompt Injection via Search Results: Demonstrates how search engine responses can carry manipulated instructions, potentially leading to malicious actions. 07:43 🛠 Jailbreaks on LLMs: Techniques involve manipulating or redirecting the initial prompts of LLMs to generate unintended content, either through prompt or token level jailbreaks. 08:38 🕵‍♂ Token-based Jailbreak Example: Exploiting Base64 encoding to manipulate prompts and generate unexpected responses from the model. 09:49 🐟 Fishing Email Jailbreak: Using encoded prompts to coax the model into generating potentially malicious email content, exploiting its response. 11:37 🐼 Image-based Jailbreak: Demonstrating how carefully designed noise patterns in images can prompt the model to generate unintended responses, posing a new attack surface. 13:29 🔒 Growing Security Concerns: Highlighting the potential escalation of security threats as reliance on LLMs and multimodal models increases, emphasizing the need for a robust security approach.
@robboerman9378
@robboerman9378 Жыл бұрын
Thanks for keeping us up to date with understandable examples
@zight123
@zight123 Жыл бұрын
🎯 Key Takeaways for quick navigation: 00:00 🧐 *Prompt injection attack is a new technique for manipulating large language models (LLMs) using carefully crafted prompts to make them ignore instructions or perform unintended actions, potentially revealing sensitive data or executing unauthorized functions.* 01:24 📝 *Examples of prompt injection include manipulating websites to execute specific instructions and crafting images or text to influence LLM responses, potentially leading to malicious actions.* 05:25 🚧 *Prompt injection can also involve hiding instructions in images, leading to unexpected behaviors when processed by LLMs, posing security risks.* 07:43 🔒 *Jailbreak attacks manipulate or hijack LLMs' initial prompts to direct them towards malicious actions, including prompt-level and token-level jailbreaks.* 10:03 💻 *Base64 encoding can be used to create malicious prompts that manipulate LLM responses, even when the model is not supposed to provide such information, potentially posing security threats.* 11:37 🐼 *Jailbreaks can involve introducing noise patterns into images, leading to unexpected LLM responses and posing new attack surfaces on multimodal models, such as those handling images and text.* Made with HARPA AI
@GrigoriyMa
@GrigoriyMa Жыл бұрын
Until Sunday, what should I do? Okay, I'll soak up this stuff for now. Thanks Kris
@bladestarX
@bladestarX 11 ай бұрын
Prompt injection: if anyone develops a website and implement code or content that is use to query or generate an output in the front end. They should not be writing code. That’s like putting and hiding sql in or API keys in the front end.
@ronilevarez901
@ronilevarez901 22 күн бұрын
Wait. I don't get it. Isn't generating output in the front end the safest way to process user info? What are we supposed to do then?
@EricoPanazzolo
@EricoPanazzolo 5 ай бұрын
Greate video! Where do you found this scrapping python tool? did you created?
@orbedus2542
@orbedus2542 Жыл бұрын
can you please make a video about hands-on comparing the new Gemini Pro(Bard) vs GPT3.5 vs GPT4? I am looking for a straight up comparison with real examples but everyone just uses the edited hand picked marketing material which is useless
@silentphil77
@silentphil77 Жыл бұрын
Heya man , was wondering if you could please do an updated whisper tutorial, please? just one on getting full transcripts with the python code 😀
@MrRom079
@MrRom079 Жыл бұрын
Wow 😮
@gaussdog
@gaussdog Жыл бұрын
🤓
@enkor349
@enkor349 Жыл бұрын
Thanks for keeping us up to date with understandable examples
Autonomous Video Voiceover with GPT-4V - AMAZING!
12:30
All About AI
Рет қаралды 160 М.
Real-world exploits and mitigations in LLM applications (37c3)
42:35
Embrace The Red
Рет қаралды 24 М.
黑天使被操控了#short #angel #clown
00:40
Super Beauty team
Рет қаралды 61 МЛН
Правильный подход к детям
00:18
Beatrise
Рет қаралды 11 МЛН
Enceinte et en Bazard: Les Chroniques du Nettoyage ! 🚽✨
00:21
Two More French
Рет қаралды 42 МЛН
Access Location, Camera  & Mic of any Device 🌎🎤📍📷
15:48
zSecurity
Рет қаралды 2,8 МЛН
Artificial Intelligence: The new attack surface
9:27
IBM Technology
Рет қаралды 35 М.
I used AI to hack this website...
23:23
Tech Raj
Рет қаралды 150 М.
Real-world Attacks on LLM Applications
39:03
Netsec Explained
Рет қаралды 1,8 М.
Solving a REAL investigation using OSINT
19:03
Gary Ruddell
Рет қаралды 221 М.
Attacking LLM - Prompt Injection
13:23
LiveOverflow
Рет қаралды 377 М.
Steal This AI AGENT Idea Today - BIG OPPORTUNITY!
14:44
All About AI
Рет қаралды 11 М.
This AI AGENT Will Disrupt Industries BIG TIME
20:51
All About AI
Рет қаралды 13 М.
🔥 Run HACKING LLM / AIs locally  - RIGHT NOW! 🚀
37:19
GetCyber
Рет қаралды 1,5 М.
Абзал неге келді? 4.10.22
3:53
QosLike fan club
Рет қаралды 31 М.
Making of Marble in Factory #shorts #ashortaday #indianstreetfood
0:59
Indian Food Vlogs
Рет қаралды 6 МЛН
Что такое дагестанский кирпичный завод!
0:53
АВТОБРОДЯГИ - ПУТЕШЕСТВИЯ НА МАШИНЕ
Рет қаралды 746 М.
ЛАЙФХАК НА КУХНЕ ! 🧐🤦🏻‍♂️ #shorts #лайфхак
0:15
Крус Костилио
Рет қаралды 109 М.
ЛИТВИН / ПРАНК С ГРИМОМ / Shorts #upx #shorts
0:59
ЛАЙФХАК НА КУХНЕ ! 🧐🤦🏻‍♂️ #shorts #лайфхак
0:15
Крус Костилио
Рет қаралды 109 М.