5 LLM Security Threats- The Future of Hacking?

  Рет қаралды 8,651

All About AI

All About AI

Күн бұрын

5 LLM Security Threats- The Future of Hacking?
👊 Become a member and get access to GitHub:
/ allaboutai
Get a FREE 45+ ChatGPT Prompts PDF here:
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
Andrej K:
• [1hr Talk] Intro to La...
Today we check what could be the future of hacking and LLM attacks with Jailbreaks and Prompt Injections on LLMs and Multimodal Models
00:00 LLM Attacks Intro
00:18 Prompt Injection Attacks
07:39 Jailbreak Attacks

Пікірлер: 10
@enkor349
@enkor349 6 ай бұрын
Thanks for keeping us up to date with understandable examples
@dameanvil
@dameanvil 6 ай бұрын
00:28 🛡 Prompt Injection Attack: A technique for large language models (LLMs) allowing attackers to manipulate model output via carefully crafted prompts, potentially accessing sensitive data or executing unauthorized functions. 01:39 🌐 Prompt Injection Example: Demonstrates injecting hidden instructions into web content, manipulating the model's output when interacting with scraped data. 03:42 🖼 Image-based Prompt Injection: Embedding instructions within an image, prompting the model to generate specific responses when processing visual content. 04:47 🔍 Hidden Instructions in Images: Obscuring prompts within images, exploiting the model's response to generate unexpected links or content. 06:22 📰 Prompt Injection via Search Results: Demonstrates how search engine responses can carry manipulated instructions, potentially leading to malicious actions. 07:43 🛠 Jailbreaks on LLMs: Techniques involve manipulating or redirecting the initial prompts of LLMs to generate unintended content, either through prompt or token level jailbreaks. 08:38 🕵‍♂ Token-based Jailbreak Example: Exploiting Base64 encoding to manipulate prompts and generate unexpected responses from the model. 09:49 🐟 Fishing Email Jailbreak: Using encoded prompts to coax the model into generating potentially malicious email content, exploiting its response. 11:37 🐼 Image-based Jailbreak: Demonstrating how carefully designed noise patterns in images can prompt the model to generate unintended responses, posing a new attack surface. 13:29 🔒 Growing Security Concerns: Highlighting the potential escalation of security threats as reliance on LLMs and multimodal models increases, emphasizing the need for a robust security approach.
@GrigoriyMa
@GrigoriyMa 6 ай бұрын
Until Sunday, what should I do? Okay, I'll soak up this stuff for now. Thanks Kris
@bladestarX
@bladestarX 5 ай бұрын
Prompt injection: if anyone develops a website and implement code or content that is use to query or generate an output in the front end. They should not be writing code. That’s like putting and hiding sql in or API keys in the front end.
@silentphil77
@silentphil77 6 ай бұрын
Heya man , was wondering if you could please do an updated whisper tutorial, please? just one on getting full transcripts with the python code 😀
@orbedus2542
@orbedus2542 6 ай бұрын
can you please make a video about hands-on comparing the new Gemini Pro(Bard) vs GPT3.5 vs GPT4? I am looking for a straight up comparison with real examples but everyone just uses the edited hand picked marketing material which is useless
@zight123
@zight123 6 ай бұрын
🎯 Key Takeaways for quick navigation: 00:00 🧐 *Prompt injection attack is a new technique for manipulating large language models (LLMs) using carefully crafted prompts to make them ignore instructions or perform unintended actions, potentially revealing sensitive data or executing unauthorized functions.* 01:24 📝 *Examples of prompt injection include manipulating websites to execute specific instructions and crafting images or text to influence LLM responses, potentially leading to malicious actions.* 05:25 🚧 *Prompt injection can also involve hiding instructions in images, leading to unexpected behaviors when processed by LLMs, posing security risks.* 07:43 🔒 *Jailbreak attacks manipulate or hijack LLMs' initial prompts to direct them towards malicious actions, including prompt-level and token-level jailbreaks.* 10:03 💻 *Base64 encoding can be used to create malicious prompts that manipulate LLM responses, even when the model is not supposed to provide such information, potentially posing security threats.* 11:37 🐼 *Jailbreaks can involve introducing noise patterns into images, leading to unexpected LLM responses and posing new attack surfaces on multimodal models, such as those handling images and text.* Made with HARPA AI
@MrRom079
@MrRom079 6 ай бұрын
Wow 😮
@gaussdog
@gaussdog 6 ай бұрын
🤓
@robboerman9378
@robboerman9378 6 ай бұрын
Thanks for keeping us up to date with understandable examples
Real-world exploits and mitigations in LLM applications (37c3)
42:35
Embrace The Red
Рет қаралды 21 М.
OMG🤪 #tiktok #shorts #potapova_blog
00:50
Potapova_blog
Рет қаралды 17 МЛН
孩子多的烦恼?#火影忍者 #家庭 #佐助
00:31
火影忍者一家
Рет қаралды 10 МЛН
Vivaan  Tanya once again pranked Papa 🤣😇🤣
00:10
seema lamba
Рет қаралды 23 МЛН
Luck Decides My Future Again 🍀🍀🍀 #katebrush #shorts
00:19
Kate Brush
Рет қаралды 8 МЛН
Hacking with ChatGPT: Five A.I. Based Attacks for Offensive Security
11:25
The CISO Perspective
Рет қаралды 53 М.
The new AI Cyber Defense  you need to know about
37:47
David Bombal
Рет қаралды 173 М.
Introduction to LLM languages+ConceptScript
36:22
Weston Beecroft
Рет қаралды 99
Free Hacking API courses (And how to use AI to help you hack)
53:46
The AI Cybersecurity future is here
26:42
David Bombal
Рет қаралды 150 М.
Hypnotized AI and Large Language Model Security
13:22
IBM Technology
Рет қаралды 7 М.
Tactics of Physical Pen Testers
44:17
freeCodeCamp Talks
Рет қаралды 889 М.
Неразрушаемый смартфон
1:00
Status
Рет қаралды 1,9 МЛН
Ждёшь обновление IOS 18? #ios #ios18 #айоэс #apple #iphone #айфон
0:57
Asus  VivoBook Винда за 8 часов!
1:00
Sergey Delaisy
Рет қаралды 1,1 МЛН
В России ускорили интернет в 1000 раз
0:18
Короче, новости
Рет қаралды 236 М.