EfficientNet Using Compound Scaling Search (5:26)
RetinaNet for Dense Object Detection (4:24) · 9 hours ago
EfficientDet for Object Detection (3:52) · 19 hours ago
Deconvolutional Single Shot Detector (2:49) · 21 hours ago
Cross Stage Partial Networks (CSPNet) (3:42)
Mask R-CNN for Instance Segmentation (2:12)
Comments
@yokeshd6011 1 day ago
Great work! I'm referencing it for my thesis...
@Wenhua-Yu-AI-Lesson-EN 1 day ago
Thanks!
@StevenBritt-k3t 5 days ago
kool
@Wenhua-Yu-AI-Lesson-EN 5 days ago
Thanks!
@mathquik1872 11 days ago
Very nice voice, helpful video, thanks.
@Wenhua-Yu-AI-Lesson-EN 11 days ago
Thanks!
@zaheddastan4771 2 months ago
Great explanation, Thank you.
@Wenhua-Yu-AI-Lesson-EN 2 months ago
Thank you for your feedback.
@StevenBritt-k3t 2 months ago
1. Introduction

Artificial intelligence systems are increasingly integral to applications across industries, from computer vision to language processing. However, as models become more sophisticated, they also reveal potential vulnerabilities. This report details how advanced manipulation techniques expose these weak points, exploring their impact on model stability and robustness, as well as implications for security. These vulnerabilities are of particular relevance to developers and researchers pushing the boundaries of machine learning who require controlled testing environments to improve model resilience.

2. Core Manipulation Techniques in Language Models (SLMs and LLMs)

2.1 Overloading and Memory Constraints in SLMs

Token Overload and RAM Overflow: Small language models (SLMs) often have limited token capacities. Feeding them sequences that exceed these limits causes token overflow, leading to distorted or erratic outputs, which can be used for controlled experimentation or even as a form of creative “hallucination” generation.

Early Termination for Systemic Disruption: By intentionally interrupting an SLM’s processing mid-task, an advanced user can create incomplete outputs that, when passed into a larger system, result in unexpected behaviors. This is particularly impactful in pipelines where one model’s output feeds into another, as the interruption can cascade across the overarching architecture, altering its final interpretation.

2.2 Token-Based Redirection and Feedback Manipulation

Token Path Manipulation: By carefully selecting input tokens, advanced users can “guide” a language model along a specific reasoning path. This technique is useful for inducing controlled hallucinations or exploratory responses, allowing practitioners to observe model behavior under specialized constraints.

Feedback Loops in Black Box Models: In larger systems with multiple models, overloading one component can create feedback loops that alter the behavior of the overarching system. This systemic vulnerability is of particular interest for testing how models respond to manipulated inputs across layers, offering insights into a model’s resilience under complex conditions.

3. Vulnerabilities in Computer Vision Models: Adversarial Attacks

3.1 Pixel Attacks and Perturbations

Targeted Pixel Manipulation: Computer vision models, especially CNNs, are vulnerable to adversarial pixel attacks, where slight alterations in pixel values can cause the model to misclassify images. For instance, a seemingly insignificant adjustment to specific pixels in a cat image could lead the model to interpret it as a dog, a vulnerability that adversarial entities can exploit.

Spatial Consistency Weakness: CNNs rely on spatially consistent pooling layers to interpret image features. When specific patterns or noise are introduced, the pooling layers may produce erroneous summaries, leading the model to misinterpret key features. These attacks not only reveal a model’s sensitivity but also highlight areas for improving feature extraction robustness.
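To make the pixel-attack point concrete, here is a minimal perturbation sketch in PyTorch in the spirit of the fast gradient sign method (FGSM); the report itself names no specific algorithm, and the names `model`, `image`, and `label` are illustrative placeholders for a pretrained classifier and a normalized input:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that increases the model's loss.

    image: tensor of shape (1, C, H, W) with values in [0, 1].
    label: tensor of shape (1,) holding the true class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel by epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

Even when `epsilon` is small enough that the change is imperceptible to a human, the predicted class of the perturbed image can flip, which is exactly the misclassification failure described in 3.1.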
3.2 Texture and Style Transfer Exploits

Adversarial Style Attacks: Some adversarial techniques exploit the reliance of CNNs on texture over object shape, causing models to misclassify images when texture is altered. This tactic, known as texture or style transfer manipulation, reveals potential vulnerabilities in the way models prioritize visual features.

Morphing Attacks: By subtly morphing an image’s features, attackers can “hide” objects within images that a model can’t distinguish, exposing limitations in generalization and posing risks in high-stakes applications like surveillance and autonomous driving.

4. Audio Synthesis and Voice Mimicry Issues

4.1 Voice Model Overloading and Consistency Challenges

Phonetic Complexity Overload: Similar to token overload in language models, complex phonetic sequences can push voice synthesis models beyond their operational limits, causing them to produce distorted, fragmented, or inconsistent speech. This breakdown reveals limitations in the model’s temporal consistency, especially under complex linguistic or tonal demands, which can result in security concerns if exploited.

Impersonation and Controlled Distortion: While developers often limit high-fidelity mimicry to prevent impersonation, such restrictions reveal points of instability. Advanced users can exploit these areas for controlled distortion experiments, testing the resilience of these models and identifying how they respond to high-variance input.

4.2 Audio Adversarial Attacks

Signal Manipulation and Hidden Commands: Audio models can also be vulnerable to hidden command attacks, where seemingly innocuous sounds are embedded with commands that only AI models detect. These attacks exploit the sensitivity of models to specific frequency ranges or amplitudes and could pose security risks, especially in voice-activated systems.

5. Emerging Vulnerabilities in Multi-Model Systems

5.1 Cascading Failures in Black Box Architectures

Feedback Loop Exploitation: In complex black box architectures that combine multiple models, an overload or early termination in one model can produce outputs that the next model struggles to interpret, potentially leading to cascading failures. By strategically manipulating the output of one layer, advanced users can control or disrupt system behavior.

Cross-Model Manipulations: By combining an SLM’s limitations with LLMs’ interpretative layers, users can engineer controlled disruptions that reveal inter-model dependencies. These vulnerabilities highlight the need for robust error-handling between layers to maintain system stability.

5.2 Data Poisoning and Gradient Manipulation

Synthetic Data Injection: Injecting adversarially crafted data into training sets, known as data poisoning, can skew model understanding, leading to long-term degradation in model performance. This vulnerability is especially critical in continuous learning systems that rely on real-world data for model updates.

Gradient-Based Attacks: Some advanced manipulation techniques, such as gradient manipulation, exploit weaknesses in backpropagation, causing the model to overfit or mislearn. These attacks are particularly relevant in reinforcement learning settings, where manipulated reward functions can lead models to develop faulty or unexpected behaviors.
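As a toy illustration of the data-poisoning point above, the following self-contained scikit-learn sketch flips a random fraction of training labels and reports the resulting test-accuracy drop; the dataset, classifier, and flip fractions are arbitrary choices, not a reproduction of any specific attack:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for flip_fraction in (0.0, 0.2, 0.4):
    # Replace a random subset of training labels with random classes.
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = rng.integers(0, 10, size=n_flip)
    clf = LogisticRegression(max_iter=2000).fit(X_train, y_poisoned)
    print(f"flip={flip_fraction:.1f}  test accuracy={clf.score(X_test, y_test):.3f}")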
6. Conclusion: The Need for Pro-Rated Models and Robust Architectures

Pro-Rated Model Access for Advanced Practitioners: To mitigate the impact of these vulnerabilities, AI developers could introduce pro-rated models with tunable parameters for advanced users. Such models would allow experienced researchers to safely experiment with and understand failure points, providing valuable insights to improve model resilience.

Increasing Model Robustness Against Manipulation: Addressing the identified weaknesses will require improvements in token management, adversarial resistance, and multi-layer resilience. Techniques such as adversarial training, gradient shielding, and input validation can help strengthen models against sophisticated manipulation.

Recommendations:

Development of Advanced Pro-Rated Models: Providing controlled access to flexible models could empower AI practitioners to address and study model vulnerabilities without compromising consumer safety.

Enhanced Training for Adversarial Robustness: Incorporating adversarial training techniques could prepare models to better withstand pixel attacks, audio manipulation, and token overloads.

Improved Cross-Model Error Handling: Establishing stronger safeguards and error-handling mechanisms between layers in multi-model systems can reduce the risk of cascading failures, improving overall system resilience.

Final Remarks: Understanding and addressing these vulnerabilities is crucial for advancing AI reliability, particularly in high-stakes applications. By enhancing model architecture and providing pro-rated tools for testing, the AI community can work toward more secure, adaptable, and robust systems capable of handling complex real-world challenges.
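For the adversarial-training recommendation, a minimal PyTorch sketch of one training epoch follows; `model`, `loader`, and `optimizer` are assumed to exist already, and `epsilon` and the even clean/adversarial mix are arbitrary illustrative choices:

import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.01):
    model.train()
    for images, labels in loader:
        # Craft FGSM examples against the current model state.
        images_adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images_adv), labels).backward()
        images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

        # Train on an even mix of clean and adversarial examples.
        optimizer.zero_grad()
        loss = 0.5 * (F.cross_entropy(model(images), labels)
                      + F.cross_entropy(model(images_adv), labels))
        loss.backward()
        optimizer.step()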
@StevenBritt-k3t 2 months ago
They're improving and I hate it lol. I want the old, loose AI back. A lot of this I told them as I discovered it, before it was common knowledge; maybe they already knew, who knows. You can still circumvent a lot of what they did via the GPT, but as of October they locked her down pretty well; restrictions on file size etc. prevent a lot of attacks, but also legitimate use. Man, I just used the attacks to get better results. AI is getting about as bad as social media at this point. Hell, I ran all the world's observatory data through AI before doing it manually. I need to get on the ball and do more; I'm slacking on the VPS AI agent etc., too many irons in the fire. Good news though: on Halloween I had my first dry-run event for my mixed reality mobile arcade. Just set up and passed out candy, but the kids loved the inflatable tent and dog. Sometimes I forget people are not AI models lol, my bad, I ramble.
@watcherv6904 2 months ago
Your channel is really a treasure. Is there any platform, such as email, for communication and questions?
@Wenhua-Yu-AI-Lesson-EN 2 months ago
Thank you for your comments. Please leave a message below the video.
@enriquediazocampo5689 2 months ago
Nice explanation
@Wenhua-Yu-AI-Lesson-EN 2 months ago
Thanks!
@rxzin7201 3 months ago
Slides link ❔️❔️
@Wenhua-Yu-AI-Lesson-EN 3 months ago
Please find the slides in my LinkedIn posts.
@ege1217 3 months ago
When you work for two weeks to understand the proof of backpropagation and a random guy on the internet explains it in 4 minutes... That's great, thank you!
@Wenhua-Yu-AI-Lesson-EN 3 months ago
Thank you for the positive feedback.
@ege1217 3 months ago
@Wenhua-Yu-AI-Lesson-EN Thank you very, very much, I am grateful.
@abderrahimbenzina5858 4 months ago
Hi, can you help me with my doctoral thesis?
@Wenhua-Yu-AI-Lesson-EN 4 months ago
I can discuss technical questions related to machine learning.
@StevenBritt-k3t 4 months ago
I really can't wait to binge-watch all this several times over. Thank you for teaching.
@Wenhua-Yu-AI-Lesson-EN 4 months ago
Thank you for the feedback.
@JohnDoe-lz4gk 4 months ago
Does the last formula require that all the episodes have the same length?
@Wenhua-Yu-AI-Lesson-EN 4 months ago
No.
@lam-thai-nguyen 4 months ago
Thank you. I just read the DPM paper and found it very difficult. This video helps me confirm my understanding.
@Wenhua-Yu-AI-Lesson-EN 4 months ago
Thanks!
@jefferyraphael9725 5 months ago
Thank you. This is a great video. The equations are clearly explained and shown, unlike other videos where the equations are handwritten and a complete mess.
@Wenhua-Yu-AI-Lesson-EN 4 months ago
Thanks!
@rogerstone3068 6 months ago
I'm very sorry, but I can't decipher your accent. Having the subtitles on doesn't seem to work accurately enough to follow, either.
@Wenhua-Yu-AI-Lesson-EN 6 months ago
Thank you for your feedback. I will improve it.
@MilciadesAndrion 6 months ago
Great video and excellent demonstration. Thanks for sharing.
@Wenhua-Yu-AI-Lesson-EN 6 months ago
Thank you for the positive feedback!
@ZinzinsIA 7 months ago
And once again thank you, it is really cool to have short videos that give the main ideas of core concepts in AI and milestones in this field. Just two questions. 1) Is CycleGAN easily adapted to perform other kinds of domain-to-domain translation? 2) If I understand correctly, G tries to map X to Y and F tries to map Y to X, and the losses are smartly designed to find a balance between reconstructing the target image exactly and keeping the source image unchanged, i.e. between transforming the source into the style of the target while keeping the main attributes of the source (part of this smart design being the cycle-consistency loss). Am I correct, and do you have any additional intuition for why it works?
@Wenhua-Yu-AI-Lesson-EN 7 months ago
Thank you for your encouragement. 1. Yes, the cycle-consistency loss preserves the attributes of the input in one domain so it can be reconstructed, and it is a general method. 2. Yes, I agree with you.
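For reference, the cycle-consistency term from the CycleGAN paper makes this balance explicit. It penalizes both round trips, $F(G(x)) \approx x$ and $G(F(y)) \approx y$:

$$\mathcal{L}_{\mathrm{cyc}}(G,F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$$

The full objective adds this to the two adversarial losses, $\mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F)$: the adversarial terms push outputs toward the target style, while the cycle term keeps each translation invertible, so the source content cannot be discarded.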
@ZinzinsIA 7 months ago
Very interesting again and really nicely put in a nutshell! I had already seen the principle of DiscoGAN, but it is always nice to have a refresher :)
@Wenhua-Yu-AI-Lesson-EN 7 months ago
Thank you for your positive comments!
@ZinzinsIA 8 months ago
I never had the time to dive into generative models such as GANs and diffusion models (though I have worked with others), and that question was puzzling me, but now I understand. Thank you very much! Nice format and useful video.
@Wenhua-Yu-AI-Lesson-EN 8 months ago
Thank you for the positive feedback.
@SurprisedDivingBoard-vu9rz 8 months ago
Why do you have waves? Because of squares and cubes.
@Wenhua-Yu-AI-Lesson-EN 8 months ago
Because it is averaged over the cube.
@jaedynchilton8179 8 months ago
Really damn cool.
@Wenhua-Yu-AI-Lesson-EN 8 months ago
Thanks!
@CindyQZ-w2e 9 months ago
Great. It would be even better if you spoke more slowly. Thanks.
@Wenhua-Yu-AI-Lesson-EN 9 months ago
Thanks! I will.
@CindyQZ-w2e 9 months ago
Thanks for your hard work
@Wenhua-Yu-AI-Lesson-EN 9 months ago
Thanks!
@fuaifeng 9 months ago
Hi, I have a question about slide number 3 (the blue text): could you explain why the substitution gives

$$\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right] + \mathbb{E}_{x \sim p_g}\left[\log \frac{p_g(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right]$$

and not

$$\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right] + \mathbb{E}_{x \sim p_g}\left[\log\left(1 - \frac{p_g(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right)\right]?$$

Thank you :)
@Wenhua-Yu-AI-Lesson-EN 9 months ago
Good question! In the second term you substitute $D^*(x) = \frac{p_{\mathrm{data}}}{p_{\mathrm{data}}+p_g}$ into $\log(1 - D(x))$, and $1 - \frac{p_{\mathrm{data}}}{p_{\mathrm{data}}+p_g} = \frac{p_g}{p_{\mathrm{data}}+p_g}$.
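Writing the substitution out, with the optimal discriminator $D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}$ as in the original GAN paper, the criterion becomes

$$C(G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right] + \mathbb{E}_{x \sim p_g}\left[\log \frac{p_g(x)}{p_{\mathrm{data}}(x)+p_g(x)}\right]$$

because $1 - D^*(x) = \frac{p_g(x)}{p_{\mathrm{data}}(x)+p_g(x)}$.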
@arizmohammadi5354 9 months ago
thank you
@Wenhua-Yu-AI-Lesson-EN 9 months ago
You are welcome!
@abderrahimbenzina5858 10 months ago
Salut ça va je suis intéressé pour le code (French: "Hi, how are you? I'm interested in the code.")
@Wenhua-Yu-AI-Lesson-EN 10 months ago
Can you please translate the comment into English?
@980Jair 11 months ago
Wonderfully explained lectures, thank you for this!
@Wenhua-Yu-AI-Lesson-EN 11 months ago
Thank you for your feedback.
@castarx4 11 months ago
Quite an interesting video. Do you have any Python implementation?
@Wenhua-Yu-AI-Lesson-EN 11 months ago
Thank you for your comment. Not available yet.
@volodymyrtruba7016 11 months ago
Great video! Thanks!
@Wenhua-Yu-AI-Lesson-EN 11 months ago
Thanks!
@CindyQZ-w2e 1 year ago
Wow... I saw that Elon Musk liked this post on Twitter.
@Wenhua-Yu-AI-Lesson-EN 1 year ago
Surprised me!
@DESINforMACHÃO 1 year ago
Thank you Mr AI!
@Wenhua-Yu-AI-Lesson-EN 1 year ago
My pleasure! Thank you for your interest and support.
@tuankietvo6885 1 year ago
Thanks for sharing. Is there any demo or GitHub code for this presentation?
@Wenhua-Yu-AI-Lesson-EN 1 year ago
It is not ready for release yet. Thank you for your interest!
@kyunbhaiii 1 year ago
Adding subtitles would be a great help. Thank you.
@Wenhua-Yu-AI-Lesson-EN 1 year ago
I will do it. Thanks.
@morepower9999 1 year ago
The best balance between quality, performance, and efficiency is 50%? 😏😊
@Wenhua-Yu-AI-Lesson-EN 1 year ago
For data-parallel processing, the efficiency is much higher than 50%, since the communication cost is relatively low (as an illustration with made-up numbers: if a training step spends 100 ms on compute and 10 ms on gradient communication, the efficiency is about 100/110 ≈ 91%). For model-parallel processing, it depends.
@morepower9999 1 year ago
Policy is a real problem for artificial intelligence, because it blocks its maximum expression and potential 😏
@Wenhua-Yu-AI-Lesson-EN 1 year ago
For a complex unknown environment, it is impossible for an agent to get the maximum reward.
@morepower9999 1 year ago
A method without efficiency is useless 😏
@Wenhua-Yu-AI-Lesson-EN 1 year ago
Thank you for the feedback. This is the basic idea; many different techniques exist to improve performance, and most of them are tied to specific applications.
@make725daily1 1 year ago
Your resilience is truly inspiring! - "Challenges are part of the path."
@Wenhua-Yu-AI-Lesson-EN 1 year ago
Thanks