Yeah, LLM harassment needs to be a reportable category in open source communities. You're totally right that this risks drastically wasting the time of developers we all depend on to stay productive and responsive.
@KevinJDildonik8 ай бұрын
I'm so terrified how many people blindly accept AI. Like legitimately I've seen funerals where people give a eulogy written by AI. Which, gross. And the very first sentence is something obviously false, like it hallucinated a middle name the guy didn't have. So the whole document is obviously garbage. And the audience all clap and say the AI did a really good job. Someone reading this who has an audience, please write an article on this topic: AI is getting exponentially better at convincing humans to use it, but its factual accuracy if anything is getting worse.
@andrejjjj20088 ай бұрын
Why does it sound like this comment was written by Devin...?
@harryhack918 ай бұрын
@@andrejjjj2008 Nah. It doesn't start with "Certainly!"
@grzegorzdomagala99298 ай бұрын
We need to create a "crafted request" for Devin to write a response assuming the code is correct, and let it argue with itself.
@daze84108 ай бұрын
It's equally annoying when people with absolutely no programming knowledge, and no desire to learn, ask for help with AI-generated code. I refuse to help anyone with AI-written code now.
@andythedishwasher11178 ай бұрын
Dude I feel so bad for all the human software engineers named Devin.
@OnStageLighting8 ай бұрын
They could change their name to Stdin, maybe.
@pieterrossouw85968 ай бұрын
Like real-world Karens who don't insist on seeing the manager
@az85608 ай бұрын
Unless it allows said Devin to request multiple GPUs, lower expectations for the type of code he produces, and charge for every token he writes.
@XDarkGreyX8 ай бұрын
@@pieterrossouw8596 name to avoid for newborns AND fictional people.
@andythedishwasher11178 ай бұрын
@@az8560 Genuinely hadn't considered that angle. I wonder how all the Claudes are doing out there?
@OnStageLighting8 ай бұрын
DDOS attacks of the future now include wasting your support team's time on contacts that seem like a customer/user.
@thewhitefalcon85398 ай бұрын
Layer 8 DDOS
@dischannel8888 ай бұрын
@@thewhitefalcon8539 layer 8 🤣🤣
@Qefx8 ай бұрын
Just also use an LLM to filter out LLM spam lol
@BoganBits8 ай бұрын
"The I in LLM stands for intelligence" is the best roast of AI I have read
@TheDrhusky8 ай бұрын
Right? Like a knockout punch
@lukarikid90018 ай бұрын
@@TheDrhusky they really are more A than I
@VezWay0077 ай бұрын
The best part of this is that “Large Language Model” still doesn’t have an I
@codered.0.0.74 ай бұрын
Certainly!
@ItsDan1238 ай бұрын
Huge AI companies are asking the open source community to provide not just free training data, such as repos, but unpaid, direct human labor to give feedback on this nonsense.
@werren8948 ай бұрын
At this point malware is better than AI, because malware still motivates me to put my hands on the keyboard and get curious.
@Miss0Demon8 ай бұрын
Artists: First time?
@werren8948 ай бұрын
@@Miss0Demon No, it's not the first time for us
@Omar-gr7km8 ай бұрын
@@Miss0Demon Never heard of Shopify, WordPress, or the other done-for-you solutions? As far as small and medium businesses are concerned, those probably displaced more devs than AI by a good bit. Programmers have been replacing themselves for decades. Ironically, we should be asking artists: first time?
@ivucica8 ай бұрын
@@Miss0Demon No, every few years there’s a new “no-code” “solution”. Or a new “safe” language. ML is just the latest in the series of events for developers.
@ゾカリクゾ8 ай бұрын
Classic "I cannot teach you C because it is an unsafe language" moment.
@mxruben818 ай бұрын
I hate how LLMs just have to be right. Even when they apologize for being wrong, they go right back to making the same stupid points and trying to make their faulty reasoning work.
@jasonscala58348 ай бұрын
This type of behaviour by my ex caused our divorce.
@PRIMARYATIAS8 ай бұрын
@@jasonscala5834Are you a Scala programmer ?
@jasonscala58348 ай бұрын
@@PRIMARYATIAS lol .. a few modules are Scala but mostly Java.
@az85608 ай бұрын
Because it's autocomplete. The whole chat history is a collection of examples for it. When it outputs garbage, it's better to delete part of the dialog and rewrite it the way you want. If you keep arguing, you're extending a history in which a character named 'AI' is dumb and always makes mistakes, so the LLM will try to emulate that as faithfully as possible, which is the opposite of what you want. At least that's my understanding of how to handle the issue; correct me if I'm wrong.
@bijan22108 ай бұрын
The infamous LLM fallacy
@bogdyee8 ай бұрын
I really do think more companies need to adopt these LLM devs. A great reset in this industry, where companies go bankrupt, is exactly what we need.
@jasonscala58348 ай бұрын
😂😂😂 👍👍👍
@PRIMARYATIAS8 ай бұрын
Indeed, We need a Great Reseting of the Great Reset. No WEF, No Schwab, No Gates, And no FED printing our fake money.
@DevonBagley8 ай бұрын
Pretty sure this is the point. All the big companies producing LLMs are weaponizing it against potential competitors.
@ryangrogan68398 ай бұрын
This would be the perfect politician. Never admits fault, repeats itself in slightly different ways, and refuses to concede untenable positions. Bravo, Devin, bravo.
@ggsap7 ай бұрын
Bravo vince
@danieltm28 ай бұрын
Fuck I gotta stop using the word "certainly", another thing ruined by AI
@BradHutchings8 ай бұрын
Haha. I call it "artificial certitude" and it does not disappoint.
@Yawhatnever8 ай бұрын
I told ChatGPT "Respond with all future answers written in the tone of a disgruntled and annoyed self-proclaimed genius being sarcastic and talking to someone of lesser intelligence" and suddenly it felt way more normal to interact with it.
@az85608 ай бұрын
Certainly, you wouldn't resort to such drastic measures as abandoning the word you like. It is important to know that keeping using the words you like is essential for one's mental health. Finally, LLMs will become smarter, and being mistaken for one will be beneficial in the future!
@thebrahmnicboy8 ай бұрын
I'm not fucking kidding: I was in a hackathon, and I knew the organizers had used ChatGPT to write our PS because it had the line "Certainly! Here are four points to take note of when designing a solution to the problem space". The idiots didn't even remove the line from the PS.
@Graham_Wideman8 ай бұрын
But it's going to reach the point (if not already), where we'll all adopt "certainly" ironically and sarcastically.
@Aphexlog8 ай бұрын
Calls himself hacker, so we already know he wants to be seen a certain way. Chat GPT created a genre of developers who are coders for clout. They don’t actually care about getting better, they only care about people thinking that they are smart in someway or another. Edit: I know they’ve been around forever, but LLMs make it significantly easier for them to infiltrate our spaces and weaken our collective quality of work.
@UnidimensionalPropheticCatgirl8 ай бұрын
TypeScript beat ChatGPT to it tbh.
@DyllinWithIt8 ай бұрын
Eeeh, the genre of developers who are coders for clout has been around ever since coding became seen as a high-value profession.
@DivanVisagie8 ай бұрын
Coding for clout has been around since the invention of the GPL
@futuza8 ай бұрын
To be fair, that culture of coders for clout has been a thing since, like, the 70s. It's hardly new; there are just more of them now because of LLMs.
@akam99198 ай бұрын
Either that or they do it for shits and giggles
@davidmcken8 ай бұрын
Cognition Labs (assuming this is Devin) should donate the equivalent of three days of one engineer's salary to the curl project to make up for that bug report alone, for wasting their time. This isn't even just copying; it's actively detrimental to the project moving forward.
@felixjohnson38748 ай бұрын
That issue is *_aggressively_* artificial
@DiSiBijo8 ай бұрын
huh?
@ciaranirvine8 ай бұрын
An Aggressive Hegemonising Swarm of fake bug reports
@ChrisCox-wv7oo8 ай бұрын
An LLM (a form of artificial intelligence) aggressively asserts there is an issue. Hence, the issue is aggressively artificial.
@EvanBoldt8 ай бұрын
Certainly, it’s both fascinating and concerning! It’s amazing to see how AI is evolving, but we definitely need to be mindful of the unintended consequences, like flooding open source projects with hallucinated bug reports.
@grizz_sh8 ай бұрын
Daniel is just a good dude. Giving everyone a bit of credit while also calling out the issues in a constructive way. A real Consummate Professional.
@geogeosgeogeos55698 ай бұрын
Pros aren't that good :'(
@owlmostdead94928 ай бұрын
Instant permanent ban, literally terminate the account of everyone using “AI” for vulnerability reporting. Not even a warning, out with these people.
@damoates8 ай бұрын
If someone reports a vulnerability with vague steps to reproduce, ask for working exploit code. If there is no exploit code, the vulnerability wasn't properly tested and is probably just the output of a code scanner.
@GeneralAutustoPepechet8 ай бұрын
In the future we will need 10x the number of programmers we have today, just to reason with an algorithm
@markm15148 ай бұрын
At last the true 10x developer is a reality.
@the-answer-is-428 ай бұрын
If by developers you mean "prompt engineers", then yes. They are specialized in the fine art of prompting.
@darekmistrz43648 ай бұрын
Imagine all that software that non-technical people create that we as programmers will have to fix, rewrite, test, document, maintain etc. AI and LLMs were our saviour all along
@monad_tcp8 ай бұрын
@@darekmistrz4364 Imagine the productivity of a software house that doesn't use AI but pretends to for marketing, competing with the fools that do use it. Imagine how profitable that company would be, because it just pays real humans instead of spending millions on stupid, wasteful hardware.
@futuza8 ай бұрын
@@monad_tcp Why don't we just have AI CEOs, Executives, Board Members, and AI Presidents and Prime Ministers while we're at it? Why have these useless humans around at all?
@chilversc8 ай бұрын
By the time this future happens, I'll be fine, as I will have my own LLM to answer their bug reports. We can just leave the LLMs to chat back and forth amongst themselves while we happily ignore them.
@KevinJDildonik8 ай бұрын
Meanwhile Russian hackers are stealing your customer's bank account numbers and you're not even bothering to check the reports.
@chilversc8 ай бұрын
@@KevinJDildonik That's fine, I'll just have the LLM come up with some excuse as to why it's not my fault.
@7th_CAV_Trooper8 ай бұрын
The LLMs are gonna use up all the bandwidth previously reserved for porn.
@0x000dea7c8 ай бұрын
Annoying AI wannabe hackers making everyone waste their precious time
@mon0theist_tv8 ай бұрын
We've done it, we've created a perfect trolling machine
@peace_world_priority8 ай бұрын
GPT-3.5 is trained on data up to 2021. If someone asks about 2023 and the AI gives an incorrect answer, that person needs to stand in front of a mirror and ask whether they know how AI works. AI works a bit like a human brain: if you only ever learned the 2021 version of math and someone asks you about 2024 math, you won't answer correctly, you'll just guess based on the knowledge you have. Likewise, if you've only learned a little biology and someone asks you about a very rare biology topic, you'll just guess too. But if you learn from lots and lots of biology data, including up-to-date data, then you'll be able to answer questions about something new from 2023/2024, or about a rare topic. The more data there is, and the more up to date it is from every year, the more intelligent the AI becomes.
@electrolyteorb6 ай бұрын
@@peace_world_priority oh not again...
@RicanSamurai8 ай бұрын
this is so infuriating to see haha. These LLMs are just painful sometimes. They're like simultaneously awesome and terrible. It's so impossible to reason with them
@KevinJDildonik8 ай бұрын
"Impossible to reason with them" dude it's literally an advanced spellcheck. You're not reasoning with anything. AI has broken people's brains. I want off this planet.
@monad_tcp8 ай бұрын
Some times ? they're infuriating all the times. They never do what you want, why are we creating machines that don't do what they're told. Also, who wants LLMs, I want LLVMs !
@CodecrafterArtemis8 ай бұрын
@@KevinJDildonik Yeah I blame marketers who marketed these as "AI". People even invented the term "AGI" to refer to, you know, what AI used to mean. Actually intelligent artificial beings (theorised). And now the marketers have the unmitigated *gall* to suggest that some of those overgrown spellcheckers are actually AGI...
@IronicHavoc8 ай бұрын
@KevinJDildonik Dude chill out. Casual anthropomorphization of programs has been around long before LLMs
@monad_tcp8 ай бұрын
@@KevinJDildonik Tensorflow (a.k.a. systolic arrays) was a bad idea, and RTX should be used for rendering raytraced paths, not for stupid LLMs. I hope this stupid fad passes and all that sweet hardware from Nvidia gets used for what it was really made for: ray tracing, not rubbish AI. Man, I hate AI so much that I'm going to start the Butlerian Jihad.
@KoltPenny8 ай бұрын
That was not an LLM, it was a Rust dev insisting C is unsafe.
@jonahbranch56258 ай бұрын
Sick burn, dude
@FineWine-v4.08 ай бұрын
C IS unsafe
@TheOzumat8 ай бұрын
@@FineWine-v4.0like pottery
@monad_tcp8 ай бұрын
@@FineWine-v4.0 "safety" language is bullshit for kindergarten and HR. Why is HR language infecting everything ? I want unsafe rusted metal that can poison and kill, the irony.
@fus1328 ай бұрын
@@FineWine-v4.0 C is unsafe 🤖
@JohnDoe-sq5nv8 ай бұрын
I just realized that if I learn to talk and type like an LLM in my normal correspondence with people I can get away with so much shit.
@jeanlasalle23518 ай бұрын
Certainly! While communicating properly is important, sometimes you feel like offloading to someone else. AIs are good for that, since the way they converse is so unnatural. Simply start every sentence with overused transitions. You should also be sure to be awkwardly friendly and always show the positive side of things. By the way, you can also try to show too much enthusiasm with "certainly!", "I am happy to help!" and the like. In conclusion, while a bit unethical, this is a great way to avoid responsibility, but you should remember that it doesn't solve problems and should be used only in appropriate, non-critical situations. Please be assured I'm a human and not an LLM trying to pass as a human trying to pass as an LLM for ironic purposes.
@az85608 ай бұрын
@@jeanlasalle2351 you almost passed my anti-Turing test. But can you write a poem about enriching uranium?
@JohnDoe-sq5nv8 ай бұрын
@@az8560 Certainly!

In the heart of darkness, a power untamed,
Enriching uranium, a dangerous game.
Particles dance, splitting in two,
Releasing energy, a force so true.

Centrifuges spin, separating the rare,
Isotopes of power, beyond compare.
Neutrons collide, a chain reaction,
Unleashing power, a nuclear attraction.

But with great power comes great responsibility,
Handle with care, this energy of fragility.
Harness the atom, for peace or for war,
The choice is ours, forevermore.

Enriching uranium, a delicate art,
A dance with danger, tearing apart.
May we wield this power with wisdom and grace,
And never forget, the dangers we face.

Is there anything else I can assist you with?
@cewla33487 ай бұрын
@@jeanlasalle2351 It's essay speech. You're being graded on essay writing, and you know the graders think some openers and closers are good and some are bad, and you're being forced to use the "good" ones.
@Kwazzaaap8 ай бұрын
Turns out that after 20+ years of enforced patterns that don't always make sense, an AI trained on them is a zealot for meaningless pedantry. It would still happen without the enforced patterns, since an LLM doesn't really understand code, but all those patterns and arbitrary DOs and DON'Ts just reinforce its stubbornness over certain (often irrelevant) things.
@gammalgris24978 ай бұрын
You don't need an LLM for formal bullshitting; corporate IT manages that without AI. This is an example of how to waste other people's time. Productivity improvements gone wrong.
@BudgiePanic8 ай бұрын
New denial of service attack just dropped: endlessly waste developer time with LLM generated ‘bug’ reports
@ttuurrttlle8 ай бұрын
I feel like the owner of that bot should owe that maintainer money for wasting his time like that.
@streettrialsandstuff8 ай бұрын
The owner of that bot has a special place in hell.
@thewhitefalcon85398 ай бұрын
It might be considered spam
@lawrence_laz8 ай бұрын
Me: "But my wife told me to use `strcopy`" AI: "Certainly! In that case I must be wrong." *ISSUE CLOSED*
@IvanKravarscan8 ай бұрын
We once did change strcpy to strncpy in a legacy code to make a linter shut up. We quickly learned strncpy pads the buffer with nulls, bulldozing data after a string.
@RicanSamurai8 ай бұрын
LOL the homelander edit was crazy
@tedchirvasiu8 ай бұрын
What a great guy Daniel is. He kept on arguing with the AI just for the slim chance it might actually be a human who uses AI because his English is bad.
@RalorPenwat8 ай бұрын
Make an LLM that detects and flags other LLM reports so you know going in it's likely not a priority.
@dustysoodak4 ай бұрын
This sort of behavior is bad enough in humans. The idea of it being automated is horrifying.
@uuu123438 ай бұрын
The first line of the reply after the initial query is "Certainly!". That screams ChatGPT, or even Devin... ouch.
@rumplstiltztinkerstein8 ай бұрын
I just realized something: saying that the memory issues Rust solves are unnecessary because of skill issues is the same as saying that cars don't need seat belts because I personally was never in a car accident that required one.
@bearwolffish8 ай бұрын
For one, what has that got to do with the vid, man? For another, it's more like saying "I don't want ABS and traction control because they mess with my wheelies." Just because someone else can't control a bike like this doesn't mean I shouldn't be allowed to. It does not mean you will never fall, but it may well mean you end up a better rider.
@rumplstiltztinkerstein8 ай бұрын
@@bearwolffish But if every time you fall you risk losing millions of dollars, you will definitely want those wheelies.
@TheYahmez8 ай бұрын
@@rumplstiltztinkerstein Tell that to everyone with a Red Bull sponsorship. "One size fits all"? OK buddy 👍
@rusi62197 ай бұрын
Seatbelts are useless and sometimes dangerous; they only give you an illusion of safety, and law enforcement a reason to bully you
@CCCW8 ай бұрын
So a saturation attack in the hopes of keeping a real vulnerability open for longer?
@Keymandll8 ай бұрын
As a security professional, this made me cry... I'm not surprised tho. The amount of cr@p I've seen from the security industry (incl. bug bounty hunters, etc) in the past few years is astonishing. Also, huge respect to bagder for his patience.
@yannikiforov34058 ай бұрын
To the guy who mentioned how Primeagen highlights text, leaving the first and last character unselected: WHY???
@qosujinn53458 ай бұрын
nah fr tho, every time too lmao
@YourComputer8 ай бұрын
It's his trademark.
@fus1328 ай бұрын
It's the letter brackets
@az85608 ай бұрын
Probably it's done to confuse the AI. Certainly, the AI would be confused. It's like how a zebra's color scheme makes an insect's landing AI go crazy and completely miss.
@supercurioTube8 ай бұрын
I noticed that too, it triggers my OCD a bit but then it's probably his OCD so I understand 😆🤗
@awesomedavid20128 ай бұрын
Just wait until scammers train LLMs to think they actually are members of the org the scammers are pretending to be part of
@OnStageLighting8 ай бұрын
As a hobbyist in coding, I only once sought help from an LLM. Never again. After a series of unasked-for lectures on the rest of the code, I found the issue myself, and the LLM disputed my assertion that it had added an extra (. After several rounds of argument, it eventually gave in with a huffy "Oh, THAT extra (, well, OK, but your code is crappy anyway" kind of reply.
@Kwazzaaap8 ай бұрын
It's like a search engine: you sort of have to get a feel for which questions will produce garbage and which questions it's good at.
@OnStageLighting8 ай бұрын
@@Kwazzaaap I have experimented with a wide range of tasks and inputs in all the fields am involved in. LLMs are not as useful as the hype - by a long way!
@OnStageLighting8 ай бұрын
@@Kwazzaaap As a subject expert LLMs are low value. As a noob, same, but one is not in a position to know.
@somebody-anonymous8 ай бұрын
ChatGPT is pretty positive overall. It does come with a lot of unsolicited advice I guess yeah, but the tone is quite mild (e.g. you might consider replacing var by let). It usually helps to say something like "do you see any mistakes? Focus on basic mistakes like undefined variables or syntax errors". GPT 4 was pretty good at catching mistakes like that, I strongly suspect the newer GPT 4 (turbo) is much less good at it
@partlyblue8 ай бұрын
@@OnStageLighting "As a noob, same, but one is not in a position to know." This is exactly what has led me to avoid AI for learning anything beyond surface-level questions.

I've been trying to convince myself to learn a new (spoken) language for some time, but one of my biggest issues is not being satisfied with short answers that rely on having prior knowledge of the language (be it quirks adopted from other languages or the social context surrounding it). Having a chatbot that can consider the context of the conversation and "make connections between related information" seemed great on paper.

English is the only language I'm fluent in, but I'm still not great at it, so I took to ChatGPT for some English learning as a trial run. It seemed great at first, and I felt like I was learning about topics in a really neat and digestible way despite how complex I perceive them to be (jargon in academia breaks my brain). Only after doing further independent research did it become clear that ChatGPT was hallucinating, pulling from bogus websites that most people (with enough context) can dismiss pretty easily, and/or pulling from a surplus of equally bogus (but eloquently written) outdated, well-circulated "urban legend" type websites.

Not going to lie, having learned English through an underfunded K-12 school, fake knowledge is par for the course. Which is kind of neat if you think about it in an abstract "I'm learning language like a child :D" kind of way, but why in the world would anyone want to intentionally learn false information? I cannot imagine how open source devs are managing with all these hallucinations. Sht sucks man
@TommyLikeTom8 ай бұрын
It took me a while to realize that you were making fun of the LLM. I'm relieved honestly. I love working with these things, they are super useful for "monkey work" like replacing a list of commands. Very happy they aren't 100% efficient.
@andersbodin15518 ай бұрын
The industry was STUNNED by this! and I was personally shuck!
@_Lumiere_8 ай бұрын
Certainly!
@austinedeclan108 ай бұрын
12:13 No, you can not become the voice of Devin. That role belongs solely to Fireship.
@XDarkGreyX8 ай бұрын
What a legacy. His kids would be proud....
@chiepah28 ай бұрын
Large Ligma Machine, killed me.
@happykill1238 ай бұрын
FLIP: keeps ad break in Also FLIP: adds bathroom scene
@jayisidro12418 ай бұрын
I see a future where we need to curse at each other to prove that we're talking to a person
@mustpaike7 ай бұрын
"Why are you doing it in this needless way?" "Because if I do it the reasonable way, our LLM that checks the code starts yelling. And then our CTO starts yelling, because all he sees is our LLM pointing out major security issues. We've tried to explain it to him, but he is unable to reconcile that a $50k-a-year engineer could be right while a $100k-a-year LLM is wrong."
@IAMTHESWORDtheLAMBHASDIED8 ай бұрын
I don't know why but, "Guy's about to get HALLUCINATED on!" broke me LOLOLOLOL
@CodinsGG8 ай бұрын
Devin's context window is too low 😂
@monad_tcp8 ай бұрын
aren't humans supposed to have only 9 bits of context window ? I call all that research bullshit...
@Leonhart_938 ай бұрын
@@monad_tcp 9 bits? So only a letter? 😂 Btw, this is an example of how human brains are completely incomparable to LLMs. Context for humans expands indefinitely the more they think about it, it doesn't have an inherent limit.
@Griffolion08 ай бұрын
The ultimate answer to Devin is to have Devin review Devin's HackerOne submissions and just make him talk to himself perpetually with the `ego` trait set to 100% to properly represent real world Application Security Engineers.
@CallousCoder8 ай бұрын
Cody's code smells do the same! It shouts out 5, and for 4 of them you go: "length is checked there", "the input validation is checked there", "the file is always closed here", "you say pass a reference, please note it's already a pointer!" And then it hashes out 5 other useless "smells". It just doesn't see it, and that makes those tools useless. Warning fatigue is a thing.
@joecooper17038 ай бұрын
I started banning any LLM-generated posts (at least the ones I can detect with reasonable confidence) in my OSS project forums and GitHub issue trackers last year. Nonetheless, the bogus posts continue at a pace of one or two a day. It's a huge time-waster and annoyance. Much worse than the old spambots.
@VivBrodock8 ай бұрын
Listening to an LLM try to rationalize its hallucinations is like extremely polite gaslighting. I cannot even imagine how cooked Daniel was.
@Atom0278 ай бұрын
For me, the only acceptable uses of LLMs in programming are auto-suggestions drawn from available resources (language documentation, tools, etc.), automatic creation of documentation from code, and faster filtering of search materials and content. (At least in the state they're in now.)
@alexjamesmalcolm8 ай бұрын
“What’s an LLM?” “What are you living under a stupid rock?!?” I nearly painted my wall with coffee 😂😂
@7th_CAV_Trooper8 ай бұрын
@@Primeagen, I appreciate your engagement. I certainly! enjoyed this video.
@randomdamian8 ай бұрын
In Germany, people like him 14:34 are called "Ehrenmann"
@SaintSaint8 ай бұрын
I've had some success using an LLM before talking to my pen pal. My learning path is: vocab/grammar/sentence app -> YouTube -> language speech practice app -> LLM questions -> verify the LLM's answers with a real human pen pal. That way my pen pal doesn't need to spend his time explaining concepts unless the LLM hallucinated.
@Spinikar8 ай бұрын
I can't wait for the first major data breach from AI-generated code. It's going to be wild.
@aidanbrumsickle8 ай бұрын
All that, and it's also ignoring the fact that, by its own logic, the max-length argument to strncpy could also be miscalculated in some hypothetical future code change
@doom96038 ай бұрын
I know a large offensive-security company in our field that is using GPT and other LLMs for customer communications, and I can just say... this is a huge mess-up!
@privacyvalued41348 ай бұрын
Fun mind-blowing fact: The cURL runtime library is about 10% slower than PHP's built-in socket implementation. That's right. cURL, a native precompiled, supposedly optimized library for web communications written in C, is actually slower than the PHP VM even with PHP's heavy-handed overhead for handling file and network streams! The cURL devs should maybe just throw in the towel at this point given that PHP is a better language in every way that matters. Have fun with the resulting headache thinking about that.
@merlin97028 ай бұрын
LMAO
@Lisekplhehe8 ай бұрын
Why is that?
@Daktyl1988 ай бұрын
While I highly doubt this would ever be an issue IN THIS CASE... I do kind of see what the LLM was getting at. The size comparison uses a variable set to the size of the string. If there were a decent length of time between setting that variable and the check, somebody could inject a different value, and that could lead to issues. THAT BEING SAID, in this case it's entirely a non-issue.
@broski408 ай бұрын
Yeah, I'm wondering about the red teams that play into how much of the LLM gets cut off. I only say that because I know of a few guardrails, let's say, that ended up making a model spit out code that made no sense "on purpose". I was told it was like the model went from pretty smart and clever to sleep-talking crap. I imagine it may be hard to find a balance here, and I'm not sure which is worse: an LLM that lets everyone and their mom take down entire countries without even knowing what an LLM is, or adding so many guardrails that they confuse the $hit out of the thing and it spews crap that causes issues like this and plenty more. I don't see that industry slowing down at all! Interesting time to be watching how this all ends up!
@disruptive_innovator8 ай бұрын
hope you're doing swell 😘 tee hee I found a security vulnerability. -Love Devin
@ripplecutter2338 ай бұрын
Devin uwu
@UrknetLabradoriesАй бұрын
We need a second cut of these with ya know, just the article reading bits. Sometimes I don't have 40 minutes to get Prime's take on a few paragraphs.
@christopherwood124 ай бұрын
I completely agree with your point about software devs who use LLMs to train and get better not knowing basic stuff. It's insane what you can do with them while not knowing the basics.
@leshommesdupilly8 ай бұрын
Rule n°1: ChatGPT is always right Rule n°2: When ChatGPT is wrong, please refer to rule n°1
@theondono8 ай бұрын
What Prime doesn't realize is that devs will put an equally expensive LLM to *respond* to the LLM generated bug reports, so they will just escalate the issue topics into thousands of pages that no human will read, and once thousands or possibly millions of dollars have been wasted, another LLM will read the entire thread and write a 5 sentence recommendation. PROGRESS
@EDyoniziak8 ай бұрын
Pretty sure the compiler already gives warnings for this case, and it didn't need GPU credits to figure it out 😬
@kevin91208 ай бұрын
I've been programming for a long time, but I wouldn't say I really started learning until around two years ago. In that time, trying to use any LLMs has basically only been useful for describing tools and recommendations. They've been pretty useless for reviewing code, though I haven't used anything like Copilot.
@EnjoyCocaColaLight8 ай бұрын
Make a local string variable, and wrap the strcpy part inside an `if (strVar.Length < buffer) {}`. Now the string cannot be manipulated mid-execution, because it's not the original string but a local copy. Maybe this is what the user thinks is necessary?
@zebedie28 ай бұрын
If I figured out it was an LLM, I would get a second LLM to argue the point with the first LLM and just let them have at it.
@MikesterCurtis18 күн бұрын
Agents would be interesting: A hallucinating coder being corrected by a hallucinating coder.
@daninmanchester8 ай бұрын
This reminds me of dealing with the "security team" at WordPress who review plugins. They used to raise similar things. It's like: "That is impossible and can never happen." "Yeah, but you need to fix it anyway."
@Valerius1233 ай бұрын
The biggest problem with C is the C standard library. The syntax and limited language features are pretty much perfect. The only extras I miss are namespacing and better generics.
@torwalt8 ай бұрын
Maybe one solution could be to require that a PR/MR accompany the bug report: one that actually triggers the exploit, plus the fix. Then this whole back-and-forth discussion can be skipped.
@timjen38 ай бұрын
Reminds me of a log forging vulnerability reported to me by github code scanning. It was prevented by the log formatter but that was lost to the narrow focus of the code scanner. Now I'm imagining a world where I have to argue with an LLM about it.
@mattihn8 ай бұрын
17:22 This is when Devin used an unchecked `strcpy` and started to overflow its context. Let the fever dream begin :P
@samuelschwager 8 months ago
stir that copy
@Jabberwockybird 8 months ago
Roger that, strn' the copy
@metropolis10 8 months ago
At a certain point this is tabs vs. spaces, though. Just use strncpy, because you won't always remember the if, or you'll insert code in between later, etc. I think Devin is right on this one. We don't use linters because they're always right; we use them so we can move on.
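One caveat on "just use strncpy": it does not null-terminate the destination when the source is at least as long as the limit, which is its own classic bug. A minimal sketch of the usual idiom (hypothetical wrapper name) forces termination after the call:

```cpp
#include <cassert>
#include <cstring>

// strncpy's pitfall: if src fills the whole limit, dst is NOT
// null-terminated. The common idiom truncates explicitly afterwards.
void bounded_copy(char *dst, size_t dst_size, const char *src) {
    strncpy(dst, src, dst_size - 1);  // copies at most dst_size-1 chars
    dst[dst_size - 1] = '\0';         // guarantee termination on truncation
}
```
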
@rahulgawale 2 months ago
Imagine Devin gets Prime's voice and starts yelling at everyone in his Steve Carell-like voice: "what, yes, no, F, L", etc.
@SvetlinNikolovPhx 8 months ago
The Voice of Devin: Check Courage The Cowardly Dog's computer voice :D
@roadhouse 8 months ago
Just to answer your question at 22:32: in pentesting/bug bounty, it's common practice to use base64 to encode malicious payloads.
@dannydetonator 7 months ago
I thought I had learned English, but after clicking on this I have to promise myself to get a PC (or repair my only Thinkpad) and learn this [dev?] dialect. Yes, I'm lost, I live under a rock in a faraway country, and thank you for deciphering LLMs, which are not yet available to me. But I'll be found ASAP.
@namcos 8 months ago
Let's suppose this manipulation is possible with strcpy; what's to say you can't do some sort of manipulation with strncpy to change the size? The other issue is the whole replying-to-another-user thing. Has the LLM gotten confused with another codebase? Or did someone who was copy/pasting get confused and put the wrong reply in the wrong place? Not great marketing for GenAI/LLMs in general, but this will be a continuing issue in the future.
@leshommesdupilly 8 months ago
Wow, this video is like a dream come true for me! As someone who loves using ChatGPT and language models to automate tasks, seeing how seamlessly they can generate YouTube comments is mind-blowing! This is exactly what I enjoy doing in my spare time. Kudos to the creator for showcasing this awesome use of AI! 🚀😄 #AI #ChatGPT #Automation
@ifscho 8 months ago
When he said "Devin could become Gilbert Gottfried" (12:19)... well, thanks, now I can never unhear that, you goddamn Iago, you.
@gjermundification 8 months ago
5:47 This will be like an insane dog biting its tail and running at increasingly faster speeds. Did I just explain the nature of a buffer overflow?
@FalcoGer 7 months ago
And that's why you ought to write your software in C++ and not C. std::string::operator= is much safer than those ridiculous string functions that C provides. I also sometimes throw my own code at an AI and ask it to check for issues. 90% of the output is trash and I ignore it, but it might just draw your attention to an issue or edge case and have you look things over.
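The point about std::string can be shown in a small sketch (hypothetical function, assumed for illustration): assignment and concatenation manage their own storage, so there is no fixed-size buffer to overflow in the first place.

```cpp
#include <cassert>
#include <string>

// std::string owns and resizes its own storage: assignment and += grow
// the buffer as needed (or throw on allocation failure), so there is no
// fixed array to overflow the way strcpy into a char[] can.
std::string make_greeting(const std::string &name) {
    std::string out = "hello, ";  // copy-initialization, no strcpy needed
    out += name;                  // grows automatically, however long name is
    return out;
}
```
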
@Jarikraider 8 months ago
Obviously just gotta let the LLMs also handle the issues. Modern problems require modern solutions.
@devenrobinson6861 8 months ago
As a guy named Deven who's new to the page, you really messed me up calling his name out like that as I was walking back to my computer in the dark.
@beofonemind 8 months ago
My dude... this was fun. BTW, my name is Tom, so I know what it's like to catch strays. Tom the genius... as well as Peeping Tom.
@gjermundification 8 months ago
Open sauce my Hiney! RTFB and everything is open source.
@DingleFlop 8 months ago
Your video cuts are gold I am laughing my ass off
@luthmhor 8 months ago
LLMs are great for answering questions, or for providing context on questions that are hard to answer quickly with a search engine. They're also extremely helpful when learning a new subject, as basically a virtual tutor you can bounce questions off. But as soon as you start trying to delegate your work to them, it's a slippery slope. We are going to have a generation of people who can't structure an essay independently, because they had either the whole thing or the introduction created for them by an LLM.
@andreicojea 8 months ago
I read Asimov’s “I, Robot” recently, and the robot’s voice in my head was yours 🙈
@Machtyn 8 months ago
While on the job search, I've had recruiters remind me to "not use AI on the coding assessment." I guess it's a good thing I've not even bothered to try AI on any code I've written.
@RicardoSuarezdelValle 6 months ago
As someone who uses GPT to learn coding, it's OK; you just have to ensure all the answers it gives are logically consistent. Once something is inconsistent, investigate further, and it's fine.
8 months ago
On one hand, all LLMs sound like Flanders, so putting Prime's voice on one would feel wrong. OTOH, "you said you made check X, but the tool says to change the next line to check Y, so do both anyway" is pretty much what my shamanism-oriented manager usually says in this kind of situation, so, idk 🤷♂️