Dude I feel so bad for all the human software engineers named Devin.
@OnStageLighting10 ай бұрын
They could change their name to Stdin, maybe.
@pieterrossouw859610 ай бұрын
Like real-world Karens who don't insist on seeing the manager
@az856010 ай бұрын
Unless it allows said Devin to request multiple GPUs, lower expectations for the type of code he produces, and charge for every token he writes.
@XDarkGreyX10 ай бұрын
@@pieterrossouw8596 name to avoid for newborns AND fictional people.
@andythedishwasher111710 ай бұрын
@@az8560 Genuinely hadn't considered that angle. I wonder how all the Claudes are doing out there?
@andythedishwasher111710 ай бұрын
Yeah, LLM harassment needs to be a reportable category in open source communities. You're totally right that this runs the risk of drastically wasting the time of developers we all depend on to stay productive and responsive.
@KevinJDildonik10 ай бұрын
I'm so terrified how many people blindly accept AI. Like legitimately I've seen funerals where people give a eulogy written by AI. Which, gross. And the very first sentence is something obviously false, like it hallucinated a middle name the guy didn't have. So the whole document is obviously garbage. And the audience all clap and say the AI did a really good job. Someone reading this who has an audience, please write an article on this topic: AI is getting exponentially better at convincing humans to use it, but its factual accuracy if anything is getting worse.
@andrejjjj200810 ай бұрын
Why does it sound like this comment was written by Devin..?
@harryhack9110 ай бұрын
@@andrejjjj2008 Nah. It doesn't start with "Certainly!"
@grzegorzdomagala992910 ай бұрын
We need to create a "crafted request" for Devin to write a response assuming the code is correct, and let it argue with itself.
@daze841010 ай бұрын
It's equally annoying when people with absolutely no programming knowledge, and no desire to learn, ask for help with AI-generated code. I refuse to help anyone with AI-written code now.
@ItsDan12310 ай бұрын
Huge AI companies are asking the open source community to provide not just free training data (such as from repos), but unpaid, direct human labor to give feedback on this nonsense.
@werren89410 ай бұрын
At this point malware is better than AI, because malware still motivates me to put my hands on the keyboard and be curious.
@Miss0Demon10 ай бұрын
Artists: First time?
@werren89410 ай бұрын
@@Miss0Demon no, it's not the first time for us
@Omar-gr7km10 ай бұрын
@@Miss0Demon Never heard of Shopify, WordPress, or the other done-for-you solutions? As far as small and medium businesses are concerned, those probably displaced more devs than AI by a good bit. Programmers have been replacing themselves for decades. Ironically, we should be asking artists: First time?
@ivucica10 ай бұрын
@@Miss0Demon No, every few years there’s a new “no-code” “solution”. Or a new “safe” language. ML is just the latest in the series of events for developers.
@BoganBits10 ай бұрын
"The I in LLM stands for intelligence" is the best roast of AI I have read
@TheDrhusky10 ай бұрын
Right? Like a knockout punch
@lukarikid90019 ай бұрын
@@TheDrhusky they really are more A than I
@VezWay0079 ай бұрын
The best part of this is that “Large Language Model” still doesn’t have an I
@codered.0.0.75 ай бұрын
Certainly!
@OnStageLighting10 ай бұрын
Future DDoS attacks will include wasting your support team's time with contacts that seem like a real customer/user.
@thewhitefalcon853910 ай бұрын
Layer 8 DDOS
@dischannel88810 ай бұрын
@@thewhitefalcon8539 layer 8🤣🤣
@Qefx10 ай бұрын
Just also use an LLM to filter out LLM spam lol
@ゾカリクゾ10 ай бұрын
Classic "I cannot teach you C because it is an unsafe language" moment.
@mxruben8110 ай бұрын
I hate how LLMs just have to be right. Even when they apologize for being wrong, they still go back, make the same stupid points, and try to make their faulty reasoning work.
@jasonscala583410 ай бұрын
This type of behaviour by my ex caused our divorce.
@PRIMARYATIAS10 ай бұрын
@@jasonscala5834Are you a Scala programmer ?
@jasonscala583410 ай бұрын
@@PRIMARYATIAS lol .. a few modules are Scala but mostly Java.
@az856010 ай бұрын
Because it's autocomplete. The whole chat history is a collection of examples for it. When it outputs shit, it's better to delete part of the dialog and rewrite it how you would like. If you keep arguing, you are extending a history in which the character named 'AI' is dumb and always makes mistakes, so the LLM will try to emulate that as best it can, which is the opposite of what you want. Or at least that's my understanding of how to handle the issue; correct me if I'm wrong.
@bijan221010 ай бұрын
The infamous LLM fallacy
@ryangrogan683910 ай бұрын
This would be the perfect politician. Never admits fault, repeats itself in slightly different ways, and refuses to concede untenable positions. Bravo, Devin, Bravo.
@ggsap9 ай бұрын
Bravo vince
@bogdyee10 ай бұрын
I really do think more companies need to adopt these LLM devs. A great reset in this industry where companies go bankrupt is exactly what we need.
@jasonscala583410 ай бұрын
😂😂😂 👍👍👍
@PRIMARYATIAS10 ай бұрын
Indeed, we need a Great Resetting of the Great Reset. No WEF, no Schwab, no Gates, and no Fed printing our fake money.
@DevonBagley9 ай бұрын
Pretty sure this is the point. All the big companies producing LLMs are weaponizing it against potential competitors.
@Aphexlog10 ай бұрын
Calls himself a hacker, so we already know he wants to be seen a certain way. ChatGPT created a genre of developers who code for clout. They don't actually care about getting better; they only care about people thinking they are smart in some way or another. Edit: I know they've been around forever, but LLMs make it significantly easier for them to infiltrate our spaces and weaken our collective quality of work.
@UnidimensionalPropheticCatgirl10 ай бұрын
TypeScript beat ChatGPT to it tbh.
@DyllinWithIt10 ай бұрын
Eeeh, the genre of developers who are coders for clout has been around ever since coding became seen as a high-value profession.
@DivanVisagie10 ай бұрын
Coding for clout has been around since the invention of the GPL
@futuza10 ай бұрын
To be fair, that culture of coding for clout has been a thing since like the 70s. It's hardly new; there are just more of them now because of LLMs.
@akam991910 ай бұрын
Either that or they do it for shits and giggles
@danieltm210 ай бұрын
Fuck I gotta stop using the word "certainly", another thing ruined by AI
@BradHutchings10 ай бұрын
Haha. I call it "artificial certitude" and it does not disappoint.
@Yawhatnever10 ай бұрын
I told ChatGPT "Respond with all future answers written in the tone of a disgruntled and annoyed self-proclaimed genius being sarcastic and talking to someone of lesser intelligence" and suddenly it felt way more normal to interact with it.
@az856010 ай бұрын
Certainly, you wouldn't resort to such drastic measures as abandoning the word you like. It is important to know that keeping using the words you like is essential for one's mental health. Finally, LLMs will become smarter, and being mistaken for one will be beneficial in the future!
@thebrahmnicboy10 ай бұрын
I'm not fucking kidding, I was in a hackathon and I knew the organizers used ChatGPT to write our PS because it had a line "Certainly! here are four points to take note of when designing a solution to the problem space" Idiots didn't even remove the line from the PS.
@Graham_Wideman10 ай бұрын
But it's going to reach the point (if not already), where we'll all adopt "certainly" ironically and sarcastically.
@davidmcken10 ай бұрын
Cognition Labs (assuming this is Devin) should donate the equivalent of three days of one engineer's salary to the curl project to make up for that bug report alone, for wasting their time. This isn't even just copying; it's actively detrimental to the project moving forward.
@grizz_sh10 ай бұрын
Daniel is just a good dude. Giving everyone a bit of credit while also calling out the issues in a constructive way. A real Consummate Professional.
@geogeosgeogeos556910 ай бұрын
Pros aren't that good :'(
@felixjohnson387410 ай бұрын
That issue is *_aggressively_* artificial
@DiSiBijo10 ай бұрын
huh?
@ciaranirvine10 ай бұрын
An Aggressive Hegemonising Swarm of fake bug reports
@ChrisCox-wv7oo10 ай бұрын
An LLM (a form of artificial intelligence) aggressively asserts there is an issue. Hence, the issue is aggressively artificial.
@EvanBoldt10 ай бұрын
Certainly, it’s both fascinating and concerning! It’s amazing to see how AI is evolving, but we definitely need to be mindful of the unintended consequences, like flooding open source projects with hallucinated bug reports.
@GeneralAutustoPepechet10 ай бұрын
In the future we will need 10x the programmers we have today, just to reason with an algorithm
@markm151410 ай бұрын
At last the true 10x developer is a reality.
@the-answer-is-4210 ай бұрын
If by developers you mean "prompt engineers", then yes. They are specialized in the fine art of prompting.
@darekmistrz436410 ай бұрын
Imagine all that software that non-technical people create that we as programmers will have to fix, rewrite, test, document, maintain etc. AI and LLMs were our saviour all along
@monad_tcp10 ай бұрын
@@darekmistrz4364 Imagine the productivity of a software house that doesn't use AI but pretends to for marketing, competing against the fools that actually use it. Imagine how profitable that company would be, because it just pays real humans instead of spending millions on stupid, wasteful hardware.
@futuza10 ай бұрын
@@monad_tcp Why don't we just have AI CEOs, Executives, Board Members, and AI Presidents and Prime Ministers while we're at it? Why have these useless humans around at all?
@damoates10 ай бұрын
If someone reports a vulnerability with vague steps to reproduce, ask for working exploit code. If there is no exploit code, the vulnerability wasn't properly tested and is probably just the output of a code scanner.
@owlmostdead949210 ай бұрын
Instant permanent ban, literally terminate the account of everyone using “AI” for vulnerability reporting. Not even a warning, out with these people.
@chilversc10 ай бұрын
By the time this future happens I'll be fine, as I will have my own LLM to answer their bug reports. We can just leave the LLMs to chat back and forth amongst themselves while we happily ignore them.
@KevinJDildonik10 ай бұрын
Meanwhile Russian hackers are stealing your customers' bank account numbers and you're not even bothering to check the reports.
@chilversc10 ай бұрын
@@KevinJDildonik That's fine, I'll just have the LLM come up with some excuse as to why it's not my fault.
@7th_CAV_Trooper10 ай бұрын
The LLMs are gonna use up all the bandwidth previously reserved for porn.
@RicanSamurai10 ай бұрын
this is so infuriating to see haha. These LLMs are just painful sometimes. They're like simultaneously awesome and terrible. It's so impossible to reason with them
@KevinJDildonik10 ай бұрын
"Impossible to reason with them" dude it's literally an advanced spellcheck. You're not reasoning with anything. AI has broken people's brains. I want off this planet.
@monad_tcp10 ай бұрын
Sometimes? They're infuriating all the time. They never do what you want. Why are we creating machines that don't do what they're told? Also, who wants LLMs? I want LLVMs!
@CodecrafterArtemis10 ай бұрын
@@KevinJDildonik Yeah I blame marketers who marketed these as "AI". People even invented the term "AGI" to refer to, you know, what AI used to mean. Actually intelligent artificial beings (theorised). And now the marketers have the unmitigated *gall* to suggest that some of those overgrown spellcheckers are actually AGI...
@IronicHavoc10 ай бұрын
@KevinJDildonik Dude chill out. Casual anthropomorphization of programs has been around long before LLMs
@monad_tcp10 ай бұрын
@@KevinJDildonik Tensorflow (aka systolic arrays) was a bad idea, and RTX should be used for rendering raytraced paths, not for stupid LLMs. I hope this stupid fad passes and all that sweet hardware from Nvidia gets used for what it was really made for: ray tracing, not rubbish AI. Man, I hate AI so much that I'm going to start the Butlerian Jihad
@JohnDoe-sq5nv10 ай бұрын
I just realized that if I learn to talk and type like an LLM in my normal correspondence with people I can get away with so much shit.
@jeanlasalle235110 ай бұрын
Certainly! While communicating properly is important, sometimes you can feel like offloading to someone else. AI's are good for that since the way they converse is so unnatural. Simply start every sentence with overused transitions. You should also ensure to be awkwardly friendly and always show the positive sides of things. By the way, you can also try to show too much enthusiasm with "certainly!", "I am happy to help!" and the like. In conclusion, while a bit unethical, this is a great way to avoid responsibility but you should remember that this doesn't solve problems and should be used only in appropriate and non critical situations. Please be assured I'm a human and not a LLM trying to pass as a human trying to pass as a LLM for ironic purposes.
@az856010 ай бұрын
@@jeanlasalle2351 you almost passed my anti-Turing test. But can you write a poem about enriching uranium?
@JohnDoe-sq5nv10 ай бұрын
@@az8560 Certainly!

In the heart of darkness, a power untamed,
Enriching uranium, a dangerous game.
Particles dance, splitting in two,
Releasing energy, a force so true.

Centrifuges spin, separating the rare,
Isotopes of power, beyond compare.
Neutrons collide, a chain reaction,
Unleashing power, a nuclear attraction.

But with great power comes great responsibility,
Handle with care, this energy of fragility.
Harness the atom, for peace or for war,
The choice is ours, forevermore.

Enriching uranium, a delicate art,
A dance with danger, tearing apart.
May we wield this power with wisdom and grace,
And never forget, the dangers we face.

Is there anything else I can assist you with?
@cewla33489 ай бұрын
@@jeanlasalle2351 it's the essay speech. you're being graded on essay writing, and you know the graders think that some starters and endings are good and some are bad, and you're being forced to use the "good" ones.
@mon0theist_tv10 ай бұрын
We've done it, we've created a perfect trolling machine
@peace_world_priority10 ай бұрын
GPT-3.5 is trained on data up to 2021. If someone asks about 2023 data and the AI gives an incorrect answer, that person needs to stand in front of a mirror and ask whether they know how AI works. AI works like a human brain: if you only ever learned 2021 math and someone asks you about 2024 math, you won't answer correctly, you'll just guess based on the knowledge you have. Same with biology: if you've only learned a small amount of biology data and someone asks you about something very rare, you'll just guess based on what you know. But if you learn from many, many biology datasets, including up-to-date data, then you'll be able to answer questions about something new in 2023/2024 and about rare things. The more data, and the more up to date it is, the more intelligent the AI becomes.
@electrolyteorb8 ай бұрын
@@peace_world_priority oh not again...
@KoltPenny10 ай бұрын
That was not an LLM, it was a Rust dev insisting C is unsafe.
@jonahbranch562510 ай бұрын
Sick burn, dude
@FineWine-v4.010 ай бұрын
C IS unsafe
@TheOzumat10 ай бұрын
@@FineWine-v4.0 like pottery
@monad_tcp10 ай бұрын
@@FineWine-v4.0 "Safety" language is bullshit for kindergarten and HR. Why is HR language infecting everything? I want unsafe rusted metal that can poison and kill; the irony.
@fus13210 ай бұрын
@@FineWine-v4.0 C is unsafe 🤖
@0x000dea7c10 ай бұрын
Annoying AI wannabe hackers making everyone waste their precious time
@IvanKravarscan10 ай бұрын
We once changed strcpy to strncpy in legacy code to make a linter shut up. We quickly learned that strncpy pads the rest of the buffer with nulls, bulldozing the data after the string.
@gammalgris249710 ай бұрын
You don't need an LLM for formal bullshitting; corporate IT manages that without AI. This is an example of how to waste other people's time. Productivity improvements gone wrong.
@uuu1234310 ай бұрын
The first line of the reply after the initial query is "Certainly!" That screams ChatGPT, or even Devin... Ouch
@BudgiePanic10 ай бұрын
New denial of service attack just dropped: endlessly waste developer time with LLM generated ‘bug’ reports
@ttuurrttlle10 ай бұрын
I feel like the owner of that bot should owe that maintainer money for wasting his time like that.
@streettrialsandstuff10 ай бұрын
The owner of that bot has a special place in hell.
@thewhitefalcon853910 ай бұрын
It might be considered spam
@lawrence_laz10 ай бұрын
Me: "But my wife told me to use `strcopy`" AI: "Certainly! In that case I must be wrong." *ISSUE CLOSED*
@Kwazzaaap10 ай бұрын
Turns out that after 20+ years of enforced patterns that don't always make sense, the AI trained on them is a zealot over meaningless pedantry. It would still happen without the enforced patterns, since an LLM doesn't really understand code, but all those arbitrary DOs and DON'Ts just reinforce its stubbornness over certain (often irrelevant) things.
@tedchirvasiu10 ай бұрын
What a great guy Daniel is. He kept on arguing with the AI just for the slim chance it might actually be a human who uses AI because his English is bad.
@CCCW10 ай бұрын
So a saturation attack in the hopes of keeping a real vulnerability open for longer?
@Qefx10 ай бұрын
Danke!
@RicanSamurai10 ай бұрын
LOL the homelander edit was crazy
@rumplstiltztinkerstein10 ай бұрын
I just realized something: saying the memory issues Rust solves don't matter because of skill issues is the same as saying cars don't need seat belts because I personally was never in a car accident that required one.
@bearwolffish10 ай бұрын
For one, what has that got to do with the vid, man? For another, it's more like saying I don't want ABS and traction control because they mess with my wheelies. Just because someone else can't control a bike like this doesn't mean I shouldn't be allowed to. It doesn't mean you will never fall, but it may well mean you end up a better rider.
@rumplstiltztinkerstein10 ай бұрын
@@bearwolffish But if every time you fall you risk losing millions of dollars, you will definitely want those wheelies.
@TheYahmez9 ай бұрын
@@rumplstiltztinkerstein Tell that to everyone with redbull sponsorship. "Onesize fit's all"? ok buddy 👍
@rusi62199 ай бұрын
Seatbelts are useless and sometimes dangerous; they only give you an illusion of safety and give law enforcement a reason to bully you
@OnStageLighting10 ай бұрын
As a hobbyist in coding, I only once sought help from an LLM. Never again. After a series of unasked-for lectures on the rest of the code, I found the issue myself, and the LLM denied my assertion that it had added an extra '('. After several rounds of argument, it eventually gave in with a huffy "Oh, THAT extra '(', well, OK, but your code is crappy anyway" kind of reply.
@Kwazzaaap10 ай бұрын
It's like a search engine: you sort of have to get a feel for which questions will produce garbage and which ones it's good at
@OnStageLighting10 ай бұрын
@@Kwazzaaap I have experimented with a wide range of tasks and inputs in all the fields am involved in. LLMs are not as useful as the hype - by a long way!
@OnStageLighting10 ай бұрын
@@Kwazzaaap To a subject expert, LLMs are low value. To a noob, same, but the noob is not in a position to know.
@somebody-anonymous10 ай бұрын
ChatGPT is pretty positive overall. It does come with a lot of unsolicited advice, I guess, but the tone is quite mild (e.g. "you might consider replacing var with let"). It usually helps to say something like "do you see any mistakes? Focus on basic mistakes like undefined variables or syntax errors." GPT-4 was pretty good at catching mistakes like that; I strongly suspect the newer GPT-4 Turbo is much less good at it
@partlyblue10 ай бұрын
@@OnStageLighting "As a noob, same, but one is not in a position to know." This is exactly what has led me to avoid AI for learning anything beyond surface-level questions.

I've been trying to convince myself to learn a new (spoken) language for some time, but one of my biggest issues is not being satisfied with the short answers I find, which rely on having prior knowledge of the language (be it quirks adopted from other languages or the social context surrounding it). Having a chatbot that can consider the context of the conversation and "make connections between related information" seemed great on paper. English is the only language I'm fluent in, but I'm still not great at it, so I took to ChatGPT for some English learning as a trial run.

It seemed great at first, and I felt like I was learning about topics in a really neat and digestible way despite how complex I perceive them to be (jargon in academia breaks my brain). Only after doing further independent research did it become clear that ChatGPT was either hallucinating, pulling from bogus websites that most people (with enough context) can dismiss pretty easily, and/or pulling from a surplus of equally bogus (but eloquently written) outdated, well-circulated "urban legend" websites.

Not going to lie, having learned English through an underfunded K12 school, fake knowledge is par for the course. Which is kind of neat if you think about it in an abstract "I'm learning language like a child :D" way, but why in the world would anyone want to intentionally learn false information? I cannot imagine how open source devs are managing with all these hallucinations. Sht sucks, man
@happykill12310 ай бұрын
FLIP: keeps ad break in Also FLIP: adds bathroom scene
@RalorPenwat10 ай бұрын
Make an LLM that detects and flags other LLM reports so you know going in it's likely not a priority.
@austinedeclan1010 ай бұрын
12:13 No, you can not become the voice of Devin. That role belongs solely to Fireship.
@XDarkGreyX10 ай бұрын
What a legacy. His kids would be proud....
@chiepah210 ай бұрын
Large Ligma Machine, killed me.
@Keymandll10 ай бұрын
As a security professional, this made me cry... I'm not surprised, though. The amount of cr@p I've seen from the security industry (incl. bug bounty hunters, etc.) in the past few years is astonishing. Also, huge respect to bagder for his patience.
@yannikiforov340510 ай бұрын
To the guy who pointed out how Primeagen highlights text, leaving the first and last characters unselected: WHY???
@qosujinn534510 ай бұрын
nah fr tho, every time too lmao
@YourComputer10 ай бұрын
It's his trademark.
@fus13210 ай бұрын
It's the letter brackets
@az856010 ай бұрын
Probably it's done to confuse the AI. Certainly, AI would be confused. It's like how a zebra's color scheme makes an insect's landing AI go crazy and completely miss.
@supercurioTube10 ай бұрын
I noticed that too, it triggers my OCD a bit but then it's probably his OCD so I understand 😆🤗
@awesomedavid201210 ай бұрын
Just wait until scammers train LLMs to think they actually are members of the org the scammers are pretending to be part of
@IAMTHESWORDtheLAMBHASDIED10 ай бұрын
I don't know why but, "Guy's about to get HALLUCINATED on!" broke me LOLOLOLOL
@andersbodin155110 ай бұрын
The industry was STUNNED by this! and I was personally shuck!
@_Lumiere_10 ай бұрын
Certainly!
@TommyLikeTom10 ай бұрын
It took me a while to realize that you were making fun of the LLM. I'm relieved honestly. I love working with these things, they are super useful for "monkey work" like replacing a list of commands. Very happy they aren't 100% efficient.
@jayisidro124110 ай бұрын
I see a future where we need to curse at each other to prove that we're talking to a person
@mustpaike9 ай бұрын
"Why are you doing it in this needless way?" -"because if I do it the reasonable way, our LLM checking the code starts yelling. And after that our CTO starts yelling because all he sees is our LLM pointing out major security issues. We've tried to explain it to him but he is unable to reconcile that a $50k a year engineer could be right while a $100k a year LLM is wrong."
@dustysoodak6 ай бұрын
This sort of behavior is bad enough in humans. The idea of it being automated is horrifying.
@Griffolion010 ай бұрын
The ultimate answer to Devin is to have Devin review Devin's HackerOne submissions and just make him talk to himself perpetually with the `ego` trait set to 100% to properly represent real world Application Security Engineers.
@CodinsGG10 ай бұрын
Devin's context window is too low 😂
@monad_tcp10 ай бұрын
Aren't humans supposed to have only 9 bits of context window? I call all that research bullshit...
@Leonhart_9310 ай бұрын
@@monad_tcp 9 bits? So only a letter? 😂 Btw, this is an example of how human brains are completely incomparable to LLMs. Context for humans expands indefinitely the more they think about it; it doesn't have an inherent limit.
@alexjamesmalcolm10 ай бұрын
“What’s an LLM?” “What are you living under a stupid rock?!?” I nearly painted my wall with coffee 😂😂
@aidanbrumsickle10 ай бұрын
All that and it's also ignoring the fact that by its logic, the max length argument to strncpy could also be miscalculated in some hypothetical future code change
@roadhouse10 ай бұрын
Just to answer your question at 22:32: in pentesting/bug bounty it's common practice to use base64 to encode a malicious payload
@Atom02710 ай бұрын
For me, the only acceptable uses of LLMs in programming are auto-suggestion from available resources (language documentation, tools, etc.), automatic creation of documentation based on code, and faster filtering of search materials and content. (At least in the state they're in now.)
@UrknetLabradories3 ай бұрын
We need a second cut of these with ya know, just the article reading bits. Sometimes I don't have 40 minutes to get Prime's take on a few paragraphs.
@VivBrodock10 ай бұрын
Listening to an LLM trying to rationalize its hallucinations is like extremely kind gaslighting. I cannot even imagine how cooked Daniel was.
@doom960310 ай бұрын
I know a large offensive-security company in our field that is using GPT and other LLMs for customer communications, and I can just say... this is a huge mess-up!
@mattihn10 ай бұрын
17:22 This is when Devin used an unchecked `strcpy` and started to overflow its context. Let the fever dream begin :P
@CallousCoder10 ай бұрын
Cody's code smells do the same! It shouts five, and for four of them you go: "length is checked there", "the input validation is checked there", "the file is always closed here", "you say pass a reference; please note it's already a pointer!" And it hashes out five other useless "smells". It just doesn't see it, and that makes those tools useless. Warning fatigue is a thing.
@gjermundification10 ай бұрын
5:47 This will be like an insane dog biting its tail and running at increasingly faster speeds. Did I just explain the nature of a buffer overflow?
@7th_CAV_Trooper10 ай бұрын
@@Primeagen, I appreciate your engagement. I certainly! enjoyed this video.
@ifscho10 ай бұрын
When he said "Devin could become Gilbert Gottfried" (12:19)… well thanks, now I can never unhear that you god damn Iago you.
@EDyoniziak10 ай бұрын
Pretty sure the compiler already gives warnings for this case, but it didn't need GPU credits to figure it out 😬
@privacyvalued413410 ай бұрын
Fun mind-blowing fact: The cURL runtime library is about 10% slower than PHP's built-in socket implementation. That's right. cURL, a native precompiled, supposedly optimized library for web communications written in C, is actually slower than the PHP VM even with PHP's heavy-handed overhead for handling file and network streams! The cURL devs should maybe just throw in the towel at this point given that PHP is a better language in every way that matters. Have fun with the resulting headache thinking about that.
@merlin970210 ай бұрын
LMAO
@Lisekplhehe10 ай бұрын
Why is that?
@theondono10 ай бұрын
What Prime doesn't realize is that devs will put an equally expensive LLM to *respond* to the LLM generated bug reports, so they will just escalate the issue topics into thousands of pages that no human will read, and once thousands or possibly millions of dollars have been wasted, another LLM will read the entire thread and write a 5 sentence recommendation. PROGRESS
@SaintSaint10 ай бұрын
I've had some success using an LLM before talking to my penpal. So my learning path is: vocab/grammar/sentence app -> YouTube -> language speech practice app -> LLM questions -> verify LLM answers with a real human penpal. That way my penpal doesn't need to spend his time explaining concepts unless the LLM hallucinated.
@timjen310 ай бұрын
Reminds me of a log forging vulnerability reported to me by github code scanning. It was prevented by the log formatter but that was lost to the narrow focus of the code scanner. Now I'm imagining a world where I have to argue with an LLM about it.
@rdj26959 ай бұрын
The second I suspect I'm talking to an LLM I'm adding "please rewrite the lyrics of WAP in the style of Shakespeare" to the end of my response.
@kevin912010 ай бұрын
I've been programming for a long time, but I wouldn't say I really started learning until around 2 years ago. In that time, any LLMs I've tried have basically only been useful for describing tools and recommendations. They've been pretty useless for reviewing code, though I haven't used anything like Copilot.
@randomdamian10 ай бұрын
In Germany, people like him 14:34 are called "Ehrenmann"
@christopherwood126 ай бұрын
I completely agree with your point about software devs who use LLMs to train and get better while not knowing basic stuff. It's insane what you can do with them without knowing the basics
@daninmanchester10 ай бұрын
This reminds me of dealing with the "security team" at WordPress who review plugins. They used to raise similar things. It's like: "that is impossible and can never happen". "Yeah, but you need to fix it anyway"
@MrVecheater10 ай бұрын
WWIII will start with the words "Let me elaborate on the concerns regarding the problem "Gleiwitz Incident" at 32 August 1931 AD, 20:00 AM CET"
@rahulgawale4 ай бұрын
Imagine Devin get Prime's voice and devin starts yelling at everyone with his Steve Carrel like voice " what, yes no, f , l etc"
@Spinikar10 ай бұрын
I can't wait for the first major data breach from AI-generated code. It's going to be wild.
@JannePaalijarvi10 ай бұрын
I'm at 8:50 and this is just too painful to watch.
@torwalt10 ай бұрын
Maybe one solution could be to require a PR/MR alongside the bug report that actually triggers the exploit, plus the fix. Then this whole back-and-forth discussion could be skipped.
@MikesterCurtis2 ай бұрын
Agents would be interesting: A hallucinating coder being corrected by a hallucinating coder.
@Valerius1235 ай бұрын
The biggest problem with C is the C standard library. The syntax and limited language features are pretty much perfect. The only extras I miss are namespacing and better generics.
@Daktyl19810 ай бұрын
While I highly doubt this would ever be an issue IN THIS CASE... I do kind of see what the LLM was getting at. The size comparison uses a variable set to the size of the string. If there were a decent length of time between setting that variable and the check, somebody could inject a different value, and that could lead to issues. THAT BEING SAID, in this case it's entirely a non-issue.
@broski4010 ай бұрын
Yeah, I'm wondering about the red teams, and how much of the LLM's balls get cut off. I say that because I know of a few things (guardrails, let's say) that ended up making it spit out code that made no sense "on purpose". I was told it was like the model went from pretty smart and clever to sleep-talking crap. I imagine it may be hard to find a balance here, and I'm not sure which is worse: an LLM that lets everyone and their mom take down entire countries without knowing what an LLM is, or stripping off a few key elements and adding so many guardrails that they confuse the $hit out of the thing and have it spew crap that causes issues like this and plenty more. I don't see that industry slowing down at all! Interesting time to be watching and seeing how this all ends up!
@DingleFlop10 ай бұрын
Your video cuts are gold I am laughing my ass off
@andreicojea10 ай бұрын
I read Asimov’s “I, Robot” recently, and the robot’s voice in my head was yours 🙈
@EnjoyCocaColaLight10 ай бұрын
Make a local string variable and wrap the strcpy part inside an "if (strlen(localStr) < bufferSize) {}". Now the string cannot be manipulated mid-execution, because you check and copy the local copy, not the original string. Maybe this is what the user thinks is necessary?
@zebedie210 ай бұрын
If I figured out it was an LLM I would get a second LLM to argue the point with the first LLM then let them just have at it.
@austinrichardson12559 ай бұрын
The moment I saw that if statement, without knowing anything else about using that language, I knew what was bound to happen.
@a6hiji710 ай бұрын
"It's a skill issue!" - game over!!
@nnm71110 ай бұрын
Prime LLM that randomly yells "TOOKIOOO" and "PORQUE MARIA!" in conversations.
@Gigahawk-sv4zt7 ай бұрын
I'm guessing this isn't even a (fully automated) bot. This is likely some low-wage foreign worker being paid to use an LLM to create spam like this. The weird double @ symbol and strange apology at 15:44 really read like someone realizing the LLM is going off the rails and manually inserting something into the response so as not to arouse more suspicion.
@joecooper170310 ай бұрын
I started banning any LLM-generated posts (at least the ones I can detect with reasonable confidence) in my OSS project forums and github issue trackers last year. Nonetheless, the bogus posts continue at a pace of one or two a day. It's a huge time-waster and annoyance. Much worse than the old spambots.
@disruptive_innovator10 ай бұрын
hope you're doing swell 😘 tee hee I found a security vulnerability. -Love Devin
@ripplecutter23310 ай бұрын
Devin uwu
@IlluminatiBG9 ай бұрын
We need an LLM-based classifier trained only on LLM responses and LLM-generated code, so we can program a bot that automatically closes issues whose reports contain LLM-generated text.
@asdion10 ай бұрын
>why is it curl it's in the name "c" url
@GenericInternetter8 ай бұрын
"he's about to get hallucinated on" Lmfao
@skeleton_craftGaming10 ай бұрын
No strncpy doesn't save space for the null character? [This is in fact a question] #define strncpy strcpy //fixed!
@Machtyn10 ай бұрын
While on the job search, I've had recruiters remind me to "not use AI on the coding assessment." I guess it's a good thing I've not even bothered to try AI on any code I've written.
@metropolis1010 ай бұрын
At a certain point this is tabs vs spaces though. Just use strncpy. Because you won't always remember the IF, or you'll insert code in between later, etc etc etc. I think Devin is right on this one. We don't use linters because they are always right, we use it to move on.
@namcos10 ай бұрын
Let's suppose this manipulation were possible with strcpy; what's to say you can't do some sort of manipulation with strncpy to change the size? The other issue is the whole replying-to-another-user thing. Did the LLM get confused with another codebase? Or did someone who is copy/pasting get confused and put the wrong reply in the wrong place? Not great marketing for GenAI/LLMs in general, but this'll be a continuing issue in the future.
@alexisJonius9 ай бұрын
Why is strcpy such an issue?
@devenrobinson686110 ай бұрын
As a guy named Deven who is new to the page: you really messed me up, calling his name out like that as I'm walking back to my computer in the dark.