Being hard on code and neutral on LLMs is not a contradiction. If someone submits terrible code it doesnt matter if they used AI or not, if they submit good code it also doesnt matter. The point is that you judge code on the merit of the code, not the source. Its honestly really strange to try so hard to apply a moral or even value judgement to something which is just an inanimate tool that can be used or misused.
@idkwhattonamethisshti3 ай бұрын
Yeah, it's just a tool, but knowing how much Linus hated C++ it's kinda surprising that he's so neutral towards AI when there's so much garbage code produced by it.
@JorgetePanete3 ай бұрын
doesn't* It's*
@freezingcicada68523 ай бұрын
It's cause if it's shit you don't really improve anything and have no idea where to even look to make it not shitty. It's fine if your end game is immediate satisfaction. But if you're aiming to exceed the average, then relying on a "tool" that averages isn't the way to go
@jean-michelgilbert81363 ай бұрын
You have to understand the context of the situation. This was a live interview. Linus couldn't afford to be as much of a keyboard warrior as he naturally is. I'm completely convinced he would blow a fuse if someone submitted a patch to the kernel containing LLM code.
@k-yo3 ай бұрын
this
@claaaaaire3 ай бұрын
So basically "AI is fine because you all suck anyway"
@verrigo3 ай бұрын
Which is a Linus take as a Linus take can be :D
@NiNgem-bb6lc3 ай бұрын
🤣🤣
@waltercapa52653 ай бұрын
TRUUUUU
@navi27103 ай бұрын
Bro why do you have to hit so hard.
@Karurosagu3 ай бұрын
A slap in da face
@Gregorius_3 ай бұрын
The I in LLM stands for intelligence.
@Kane01233 ай бұрын
The I in Linus stands for Intelligence.
@Wusaruful3 ай бұрын
I mean if you use a new library and search for a specific function it can save a few minutes
@hamm89343 ай бұрын
I too can copy and paste
@SoftBreadSoft3 ай бұрын
The G in Tom stands for genius.
@MikkoRantalainen3 ай бұрын
And the S in LLM stands for security.
@zsi3 ай бұрын
I think Linus is ambivalent or neutral about LLM coding because he doesn't direct his anger towards unconscious, inanimate agents. What he gets upset about is when a human, who should know better, tries to merge garbage code generated by an LLM without understanding what they are attempting to merge.
@morezombies96853 ай бұрын
I think that you are correct, but I'd also like to point out that getting worked up over LLMs is fruitless at its core. It's an exercise for people who don't understand the world.
@tukib_3 ай бұрын
Yeah. There's a lot of money to be made in capturing attention by selling outrage stories of LLMs and just AI in general, when it's really just repackaging people problems into a new shiny exterior. That's not to say skepticism is unwarranted, but you're gonna have a more focused discussion once you isolate human decision making elements.
@keyboard_g3 ай бұрын
Linus goes off on developers cutting corners and breaking rules, not respecting how important the kernel is.
@neko63 ай бұрын
AI for coding is (currently) basically a replacement for Stack Overflow and Google. If you just plug AI-generated code into your system, you're gonna have problems, just like you would if you copy code from SO as-is. If you consult the AI, learn from it, and review what it produces and how it stacks up to your needs, then it becomes a net positive force that can both help you with trivial boring tasks and also teach you things
@thorwaldjohanson25263 ай бұрын
I use it all the time for scripts and jumping off points or single functions. But it's just a start that speeds things up.
@DankMemes-xq2xm3 ай бұрын
This is the way.
@Vangaurd_tiger2 ай бұрын
Yeah, I am currently using it to learn SFML game dev in C++.
@Xankill3r3 ай бұрын
Re: LLMs - you should read/review the recent paper on LLMs and learning outcomes for students. They basically found that although LLMs helped students improve *while* they had access to them the overall learning outcomes were poorer when access was taken away vs when access was never provided. Basically students who learned with LLMs became poorer at learning things in general or at least didn't improve at learning things compared to their peers who didn't use LLMs.
@rnts083 ай бұрын
The google/stackoverflow/Wikipedia effect. 😂
@Arcidi2253 ай бұрын
I mean it's not surprising. You are learning to use a tool, and when the tool is taken away you are less productive. Simple as that.
@-weedle3 ай бұрын
What's the name of the paper? nvm got it
@Xankill3r3 ай бұрын
@@-weedle that's the weird bit. Just saw it reported on yesterday and now I can't find it. Actually have the video paused at ~7:30 because I'm off in another tab looking for the darned thing 🤣 Will update as soon as I find it.
@MartynasNegreckis3 ай бұрын
Correct, now write a React to-do app, pen and paper only.
@VivekYadav-ds8oz3 ай бұрын
The reason he's so chill about LLMs is because he trusts his review process. He nitpicks everything and takes it seriously. Therefore it doesn't matter if the code came from LLM or from a human, it would still need to go through him or his trusted review body.
@ryanlee20913 ай бұрын
He's chill cuz he already made millions and is about to retire.
@AndrewMorris-wz1vq3 ай бұрын
Right. If you aggressively fight regression, then every attempt at change either improves things or changes nothing.
@atiedebee10203 ай бұрын
But if more people start submitting AI crap, it's going to be a lot more work to find the patches that actually matter
@AndrewMorris-wz1vq3 ай бұрын
@@atiedebee1020 Block them. If you submit bad code and don't correct it, I'm very confident they'll just block you, or throw you in spam, or whatever. And maybe, just maybe, if AI leads to more new contributors who feel confident submitting code for the first time, create a learning channel for them if they need additional guidance on how to review LLM output before submitting it. You know, assuming ignorance but good faith.
@notusingmyrealnamegoogle62323 ай бұрын
@@ryanlee2091 that would make sense for most people, but he is not most people and still gets fired up about code pretty often
@lucasvella3 ай бұрын
I am objectively a great programmer (as judged by my peers over the years during my career), and I like Copilot very much. I don't think it made me better, quality-wise, but it made me faster on the boring tasks.
@MrVohveli3 ай бұрын
This. So much this. All of this.
@olabiedev54183 ай бұрын
link ur github great programmer
@Jasonlhy3 ай бұрын
me too
@WhiteWolfsp933 ай бұрын
I'm objectively a genius 100x programmer and i think copilot stinks.
@teaser60893 ай бұрын
Just wait, most people think like that and after a few months they realize they aren't faster
@TrancorWD3 ай бұрын
And in 3 years time, we have neural networks in our compilers warning you about your novel solution because it deviates from the average quality of all the code it was trained on.
@martingisser2733 ай бұрын
Yeah. What we will ultimately get is not AI, but artificial stupidity... or at best, artificial superficiality.
@TheSulross3 ай бұрын
We'll need a new category of attribute to sprinkle into code and disable AI analysis around actually innovative code.
@InforSpirit3 ай бұрын
Hybrid Ai Linter: " I'm sorry Dave, I cannot let you commit that"
@Happyduderawr3 ай бұрын
Why would data scientists make average-quality code datasets? You have to assume that data scientists are complete imbeciles for them to purposely train LLMs to make dumb suggestions. If it's overly suggestive, then the dataset will be changed to make it stop suggesting so much. Probably through RLHF.
@TrancorWD3 ай бұрын
@@TheSulross Copilot told me `# $NO-COPILOT$` would work to stop it from stomping on my code; it did not...
@suchithsridhar3 ай бұрын
Dude! The thumbnail looked like you interviewed him! I was so excited!
@Abu_Shawarib3 ай бұрын
Baited (like me)
@amoghnk3 ай бұрын
Same 😅
@sempiternal_futility3 ай бұрын
I got baited too
@theghost93623 ай бұрын
imma park here with yall
3 ай бұрын
Skill issues
@Me-wi6ym3 ай бұрын
My general rule of thumb is to use LLMs to learn how to *approach* a problem, then go figure out the details myself. If I am ever asking it about specific numbers in a problem, I have strayed too far from its purpose (in my opinion). Something like: "I want to make ____ kind of project, how might I start that?", or even: "I am stuck on ____ step, what might be a few good things to try?" are both fine. But nowadays, as soon as I search anything like: "will this loop go out of bounds of this array?", I start a new chat because I shifted its focus too far in the original. Once the numbers are wrong, I don't think I've ever seen them correct themselves. In rare cases, I'll ask it to explain what some code will do, but that's only if the documentation is truly abysmal, which to be fair, sometimes it is. I just see it as a way to sift through all the more niche or hidden code discussions online.
@gljames243 ай бұрын
It's a great rubber duck.
@snznz3 ай бұрын
I agree, it's the best rubber duck short of an actual human subject matter expert, which often times you may not have access to depending on what kind of problem you are working on. Bouncing ideas off your significant other for instance, is probably not going to be useful if you are trying to write something like an inverse fast Fourier transform, but the LLM will have the context needed to plan an approach. Using it to actually write code is iffy, it can get you like 50%-70% there often but you may end up spending more time fixing the output than it would take you to just write it.
@Atomhaz3 ай бұрын
yeah this is how I've used it. I wanted to make a music app and all the code it gave me didn't work because the library had been updated but it suggested patterns I could choose to adopt or dismiss
@TheNewton3 ай бұрын
@@gljames24 Basically, though it's a great rubber duck that LIES. And if someone doesn't know enough, they are literally incapable of spotting the lies. And way too many treat these LLMs as sources of truth. In part maybe because a subconscious misconception views everything the LLM generates as based 100% on exact words a person has written in that sequence, like it's a search engine and not an ad-libs slot machine.
@alvarojneto3 ай бұрын
I'm not a great or experienced coder, but one issue I already see with LLMs is that it breaks an important aspect of coding, which is the dissection of idea implementation. A huge benefit I get from coding is that it forces me to really think about what it is that I am trying to do.
@ScottHess3 ай бұрын
Kernighan's Law suggests that debugging is twice as hard as writing code. Letting the LLM write the code and then debugging the result is a direction with subtle issues. It probably means that you can crank out your low-end work even faster than before, but you may not be able to improve the quality of your high-end work at all. And Amdahl's Law would suggest that making your low-end work easier to do may not free much if any time up to put more hours into your hard jobs. The problem in that case isn't in having time to do the actual hard work, it's that your job involves grinding through boilerplate.
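The Amdahl's Law point above can be made concrete with a back-of-the-envelope calculation. A minimal Python sketch (the 30% / 5x figures are illustrative assumptions, not from the comment):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of total work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If boilerplate is 30% of your working time and an LLM makes that part
# 5x faster, the overall speedup is only about 1.32x -- the hard 70%
# of the job still dominates, which is the comment's point.
overall = amdahl_speedup(0.30, 5.0)
```

Even letting s grow without bound, the speedup is capped at 1/(1-p), so making the easy part free never buys time proportional to the hard part.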
@mikew71713 ай бұрын
AI is going to be the corporate equivalent of buying a $3000 Gibson Les Paul guitar thinking it's gonna make them a better player without learning how to actually play.
@oompalumpus6993 ай бұрын
Even though I believe in the potential of AI, I am against corporations having a stranglehold on access to it. The future should be a place where we can all develop AI the same way we develop applications. Corporations apply the classic tactic of turning people into helpless consumers so they keep paying for whatever services that are being peddled. Independently assembled AI should be the direction to move towards.
@seanwoods6473 ай бұрын
Actually it is more like buying a $3000 collectors edition of a Harley Davidson bike. In 1:24 scale. With "real working engine", that is literally just a translucent engine block that gyrates the pistons if you turn the wheel.
@unoriginal_name70913 ай бұрын
This analogy is even better when you consider Gibson's quality control has been trash for over a decade
@alpuhagame3 ай бұрын
At least with guitar this expensive the sunk cost fallacy would force you at least try to improve to justify this investment.
@thejeffyb97663 ай бұрын
Remember the Gibson auto-tuning guitar? Lol.
@TonyDiCroce3 ай бұрын
I have been programming in C++ for 30+ years. I use LLMs in all their forms for coding. Using an LLM for coding successfully involves breaking off chunks of functionality that it can handle... and it usually involves defining function signatures for it. You'll only know what an LLM can handle by using it a lot. More complicated uses can only be tackled by providing it with extensive guidance in the form of pseudocode. Also, I never "trust" an LLM. I have to maintain the code, so I MUST understand it. Yes, they do make mistakes... but given the size of the functions I'm asking it to write, those mistakes are usually easily spotted.
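The "define the signature first" workflow described above can be sketched like this — a hypothetical Python example, with the function name and contract invented for illustration (not the commenter's actual code):

```python
# Prompt scaffold: the signature and docstring pin down the contract;
# only the body below them is what you would ask the LLM to fill in.
def chunk_frames(frames: list[bytes], max_bytes: int) -> list[list[bytes]]:
    """Group consecutive frames into chunks whose total size is <= max_bytes.

    A single frame larger than max_bytes gets a chunk of its own.
    """
    chunks: list[list[bytes]] = []
    current: list[bytes] = []
    size = 0
    for frame in frames:
        # Flush the current chunk before it would overflow.
        if current and size + len(frame) > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(frame)
        size += len(frame)
    if current:
        chunks.append(current)
    return chunks
```

Because the chunk of work is small and the contract is explicit, a wrong body is easy to spot in review, which matches the point about mistakes being easily spotted.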
@censoredeveryday33203 ай бұрын
Which LLMs do you use for C++ ?
@TonyDiCroce3 ай бұрын
@@censoredeveryday3320 I pay for and use ChatGPT & Claude, mostly for technical discussions and exploring ideas (though I sometimes use them for code generation as well). I use GitHub Copilot in VS Code... and as of last night I use Cursor Pro.
@rich10514143 ай бұрын
It's ok for self isolated functions, usually, but it falls apart when it needs to interface with multiple systems already designed. And I would NOT use it in memory unsafe languages.
@gljames243 ай бұрын
LLMs are great for Rubber ducking or small snippets. They can't replace a human programmer.
@ch3nz3n3 ай бұрын
Yet
@TheSulross3 ай бұрын
Am pretty sure you could get AI to generate an entire web-interface CRUD application - in my programming language and tech stack of choice.
@LS-qs9ju3 ай бұрын
@@TheSulross As it should be. CRUD has already been done for almost three decades; it would be weird if AI couldn't learn it with that amount of dataset. But last time I checked, it's kinda shit at visual-to-code tasks (like asking it to generate HTML+CSS to a certain visual specification): it will do the bare minimum and then be unable to expand it into something that you want.
@TheNewton3 ай бұрын
@@TheSulross there are already a lot of projects for this; it's super optimistic to even refer to the output of such toolchains as "prototypes" considering the amount of unmaintainable garbage, nonsense code, and refactoring they need.
@aethreas3 ай бұрын
@@ch3nz3n It never will. The whole technology is a step in the wrong direction as far as real AI goes. All LLMs by their very nature can only rearrange and regurgitate that which already exists; they literally can't come up with something new, because the underlying algorithms just take what they've been trained on (stuff that already exists) and try to rearrange it in ways that best fit the prompt. That's a massive oversimplification, but at its core that's what it's doing
@ProfRoxas3 ай бұрын
I used Gemini for a short while while writing my thesis work, but after it didn't help and instead I had to give it the answer, I stopped using it. I don't have much experience with them, but I still think they can be useful for simple tasks or as a starting point for figuring something out. But as a replacement, or trusting their output without confirmation? I don't think they're good enough.
@jean-michelgilbert81363 ай бұрын
I can't stand working with LLMs for coding. I spend more time correcting their mistakes than it would take me coding the things from scratch.
@abeidiot3 ай бұрын
That just means you don't know how to use them or are bad at natural language. It's like being given freshman college interns and giving them tasks too hard for them
@l3lackoutsMedia3 ай бұрын
I think it's good for pointing vaguely towards something you can try to use
@Okabim3 ай бұрын
If I tell the LLM to write something, it's usually bad (even GPT-4o struggles with regular expressions). But something like Codeium in VS Code auto-completing a line I've started is almost always correct, saving on keystrokes.
@jean-michelgilbert81363 ай бұрын
Totally agreed. I tried an LLM stress test where I asked Mixtral 8x7B to make a console Hello World, but with a WinMain entry point in C++, without using any function from the standard library, only functions from the Windows API. In my requirements, the code had to work properly whether the UNICODE macro was defined or not, and there had to be no #ifdef UNICODE in the LLM answer. Let's say that it was an abject failure. There are exemplars of how to do each of the specific tasks I asked for on GitHub and on Stack Overflow, but they are few and far between. To code it, you're better off just with the MSDN doc 😂
@Happyduderawr3 ай бұрын
Then don't write prompts that produce mistakes... It's not hard to guesstimate the ability of an LLM and decide to only ask it questions within its range of ability.
@killzolot3 ай бұрын
As someone who is a novice with code, I concur with your opinion. It's vital to have a strong foundation of understanding, and an LLM should supplement this, not replace it. If you always take shortcuts you will never build up the knowledge and skills to do anything well, and I think this is true for everything, not just coding
@MikkoRantalainen3 ай бұрын
5:45 I interpret Linus's opinion here as "an LLM can be a great code linter, but you should treat its output as an opinion about the code and then decide by yourself if you want to actually change the code". Though this obviously assumes that the developer's skill issues are more about the accuracy of the implementation than about the overall algorithm, or misunderstanding data structures or thread locking.
@sirtobi60063 ай бұрын
I love LLMs to read documentation for me to quickly get started with new libraries.
@Karurosagu3 ай бұрын
Why not read the documentation itself? Every doc out there for a library or a framework has a "Getting started" section in its first pages
@HRRRRRDRRRRR3 ай бұрын
@@Karurosagu Because most of them are poorly written, and I'm not autistic enough to understand the author.
@Trahloc3 ай бұрын
@@Karurosagu linking the entire documentation and then asking your specific query is faster, as the "getting started" might not answer the thing you need.
@Karurosagu3 ай бұрын
@@HRRRRRDRRRRR Most of them? IDK, I think the quality depends on a lot of factors. And by the way, English is not my main language and I've read many docs without problems, whether they are poorly written or not. Most problems I've had have been with: very new libraries and frameworks, very specific topics within existing docs that haven't been updated after a new release, misinterpreted features that turned out to be hotfixes and then got removed, and so on. TL;DR: Skill issues
@Karurosagu3 ай бұрын
@@Trahloc "Getting started" is not a specific feature
@parker77213 ай бұрын
His view is probably not negative because the code that he reviews is from developers who know how to use AI. Meaning they don't just tell an LLM to "make a driver in rust"; they just use it for tedious, repetitive code tasks.
@conceptrat3 ай бұрын
@12:00 This is the crux of the problem. Using the 'always blow on the pie' quote: always create and run the tests, even on LLM-created/guided work. This isn't just field-specific.
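A minimal sketch of that "always run the tests" habit in Python — the helper and its edge cases are invented for illustration, not from the video:

```python
# Suppose an LLM generated this helper. Before trusting it, exercise the
# edges yourself rather than just eyeballing the code.
def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into the closed range [lo, hi]."""
    return max(lo, min(value, hi))

# Checks worth running even on "obviously correct" generated code:
assert clamp(5, 0, 10) == 5      # in range: unchanged
assert clamp(-3, 0, 10) == 0     # below range: pinned to lo
assert clamp(99, 0, 10) == 10    # above range: pinned to hi
```

The point isn't the helper, it's the reflex: generated or handwritten, the code doesn't land until its edge cases have actually been executed.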
@faceofdead3 ай бұрын
For me personally, as low/mid-level IT support, LLMs help me a lot, because I was always a quieter person and somewhat shy about asking the seniors at work questions... With LLMs there are no such issues, and I excel at tasks quicker ^_^
@Aosome233 ай бұрын
LLMs are a great replacement for searching for obscure methods in API documents. When string matching doesn't cut it, I always resort to LLMs. And they find stuff that I couldn't find in a few minutes, with ~70% accuracy
@TheHackysack3 ай бұрын
holy heck I don't think I've ever seen you go more than a minute without stopping the video
@onça_pintuda9993 ай бұрын
KEKW, youtubers always do that
@mstrsrvr3 ай бұрын
Linus' response sounds natural to me. He's basically saying: "Hey, if this code gets to match kernel standards, I don't care where it's coming from."
@mohammadhalipoto3 ай бұрын
There are 10 reasons LLM's spit out crap code and both of them are hard to fix
@themartdog3 ай бұрын
He is outcome focused, he doesn't care how people get there.
@Nick-rs5if3 ай бұрын
I get that exact feeling as well. Linus just seems to treat LLMs as just another tool, which they currently are.
@James22103 ай бұрын
Watching this with subtitles on is trippy
@notapplicable72923 ай бұрын
Saying you're 10x better with AI is like writing terrible code just to cite a massive performance improvement
@sownheard3 ай бұрын
or the person just makes spelling mistakes and the bot just corrects the spelling
@specy_3 ай бұрын
A lot of coding is repetitive work, and GPT is really good at repetitive stuff; that's where it helps. If you manage to build your code well enough that it is composable and reusable, the LLM will see the pattern and suggest ways to compose it correctly
@kaijuultimax94073 ай бұрын
@@specy_ But that isn't making my code 10x better, it's just getting it done faster. If all it's doing is recognizing the pattern of what I'm coding and completing it, then the AI isn't making me do my job better, it's just letting me do the same job but slightly faster.
@GackFinder3 ай бұрын
@@specy_ "Resuable code" is in general a fallacy that will bite you in the tuchus sooner or later.
@specy_3 ай бұрын
@@kaijuultimax9407 yeah ofc it won't help u make better code, but it helps u make faster code. If u can save 50% of your time when writing one feature, you can use that time to make it better yourself.
@sarkedev3 ай бұрын
16:32 not negative, but he has very high standards. He's like Gordon Ramsay, who can be an asshole in the kitchen, but is a sweetheart when you see him outside the kitchen or interacting with children
@zzyzxyz54193 ай бұрын
Does the curl article have a video?
@sarkedev3 ай бұрын
15:50 maybe not dumber, but out of practice. I'm a full stack independent developer that employed a frontend dev for a few years. I find that using LLMs is the same if you don't "take in" all the code that is produced
@ChristophSeufert3 ай бұрын
My problems with LLMs currently: - For my hobby projects they are a useful help to bootstrap and do boilerplate. For example: create me an ORM for this database schema. - At work: large codebases in TypeScript, Dart, Go and Rust - Copilot especially is useless. With recent Gemini and Claude Sonnet I had slightly better experiences, but it's still awful. My long-term concern is the production -> training data loop without much of a feedback mechanism in between. It has already been shown that the quality of LLMs declines the more you feed them LLM-produced input data. So I currently won't rely on them much. At least not for code where it matters.
@sierragutenberg3 ай бұрын
"openAI.... f*ck you!" - Linus probably
@tedchirvasiu3 ай бұрын
There is an identical comment below you, I think you are a bot.
@483SGT3 ай бұрын
There is an identical comment below you, I think you are a bot.
@ahmeddeco73203 ай бұрын
There is an identical comment below you, I think you are a bot.
@xClairy3 ай бұрын
There is an identical comment below you, I think you are a bot.
@SgtVenom3 ай бұрын
There is an identical comment below you, I think you are a bot.
@rommellagera85433 ай бұрын
Handwritten code and AI-tool code are the same, if you have the patience to test/debug the code. Don't blame AI tools for your own limitations, for not knowing how to properly test code, or for just being plain lazy. If you don't know how the code works, it is your responsibility to learn it; otherwise it is like a dagger above your head once it is deployed in production
@sarkedev3 ай бұрын
7:10 so if we get LLMs to review and respond to these requests, we'll have LLMs arguing with each other.
@jony17103 ай бұрын
I've pasted in code that doesn't work into an LLM that was non-trivial and it spotted a bug for me, where I got the memory ordering wrong on some atomic operations. I feel like this is the sort of thing where the right answer exists out there a multitude of times and the LLM can pull together all these resources and explain why your code is broken. It's super useful for that stuff, and is way better than just trying to only absorb this from the docs. Also it's a great rubber duck.
@pyajudeme92453 ай бұрын
It seems like he sees it like a logical consequence of C -> compiler magic -> assembly. Now it is AI -> some black box magic -> C -> compiler magic -> assembly
@monkishrex3 ай бұрын
AI is great for remembering syntax with context. You don't ask it to build a house, which it's terrible at; you ask it to build a wall 4 times, a floor, a roof... etc., which it's actually pretty good at
@temari28603 ай бұрын
Linus talked about LLMs in future tense. He never said anything about using them for programming here and now. I think he's just optimistic about their potential in the future.
@darylclarino54393 ай бұрын
It might also be because, even though you are using an LLM for help, the PR you produce will always depend on the person doing it, whether they rely fully on it or not.
@0xCAFEF00D3 ай бұрын
The best use I have for LLMs is as a user integrating very basic features into websites through Greasemonkey. It doesn't take long to change a website that requires hovering on an icon to show a picture into one that shows them by default. It's not hard, and it's not code that will see reuse; it's just fiddly normally. And with ChatGPT you can actually just roughly ask it with the right info and receive a good-enough result.
@Soldknight3243 ай бұрын
Linus had a valium this morning
@thk47113 ай бұрын
I have been using LLMs for quite some time. I am not a full-time developer but have to write some Python code from time to time. It really helps me when I start a script. You just tell it: I need a class named XYZ with methods a, b, c which take the following parameters and return that. The script has the following command line parameters... etc. And it will do that perfectly. Then you have to kick in and write what you need your script to do. From time to time you ask how you can get this and that done. At the end, you can let it help you optimize your code in a short time if you, for instance, have a little too much if-then-else stuff in your code. But in the end you have to understand each line and judge what is a good recommendation and what is not.
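The workflow described above ("a class named XYZ with methods a, b, c, plus command line parameters") tends to produce a skeleton along these lines — a hypothetical sketch, with all names and behaviors invented for illustration:

```python
import argparse


class XYZ:
    """The kind of skeleton an LLM drafts; you then fill in the real logic."""

    def __init__(self, text: str):
        self.text = text

    def a(self) -> str:
        """Normalize the input."""
        return self.text.strip()

    def b(self, times: int) -> str:
        """Repeat the normalized input."""
        return self.a() * times

    def c(self) -> int:
        """Length of the normalized input."""
        return len(self.a())


def parse_args(argv=None):
    """The command line parameters the commenter would dictate up front."""
    parser = argparse.ArgumentParser(description="Hypothetical script scaffold.")
    parser.add_argument("--text", required=True)
    parser.add_argument("--times", type=int, default=1)
    return parser.parse_args(argv)
```

As the comment says, the generated skeleton is the easy part; understanding and judging each line before keeping it is still on you.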
@alexandrecolautoneto73743 ай бұрын
LLMs turn all bugs into subtle bugs. LLMs turn compilation errors into syntactically correct bugs with logical flaws, where it takes ages to discover what went wrong.
@LtdJorge3 ай бұрын
Heavily depends on the language. With Rust, they hallucinate shit that gets instantly caught by the compiler.
@alexandrecolautoneto73743 ай бұрын
@@LtdJorge But they are evolving to hallucinate as coherently as possible. Future models will just be better at tricking the compiler.
@TheNewton3 ай бұрын
@@alexandrecolautoneto7374 User asks "make this" > LLM outputs > user copies and pastes > compiler fails > user gives LLM negative feedback > LLM model evolves to avoid negative feedback > stealth code. Currently reminds me of so many web performance "services" that just insert JavaScript to trick auditing tools into spitting out higher scores. Mission accomplished for everyone that doesn't understand the actual code.
@exec.producer25663 ай бұрын
This is a skill issue. In 10 years, the mark of a good programmer will be their ability to debug LLM code. Prime & others are coping because the introduction of LLMs caused a total paradigm shift in regards to writing good code quickly. NVIM and all this other DX shit they obsessed over is a brick compared to Cursor AI + Claude. These guys jerked each other off over their WPMs, but are trapped in their old ways when something better comes out.
@alexandrecolautoneto73743 ай бұрын
@@exec.producer2566 I loved the theory, but the reality is that LLMs are just not the right model for coding. No matter how much we improve them, they will always hallucinate; it's just how they work.
@mariusj85423 ай бұрын
As a freelancer with 30+ years of experience, mostly working on projects for a few consulting companies, I've found LLMs incredibly useful. Switching between languages like Python, React, C++, and C# means I often forget specific details, especially with new updates. Recently, I was assigned to a Next.js project. Yet another JS framework 😞, and LLMs have been a lifesaver for quickly getting up to speed with syntax. They also help generate solid code with good comments for common code stuff, including unit tests, which keeps my workflow pretty efficient if I can say so.
@carlosmspk3 ай бұрын
8:30 I'm weirdly annoyed that Prime didn't react at the "humble" joke :(
@timturner76093 ай бұрын
I really like whatever Microsoft has baked into the new Visual Studio where, when you're refactoring a project and you make a similar change 2 or 3 times, VS will give you the red "press tab to update" the next time you move to a similar line. It sure beats trying to come up with a regular expression to search and replace. Sometimes.
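The regex fallback mentioned above can be sketched with Python's re module — identifiers invented for illustration:

```python
import re

# A repeated rename of the kind Visual Studio's suggestion handles;
# a word-boundary regex does the same in one pass.
code = "old_client.connect()\nold_client.send(msg)\nmy_old_client_pool.drain()\n"

# \b keeps the rename from touching longer identifiers like my_old_client_pool.
renamed = re.sub(r"\bold_client\b", "session", code)
```

This is also where hand-rolled regexes go wrong ("sometimes"): without the word boundaries, the substitution would mangle `my_old_client_pool` too.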
@matt_milack3 ай бұрын
For me it's pretty simple. The day I meet a developer, software engineer, sysadmin, network admin, cloud admin, QA tester, data analyst, data engineer, DevOps engineer, or cybersecurity professional who lost his/her job because a random person who is not an IT/CS professional can work their job using LLMs, I'll be like "God exists, and it's AI."
@dmitriyrasskazov88583 ай бұрын
If a random person can do this using an LLM, a specialist can do it too.
@matt_milack3 ай бұрын
@dmitriyrasskazov8858 A random person would be perfectly happy with a significantly lower salary than the specialist.
@ottowesterlund3 ай бұрын
I don't quite understand. Are you saying you've already seen this happen, or is it more of a "if/when it happens in the future..."?
@matt_milack3 ай бұрын
@@ottowesterlund The latter option.
@deepspace90433 ай бұрын
I think it'll be either domain professionals using LLMs to do their job, or it will just be entirely automated. If you have a random person who knows nothing about the domain using an LLM to do the job, then you can likely just automate the job at that point.
@supercheetah7783 ай бұрын
2:17 I'm not quite sure which incident you're referring to, but if it's about the Bcachefs author, that didn't have anything to do with his code, but rather with not following the rules of development and RC cycles; he was submitting 1000+ line patches during RC cycles. That got Torvalds pretty upset.
@burger-guy-993 ай бұрын
I think that last bit is key. If you're in it to learn, turn off autocomplete at least. If you're just trying to ship your GPT wrapper, then go for it.
@anasmostafa13 ай бұрын
Off topic, before watching the video, just to say ThePrimeagen is a beautiful soul
@laszlo35473 ай бұрын
The current paradigm just fundamentally doesn't work for identifying bugs. The code available to train on likely got published after the big bugs were fixed, and the small remaining ones are not identified in the training material as bugs.
@pm-dev3 ай бұрын
You should do the recent François Chollet ARC Prize talk at some point. Getting takes on LLMs from engineers like Linus is more of a personality test than anything else at this point. You should listen to what actual AGI researchers think about LLMs.
@jaye56323 ай бұрын
There is a line where LLMs are helpful on one side and problematic on the other, and today they exist on both sides. You can use an LLM as a tool to help you work: some might like Copilot, others may like using LLMs to scope out a problem, and in other cases they may not be useful at all. I don't think that they will be replacing devs anytime soon, but they will be alleviating us of some tasks. Using an LLM is like being part of a team and having to read the code of some other developer: you can read it for structure, or you can read it for how it solves the problem it is trying to address.
@TheNewton3 ай бұрын
8:50 "selective arena". Isn't the LLM take just going to be heavy survivorship bias? Linus is an end maintainer, so the amount of filtering that happens before every code review he sees is massive. Meanwhile, ask downstream intermediaries how they feel about the increase in submissions, because LLMs give people the idea they can code fast with no regard for quality.
@trn4503 ай бұрын
LLMs are very useful to people who know enough to check their work. They're a productivity multiplier. Additionally, they serve as a decent second set of eyes; they do catch bugs.
@alexeiboukirev83573 ай бұрын
LLM-assisted development is worse than StackOverflow-assisted development. I am not worried. There will be more work fixing the LLM fallout for the professionals who know what they are doing.
@nothingness8633 ай бұрын
Growing pains; it will be the opposite in the short term.
@Boschx3 ай бұрын
Nice copium
@RawrxDev3 ай бұрын
@@Boschx How is it copium? It's already led to worse code that needs revision; I doubt that will magically get better...
@ItZxDraW3 ай бұрын
When an LLM creates something it's a guaranteed mess, but other than that it's OP. Especially for learning (real) languages.
@msclrhd3 ай бұрын
I've found LLMs mixed. The line-based auto-complete is 80-90% useful, especially for writing similar repeated code. The other times it has gotten in the way, but on balance I generally prefer to have the functionality. Using LLMs to ask questions, I found it helpful when trying to identify a Bootstrap class: my Google searching didn't find it, but asking an LLM surfaced the class name, which I then looked up in the docs. Some approaches I asked for I ended up adapting to the way I wanted to write the code, using the LLM output as a basis. In other instances it didn't help me solve the problem, so I used different approaches.
@entelin3 ай бұрын
The difference is this: he cares about the results that land in his inbox. What tools people use is beside the point. He will rake you over the coals if the patch you submit sucks, regardless of how you got there.
@t1nytim3 ай бұрын
Where I've ended up is: I ask the LLM, and if it doesn't get it in one attempt, I move on and figure it out myself. I used to spend ages on errors caused by a single wrong character that wasn't picked up by the likes of a linter/LSP, which was frustrating. Beyond that use case, it's been frustrating longer term, mostly because of its lack of quality. That, and learning Neovim: when I think of something I haven't done before and want to do, it tells me if there's a hotkey I don't know about. Again, it gets one chance, but since it's essentially answering from a manual, it's been about 99% accurate for those basic questions, which I would have just been Googling anyway.
@pixelfingers3 ай бұрын
It’s interesting what you said about software development and understanding the problem domain, and bugs due to edge cases or things you’d not quite fully understood. That sounds like coming up with a solution to some kind of problem within a set of constraints (or trying to understand what the problem actually is and what the constraints are). It’s a level higher than any particular programming language; it’s more about designing, and being able to understand certain types of problems. So say you were using a language that just didn’t allow certain classes of bug (like memory errors), so it was high level and the LLM didn’t need to generate that kind of code, and it became more about expressing solutions to known problems (I want to say applying patterns, but I don’t mean design patterns; something probably more high level). If an LLM were working at this level, I think they’d be really useful: “I can see you’re trying to build this kind of software, have you thought about applying technique / approach / algorithm X?” If you could somehow turn that knowledge about problems and their solutions into some abstract model an LLM could use to spot patterns and suggest techniques, to help you understand the problem space, that’d be good. I genuinely don’t know if LLMs work like that at the moment. 🤷♂️
@gnuemacs11663 ай бұрын
How do you access the low-level details of a video card or AI hardware? That's the real Linux question.
@comfixit3 ай бұрын
LLMs have come a long way for coding, even in the last few months. Sometimes it's about identifying the use cases they're good at. One of the best uses I've found is dumping an entire project, often with lots of spaghetti code, into context (which now gets as big as 2 million tokens and can be cached across repeated calls to save money) and asking it to locate the parts of the code that do X. It will surface the code, I can do a quick find, and I'm on my way. It grabs what I need 80-90% of the time on the first shot with modern models. It seems like common sense, but it's a good idea to use LLMs for the aspects of coding they are good at and probably a bad idea to use them for coding tasks they underperform on. Unfortunately, things emerge and change so fast that what an LLM is good or bad at, coding-wise, is shifting quite a bit and not obvious out of the box.
@TheSkepticSkwerl3 ай бұрын
What's bad is that LLMs help me remember things I've forgotten, the really rare things I use once every few months, and speed that up. WHICH MEANS I never remember those little things. An example is building the initial code for a program: you run through all the logic and loops a thousand times. But things like opening a file or grabbing a different library, if you only do them once every few weeks, are harder to memorize.
@ruukinen3 ай бұрын
Your brain is a very efficient cache. If it doesn't retain something, it's because you don't need to retain it. Something you do once in a blue moon is not something worth remembering off the top of your head, since the extra time taken to re-familiarize with the concept isn't that big compared to everything else.
@Muskar23 ай бұрын
@@ruukinen Except I've found that it also means you get fewer ideas that involve the things you rarely do. Some of my best ideas combine my deep understanding of the current system with smaller, rarer things I encountered in the past. But when I used LLMs, my brain got rusty at doing that. So I backed off, and it feels a lot like rebuilding cardiovascular endurance after not exercising for a few months. Maybe that's just me...
@sta1RR3 ай бұрын
@@Muskar2 yeah like maths without calc and maps without google, makes you feel like a lost 5 yr old separated from parents in a crowd
@tesuji-go3 ай бұрын
From an automation standpoint, I'm hoping/expecting AI to find a useful home in property-based testing. Helping to more quickly zero in on corner cases that the implementation missed.
@npip993 ай бұрын
LLMs have been trained on all of his rants, and the entire existing linux kernel. Hard to argue with that.
@duodecillion89543 ай бұрын
7:19 "he" 🥶
@turbokev37723 ай бұрын
My experience with Copilot and with Cursor is that they are distracting, not particularly useful, and get in your way when you already know exactly what code you want to write.
@Jasonlhy3 ай бұрын
There is a Chinese term, 盲人摸象 (blind men and an elephant). That's what it feels like when I ask an LLM to generate code for something I have no experience working on.
@llpolluxll3 ай бұрын
I use llms to help me understand the problem I'm trying to solve. You can bounce ideas off of it to help you learn.
@notapplicable72923 ай бұрын
I have been arguing since GPT-3 that AI will make amazing static analysis tools one day; awesome to see Linus agree. It makes perfect sense, as a good 30% of the bugs we catch in code reviews at work could probably have been caught by an AI (although maybe not a large language model with the current design).
@konrTF3 ай бұрын
That article about the dude telling the LLM that it isn't even answering questions and is just stating untruths repeatedly, while it keeps prefacing everything with "Sorry"... that's happened loads of times in almost every LLM I've tested, on so many topics and uses.
@jaoschmidt378615 күн бұрын
do you have the article source?
@DE-sf9sr3 ай бұрын
100%, it still takes SMEs to be effective. The LLM depends on the inputs being perfect to be right. Copilot depends on inputs that are not always right, not always relevant, or not always applicable: code from an older version, etc. It still takes insight and SMEs to be useful.
@christian152132 ай бұрын
The argument I feel you miss a little is that there are different ways you can use it. You are absolutely correct that knowing the code and being a good coder is a must, but the ideas you can bounce off it, or the syntax you can gather quickly, are invaluable. Often I'm going back and forth between docs and LLMs, and I know when the LLM is bullshitting and I need to go back to the docs. In a way you're right, but there are also other ways to use it that are more helpful, if any of that makes sense.
@conceptrat3 ай бұрын
@6:00 This is the problem with numpties using LLMs: the person reporting doesn't understand anything about the code. Most are just trying to cash in on bug reports and snippet coding for their CVs.
@draakisback3 ай бұрын
The biggest thing I worry about with LLMs has nothing to do with competent developers submitting code that was built with the help of AI. It revolves around these "services" run by people who don't care one way or the other, services that won't care about wasting a significant amount of your time. I'm currently working on a browser with a team of people, and after we went viral on Hacker News, an AI company approached us and started submitting "code reviews" on our PRs without our consent. The stuff it pointed out was just ridiculous, all on par with the curl nonsense. We had to tell this company we wanted nothing to do with their experiment because it was just wasting our time.
@imperius0627 күн бұрын
LLMs have helped me screw things up hard, and I've got to learn more to correct them.
@kanescott13002 ай бұрын
Ideally it should be used for internal pre-checks on a PR; using it for automated bug bounties is obviously bad. But if I can run it against my PR to check whether I missed anything before submission, that sounds good to me.
@maezrfulldive27703 ай бұрын
Thank you for making this kind of video again; every week my idiot friend thinks the end of coders is here.
@Rignchen2 ай бұрын
I think an LLM that could spot something that might be a bug, set up an environment to test it, run the program to trigger the bug, check whether it behaved as expected, and then hand all of that to the user would be really interesting.
@gnuemacs11663 ай бұрын
The real question is: how do you access an LLM through Linux?
@vivekpraseed9183 ай бұрын
The question is how good multimodal LLMs or VLMs will be in the future, say 10 years from now. They are somewhat decent today but could get magically good as time goes by.
@Rohinthas3 ай бұрын
I think you are spot on about Linus being surrounded by good developers who have the competence and discipline to use LLMs responsibly. I am lucky that I trust at least half my team to use LLMs in a useful way. Not so much the other half, though... I notice that my reviews for the people using LLMs in a way I don't necessarily approve of (aka Copilot autocompletes everywhere) have harsher wording. We can tell if you wrote it yourself, y'all; we know you don't write code like that, and if I find dumb shit in it, I will get madder at you for not discovering it yourself than I would for you making the error by yourself, because you are pushing your job of reviewing the suggestions onto me!
@SeanCallahan523 ай бұрын
I think the issue is people expect the LLM to just do everything, however, they’re most effective if you know the problem and understand your codebase and then just use it for writing functions where you know what goes in and what should come out. That speeds me up, anyway. It’s a basic take but I don’t ask an LLM to just solve everything for me.
@dev.sharif3 ай бұрын
The CrowdStrike bug was not because of a bad test; it was a release bug. I saw somebody say some random file in the update was all zeros inside. So this is why we can't even trust the code that reviews our code. "Better safe than sorry," I guess!
@gmt-yt3 ай бұрын
In the early days of linux I downloaded slack, IIRC. But I got a kernel panic because I was supposed to change over from the boot floppy to the root floppy (something like that, maybe someone will remember how this worked better -- basically this was the first problem anyone trying it for the first time would likely have, and surely well documented). So, I e-mailed Linus. He explained via private e-mail that you had to change the floppy or whatever. Yeah, problem solved, I was "in!". Can't remember if I thought to thank him.
@DotaBlitzPicker-wn7oq29 күн бұрын
I think LLMs for coding are great, but turn off the 'auto-suggest' feature. That stuff messes with your thinking process: you have an idea of what the function is supposed to do, but at that exact moment a whole bunch of code appears and you've got to read through it and lose your train of thought, and maybe sometimes it's what you want, and sometimes it's not. Instead, I think having that bound to a key is super nice. You write your function and code as you normally would, then you stop and realize it's OBVIOUS what you're about to do next. You know it, the LLM knows it, so save everyone time by hitting that 'manual' button and have the code pop up when YOU ask for it. That, I think, is just 'better auto-complete', and it's amazing. It's the perfect combination of automation and still having enough control to get where you need to go without ending up with a bunch of code you don't understand and can't debug.
@clamato4223 ай бұрын
Is LLM generated code like a template that you then fix up?
@Tony-dp1rl3 ай бұрын
Somewhat ironically, the latest models do actually pick up that strlen check before a strcpy far better
@gatisozols3 ай бұрын
Maybe I am missing the point, but strlen itself will cause a buffer overrun if there is no terminating null char.
@Yura1353 ай бұрын
2:46 LLMs are cruise control. even with cruise control you still need to steer. Linus will blame the one who submitted the PR, not the LLM (or other tool) they used to make it.
@jean-michelgilbert81363 ай бұрын
On unit tests: they may be useful for simple systems but they have nothing on functional and integration tests. As soon as you have to introduce a mock for a unit test, you're using the wrong kind of test
@MikkoRantalainen3 ай бұрын
12:50 I totally agree that we don't have good enough tests. Show me a project whose tests are good enough that mutation testing can't find problems in them; I'll be waiting a long time. (Tests should be thought of as validation for the actual implementation, and mutation testing as validation for the tests. Every time mutation testing can make a change to the code without at least one test failing, those tests are not good enough! Are you going to be writing new features or new tests this year?)
@polymetric26143 ай бұрын
That's what I've been saying!! Even if you had a "perfect" AI that was always right, if you just use cheat codes for everything you never learn. Learning is like half the fun in life.