The noinline thing in the C++ code* has caused some confusion, so let me clarify: I used noinline to simulate a higher-order function that is not inlined, to show how function pointer types hurt codegen (and lambdas help it) in template instantiations that aren't fully inlined. Template != "definitely inlined"; not being inlined is common for e.g. recursive higher-order functions, or doozies like std::sort.

While answering comments about this I came up with a clearer example, without noinline, where in a reasonable implementation of a commonplace algorithm (in-order binary tree traversal), using a function by name generates worse code than using a lambda: godbolt.org/z/eqz8EbWM6 The lambda line generates zero code whatsoever (since, thanks to the lambda type, it instantiates walk() in a way that is statically known to dispatch to f(), which does nothing, so the whole thing optimizes away), whereas the function pointer line generates an out-of-line instantiation of walk() containing indirect calls.

Here's the Rust version, on the other hand: rust.godbolt.org/z/8xf87G5WT It's a bummer the compiler isn't able to optimize out the control flow completely (note GCC has the same issue with the C++ version), but the important thing is that _inside_ the monomorphizations of calculate, there are no indirect calls to the passed-in function, since its type statically resolves to code that does nothing, for both the named-function and closure case. The two monomorphizations produce the same code and then get deduplicated in the final output, as I showed at the end of the video.

I think this example is pretty compelling, and if I could swap the one I used in the video for this one, I might well do it. Even though Rust's codegen isn't a dramatic "home run" over C++'s here, since rustc doesn't optimize out the pointless recursion, it still supports my argument that Rust's type system lends itself to good codegen in non-inlined higher-order functions, whereas C++'s type system works against it.

*I also disabled inlining of calculate() in the Rust code, for the record.
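For readers who don't want to click through, here is a rough Rust sketch of the kind of traversal example described above. The names and exact shape are my own assumptions, not copied from the linked godbolt; the point it illustrates is that a generic walk() gets one monomorphization per callee type, and inside each one the call to the visitor resolves statically, whether the visitor is a named function or a closure.

```rust
// Illustrative sketch only; names and structure are assumptions.
struct Node {
    value: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

impl Node {
    // Generic over the visitor type, so each distinct callee type gets its
    // own monomorphization in which `visit` is dispatched statically.
    fn walk<F: Fn(i32)>(&self, visit: &F) {
        if let Some(left) = &self.left {
            left.walk(visit);
        }
        visit(self.value);
        if let Some(right) = &self.right {
            right.walk(visit);
        }
    }
}

fn f(_value: i32) {} // does nothing, like the f() in the example above

fn main() {
    let tree = Node { value: 1, left: None, right: None };
    tree.walk(&f);          // instantiated with the fn item type of `f`
    tree.walk(&(|v| f(v))); // instantiated with the closure's unique type
}
```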
@tk36_real Жыл бұрын
hey, this sounds stupid, but did you delete my reply?
@_noisecode Жыл бұрын
Definitely not--I didn't even see one come through. Apparently YouTube automatically/inexplicably deletes replies sometimes on a whim: www.reddit.com/r/youtube/comments/11wgrlc/youtube_keeps_deleting_my_comments_for_no_reason/ Sorry about that. Maybe try leaving it as a top-level comment?
@_noisecode Жыл бұрын
Oh... did it have a godbolt link in it? Reading some more, I guess YouTube might be prone to auto-delete comments with external links (my comments with links seem to all work but I haven't seen anyone else post a comment with links, and another person saying their comment got deleted was supposed to be posting me a link). If you post it again (and really sorry for the hassle and trouble) maybe we can try the thing where you post the link so it doesn't look like a link (i.e. 'godbolt dot org slash 12345').
@0xCAFEF00D Жыл бұрын
Can you produce a Rust version of the example you came up with? I don't think it's fair to disable the C++ channel of communicating caller context and then state that the Rust way of doing it is better because you did that. Also, video corrections are usually pinned. I think this comment is important for understanding the video; I only found it on the second page.
@tk36_real Жыл бұрын
@@_noisecode hi, yes exactly - it had a godbolt link. I'll retry later and thanks for checking :)
@HMPerson2 Жыл бұрын
regarding rustc merging functions: function merging is done by LLVM and clang can do it for C++ too, it's just off by default. you can manually turn it on with `-Xclang -fmerge-functions`
@NoBoilerplate Жыл бұрын
Another great video, thank you Logan!
@CielMC Жыл бұрын
Hi Tris
@NoBoilerplate Жыл бұрын
@@CielMC 😁
@astroorbis Жыл бұрын
hey there! love your vids man, especially the HAM radio one.. getting my license soon :)
@NoBoilerplate Жыл бұрын
@@astroorbis You're too kind! The Rust community are so friendly, I wouldn't have got going without them! ❤
@astroorbis Жыл бұрын
@@NoBoilerplate Rustaceans rise up! For some reason, Obsidian and Rust feel very close although they don't have any actual relation.. might just be your videos :shrug:
@protodot20519 ай бұрын
So THAT's why lambdas can only be assigned to auto-typed variables. The reference is so jargon-heavy, I just couldn't figure out what a "unique unnamed non-union non-aggregate non-structural class type" even is.
@Delfigamer14 ай бұрын
_Unique_ -- means that no two instances are the same. Each time you write a lambda, the compiler creates for it a fresh new typename. Even if you literally copy-paste the same lambda twice, they will still have two distinct types. Compare with e.g. "Result", or in C++ "std::variant" -- when you write this type in multiple different places, they all end up referring to the same type.

_Unnamed_ -- means that you cannot refer to this type in source code directly, unlike the "i32" and "Box" mentioned above, or "fn"-types constructed in the video.

_Non-union non-aggregate_ -- means that this type is not a union and not an aggregate. :) In C++, a "union" is, well, the type that you get with the *union* keyword. An "aggregate" is a common word for *class* and *struct* -- they are practically the same thing in C++, the only difference being that *class* implicitly starts with "private:" visibility, while *struct* starts with "public:". As for explicit access specifiers, fields, methods, virtuals, nested types etc -- they are allowed in both *class* and *struct*, and behave in exactly the same way -- so, when talking about these types, they're often referred to with the common word "aggregate". In Rust, there is no direct counterpart for C++ *union*, but you can see Rust *enum* as a close enough approximation. The aggregate type, however, is present directly, though there is only one -- it is *struct*.

_Non-structural_ -- that means that the object of a lambda does not split into constituent parts. This is in contrast to aggregates, where you can access their individual fields and base sub-objects. As far as the user's code is concerned -- a lambda object is a monolithic black box of bytes.

_Class type_ -- in this context, it just means the opposite of "primitive". Primitive types have a special place in C++, these are types like int, float, raw pointers, SIMD vectors -- they don't have constructors or destructors, are standard-layout, and have some other specific properties. A lambda type, on the other hand, has a member operator(), and is allowed to have a non-trivial destructor (to properly dispose of its captures).
@protodot20514 ай бұрын
@@Delfigamer1 Yeah see, the problem is that you needed that entire wall of text just to explain that one sentence. But thanks for missing the point.
@mort444443 ай бұрын
good explanation! thanks
@OrbitalCookie Жыл бұрын
Rust: same function copy pasted? Different type. Javascript: null is an object? Sure.
@SuperQuwertz9 ай бұрын
Null is not an object. The typeof thingy is a bug ^^
@Leonhart_937 ай бұрын
@SuperQuwertz It's not a bug, it works exactly as intended. "typeof" is meant to always classify things as part of a data type; it always returns something. So if null is not an object, then what is it? Because it certainly is not "undefined". The only remaining type left for it is "object".
@SuperQuwertz7 ай бұрын
No, just null @@Leonhart_93
@lobovutare5 ай бұрын
@@Leonhart_93 It's not a bug, sure. It's a quirk. What JavaScript is known for.
@Leonhart_935 ай бұрын
@@lobovutare So you mean an intentional convention.
@N.... Жыл бұрын
In C++ you can pass functions as non-type template parameters, and then the compiler can optimize it properly. You can think of this as passing the function by value, just as a template parameter instead of a function parameter.
@_noisecode Жыл бұрын
A few people have said this, to which my reply has been that you could, but then you're swimming upstream from convention. Idiomatic higher-order functions in C++ (e.g. the whole STL algorithms library) are written to take functions as regular parameters, not NTTPs. So this isn't really a generalizable piece of advice for how to avoid this pitfall; it only works if the function you are calling has this interface, which the vast majority (in my experience) do not.
@_noisecode Жыл бұрын
Besides... I have to say that "just write your higher order functions using NTTPs to get good codegen" _kind of_ just supports my argument that in C++ you often have to do the non-obvious thing in order to get good codegen. It's the same as saying "wrap everything in lambdas"; it's something extra you have to do to get the compiler to generate the code you want, since the default behavior for passing named functions by value is to give you indirect calls. In the video I never said you _couldn't_ get the good codegen, in fact I literally demonstrated that you can with lambdas, I just argued that you have to jump through hoops to get it. This feels _exactly_ like jumping through a hoop to get it.
@Max-ui5gc Жыл бұрын
Also, if you pass the function as a function pointer in Rust (not a generic implementing an Fn trait), you get the same dynamic call as in C++. That's also a case where the easier solution (no generics) is less performant.
@N.... Жыл бұрын
@@_noisecode It's easy to make a template class with a call operator that calls a template parameter though, and the syntax would be less of a chore than a lambda
@_noisecode Жыл бұрын
@@Max-ui5gc I'm comparing/contrasting how the type systems have different consequences for generic instantiations in the two languages. Of course if you explicitly say you want indirect calls by taking a function pointer parameter, you will get them.
@blt_r Жыл бұрын
Also if you write:

pub fn f_u32(num: u32) -> u32 { num + num }
pub fn f_i32(num: i32) -> i32 { num + num }

they will also be optimized to be the same function, because of the clever design of two's complement
@_noisecode Жыл бұрын
Super cool! I hadn't tried this myself. Of course [for completeness] this relies not only on the functions boiling down to the same machine instructions, but also `i32` and `u32` having the same ABI. Putting them in a struct you can end up with different functions with identical machine code, since the functions have different ABIs. rust.godbolt.org/z/Mxnvs5qae
@Originalimoc Жыл бұрын
But in debug mode, won't it panic on overflow?
@blt_r Жыл бұрын
@@Originalimoc In debug mode, two functions will never be optimized into one. In debug mode there are pretty much no codegen optimizations at all, the compiler just generates whatever and gives it to you. But in release mode, if you use num.checked_add(num), then yes, this optimization indeed won't be possible because checking for overflow is different for signed and unsigned integers. EDIT: you also can use `-C overflow-checks=yes` to check for integer overflow even in release mode.
@博麗霊夢-p4z Жыл бұрын
I think it's because these two functions generate exactly the same assembly code, and types are erased at runtime, so Rust doesn't care which one ends up satisfying the type
@angeldude101 Жыл бұрын
"Clever design"? You mean the "elegant mathematics of modular arithmetic"? 0xFFFF_FFFF + 1 = 0, therefore 0 - 1 = 0xFFFF_FFFF = -1. (Why people don't make the next step to 0xAAAA_AAAB * 3 = 1, therefore 1 / 3 = 0xAAAA_AAAB = ⅓, even though it's backed by the exact same mathematics, is beyond me.) Also, 0x8000_0000 + 0x8000_0000 = 0, therefore 0x8000_0000 = 0 - 0x8000_0000 = -0x8000_0000. It's just like 0 in that it's its own negative! In fact, it's the _antipode_ of 0 on the number wheel.
@yoshiyahoo1552 Жыл бұрын
I'm not sure if this is within the scope of this channel, but making a video on the difference between hardware threads and software threads and how rust affects and is affected by that difference would be really interesting!
@stefankyriacou7151 Жыл бұрын
I would also like to see this
@blouse_man Жыл бұрын
How you have such a deep understanding of these topics is just amazing. I hope to one day be able to take simple topics to such great depths as well.
@nottellinganyoneanything Жыл бұрын
have you tried his C++ Godbolt but with a different compiler? Try it with clang.
@henrycgs Жыл бұрын
ooooh. wow. I would have never thought of that. in other words: rust's functions allow for static dispatch. this enables the compiler to do much smarter optimizations since it knows exactly what function is being passed as an argument. brilliant!
@aleksanderkrauze9304 Жыл бұрын
Great video! In this little series you started recently, you talk about things that I already (mostly) know. However, you show implications much deeper than I would initially have found out myself, which makes me rethink those subjects in a different setting. For example, in the case of your previous video, I knew that returning &Option<T> is a bad idea and the more idiomatic way would be to return Option<&T> instead. But I did not know all of the reasons *why* that is the case. That is all to say that I love your videos and hope to learn much more from you. Cheers!
@SolraBizna Жыл бұрын
I already knew Rust's particular "weird" function type approach-but I only already knew it because of those excellent, educational error messages! I'm sharing this with all my Rust students, because it explains it all much better than I have been.
@mattshnoop Жыл бұрын
This guy has very quickly become the subscription whose next video I'm most looking forward to
@amansawhney4382 Жыл бұрын
same here
@AzuxDario Жыл бұрын
You can get 2 lines of assembly (just moving 20 into eax) in C++ with a normal function if both functions (f and the template) are defined as constexpr.
@BlackM3sh Жыл бұрын
6:05 I think the best way to think about closures is as tuples, or perhaps a tuple + fn pointer, with the tuple being your captured variables. When I run into annoying borrow-checker problems, it helps me to think of closures as tuples. A closure like `|x: i32| x + y` would be the tuple (i32,) since it captures `y`, which is an i32. A closure which captures nothing is the empty tuple (), which is zero-sized, just like a normal function, so it can be coerced to one. Any other tuple is non-zero-sized, so it needs additional stack space on top of any potential function pointer, making them incompatible.
@_noisecode Жыл бұрын
I think this is a great mental model for closures. Similar to mine but not identical; in my mental model (strongly influenced by my mental model of C++ lambdas), closures are a struct containing each capture (if any), and that struct has a single method that contains the code inside the closure. That method is what's invoked by the call operator. Note for the record that in terms of layout, the representation of closures in memory is completely unspecified by Rust.
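A small sketch of that mental model, with made-up names; the real layout is unspecified (as noted above) and the Fn* traits can't be hand-implemented on stable Rust, so an ordinary method stands in for the call operator here:

```rust
fn main() {
    let y = 10;

    // The closure conceptually captures `y` (by reference, since it only reads it).
    let closure = |x: i32| x + y;

    // Hand-written stand-in: a struct holding the capture, plus one method
    // containing the closure body.
    struct FakeClosure<'a> {
        y: &'a i32,
    }
    impl FakeClosure<'_> {
        fn call(&self, x: i32) -> i32 {
            x + *self.y
        }
    }

    let desugared = FakeClosure { y: &y };
    assert_eq!(closure(5), desugared.call(5));
}
```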
@tenthlegionstudios1343 Жыл бұрын
I still think higher-kinded types would be useful in Rust if they could still be performant and safe. Having types based on type signatures. There is a world where having the same signature be the same "type" is very useful, like defining a functor, monad, etc... I know Rust is moving towards Generic Associated Types, which helps solve some of these issues. But in my mind, this function type you are mentioning is more or less just a function pointer. Still an interesting video, thanks!
@AndreiGeorgescu-j9p9 ай бұрын
Yeah, Rust will always be a low-level, non-expressive systems language unless they add HKTs and ideally dependent types too. At the moment it's just a better C, which isn't much of an accomplishment
@Justin-wj4yc3 ай бұрын
@@AndreiGeorgescu-j9p jc cringe
@mikkelens Жыл бұрын
These rust videos are wonderful. I've barely written any c++ code at all, but it's wonderful to look at how these immensely different approaches to similar-ish languages pan out at compile time.
@mzg147 Жыл бұрын
I still don't get why it is necessary for functions, but not for e.g. integers. Wouldn't it be so great and efficient if the numeral "2" had the type 2? And there was a trait i32 that 2 would implement, and then every computation on literals would be done using the type system? And arguments from the outer world would be "dyn i32"? That's exactly how using functions in Rust feels to me. I think I get that the advantages of typing functions like that are far greater than the advantages of the "i32 as a trait" approach, and this is the main argument for it. But it makes the whole thing unnatural for me; I would prefer consistency in the type system. The optimizations could be done behind the scenes, not at the expense of design.
@_noisecode Жыл бұрын
Thanks for this comment, I think this is a super interesting point. I honestly think there's a decent argument to be made that the literal "2" should indeed be of type `2`, and only coerced into i32 when it's "type erased" in some way. Some type systems in some languages do exactly that (TypeScript for example). In that mental model, `2` is to `i32` as `{f}` is to `fn(_) -> _`. From what I can tell, the main downside of extending this to all Rust literals is just the absurd amount of generic monomorphizations that would result. It would mean the compiler would need to statically stamp out a completely different implementation of something like `foo(_: impl std::i32)` for each of `foo(0)`, `foo(1)`, `foo(2)`, etc. Seems sorta like that's simply a bridge too far, although I don't hate the purity, consistency, and elegance of the idea.
@mzg147 Жыл бұрын
@@_noisecode Rust doesn't have type coercion in itself, only traits have it, right? In TypeScript, every type is like a set (of objects of that type), so it makes sense, and sets can be made subsets of each other, but in Rust every object is of exactly one type, making the whole thing asymmetrical...
@_noisecode Жыл бұрын
Sure, that's a fair distinction to draw between TypeScript and Rust's type systems. For the Rust case, I was thinking that a type like `2` would be a ZST that uniquely identifies the value of 2 irrespective of type, but it coerces into a concrete integer type when you nudge it.

let x: i32 = 2; // the value of type 2 is coerced/erased to i32
let x: u8 = 2; // coerced/erased to u8

This would be analogous to how fn item names are coerced to fn pointer types if you give them a little nudge, as I show in the video:

let x: fn(_) -> _ = f; // the value of type {f} is coerced to fn pointer
@christianjensen769911 ай бұрын
It is nice to watch a Rust video from someone that knows C++ and does good comparisons, instead of a JavaScript developer that misspeaks every time they talk about C or C++. It seems like everyone on the internet comes from front end web development.
@Turalcar Жыл бұрын
6:25 This might be nitpicky: since y is immutable, `local` captures a reference to y, not a value, unless you use "move". This isn't crucial since it's likely to be inlined anyway, but something to keep in mind.
@_noisecode Жыл бұрын
Fair nitpick! You are correct.
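A tiny sketch of that capture-mode nitpick (my own example): without `move` the closure borrows `y`; with `move` it takes it by value, which is just a copy here since i32 is Copy.

```rust
fn main() {
    let y = 10;

    let by_ref = |x: i32| x + y;        // captures &y
    let by_move = move |x: i32| x + y;  // captures y by value

    assert_eq!(by_ref(1), 11);
    assert_eq!(by_move(1), 11);
    assert_eq!(y, 10); // y is still usable afterwards either way, since i32 is Copy
}
```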
@yaksher Жыл бұрын
It's worth noting that if you want function pointer behavior instead in Rust, you can just specify the argument as a function pointer and not as implementing a trait. Better yet, the compiler is still smart enough to see through it at compile time when it can, even if the generated assembly makes indirect calls. I.e., your exact example with calculate's signature replaced with fn calculate(bar: fn(i32) -> i32) -> i32 still has main just return 20, but the code generated seems to be literally identical to the C++ example's code. This is probably usually a bad thing though. Still interesting to think about the trade off with function pointer types vs function trait types. I would probably just shorthand the signature you had as fn calculate(bar: impl Fn(i32) -> i32) -> i32 most of the time so it's shorter to write.
@_noisecode Жыл бұрын
Totally agree about doing `impl Fn` for calculate() in real code--I wrote it "longhand" with the `where` clause (and the C++ version with the `requires` clause) for the video so you can visually match up the different key elements of the two versions better (and so the animation between the two looks cooler ;) ). And yep, if you intentionally opt in to the indirection (by concretely taking a function pointer type as a parameter) you get identical codegen to the "bad" C++ version. I prefer this story over C++'s... you get the nice clean optimized version unless you specifically say you want the indirect version (which you might, for e.g. reducing monomorphizations if a very large number of them is causing problems).
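For readers skimming the thread, here is a minimal sketch (names assumed, using the `impl Fn` shorthand mentioned above) of the two parameter styles being compared: the generic version is monomorphized per callee type and dispatches statically, while the fn-pointer version is compiled once and nominally calls through the pointer.

```rust
// Statically dispatched: one monomorphization per function/closure type.
fn calculate_generic(bar: impl Fn(i32) -> i32) -> i32 {
    bar(1) + bar(2)
}

// Opt-in indirection: compiled once; the call goes through the pointer
// (unless the optimizer sees through it, as discussed above).
fn calculate_ptr(bar: fn(i32) -> i32) -> i32 {
    bar(1) + bar(2)
}

fn f(x: i32) -> i32 {
    x + x
}

fn main() {
    assert_eq!(calculate_generic(f), 6); // fn item type of `f` baked into the instantiation
    assert_eq!(calculate_ptr(f), 6);     // `f` coerces to a fn pointer here
}
```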
@JoshBlasy5 ай бұрын
One of the biggest learning curves I've experienced, going from languages like Python or JS to Rust, is forcing myself to trust the compiler. I don't have to do anything convoluted, I just write code and the compiler does its job. This video is a good example of that
@7h3d4sH Жыл бұрын
You are an excellent teacher of programming concepts. The style of your teaching in this video worked very well for me personally. Very high quality content - keep it up. And thank you!!!
@rsnively Жыл бұрын
I would love an episode about Result types. I know there's a lot of cool stuff you can do with the ? operator and good functions for operating on them. I just struggle with all of the extra overhead of creating my own error types to return, combining different error types, and so on. Nine times out of ten I end up just reaching for Option because it mostly does what I want, even though I know it's not the right tool for the job.
@bobbybobsen123 Жыл бұрын
Using the anyhow crate simplifies this a lot, removing the need to spell out specific error types
@yaksher Жыл бұрын
I feel like you could probably get away with `impl std::error::Error` or `Box<dyn std::error::Error>` as the error type most of the time.
@jrmoulton Жыл бұрын
@rsnively take a look at the error-stack crate. It adds a few things that remove boilerplate and make errors nice to look at, but what I like about it is that it actually forces you to learn Result the hard way. Using anyhow and similar crates is really nice, but you can get away without ever actually understanding the Result type. When I was learning I read everything I could find and watched every video, but I was still confused. I would think I understood it, but then I would hit a problem that made me realize I had fundamentally misunderstood. If this guy made a video it would probably be good enough, but until he does, learning Result the hard way is probably your best option (no pun intended)
@omg33ky Жыл бұрын
Anyhow is a great crate for applications, but for libraries I would recommend thiserror, since it makes it easy to expose actually distinct errors to the outside, rather than returning a single error type like you would if you used anyhow
@SimonClarkstone Жыл бұрын
And now, you got your wish. An episode about Result types was released about a day ago. Possibly due to your comment.
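Since the boilerplate being discussed is easy to show concretely, here is a minimal hand-rolled sketch of a custom error type used with `?`, with no helper crates; the names are made up, and anyhow/thiserror/error-stack would shrink this considerably. The `From` impl is what lets `?` convert the library error into the custom one.

```rust
use std::fmt;
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {
    Missing(&'static str),
    BadNumber(ParseIntError),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(key) => write!(f, "missing key: {key}"),
            ConfigError::BadNumber(e) => write!(f, "bad number: {e}"),
        }
    }
}

impl std::error::Error for ConfigError {}

// This conversion is what the `?` operator uses below.
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadNumber(e)
    }
}

fn port(raw: Option<&str>) -> Result<u16, ConfigError> {
    let raw = raw.ok_or(ConfigError::Missing("port"))?;
    Ok(raw.parse()?) // ParseIntError -> ConfigError via From
}

fn main() {
    assert_eq!(port(Some("8080")).unwrap(), 8080);
    assert!(port(None).is_err());
}
```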
@leddoo Жыл бұрын
that's really interesting! though the "functional programmer" in me absolutely hates it :D

the compiler could technically do that in the background, without exposing it in the type system - simply by monomorphizing based on the identity of the function arguments to the call. (this wouldn't work for functions assigned to locals, but some rather simple analysis during type inference could identify those cases, where the variable can only refer to exactly one function.)

i was also quite surprised by how monomorphizations are supposedly skipped, if the functions are identical modulo optimization. this would inherently sequentialize code generation and optimization (to some degree). the observed effect, that `f` and `f_prime` used the same machine code could also be explained by function deduplication done by llvm. however this would be after monomorphization and would therefore not reduce compile times. you could perhaps test this by generating a huge number of functions and making calls to a generic function for each of those functions, then compare the compile times for identical/different functions.
@_noisecode Жыл бұрын
That's an interesting idea! I wonder indeed how plausible it would be for the compiler to do this monomorphization trick behind the scenes while telling the white lie that all functions do have the same type. Good food for thought!

On the other hand, then we're back in "functions are callable because they just... are" territory. A point I make in the video is (paraphrased) that calling a function is an _operation_ you perform on a function value, so the correct abstraction for it is a trait, just like any other operation you do to a generic value (e.g. std::ops::Add). I also like that the type system protects you from unnecessary dynamism; you only get a function pointer if you opt in to one (although to be fair, it is easy to accidentally opt into one, since fn items coerce to fn pointers so easily).

And you're right, I glossed over the point that the function deduplication stuff doesn't help with the compile time issue, just the final binary size. It does indeed have interesting consequences re: sequentializing codegen. Thanks for the comment. :)
@ХузинТимур Жыл бұрын
@@_noisecode >telling the white lie that all functions do have the same type

I think such a thing contradicts the idea of Rust being explicit and low-level.
@minirop10 ай бұрын
Your last point is very dubious (not to say misinformed). C++ does do function merging (or "identical COMDAT folding"), but it does so at link time, not compile time. The reason is that with Rust, when you compile your file, it's considered a complete crate (AFAICT), while C++ considers it just an object file and won't do anything that could break the public ABI (since it can't know how the code will be used, unlike the linker).
@licks4vr3 ай бұрын
Analyzing the compiled code is such a great way to learn about this
@adamszalkowski82267 ай бұрын
Thank you for the hint. Now that you mention it, function item types are like unit structs that implement the Fn traits. That's why they're zero-sized.
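A quick sketch of that "unit struct" intuition (my own example): a fn item value is zero-sized, while the fn pointer it coerces to is not.

```rust
fn f(x: i32) -> i32 {
    x + x
}

fn main() {
    let item = f;                    // unnameable fn item type; identifies `f` statically
    let pointer: fn(i32) -> i32 = f; // nudged into a fn pointer
    assert_eq!(std::mem::size_of_val(&item), 0);
    assert_ne!(std::mem::size_of_val(&pointer), 0);
    assert_eq!(item(2) + pointer(3), 10);
}
```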
@IllidanS45 ай бұрын
I don't really think that the performance issue in C++ has to do with the fact that functions are "structurally typed". Sure, having a unique type for each function would simplify the optimization, but I'd fully expect the compiler here to acknowledge that you are passing the same function each time. It does so for normal values, so why not functions? Regardless, it is relatively uncomplicated to fix this without changing too much of your code; here's some quick code I wrote that should help:

template <auto Func>
constexpr const inline auto monomorphic = [](auto&&... args) {
    return Func(std::forward<decltype(args)>(args)...);
};

Using monomorphic<f> instead of f gives you the benefit of the lambda type, while not sacrificing much readability or convention.
@CaptainOachkatzl Жыл бұрын
great video, makes me super excited for whatever you have planned for Fn, FnMut and FnOnce!
@ronminnich2 ай бұрын
I've really struggled with Rust, these videos are helping me a lot
@MrEtilen11 ай бұрын
Great description. Thank you for taking the effort to produce such a quality material!
@ricardolima52064 ай бұрын
Very nice video. Even more impressive that I actually understood what was happening.
@retropaganda8442 Жыл бұрын
I know I'm just adding more complexity to the matter, but my first approach would have been to write the C++ version as taking a function, not a typename, since this is what you want:

template <auto F>
int calculate() { return F(1) + F(2); }
@didles123 Жыл бұрын
So one can regard functions as basically just unit structs (hence they all have their own unique type), which implement the Fn* traits based on their signature. This makes it easier to understand closures as structs whose fields are their captured variables.
@lioncaptive Жыл бұрын
Logan narrates and explains the functionality so well.
@Voltra_ Жыл бұрын
14:20 C++ compilers actually do just that. Same with C++20 coroutines despite being even more complicated and indirect.
@johnjones8330 Жыл бұрын
Thank you for the great explanation, particularly with the comparison between passing a function and a lambda to calculate in C++, I found this really helpful.
@dahveed241 Жыл бұрын
Dude your visual representation of how everything works is fucking amazing. Keep up the great work!
@azaleacolburn Жыл бұрын
Amazing video, great work! I’m sharing this with all my co-workers.
@isaaccloos1084 Жыл бұрын
Your last two videos were so good I was eyebrow-raising when I saw a new notification from your channel. Please keep up this great work 👍🏻
@officiallyaninja Жыл бұрын
16:30 Isn't this also an issue with Rust? You advocated for this exact kind of thing in the Arc video: writing uglier code for performance. So is it really an issue? I don't really think the lambda makes a huge difference anyway, and having function types be pointers does kinda make them more ergonomic to work with in situations where you don't care about performance
@_noisecode Жыл бұрын
I guess I wasn't really saying that Rust has no footguns at all--my point was that C++ is an absolute minefield of them. The C++ community tends to admit quite plainly that almost every corner of the language has poorly chosen defaults (the design of lambda expressions being one place where that's much less true). In Rust, when you write the pretty version, you're much more likely to find that the language design is working alongside you to make that version the most efficient version, too.

Now you're right that I advised you to write it the ugly way in the Arc video--point taken. :) That video is ultimately still celebrating a fantastic Rust design choice though--the fact that wide pointers + slices let us effortlessly repurpose Arc for dynamically-sized reference-counted strings--something that would take considerably more error-prone work with e.g. std::shared_ptr from C++--so why not just do it?

Also I don't _super_ agree that C++'s defaulting to function pointers is more ergonomic. Rust fn items implicitly convert to fn pointers at the slightest provocation, so realistically if you do need to do function pointer stuff you won't run into any trouble. But when you don't, you don't pay for it (to borrow a phrase from a C++ fundamental design principle that it violates here).

I really appreciate comments like this and the discussion that arises from them. Thanks for watching. :)
@draakisback Жыл бұрын
Great video man, glad this showed up in my recommended feed even though I really don't need to learn Rust since I've been using it in production for like 6 years now. I had to learn this the hard way when I first learned Rust. I come from a functional programming background, and when I started writing Rust I wrote it like a functional language. I'm sure you can figure out some of the horrible side effects that came with me writing Rust that way. I remember being extremely baffled by the fact that we had three different function types via the trait system, and then becoming even more baffled when I realized that we actually had infinite function types, since every function is its own type.

Now, of course, having used it for 6 years, I will say that it is a really cool system in practice. I do still have my gripes with the trait system, for example the fact that generics are effectively just trait arguments is still a bit annoying. As somebody with a category theory and lambda calculus background, it's a bit counterintuitive to have generics that don't really work like true generics and functions that don't really work like functions, but this really only matters for maybe 1% of use cases in my experience. Anyhow, keep up the good work mate.
@jboss1073 Жыл бұрын
" I do still have my gripes with the trait system, for example the fact that generics are effectively just trait arguments is still a bit annoying. As somebody with a category theory and lambda calculus background, it's a bit counterintuitive to have generics that don't really work like true generics and functions that don't really work like functions but this really only matters for maybe 1% of use cases in my experience. " Could you clarify, out of curiosity? How does Haskell get generics right and Rust gets them wrong by making them just trait arguments? How does it abide more to Category Theory than Rust? Thank you.
@asdfasdf9477 Жыл бұрын
Good stuff, please keep making these.
@Dash359 Жыл бұрын
Oh, man. Your videos are so good. How about a series dedicated to desugaring all of Rust's features? Moving, borrowing, smart pointers, etc.: the performance implications of using them and how the compiler optimizes them.
@vokungahrotlaas Жыл бұрын
If you really need to generate the function per object and not per type, just ask the compiler for it:

```cpp
#include <concepts>
#include <numeric>
#include <ranges>

template <auto f>
[[gnu::noinline]] int calc()
    requires std::invocable<decltype(f), int>
{
    auto r = std::views::iota(0, 5) | std::views::transform(f) | std::views::common;
    return std::reduce(r.begin(), r.end());
}

static int f(int x) { return x + x; }

int main() { return calc<f>(); }
```

Still an interesting design choice on Rust's end though, but the example is just not great imo.
@_noisecode Жыл бұрын
You could change calculate's API so you have to pass the function pointer as a NTTP in this case yeah, but I don't see this is as a generalizable solution at all. For example, none of the STL algorithms are designed to work this way. Idiomatic higher order functions in C++ accept their functions as values, which is the pattern that has the shortcomings I showed. There are ways to get the good codegen, yes--NTTPs are one, lambdas are another as I showed--but resorting to NTTPs is actually just support for my argument that in C++ you have to do the less-obvious thing to get the good codegen, because the defaults are working against you. It's fair to critique this example though; check out the pinned comment for a better one.
@codeofdestiny682011 ай бұрын
Great video! Thanks for explaining this amazing topic! The only thing I would say is that it would have been better if you'd cleared up the whole inlining thing from the comments in the video itself. Keep going with the great content!
@julionegrimirandola3375 Жыл бұрын
Great video!! Knowing about it's pretty cool to see this kind of in-depth content about how types work, and thanks for the compiler explorer tip!!
@РайанКупер-э4о2 ай бұрын
Me, from a math background: why can't we have both? Like, 1 is a natural number. But it also is an integer, and rational, and real, complex, quaternion, and it's a vector, multivector, and even a one-dimensional tensor. It is all of that at the same time. So why can't we have the same with types in code? Like, if you create an integer, it should go in everywhere a real number can. And the same is here. Every function is obviously an element of a set that contains only that function. But it also is an element of a set that contains every function with this signature. In math, any object can be an element of an infinite number of sets. Why don't we have an infinite number of types? Just compute them the lazy way, like functional languages do with everything.
@topcivilian Жыл бұрын
Google algorithm brought me here and... Immediately 'liked' and 'subscribed' Thanks for the content, my friend.
@damonpalovaara4211 Жыл бұрын
4:26 The first line saying "different fn items have unique types" is saying exactly what you want to add to the message.
@skirsdeda Жыл бұрын
Very good explanations of the practical implications in this video, just as it was for the "&Option<T> vs Option<&T>" one. No other youtuber gets even close in that regard :) Keep it up!
@TimDrogin Жыл бұрын
So, in Rust a function's type is a sort of ID/identity of the function itself, while in C++ it is just a signature? That's some clever stuff going on there
@nathanoy_ Жыл бұрын
gotta say: I love your content. Please keep it up!
@dragonmax2000 Жыл бұрын
This is dope. Totally awesome. Please keep on making these. So fundamental. :)
@danser_theplayer01Ай бұрын
"function f is type function f and function g is type function g; also I'm going to tell you exactly why I'm not going to compile this code" That's so cool, usually javascript call stack trace yells at me in V8 minified garble, not even human javascript.
@MaxHaydenChiz Жыл бұрын
Interesting video, but I'd have appreciated it if you'd linked to your code so that we could see it on godbolt without having to retype it ourselves. I wasn't able to replicate your C++ results with my initial example, and having to retype what you wrote exactly and then carefully look at what settings you were using in order to make sure that I was right about why we got different results was tedious.

This is a compiler problem, not a language one. gcc does the correct optimization once you give the compiler permission to do full monomorphization by making both `f` and `calculate` be `constexpr`. Output is the same as the Rust code, even on lower optimization levels. Works across multiple architectures too. I'm not sure why clang specifically has trouble with this code and can only optimize the lambda version. But this is a compiler issue and not a language one. (It could also be user error. I'm not that familiar with the details of clang code gen.)

I'm also not sure why making `f` have static storage class would be expected to do anything here (and it doesn't change the generated code). I'm not even sure that your explanation of `static` is even correct in this context, and it sounds like you mean to be talking about `constexpr`. (Though maybe in editing for brevity, some nuance involving `static` got lost?)
@_noisecode Жыл бұрын
Very valid feedback! I'll put some links to the godbolt examples in the description. Even considering your points, I do still argue this is a language problem. Not all code can be constexpr, and besides, constexpr is a language construct, not a compiler switch; my point stands that it's a language problem that the 'default' semantics don't let the compiler optimize your code completely (unlike the default semantics in Rust) and you have to manually opt in to semantics that do (like constexpr or lambdas). My point about f being static was that, normally, you expect a tiny static function to probably be able to disappear/be optimized out completely (note that the out-of-line definition of f DOES get optimized out completely when we wrap it in a lambda). But due to needing its address when it's used as a function pointer, it can't be optimized out in that version of the code. I just rewatched that section and I don't *believe* I misspoke in any way about the semantics of `static` functions. Thanks for watching and for the discussion. :)
@_noisecode Жыл бұрын
Godbolt links in description! Thanks again for asking, should've put them there in the first place.
@MaxHaydenChiz Жыл бұрын
@@_noisecode Thanks for updating it so quickly! FWIW, I played with this some more and it seems like clang and gcc treat "noinline" differently. Clang takes you quite literally and just refuses to do the optimization since it has the effect of inlining it and then eliminating the function as dead code. (What clang does is probably the thing that people generally intend when they use that attribute in real code.) If you get rid of that attribute, the optimization works as it should even with minimal optimizations enabled. Both clang and gcc will be able to treat the functions as constexpr without having to force it by declaring it specifically. So, it's not so much opt-in as opt-back-in (after opting-out).

There are probably cases where constexpr is actually needed to trigger the optimization, but since clang and rust will use the same llvm analysis for that, I don't think you'll actually see a difference in practice. If you could construct one, I'd be interested in seeing it, but so far, I haven't managed it. I suspect that Rust would benefit if the compiler needed to do an alias analysis on the arguments of a higher order function that took multiple functions as arguments. But haven't been able to construct a working example. And since I can't actually think of a distinguishing case, I'm not confident that I can say that rust being nominally opt-out is "better" than c++ being nominally opt-in. That would depend on what opting out with the rust code looked like and how often you ended up with build time problems or other issues with degenerate monomorphization compared to the times where the annotation or code change in C++ was complex or non-obvious. We are already dealing with a corner case of a corner case at this point. So it's hard to say.

As for `static`, what you said is correct. Putting it on that function tells the compiler that it won't be referenced from any other file. This shouldn't impact the optimizations being done, only whether the compiler *also* emits the code for the function itself or if it has permission to eliminate it as dead code if it is inlined everywhere in the current file. And this is what actually happens on both gcc and clang. Using `static` there isn't something you'd normally see in the wild, so it stood out as odd. But having rewatched, I don't think there's an error in what you said.
@_noisecode Жыл бұрын
This example optimizes down to nothing if you take away noinline, yeah, which is of course why I added it: I was trying to simulate a case where you have a higher order function that doesn't get inlined. One such function that's extremely common is std::sort (which typically doesn't get inlined). You don't have to manually opt out like I did to find common real world examples of compilers choosing not to inline function template instantiations, where then the types they are instantiated with matter for your codegen. (An out-of-line std::sort instantiated with a custom predicate that's a lambda will have that lambda inlined into its definition, whereas an out-of-line std::sort instantiated with a function pointer will have to make an indirect call for every element comparison). (Huge amount of codegen here but you can see the two instantiations of __introsort [line 56 and then 1437] are quite different--the function pointer one has lots of `call`s that the lambda one does not: godbolt.org/z/1Er3nf6bT Code for `greater` is also generated even though it's marked static since its address is needed by the function pointer instantiation.) I think constexpr is sort of unrelated to the matter at hand by the way. It may affect codegen in the way you're seeing because constexpr implies `inline`, but aside from that, a function being marked constexpr shouldn't affect how it's optimized when it's not being used as a constant expression (thus making its constant evaluation optional), except compilers _may_(?) be slightly more inclined to constant evaluate it (which isn't really an optimization and more of a language semantic thing that happens in the compiler frontend before anything gets to e.g. LLVM). `static` is very common on functions in .cpp files--so I'm not sure what you mean that it's not something you'd see in the wild. Often in C++ they're put in anonymous namespaces instead, but `static` is another very typical way to do it, and some coding guidelines (e.g. LLVM's in fact) actually require `static` instead of anonymous namespaces for internal helper functions. You're right that `static` won't affect how the function is inlined etc., just whether its definition gets eliminated as dead code. Apologies if I suggested otherwise.
@_noisecode Жыл бұрын
Here's a simpler and more compelling example: godbolt.org/z/7rd79T1oT Node::walk() traverses a binary tree, visiting each node with a predicate. There are no `noinline` attributes in sight. The call using the lambda (so the one where walk() is instantiated with a predicate that can be resolved statically based on type information) is optimized out completely and is nowhere in the generated code. The call using the function's name directly generates a bunch of code featuring indirect calls.
@DavidBeaumont Жыл бұрын
Nice explanation. However in real code you almost always want to use function composition because you have different combinations of functions at runtime. Thus I strongly suspect it's not actually that common (in either C++ or Rust) to be passing functions in this static fashion, except maybe for generating constants. It would have been nice to see what Rust did in the dynamic case.
@ХузинТимур Жыл бұрын
I often use `x.map(T::from)`. In fact, there is a clippy lint for that called `redundant_closure`.
@_noisecode Жыл бұрын
Exactly. In Rust I try to used named functions like T::from wherever I can, because it's more elegant (it's kind of 'point-free style'). In C++ this could be detrimental to codegen, whereas in Rust it's not.
@DavidBeaumont Жыл бұрын
@@ХузинТимур That's a good point about map(). Applying a function over a collection is a case where you'll combine named functions explicitly. Maybe it's not as unlikely as I thought.
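A tiny illustration of the `x.map(T::from)` point-free style from a couple of comments up (the example values are my own); clippy's `redundant_closure` lint nudges you from the second form toward the first.

```rust
fn main() {
    let a: Option<i64> = Some(5_i32).map(i64::from);        // named function, point-free
    let b: Option<i64> = Some(5_i32).map(|x| i64::from(x)); // what the lint flags
    assert_eq!(a, b);
}
```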
@christianburke4220 Жыл бұрын
A high-quality channel with only 6.33k subs? nice
@Leonhart_937 ай бұрын
Unless it solves a real problem that just using normal types has, it's just complexity for the sake of complexity.
@pierrebertin4364 Жыл бұрын
Amazing videos man.. just discovered rust and your channel, loving it. Which tool are you using to build your videos? Super smooth and enjoyable.
@_noisecode Жыл бұрын
Thank you so much! I use the Manim library for the animations, check the description for more info.
@MuscleTeamOfficial3 ай бұрын
u keep putting these out imma keep watchin
@spookyconnolly60728 ай бұрын
making these types similar to gensyms is probably for the same reason scheme macros are hygienic (avoiding namespace splatting) -- but here for typing functions, making sure you don't get weird behaviour as a result of them being 1st class values
@vanilla4064 Жыл бұрын
This is an awesome video! Loved learning more about rust's language design.
@pablorepetto7804 Жыл бұрын
This was very interesting, but I ended up watching it backwards. First I jumped to the practical application, then I went back to watch the rest. Seeing the ugly "wrap it in a lambda" idiom made the rest make sense. Thank you for the table of contents!
@TCoPhysics Жыл бұрын
Brilliant explanation. Thanks for sharing!
@IllidanS45 ай бұрын
Pedantic complaint about the compiler's wording ‒ "no two closures, even if identical, have the same type" is technically incorrect. In the following code:

use std::any::Any; // for type_id()

let mut local = |x: i32| x;
let local2 = local;
local = local2;
assert!(local.type_id() == local2.type_id());

The variables "local" and "local2" are our two closures, obviously "identical" (meaning "one and the same") since they come from the same definition, yet one can be assigned to the other, and their types are equal. It would be better to say something like "the type of a closure is unique and is not equal to the type of any other closure, even one with an equivalent definition".
@Alkis0511 ай бұрын
6:20 Well, it could support currying. When passing y to that closure, it could wrap it inside a function and compose it with the closure. Or something along those lines
@Dash359 Жыл бұрын
And wow, Compiler Explorer is incredible!
@Ciubix8513 Жыл бұрын
Fascinating, I love learning stuff like this, keep it up!
@n0ame1u19 ай бұрын
I'm not sure I really agree with your argument. I think conceptually a "type" should usually be something that could represent multiple objects. If I instantiate a generic function for a given type, I expect one instantiation that works for whatever value of that type I pass in. If I specifically wanted an instantiation that works for the specific value I pass in, then I would have written a function that takes a function as a NTTP.

Ultimately, it would be easy for a C++ compiler to do the same thing that the Rust compiler does, and generate a specific instantiation of the generic function for each function you pass in (since the specific function passed in must be known at compile time, otherwise Rust wouldn't be able to do this in the first place). It would be totally allowed by the standard under the "as-if" principle. It happens already when inlining the generic function, and there's very little benefit to doing it if you aren't inlining it (unless you happen to be calling the same generic function with the same argument multiple times)
@_noisecode9 ай бұрын
Thanks for the thoughtful response! I do just want to point out that types with only one possible value (in other words, Unit Types) are extremely common, even in C++ (structs with no members, empty base classes, the empty tuple, arrays of size zero, enums with only one case, std::nullptr_t, std::nullopt_t, tagged dispatch types, lambda types) and they are useful for exactly the kind of rich type-based dispatch that I’m talking about in this video-notice that many of the types I just mentioned are especially common in generic code. Common function objects in particular, like std::less, are unit types instead of functions because associating the function call logic with the type allows for better codegen. A std::sort instantiation can inline every call to operator< because it can see directly into the body of std::less::operator() at compile time and knows that’s what it does.

Next two things I’ve already talked about at great length in other comment threads so check out those for more: To my great surprise, lots of other commenters have also brought up NTTPs as the ‘obvious’ way you’re supposed to deal with this issue, but I have yet to see an example of a popular or well-regarded C++ library that does this. It’s just not how idiomatic generic libraries are designed. For one thing, using NTTPs makes your generic higher-order function difficult to use with capturing lambdas, and for that reason alone it seems like a complete non-starter to me.

Finally, I disagree with your last point. It’s still beneficial to have an out-of-line template instantiation where the passed-in function is inlined into the instantiation. (I already mentioned this with std::sort wanting to have visibility into std::less, even if the sort itself isn’t inlined.) C++ compilers do sometimes automatically perform this optimization (known as function cloning), but in doing so they are swimming upstream from the type system in a way that Rust just doesn’t have to.
@heto7958 ай бұрын
I think this video misses a far more fundamental point, one that cannot be handwaved away with "sufficiently smart implementation" like the optimisation points mentioned here. What if f was a generic function? If you have fn f(x: T) -> T::Output in Rust, Rust has no issue seeing that the type of that function implements FnMut(i32) -> i32 (and many other traits).

If you have function template auto f(auto x) in C++ and try to pass f to calculate, this results in a compiler error because the type parameter of calculate cannot be deduced, and a sufficiently smart compiler cannot avoid this while still conforming to the C++ standard, because it must fail under the rules of the standard. This is because a function template in C++ has no type at all, not even an unutterable type like Rust functions (or C++ lambdas). Instead, the type that the expression f has is determined based on how it's used, but just calling a function where the parameter could have any object type does not give the compiler enough information about what type it is supposed to have.

Passing f also won't work if f is a set of overloads, even if none of the overloads is a template, but since Rust does not support overloading in the first place, this does not work as well as a comparison. You can work around this by static_casting f to int (*)(int), telling the compiler explicitly what type you want it to be coerced to, but honestly, wrapping it in a generic lambda is a better idea.

On a related note, did you know that taking the address of a C++ standard-library function is not allowed, apart from a few exceptions? In addition to possibly failing due to the reasons above, this can also fail if the implementation or a future standard version adds new parameters with default arguments, something that they are allowed to do. int g(int x, int y = 0) cannot be coerced into int (*)(int).
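A small sketch of the Rust side of that point, with assumed names and a simpler bound than the `T::Output` example above: a still-generic fn item can be passed straight to a higher-order function, and its type parameters are inferred from the expected Fn signature.

```rust
use std::ops::Add;

// A generic function: no concrete type has been chosen yet.
fn double<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn calculate(bar: impl Fn(i32) -> i32) -> i32 {
    bar(1) + bar(2)
}

fn main() {
    // `double` is instantiated as `double::<i32>` because the Fn(i32) -> i32
    // bound pins down T; no cast or wrapper lambda is needed.
    assert_eq!(calculate(double), 6);
}
```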
@leokiller123able9 ай бұрын
So, in our C++ code we could replace all our function declarations with global lambdas. For example: int f(int x) { return x + x; } would become auto f = [](int x) { return x + x; };, and it would achieve the same thing as Rust. But the advantage of the C pointer types is that you can store functions that take any pointer type as an argument by converting them to a function that takes void *. For example, we could store this function: int foo(int * param); and this function: int foo(char * param); in an int (*)(void *) function pointer
@_noisecode9 ай бұрын
Converting void(int*) to void(void*) is not supported implicitly in C++, you'd have to cast; and even then, it's not safe. Even if it weren't undefined behavior to call a function through a pointer with the wrong signature (it is), calling a void(int*) through a void(void*) is totally unsound, since the void(int*) is specifically expecting a pointer to int, but the caller would just be able to pass in any pointer, whether it points at an int or something else entirely. (The jargon is that function parameter types are not covariant--they're contravariant. It would actually "work" to assign a void(void*) to a void(int*), but not the other way around.)
@leokiller123able9 ай бұрын
@@_noisecode Every pointer has the same size, so I don't see how it wouldn't work. You can do this: void foo(void *ptr) { int *asInt = ptr; } So technically it's the same as doing void foo(int * ptr) {}
@_noisecode9 ай бұрын
It doesn't work for a few reasons. For one thing, your code doesn't compile in C++; it works in C, but in C++ it requires a cast from void* to int*. Secondly, neither C nor C++ technically _guarantees_ that all pointers have the same size/ABI (although it's true that they will on most if not all real platforms). Thirdly, it is simply undefined behavior to call a function through a function pointer of a different type. This means the compiler might optimize using "wrong" assumptions about your code if you break this rule. Even if it seems like it "should" work, it's illegal according to the language spec and your compiler is not required to do as you say if you attempt to tell it to do this.

But most importantly, it's just semantically broken to do this type of cast for function parameter types. Think about this code:

```
void foo(int* i) { printf("%d ", *i); }

auto ptr = reinterpret_cast<void (*)(void*)>(foo);
// now we have a function pointer taking a void*, so we should just be able to pass any pointer to it, right?

NotAnInteger x;
ptr(&x); // oops -- foo() is now going to try to format a NotAnInteger as an int.
```

By casting from void(*)(int*) to void(*)(void*), you are inappropriately _widening_ the set of types that are allowed to be passed to `foo`, and when a NotAnInteger* thus makes it into `foo`, all hell will break loose. This is because function parameter types are not covariant. In other words, the type of f(Subtype) is NOT itself a subtype of g(Supertype). On the contrary, the type of g(Supertype) is a subtype of f(Subtype), making function parameter types _contravariant_.

C++ legality issues aside, there are other ways in which changing the function pointer type _could_ kinda make sense (covariant return types, or converting a void(*)(void*) to void(*)(int*) since function parameter types are contravariant), but I'm harping on this one because it's very wrong and it's a very common mistake.
@leokiller123able9 ай бұрын
@@_noisecode Thank you for your detailed answer. The only thing I don't agree with is the pointer sizes: I think it is safe to assume that every pointer has the same size; it would be a nightmare otherwise and the void * type would have no purpose. Even standard library functions like memcpy assume that every pointer has the same size. But function pointers are another thing, and you're right about them; I made a mistake in my previous answer, thank you for pointing it out. So if you want your function to take a pointer of any type, you should take a void * as a parameter instead of casting the function itself.
@_noisecode9 ай бұрын
Thanks for the discussion! For completeness, I do still maintain that a compiler could give int* and void* different sizes and everything would be fine. memcpy doesn't make any assumptions about the size of anything besides void*, and if you imagined that (for example) int* was 64 bits and void* was 128 bits, then the compiler could simply widen int*s into void*s when a conversion takes place, just like an int gets widened to a long long when a conversion takes place. But in reality, yeah, the address size will almost certainly be the same for all (data) pointer types in a given architecture.
@theCapypara Жыл бұрын
I love these videos so much, I'm a huge geek for programming language design, and Rust is just a so beautifully constructed language.
@rsnively Жыл бұрын
Another banger
@asdfghyter Жыл бұрын
So, if I understood this correctly: if you want to avoid the monomorphizations in Rust, you can explicitly cast your function to an fn pointer and so avoid the code duplication in the compiled binary, at the cost of getting an indirect call?
@_noisecode Жыл бұрын
Yep! Although in my experimentation, even if you do this rustc will still sorta sidestep you sometimes and generate a fully monomorphized function even when you handwrote the indirection. I haven’t looked into why and it could be just because somehow it was smarter than whatever my experiment code was, and in “real” code it would keep the indirection in there.
@virkony Жыл бұрын
14:28 Hmm... I thought that CPUs do have indirect branch prediction and that GCC does have -findirect-inlining, which should work for templates and code within the same module.
@_noisecode Жыл бұрын
-findirect-inlining is an optimization that tries to _fix_ the issue I present in the video: C++'s type system causing indirect calls in higher order functions when you use a plain function name. Needing to turn on a special optimizer switch is kind of the same as needing to wrap the function in a lambda; it's an extra step you need to take to get good codegen because C++'s language semantics give you indirect calls in HOF by default, whereas Rust's give you static dispatch by default and you opt in to dynamism, rather than having to opt in to good codegen like in C++.
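To make the "extra step" concrete, here's a minimal sketch in the spirit of the video's calculate() (the names are made up and this isn't the exact code from the video):

```
int f(int x) { return x + x; }

template <typename Fn>
int calculate(Fn fn) {
    int sum = 0;
    for (int i = 1; i < 5; ++i) sum += fn(i);
    return sum;
}

// Passing the name deduces Fn = int(*)(int); the type says nothing about
// *which* function will arrive, so the instantiation is built around a pointer.
int by_name() { return calculate(f); }

// Wrapping the call in a lambda gives Fn a unique closure type whose call
// operator is known statically, so the call to f can resolve without indirection.
int by_lambda() { return calculate([](int x) { return f(x); }); }
```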
@JFMHunter Жыл бұрын
this might just be a difference of opinion, but I think the more inviting, more common syntax (normal functions) **should** be more ergonomic and generic. As in, by default, functions should be typed based solely on their signature. And only if you care specifically about inlining and performance should you make that explicit, like using a lambda in C++.
@wChris_ Жыл бұрын
Actually, C++ with the lambda only got optimized because f got inlined into the lambda; for more complex functions, C++ might not optimize it fully. However, the dynamic dispatch could still be optimized away, since the lambda knows which function it's calling, and the lambda itself can still be inlined.
@electra_ Жыл бұрын
I'm guessing that if you wanted specifically to avoid monomorphization and do dynamic dispatch, you would simply pass a Box<dyn Fn()> instead of a generically defined function?
@_noisecode Жыл бұрын
Yep! Though I'd probably just pass a `&dyn Fn()` or a `fn()` if you just need to call the function (and not hang onto it) -- no need to pay for Boxing it up in that case.
@denkalenichenko4124 Жыл бұрын
Funny to see these weird moments with a C and C++ background
@Alkis0511 ай бұрын
But f and g don't have the same signature, since they have different names. The name of the function is part of the signature. That is why in C++ you have the override keyword, which forces you to implement a method with the exact same signature when inheriting.
@nirmalyasengupta688310 ай бұрын
Excellent piece of work! The premise is posited; the observations are presented; experiments are made; the conclusions are drawn! I am not really a beginner with Rust, but this video has been a real addition to my knowledge of how the Rust compiler works! Thank you, @Logan.
@maninalift11 ай бұрын
Thank you. Your analysis is very clear and useful. Do you know how Haskell typeclasses work? I'd be very interested if you did a comparison of those to Rust traits, in terms of how dynamic dispatch and monomorphisation work. Obviously Haskell doesn't tend to give guarantees about the binary representation of things in the same way as Rust and C++.
@simonfarre49079 ай бұрын
Quick note: when you can, make your functions constexpr. On GCC, simply making the function constexpr means you don't have to wrap it in a lambda. But yeah, the idiom is a major headache.
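Roughly like this (a sketch with made-up names, assuming a calculate()-style higher-order function like the one in the video; whether the lambda wrapper really becomes unnecessary depends on the compiler):

```
constexpr int f(int x) { return x + x; }

template <typename Fn>
int calculate(Fn fn) {
    int sum = 0;
    for (int i = 1; i < 5; ++i) sum += fn(i);
    return sum;
}

// The hope is that GCC can fold this call without the lambda idiom, since a
// constexpr function is implicitly inline and its definition is always
// visible at the point of use.
int use() { return calculate(f); }
```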
@kajacx Жыл бұрын
It's amazing that in 2023, you still have to pass iterable objects as two arguments (.begin() and .end()) in C++.
@_noisecode Жыл бұрын
For many algorithms, you don't: en.cppreference.com/w/cpp/algorithm/ranges The algorithms like std::reduce didn't get Ranges versions in C++20, so I had to use .begin() and .end() for the std::reduce call. But C++23 is evidently getting Range-ified reduction algorithms that will make this code prettier: en.cppreference.com/w/cpp/algorithm/ranges/fold_left
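For example, the C++23 spelling might look something like this (a sketch, untested against a real C++23 toolchain; the function name is made up):

```
#include <algorithm>   // std::ranges::fold_left (C++23)
#include <functional>  // std::plus
#include <ranges>

int calculate(auto f) {
    // No .begin()/.end(): fold_left takes the range directly.
    auto mapped = std::views::iota(1, 5) | std::views::transform(f);
    return std::ranges::fold_left(mapped, 0, std::plus{});
}
```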
@Amir_404 Жыл бұрын
This is a textbook example of a strawman argument. You jumped through a lot of hoops to force the compiler to generate that bad code while also doing a lot of nonsense to bloat the C++ code. The non-malicious C++ code that does this is as follows:

```
#include <ranges>

__attribute__((noinline)) static auto calculate(auto f) {
    int sum = 0;
    for (int i : std::views::iota(1, 5)) {
        sum += f(i);
    }
    return sum;
}

inline int f(int x) { return x + x; }

int main() {
    return calculate(f);
}
```

You actually have to know a lot about C++ to write the code in a way that it doesn't compile down. You used __attribute__((noinline)), which doesn't have an equivalent in Rust (#[inline(never)] doesn't actually work here, it's complicated). You made calculate public so the compiler couldn't assume anything about its input, since your program could also be started from calculate. You used template instead of auto to make your code look scarier, and your template doesn't even work as a template because you hard-coded what could be passed in. You used such a convoluted method of iterating that I am kind of impressed. You made calculate a lambda, which is just nonsense. The reason wrapping your function in a lambda changed your code so much is because you created a copy of calculate that isn't linkable, so the compiler could safely check what f was. It is making the function static/inline that gives the performance boost. The only performance boost that lambdas offer is that they make it near impossible to mess up the build process.
@_noisecode Жыл бұрын
It's pretty easy to construct an example that proves it's the type system (not any linkage shenanigans) that is what lets the C++ code compile down to nothing with a lambda. godbolt.org/z/8jsPTs4Yc Everything in this example is fully public and externally visible, but because the code inside the function object F is known and can be called statically, it can be fully inlined inside the instantiation of calculate. This is the same thing that happens with a lambda: the code in the function call operator is known statically, so it can be inlined based on knowledge of the lambda type alone. The linkage stuff you're talking about facilitates function cloning, which is a separate optimization that actually has to swim upstream from the C++ type system to get the good codegen.

I reject the claim that I made the C++ code any more complicated than it needs to be. I wrote idiomatic C++20 Ranges code that you'd see on any slide in any fancy Jason Turner talk. I wrote the C++ function declaration (as well as the Rust function signature) 'longhand' so the viewer could see the different parts that match between the two languages. I didn't intentionally make the C++ code scary--I like C++ and I think its version of calculate() as shown in the video is perfectly pretty. There are a few other things you say in your comment where I don't quite understand what you are accusing me of--but yes, the function template calculate is 'public' (I guess?) but also it's implicitly inline, since it's a function template that's implicitly instantiated. I don't understand what you mean that I "hardcoded what could be passed in." Also you say that Rust's inline(never) doesn't work here, but it does: rust.godbolt.org/z/5sWs98oK4 It prevents calculate from being inlined _inside its caller_, which is what I needed in order to demonstrate my point about the codegen of calculate itself.

I see that your version of calculate() does indeed compile better than the ranges+std::reduce version, even without static+inline: godbolt.org/z/fbGr7c7hv. I'm honestly surprised they yield different codegen, but you can see from the symbols in the generated code that it's because Clang 'clones' your version of the function, which, as you probably know and I already mentioned, is an optimization Clang and GCC both do to combat the _exact_ performance pitfall I'm referring to in the video--a function that needed indirect calls because of the type system being copied and then optimized into a special-cased version for a specific call site. This, again, supports my argument that the C++ type system works against efficient codegen for higher order functions (and we need special compiler optimizations to fix it), where Rust's works toward it.
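For reference, the shape of the function object example from the first paragraph is roughly the following (a from-memory sketch rather than a copy of the godbolt code; the names are made up):

```
// An externally visible function object whose call operator is known from
// the type alone -- no linkage tricks involved.
struct F {
    int operator()(int x) const { return x + x; }
};

template <typename Fn>
int calculate(Fn fn) {
    int sum = 0;
    for (int i = 1; i < 5; ++i) sum += fn(i);
    return sum;
}

// The instantiation of calculate for F can inline F::operator() purely from
// knowledge of the type, exactly as it can for a lambda.
int use() { return calculate(F{}); }
```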
@_noisecode Жыл бұрын
Small correction: I did accidentally leave a superfluous `views::common` in the C++ code in the video where it's not strictly necessary. It was needed in an older version of calculate() and I forgot to remove it when I golfed the function a little. `views::common` is a no-op when it's not necessary though, so it shouldn't affect the final product other than some line noise overhead. I do admit it's a minor mistake in the video though.
@natashavartanian Жыл бұрын
I want Logan Smith to be president
@azaleacolburn Жыл бұрын
Bro wtf. We need sociology majors in office, not tech bros.
@natashavartanian Жыл бұрын
@@azaleacolburn Sorry. It’s too late. My comment is a binding contract and he will be sworn in later today. You’ll have to be faster next time
@azaleacolburn Жыл бұрын
@@natashavartanian Curse you Natasha the Platypus!!!
@gzoechi10 ай бұрын
When should he find time to create videos when he's presidenting?
@Primeagen9 ай бұрын
I want him to be prime minister
@autistadolinux5336 Жыл бұрын
In turn, simple imperative C (no C++) inlines the function pointer, lol. It just becomes something like:

```
mov eax, 20
ret
```

I didn't try with C++, but I believe the results would be the same if we used more imperative programming in C++. I think the problem here is that C++ is trying to be what it cannot be. C++ is (apparently) an evolution of C, and C is imperative, so C++ is too. The whole thing in C++ with RAII and other features feels like Bjarne Stroustrup tried to make things feel more like stack-allocated memory, since you don't need to handle the destruction of the memory (much), but then the language got distracted with functional programming and became what it is now. As such, Go is a better "evolution of C" than C++, although I don't like GC.
@skeleton_craftGaming Жыл бұрын
I'd like to see the same experiment but with the STL's function class, because it is considered bad practice to deal with raw pointers in C++ in general, not just function pointers.
@_noisecode Жыл бұрын
Feel free to try it out, there are godbolt links to the experiments in the description! I expect the codegen using std::function to be pretty similar to the function pointer version or slightly worse. std::function involves type erasure so it probably has to do a virtual call or some other form of added indirection. FWIW, function pointers in C++ aren't nearly as dangerous as raw pointers to objects, since there's not really the same puzzle of ownership--functions (usually) exist for the life of the program so you don't have to worry about stuff like double deletion and you can be pretty sure they won't dangle unless you're doing some very advanced stuff. You do have to worry about null checks, but the same is true for std::function.
@ХузинТимур Жыл бұрын
std::function is generally worse compared to lambdas or manual callable structs (with an overloaded operator()) because it is opaque.
@petermichaelgreen Жыл бұрын
std::function is powerful but that power comes at a price. Creating a std::function from a lambda will give you a single indirection on the calls and additionally will give you indirect calls to a "manager" function whenever the std::function is copied or destroyed. Creating a std::function from a function pointer will give you a double indirection!
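As a sketch of the two cases (names made up; the exact number of hops depends on the standard library implementation):

```
#include <functional>

int f(int x) { return x + x; }

int main() {
    // Stored target is the closure itself: one type-erased dispatch into
    // code whose callee is known statically.
    std::function<int(int)> from_lambda = [](int x) { return f(x); };

    // Stored target is a function *pointer*: the type-erased dispatch is
    // followed by a call through the pointer -- a double indirection.
    std::function<int(int)> from_pointer = f;

    return from_lambda(1) + from_pointer(2);
}
```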
@NickDrian Жыл бұрын
Amazing description, keep it up!
@jursamaj Жыл бұрын
I don't get it. You complain that the "problem" in the C++ code is that the compiler doesn't know what function the calculate function will be receiving. But that's the entire *point* of passing a function to another function: so that you can pass *any* function in and have it used. If you know what function you're going to use, just call that function inside the other function.
@_noisecode Жыл бұрын
You are confusing static polymorphism with runtime polymorphism. Templates provide static polymorphism only (which is their strength, and indeed their entire purpose). I want to be able to pass any function to calculate() in my source code, yes--it's polymorphic at compile time--and then I want the compiler to generate customized, optimized, non-generic code for the specific call site, _as though_ I called f() directly inside calculate(). That's what templates are for. They are generic recipes for stamping out code that is non-generic at runtime. As I show in the video, the C++ compiler is often forced to generate runtime-polymorphic code when I'm using a tool that is supposed to give me static polymorphism. The Rust compiler can make the code _fully_ non-generic at runtime since the type system gives it more information to work with during monomorphization.
@tiranito28347 ай бұрын
@@_noisecode You are the one who is confused. In your example, what you're doing IS runtime. You are telling the compiler that you want to make a template instantiation of a function named calculate that takes as a template parameter a function pointer type, and as a function parameter a function pointer. This means that it WILL call the function through a function pointer dereference. That is literally what you are telling it to do. If you want to do the same thing completely at compile time, your function NEEDS to be the template argument, not the function argument. So for instance, you could do something like this (try it out yourself in godbolt if you don't believe me):

```
template <auto F>
int calculate() {
    int ans = 0;
    for (int i = 0; i < 5; ++i) {
        ans += F(i);
    }
    return ans;
}

int fn(int x) { return x + x; }

int main() {
    return calculate<fn>();
}
```

And you will see that it does indeed compile down to like 3 lines of assembly, so it literally just returns 20. YOUR code is the one that is preventing the compiler from performing any optimizations, because YOU are the one misunderstanding how C's and C++'s type system works. If you want something to be done at compile time, either you pray that constexpr works, or you actually make use of the compile-time mechanisms offered to you by the language. As a bonus, if you want to be able to pass any type of function that returns any kind of type, rather than limiting ourselves to functions that return int, you can do the following:

```
template <typename T, auto F>
T calculate() {
    T ans = 0;
    for (int i = 0; i < 5; ++i) {
        ans += F(i);
    }
    return ans;
}
```

And then you just have to call return calculate<int, fn>(); And if you think that's ugly looking and you don't want to explicitly tell the compiler all the types, you can always use macros to pretty things up... or just change the template arguments. I don't know exactly what it is that you were trying to achieve with this example (mainly because it makes no sense at all, it doesn't even help your argument because of what I just explained...) so I'm just left here guessing what it is that you intended to actually write.
@jkjoker777 Жыл бұрын
amazing, looking forward to the next one
@Vagelis_Prokopiou Жыл бұрын
Excellent video man. Thank you.
@godnyx117 Жыл бұрын
So... it comes down to inlining and optimizing your code again? Well, what if you have a library with a higher-order function (one that calls a passed-in function) and you do NOT want it optimized and inlined, so that when a new version of the function comes out you don't have to recompile all the programs that use it? Another way to fix this problem is to have two different function pointer types: the regular function pointers, and another one that, when used, signifies that the function may or may not be inlined (but the choice is up to the caller). After that, the caller may choose to pass a function as either of the two types and the compiler will do its job! Anonymous functions (lambdas) can also choose which of the two types they want to be. I'm planning to implement this syntax for my programming language. I'll never understand why these new languages make things so complex and complicated when they could have created a simpler design... Before I'm out, I will type the quote of the greatest programmer to have ever lived, Terry Davis! Rest in peace, legend! "An idiot admires complexity, a genius admires simplicity!"
@eddieantonio Жыл бұрын
Really enjoying these deep dives into topics I would otherwise not think twice about!
@flatmapper Жыл бұрын
Do you use Manim for visualization?
@_noisecode Жыл бұрын
Yep! I always give credit in the description. :)
@flatmapper Жыл бұрын
@@_noisecode amazing)
@virkony Жыл бұрын
Hmm. Actually it is indeed an extension of what is part of the type. Since the code of the function gets into the type, all specializations include not only the type signature but the body itself. This is effectively a compromise: the more you put into what varies specializations, the more code copies you have, but they can get better optimizations. But it is still unfair to compare the two based on output code with artificial limitations. GCC is able to produce "return 20" too; it's just that when you say "noinline" in C++, it doesn't inline a particular signature that encodes only types, not the code passed in, and thus it cannot perform more optimization without breaking the ability to execute it. And apparently C++ lambdas do encode the body in the type as well. So effectively C++ does include a Rust-like way to pass functions; it just has to abide by the old behavior of automatically casting functions to pointers.
@_noisecode Жыл бұрын
I believe you are saying pretty much what I said in the video. C++ does have a way to do Rust's "type-based dispatch", but it involves "opting in" and making your code more complex by wrapping things in lambdas, whereas in Rust it's the default semantics of all functions. I used noinline to help me illustrate that point, but if the noinline is distracting there's an example in the pinned comment that doesn't use it.
@virkony Жыл бұрын
@@_noisecode Yeah. I first thought it was caused by noinline, but in the end it's a difference in how precise a type (key) the template is parametrized with. An instantiated template in C++ allows passing a pointer to any code that satisfies the call signature, while the Rust specialized version can accept only the specific one with exactly that body. C/C++ comes from times when you would prefer to reuse the same code as much as possible. And actually C++ templates were sometimes frowned upon exactly because they bloat the code by having different instantiations rather than reusing generic code. Nowadays it is probably only a small number of people who care about how many copies of the same function you have, to fit some code size goals. So "new" languages can leverage that. But old languages that have to retain compatibility cannot simply do that when the same concepts are involved as in old times (passing a pointer to a function as an argument by simply referencing its name). Thus, it is simply an unfair comparison 😛. If Rust were shackled with requirements from several decades ago, it would suffer too. Since lambdas are not as old as functions, they can have different behavior, allowing them to catch up with current times.
@_noisecode Жыл бұрын
Totally agree--I mentioned in the video that C++ did the best it could (it also had to deal with C compatibility). C++11 lambdas, in many ways, represent decades of evolution of our understanding of what 'good defaults' are for C++'s language constructs, and lambdas do many things right that the rest of the language does 'wrong' by today's standards (e.g. lambdas all having different types, being const by default, etc.). Something I probably should have been clearer about in the video is that I love C++ and think it's a fantastic language in many ways. It is an incredible piece of human technology, and it ran (not walked) so Rust could also run. The point of the video was not really to bash it, just to highlight why this different design choice that Rust made has practical benefits for your code.
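Two of those lambda defaults in a tiny self-contained example (just a sketch; the variable names are arbitrary):

```
#include <type_traits>

auto a = [] { return 1; };
auto b = [] { return 1; };

// Every lambda expression has its own distinct type, even with an identical body...
static_assert(!std::is_same_v<decltype(a), decltype(b)>);

// ...and the call operator is const by default, so a const closure is still callable.
const auto c = [] { return 2; };
int use() { return c(); }
```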