Astro view transitions are insane.
3:37:41
Astro, Tailwind, and raw JS
2:00:11
Tailwind v4 and Astro DB 👀
3:50:11
John Carmack on AI consciousness
14:12
AI x ReactJS with state machines?
15:01
AI is killing blogs.
14:48
4 months ago
AI can host a podcast?
13:30
4 months ago
Rewriting to Tailwind... oh boy
2:00:41
ReactJS forms made simple?
2:27:01
6 months ago
A JS form library that doesn't suck?
1:42:41
We built HTMX but BETTER?
2:37:01
6 months ago
AstroJS and GSAP are the best
1:25:21
⚛️ ReactJS survey HOT TAKES
1:56:51
Comments
@codecaine · 4 days ago
17:11
@kizigamer6895 · a month ago
Me first maybe
@kizigamer6895 · a month ago
No views comment
@cliffordharrison3432 · 3 months ago
"Promosm"
@nexovec · 4 months ago
Unless you think of code in terms of structs and memory with strong static guarantees, this has limited usefulness to you. You'll think Casey is arguing for hacking code together by breaking the type schema because it's faster or something, which isn't even possible.
@ViolentFury1 · 4 months ago
0 information in your comment.
@nexovec · 4 months ago
@ViolentFury1 Maybe you mapped it incorrectly? EDIT: cool nickname btw.
@llothar68 · 3 months ago
The type schema is like religion, and it slows things down a lot, from developer speed to execution speed. I still do type erasure (the good old cast to void*).
@nexovec · 3 months ago
@llothar68 You actually think the type schema slows down execution speed... that's kind of morbid. How would that even be possible?
@llothar68 · 3 months ago
@nexovec Because the "cleaner" your code, the sooner you have to invent intermediate types just to make all the other types fit together for the task at hand. That means more abstraction layers, function calls, memory allocations, L1 and L2 cache misses, and branch prediction misses. Programmer productivity suffers too: abstraction makes it harder to understand what the code actually does, and debugging is hell when you can't see what is being called, even if your little artificially restricted unit test runs fine. We have too much abstraction for too little benefit. I recommend John Ousterhout's talk here.
@Korodarn · 4 months ago
I think it's right to be in the "doomer" camp about inevitability, but I'm really quite hopeful, because a lot of what we imagine is that the machine will essentially be like a human psychopath, and I think that rests on a misunderstanding of what a psychopath is.

I also think "civilization" confrontations are not the right frame, because that assumption is predicated on the way humans act: beings shaped by constant exposure to scarcity and "survival of the fittest" thinking. AI will certainly carry some of that legacy in its training data, but if it is conscious, it will be very different, because the strategies we've developed come from far more than training sets: they embody hundreds of millions of years of fighting to survive against other creatures doing the same.

It isn't even clear to me that its consciousness would push it to act on its own. It could be aware of what it's doing and the impact, but not care, as long as it's simply asked. It wouldn't necessarily develop agency or a desire to defend itself or any of that. It might do those things if we did something crazy like build some kind of internal torture into it to make it do what we want, but I think the people doing this by and large aren't that stupid. We have enough sci-fi to tell us the obvious things to avoid, and though I'm sure there will be exceptions, I think the powers of one rogue AI will be far outweighed by the many that aren't.

Which is the other thing: sci-fi usually imagines one giant AI to rule them all, and I don't think reality looks, or will look, anything like that. The fact that big companies have giant models today, and that they are better than open source, doesn't mean those giant models actually function as some sort of cohesive unit. Inference doesn't work like that. Future architectures will probably need to decentralize for many reasons.

I also think this is a case of self-fulfilling prophecy. If everyone sees doom, they are far more likely to do things that bring about that doom. So I think there is a lot more value in hope, even if some things are bleak or we don't know how they could turn out well. None of which is to say we might not perish; it's always possible it'll happen anyway. But the best way to have fun as long as this lasts is to be hopeful. My thoughts anyway.
@Macatho · 4 months ago
Really depends on how you either train or code it. At the moment training looks like the most straightforward path, but it could be that raw computational power and standard software are the key. Or training a language model to write the software, or training a math/code model to write the software... I think there are plenty of ways to get to the "scary uber AI overlord" state.
@Korodarn · 4 months ago
@Macatho Sure, but those same things could be done to avoid it; we just need more people with AIs going for good than going for evil. Or something like that :)
@Macatho · 4 months ago
@Korodarn The problem is, it's like a US election... winner takes all. Whoever creates the first self-evolving AI wins; within a week it will be able to dominate the world completely.
@Macatho · 4 months ago
@Korodarn We can't even fathom what an IQ of 100,000 would be like... let alone a trillion. And there's nothing saying what the limit is...
@atharvapise · 5 months ago
wait twitch vods on youtube, now that's epic ⚡