Uh, dude, your hand is burning. Also, white phosphorus is on the Geneva check list.
@ronbaer679 күн бұрын
nothing is on the check list the first time
@deebo99819 күн бұрын
which is great, because I have shelled forests with White Phos, so worth it
@michaelmaloskyjr9 күн бұрын
I never recovered from Spec Ops The Line either.
@finkamain16219 күн бұрын
Can't be a war crime if you win
@peetiegonzalez18459 күн бұрын
5:23, The models "understand" physics like a baby understands physics. It knows what pouring water looks like, so it shows you pouring water. It knows what ripples on the whisky look like, so it shows you ripples on the whisky. What it didn't seem to know is that if you pour water from one glass into the other, the amounts of water in each glass should be different when it's done.
@frequencyoftruth23039 күн бұрын
Best explanation, versus the rest of these weird egomaniac knee-jerk reactions: all emotion and no logic.
@ThatsMySkill9 күн бұрын
Yeah, for sure. A physics sim is way slower but more accurate. You can literally see that the glass he's pouring from looks like it has an invisible lid on it.
@EvilGremlin1008 күн бұрын
Yeah, but like a baby, it'll learn and understand better over time. His point is it has SOME understanding without being taught; it's picked it up itself.
@spx23278 күн бұрын
I agree, but you shouldn't forget AI is still in its infancy and it is making enormous steps each generation. I have no doubt this tech is going to be very disruptive for many industries (in both good and bad ways).
@TimPiatek8 күн бұрын
Just imagine where we'll be two more papers down the line...
@dtkedtyjrtyj9 күн бұрын
I think "interpolation" is a better word for it than "simulation". "AI" is very good at interpolation, not so good at simulation. Take the "filling whiskey glass" for instance. You "cheated" by saying it should fill the glass, so it generates a filling effect. However, a simulation would continue working after the glass fills up, spilling over, getting the table wet and eventually filling the room; that will not happen with the effect in the footage.
@Meteotrance7 күн бұрын
Yes, it's an enhancer, not a physics simulation, but it can be fast and good enough on modern GPUs. NVIDIA has worked on that for a decade, same as most AI software researchers.
@MrGTAmodsgerman5 күн бұрын
Can't agree. Core AI papers throughout the years were just about understanding a scene in an image or video (segmentation) to detect objects and such, for self-driving, security cameras and whatnot. You could then basically give it no prompt at all and let the AI guess, and it will do its thing just by understanding what happens next, based on the general video knowledge or images it was trained on. Which would result in the glass spilling over, the table getting wet and everything you said, since in the best case it was trained on such physics simulations. That's also why text-to-video generators just do the physics and light reflections without you explicitly describing them. The point is having control over what you actually want to happen; then you have to tell the AI that. When SORA was announced, people started speculating that they used Unreal Engine to train it on the concept of physics, to get consistency and simulations like that tiny pirate ship in a cup example. One problem with AI at the moment is exactly that you often can't control it and it just does its own thing.
@dtkedtyjrtyj5 күн бұрын
@@MrGTAmodsgerman > best case was trained on such physics simulations Exactly, _if_ it was trained. Interpolation, not simulation. A simulation will generate new situations from simple rules; an interpolation will generate garbage.
@MrGTAmodsgerman5 күн бұрын
@@dtkedtyjrtyj Yeah, it's not a physics simulation, it's AI. But it's able to mimic physics to a point where it doesn't matter. See what someone else here commented about it being used for actual physics. Your argument above was just a bad argument, though, because it's not about having to give it a prompt. If you trained the AI on the whole concept of physics, it would mimic all of physics without you really being able to tell the difference in the end. It's like when you have a dream and you interact with physics: it all just happens inside your brain while you're not physically doing it. And if you tested that AI's mimicry in the real world, you would get what the AI already produced in its mimicry.
@dtkedtyjrtyj5 күн бұрын
@@MrGTAmodsgerman I don't understand your objection. Train it on all of physics and it can interpolate all of physics; it's still not a simulation.
@Blaise8159 күн бұрын
The main issue I see is consistency. What a physics system provides is that the developers know the system is working based on a set of rules that are always followed. AI art has a tendency to hallucinate or produce inconsistent output, especially applied over a long period of time. Maybe some inconsistency could be visually okay in physics simulations that are hard to calculate at scale (like fire or water or gas physics), but even lighting on characters might be too much to expect. I do not think it could consistently and believably light a character in every conceivable environment. I agree with what others have said; AI is not calculating any physics, any more than a great painter calculates lighting effects on clouds. The painter has a wealth of experience with clouds and knows how they should look under different lighting scenarios; the same goes for AI.
@ivomirrikerpro38052 күн бұрын
I think AI will be used to deliver better game art, texturing, models, conversations and so on, but I don't see it being computationally effective or, like you said, predictable. My hope is that AI will be able to go through a game's texture library, separate all the details into micro-tiling textures, and blend those with bespoke textures across every game asset and character, giving you reduced texture sizes at higher quality. You could then go further and ask it to sort textures into style subsets, effectively allowing you to re-theme the game; same with models, etc.
@Nobody-Nowhere9 күн бұрын
6:00 These models don't need to have physics models, because they are trained on video and images that are real, so that's what they output. If they were trained on fake data with fake physics, that's what they would output. There is no need for them to have any capability to model physics, just what it looks like when things obey the laws of physics. This is also why the water in the glass does not actually flow, it just looks like it flows. The glass it's being poured into does not fill up, and the glass it's being poured from does not lose any water.
@StellarHarbor9 күн бұрын
Let him enjoy AI fever
@z3dar9 күн бұрын
This is a good point and very correct, but I do wonder how good the generative models could get at 'pattern-based visual approximation' of physics. Similarly to how LLMs are getting quite good at math, video models could get really good at physics without any real causal physics modeling. One must wonder whether LLMs actually "understand" something about math through the internal representation in the model's weights. Probably not, but there has been talk and speculation of model scale introducing complex emergent behaviours. So even if it is "just" predicting the next word or the next frame's pixels, there could be some weird form of physics model in the network of weights. Or not, but even then, there will surely be hybrid approaches where algorithm-based simulations are rendered by generative models. Actually, that's going to be much better than either approach, algorithmic or generative, is alone.
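[Editor's note] The hybrid approach mentioned at the end of that comment could look roughly like this: a conventional simulation owns the state (so it stays consistent and controllable), and a generative model is only asked to turn that state into pixels. Everything here, including `GenerativeRenderer` and its `render` method, is a hypothetical placeholder rather than any real API; it is just a sketch of the division of labour.

```python
# Sketch of the hybrid idea: rule-based simulation for state, generative model for appearance.

class GenerativeRenderer:
    def render(self, state: dict, prompt: str) -> bytes:
        # A real system would condition an image/video model on `state` here.
        return b""  # placeholder frame

def physics_step(state: dict, dt: float) -> dict:
    """Deterministic, rule-based update: the ground truth the renderer must respect."""
    x, v = state["pos"], state["vel"]
    v -= 9.81 * dt                      # gravity
    x += v * dt
    if x <= 0.0:                        # simple floor collision with a lossy bounce
        x, v = 0.0, -0.8 * v
    return {"pos": x, "vel": v}

def run(renderer: GenerativeRenderer, steps: int) -> None:
    state = {"pos": 2.0, "vel": 0.0}
    for _ in range(steps):
        state = physics_step(state, dt=1 / 60)   # consistent, controllable state
        frame = renderer.render(state, prompt="photoreal bouncing ball")
        # present(frame) would hand the generated frame to the display

run(GenerativeRenderer(), steps=10)
```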
@Ricolaaaaaaaaaaaaaaaaa9 күн бұрын
This isn't true though. You can have an entire model trained on synthetic data except for one piece of real world data. Then tell said model to generate based on the physics of the real world data only. Guess what...it learns the physics and applies it to the rest.
@lolmao5009 күн бұрын
Yeah 90% of this video is BS
@nugger9 күн бұрын
what about AI that creates those alien looking engines that are more efficient?
@rovhalt66509 күн бұрын
I don't really care about hyper-realism. I just want a good game. And good movies.
@MeowtualRealityGamecat9 күн бұрын
I think if it’s easier to make the games and films they can spend more time on the story.
@ДмитрийКарпич8 күн бұрын
@@MeowtualRealityGamecat It's more likely that WE can spend some time on the story with an AI assistant, and that WE can use modern tools to make OUR games and movies. I wonder if someday that will be possible. That would be the thing.
@RustedCroaker4 күн бұрын
But instead of making good games they will keep feeding you with woke agenda. With next-gen realism.
@kristinaF543 күн бұрын
You're in the minority. Visually photoreal graphics and good story/campaigns are the future.
@mirzaaljicКүн бұрын
@@MeowtualRealityGamecat AI will take care of the story writing too. Basically, at some point, you will be able to prompt an AI interface to "build you a video game" according to whatever parameters you want. On a side note, I don't know if that will be good or bad, but I know it will most likely lead to individual experiences that no one will be able to relate to, because each product of AI prompting will be oddly similar but still different enough for each person to have a unique experience with it. What makes big movies popular? The fact that many people can relate to the same things after they see them. Same goes for books, games, etc. But if you have an AI system that builds individual experiences for each of us, in the end there won't be a common culture anymore. Things will be individualistic to the maximum, which in some cases might be a good thing, but it will definitely cause societies to break apart and have less cultural homogeneity.
@phlippbergamot57239 күн бұрын
Corridor Crew wants to talk to you, lol. On a serious note, these models don't actually know the physics behind the effects or movements of things. They only know how those substances behave in a certain environment. It simulates those movements according to how it understands the substance to behave. In a way, yes, it simulates physics, but not by actually doing the physics calculations. It's like asking a child to draw fire: they draw how fire behaves, but they have no knowledge of the physics that determine how that fire looks.
@IceMetalPunk9 күн бұрын
If you ask any human to imagine fire, the vast majority will not be doing the physics calculations in their heads, but they will imagine a correct fire behavior. You don't need to understand the math behind the physics to have an intuitive understanding of the physics.
@hogandromgool20629 күн бұрын
@@IceMetalPunk The argument in the video is that a physics engine is being run, though, which there isn't. As the person above explained, it's more like an observation engine. It knows how things should look, so yes, in that sense it understands physics. There is, however, the issue when you ask it to describe scenarios it doesn't know the visual outcome of. Meaning you couldn't use it to simulate a scientific premise, but you could definitely use it to approximate the visual part of the physics.
@Chromiumism9 күн бұрын
This line of thinking is great and all for understanding how they learn new things. There comes a point however where it doesn’t matter. For one, our understanding of the direct inner workings is limited so we can’t prove a lack of understanding. And for two, a perfect imitation is inherently indistinguishable from the original.
@phlippbergamot57239 күн бұрын
@@IceMetalPunk Then it isn't a physics simulation. It's simply a fire depiction, or a water depiction, or a smoke depiction. There is no simulation going on.
@hogandromgool20629 күн бұрын
@@Chromiumism Well, no. The difference is that an imitation cannot imitate things it does not have information about or an example to simulate from. A true simulation is deterministic: you should be able to simulate scenarios accurately relying on input variables alone, with no preconceived notion and no need to know the output in order to simulate it. While you're correct that in some fields it does not matter, it does matter when you try to use it for repeatable output. The way AI works is fundamentally non-deterministic, meaning it could never be used to acquire truth, only approximate it. So for art, some games, weather and other systems that don't require too much accuracy it works fine, but when used in a more specific sense where repeatable results are key, it cannot (currently). I think it's important to note this distinction, because not noting it is like not noting that you're one decimal point off: it might not seem like much, but the accumulation of that offset will always skew the end product in a way that requires error correction, at which point I would argue it is no longer usable data.
@Slvrbuu9 күн бұрын
This is akin to intuitive understanding, where you essentially understand something to be the case without actually understanding why it's the case. We always understood that things go down when you drop them; we just didn't know why, and we never really stopped to wonder why. The AI learns how something should act without actually understanding why it should act that way. It's pretty neat to think about.
@AdmiralEisbaer9 күн бұрын
Physicist here. The AI neither understands nor simulates physics. It's essentially making a guess based on the data that it was trained on. It's a pretty good guess, considering the data was real-life video, but there is no simulation of the physical interactions being done, nor is there a guarantee that it's physically correct. However, does it need to be? In my research group, there are a lot of projects using AI that has been trained on actual physics simulations (that take days to complete) to then "guess" (in a few ms) the result of any scenario. And it is surprisingly good at doing exactly that, nothing more, nothing less. So as long as we understand that the AI is just guessing, and as long as the training data is based on real-world footage or even simulations, it will result in a pretty accurate representation of real life that is infinitely faster than an actual simulation.
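[Editor's note] The "trained on slow simulations, then guesses in milliseconds" pattern described here is usually called a surrogate model. A minimal sketch of the idea, using a tiny projectile formula as a stand-in for the expensive solver; scikit-learn's MLPRegressor is just one convenient choice for illustration, not what any particular research group uses.

```python
# Surrogate-model sketch: fit a small neural net to (input -> simulation output) pairs,
# then use the net as a fast approximate replacement for the solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(angle_deg: float, speed: float) -> float:
    """Stand-in for a slow solver: projectile range on flat ground, no drag."""
    angle = np.radians(angle_deg)
    return speed**2 * np.sin(2 * angle) / 9.81

# Build a training set by running the "slow" simulation many times offline.
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 1.0], [85.0, 50.0], size=(2000, 2))   # (angle, speed) samples
y = np.array([expensive_simulation(a, s) for a, s in X])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)

# The surrogate now "guesses" the result of unseen scenarios almost instantly.
print("simulated:", expensive_simulation(40.0, 30.0))
print("surrogate:", surrogate.predict([[40.0, 30.0]])[0])
```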
@MadsterV8 күн бұрын
and the reason why it's so fast is precisely because it's just a guess, not a simulation
@obnoxiaaeristokles38728 күн бұрын
I am also a physicist and I don't agree. I think he is right when he says that it must have some kind of understanding of physics. And, frankly, it's really obvious if you think about how humans used to come up with physics in the good old days.
@MrRecorder18 күн бұрын
@@obnoxiaaeristokles3872 Software engineer working on AI stuff chiming in (though not on these particular models). One of the big issues with these models is and will be the stability of the result, now and in the future. In this video the output was hand-picked, meaning curated to look good. If you want to use this tech for video games, you need to curate the models you are using to work well in the circumstances you want to apply them in, because otherwise you get insane results, like one glass pouring into another while never emptying, or just absurd glitches like you see if you play the Minecraft video-generator game thing, which is more a kind of lucid dream without object permanence. What this model is probably good at is producing the video output you would expect from pouring water, but when it comes to a game engine it probably falls flat if you want to produce specular highlights or caustics relative to your defined light sources, for example. Also take into consideration how these models are produced: the video generators ingest millions of minutes of video to get something passable. That is a super inefficient way of creating something that does what I want. Maybe there will be generic models in the future that can do a bunch of stuff at once, but those will then be super costly to run at runtime. There is no free lunch here. The result is really cool, though!
@Password_12348 күн бұрын
A person who also has a profession giving his two cents here: shouldn't we consider that many domains of physics are also about gathering a huge amount of data and then making an approximate best guess? No, I am not saying all of physics works this way, obviously. But especially when we go more in the direction of quantum physics or dark matter, don't we also collect as much data as possible, try to discover patterns, and then make the best approximation of the correct answer our current models are capable of giving? If you make an "actual physics simulation" of quantum mechanics, you're also not actually simulating the physical interactions, but basically performing a statistical analysis (yes, I am extremely oversimplifying, but in the end this is still what it comes down to). The guarantee that it is not fully physically correct is actually formally baked into the field itself.
@henriksundt71488 күн бұрын
I think it's right to say that it "understands" physics, and that it "simulates" physics, but NOT in the stepwise manner that traditional simulation does. Without understanding, it would not be able to produce realistic results from arbitrary initial conditions.
@RezaQin9 күн бұрын
Yeah, I don't want hyper realism, I want good games with good stories, with 60fps.
@rehakmate9 күн бұрын
Yep, and AI will not do that
@ULTRAOutdoorsman9 күн бұрын
Yeah same here, except I want them at 144 FPS. Actually I don't really care about the stories either tbh. But we all know that half a dozen games will be predicated on a subset of these features: "Wow! Look at the faces! Oh yeah and here's a game attached to it that plays identically to a 10-year-old Assassin's Creed but won't run properly on any consumer hardware."
@somethingsomeone96788 күн бұрын
@@ULTRAOutdoorsman "plays like a 10 year-old Assassin's Creed" part what really pisses me off about the gaming industry. I feel since late 2000s and early 2010s every game is same thing under the hood. No interesting physics, no innovative game mechanic, no interesting game design and no imagination. They're all glorified positive feedback loop machines.
@GreyDeathVaccine8 күн бұрын
How dare you?! 30 FPS and forced D.I.E, that's all you can count on.
@prozakable6 күн бұрын
But the industry prefers that you shut up and play battle royale or other crap like that. And for Xbox players: "our games are great at 30 fps, you don't need 60 fps".
@dudule123210 күн бұрын
I really don't agree that the model has some kind of understanding of physics. AI models are extremely ingenious statistics machines. Our best scientific models of physics do work based on statistics, but I highly doubt current AI models are good enough for statistical physics to come into play. This tech is insane nonetheless.
@Bluedrake4210 күн бұрын
I understand that perspective... but again... if an AI model can't simulate physics... please explain to me how it is clearly simulating physics when I test it 😅
@dudule123210 күн бұрын
@@Bluedrake42 it's not simulating physics, in your glass+water example, the amount of water increases over time !:)
@Bluedrake4210 күн бұрын
@@dudule1232 Well so look my point is, just because it is simulating physics BADLY doesn't mean it isn't simulating physics. I mean you can find physics simulations in Unreal Engine 4 that are also super wonked, but that doesn't make them any less of a physics simulation. All simulations are inaccurate to some degree. My point is that at a very basic level, there are definitely some preliminary simulations going on in the AI model's mind while it is rendering. There have to be, even for the basic simulations (faults and all) that it is rendering.
@tzav9 күн бұрын
@@Bluedrake42 Emulation vs simulation. AI is emulating, not simulating.
@Ricolaaaaaaaaaaaaaaaaa9 күн бұрын
AI models learn the same way humans do: by seeing data over and over again. They are statistics machines and understand physics as much as any human can. Additionally, I'd say PhD level is pretty good.
@Simply_Majestic8 күн бұрын
I doubt AI post-processing could ever replace something as big as in-engine physics simulation, because of the interactive nature of video games. At the end of the day, what it is is just a filter. I do think this technology has a place in video games though, and I think it is already being used in ways many people may not be aware of. For example, in Cyberpunk 2077 they have ray reconstruction for denoising when using ray tracing/path tracing, which is basically a technology that uses AI (machine learning) to enhance things like lighting accuracy and reflection quality, reduce ghosting, etc. And I believe AI in video games should stay as just that: an enhancer. Whether it is used to enhance visuals with ray tracing or to enhance performance with things like DLSS or frame generation, the use of AI in that way, for video games, I support 👍
@RogueAI9 күн бұрын
For video games AI generated physics would be fine, but it's just creating a convincing emulation of how physics work. Kind of like how you show a ball rolling into a tube to a kid and they know it'll come out the other side without needing a mathematics degree.
@James-dc6ft9 күн бұрын
Video games ARE convincing emulations of how physics work.
@MysticalLibraries9 күн бұрын
Whoosh!
@hogandromgool20629 күн бұрын
@@James-dc6ft Yes, but not in the same sense. There is a difference between deterministic and non-deterministic. In classical games the output of any action can be predicted, because the variables of the physics environment are predetermined. This is not the case with AI-generated anything; it is non-deterministic, meaning you cannot reliably predict the output or logic flow of the actions you make in the game. AI simulations (in their current state) could not be used to calculate complex simulations without knowing the end result. Deterministic systems work from the root up; they're chronological in nature. Non-deterministic systems are not: they start wherever they need to and make up results to fill the blanks to get to the next place they need to be. Think about it this way: in a deterministic system like our universe, you can predict how far you need to jump to get across a gap. In a non-deterministic system you cannot, because any variable can change at any time. You could put the same amount of force into each jump and go a different distance each time, and the gap would also change length arbitrarily. In fact, if you look up AI Minecraft you can see what that would be like (assuming the non-determinism adheres only to your perspective and is not influenced by others). It actually does a pretty good job of representing a non-Euclidean space.
@kanta321009 күн бұрын
@@hogandromgool2062 AI is good at things it has been trained on. Where it breaks is when you try to do something outside of its trained scope. If you use the same inputs with an AI model, it will generate the same output every time, so it can work as an algorithm. The problem with games and videos is that the input is always changing.
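[Editor's note] Worth pinning down what "same inputs, same output" means in practice: generation is usually only non-deterministic because fresh random noise is sampled each run. Fix the seed (and the rest of the inputs) and the sampling is repeatable; change the seed and you get a different but equally plausible result. A tiny illustration with plain NumPy sampling standing in for a generative model; `fake_generator` is purely hypothetical, no real model is involved.

```python
# Seeded sampling is repeatable: the "non-determinism" comes from the noise you feed in,
# not from the network itself.
import zlib
import numpy as np

def fake_generator(prompt: str, seed: int) -> np.ndarray:
    rng = np.random.default_rng([seed, zlib.crc32(prompt.encode())])
    return rng.standard_normal(4)   # pretend these are latent values for one frame

a = fake_generator("pour water into a glass", seed=42)
b = fake_generator("pour water into a glass", seed=42)
c = fake_generator("pour water into a glass", seed=7)

print(np.allclose(a, b))   # True:  same inputs and same seed give an identical result
print(np.allclose(a, c))   # False: a different seed gives a different, equally "valid" one
```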
@figloalds9 күн бұрын
You wrote "Gaijin Splatting", but I think you meant to say (and write in the YT Chapters) "Gaussian Splatting"
@Ole_CornPop5 күн бұрын
Sounds like a good War Thunder term 😂
@capoeragames208110 күн бұрын
When it comes to visuals, I'm not that excited. I wish we had similar levels of AI to help beginner devs with animation especially; nothing pains me more than seeing those Unreal Engine photorealistic games with terrible animations and jankiness. If an AI can help with that, I'm sold!
@ElektroBandit899 күн бұрын
I'm really pumped for the future of in-game conversations with NPCs using AI
@piotrek76339 күн бұрын
@@ElektroBandit89 You won't be pumped, because AI will take your parents' jobs, they won't be needed anywhere, and you'll starve to death before you even get to play with this tech. But hey, maybe you'll see Elon Musk play it as a ghost.
@arw12929 күн бұрын
Cascadeur friend
@gargoyled_drake9 күн бұрын
There will probably be an AI that can easily track your movement with a camera and add it to a rig for the character you want to perform the motion. There probably already is software like this.
@ruok33519 күн бұрын
@@gargoyled_drake Uh, it's called motion tracking. Realistic movements are easily captured. But the thing is, we're making a video game. Animations are made quicker or exaggerated for a reason. Photorealistic graphics combined with typical video game animations will always look uncanny. We already have these bodycam-like shooters for a realistic POV, but are they really fun? Not quite.
@The-Steve-The9 күн бұрын
tech bro efficiency: "I spent 350 litres of water generating this model with no edge flow that still needs to be retopo'd by a human being that I just fired."
@ULTRAOutdoorsman9 күн бұрын
top comment for some reason not at the top
@Pidalin8 күн бұрын
As a CNC programmer, I know that it's faster to program manually in text mode than to try to use some "modern" features and CAD/CAM software and auto-optimizers and stuff like that. 🙂 It was always like that, and this is why today's games don't look that much better than 10 years ago yet want 3 times more powerful hardware: nobody programs anything manually anymore. What you don't optimize manually won't be optimized at all.
@lamarwealthchild61998 күн бұрын
this is golden!
@CTimmerman6 күн бұрын
@@Pidalin Compilers casually unroll loops 100x. That'd be a nightmare to read and write by hand.
@Pidalin6 күн бұрын
@@CTimmerman Yes, of course I know it's not really possible to actually write a modern 3D game in text mode, I realize that, but there are still things that you'd better optimize and check manually. Most programs are ridiculously long; I know it would take too much time to do it the old-fashioned way, but the result would be much shorter, because a human is better than the tools that are supposed to do it for you.
@BD124 күн бұрын
6:00 I tapped out here, this is just the dumbest thing I've heard. AI isn't learning physics. It's picking up the ability to render videos based on other videos of physicsy things happening, just well enough that if you haven't got particularly high standards, you'll be fooled by it.
@InvaderDREN3 күн бұрын
Yeah same, this is when I decided I’m not watching another bluedrake video lmfao.
@TheInBetweenTales8 күн бұрын
As a storyteller who uses visual effects, I'm all over the post-processing AI stuff! I joined your Discord server, but where are those discussions? 🤷♀ Thank you for the great video!
@The_Original_Truebrit8 күн бұрын
In a game that uses AI to guess what things should look like, could we have a replay system so we can go back, see where it got things wrong, and mark those spots to help it learn?
@sinanrobillard281910 күн бұрын
Amazing! I can foresee photorealistic NPCs generated in real-time! (LLM + Audio2Emotion + DeepFake mask on top)! Gaijin splatting or Gaussian splatting? 🤭
@umadbro44939 күн бұрын
when does he say gay-jin?
@storybakery9 күн бұрын
I think he means Gaussian splatting for sure, but got derailed by the name of a gaming company ;)
@ULTRAOutdoorsman9 күн бұрын
Hilarious. Gaijin. Jesus Christ.
@JapesZX8 күн бұрын
@@umadbro4493 I was so confused when I was checking the timestamps. LOL
@rogeriopenna90146 күн бұрын
Question about Gaussian splatting: all the videos I see of this technique show very realistic images... but not only are the images static, SO IS THE LIGHT. And it's usually the light especially that makes the scenes so realistic. So... can you MOVE lights, shadows, etc. in Gaussian splatting scenes? Do you have any example of a CAR captured with Gaussian splatting being REMOVED from the scene, put somewhere else, and still looking realistic?
@BD124 күн бұрын
This video is such a bizarre experience. He's listing all of the ways AI is kind of crap and isn't going to go anywhere as a technology, but his tone of voice makes it sound as though he thinks it's all a good idea.
@linuxrant6 күн бұрын
You're mistaking making a physics simulation for making an illusion of a physics simulation. We got so hung up on 3D and a materialistic viewpoint that we forgot we only see reality in two dimensions. And that's it. All our perceptions are illusions, so AI just becomes a master of illusions. Every breakthrough in game graphics was a breakthrough in making a better illusion, and a shortcut, NOT an attempt to actually mimic matter. Imagine creating a wood material not with textures but with atoms... good luck...
9 күн бұрын
Until I can make an AI clone that goes to work for me while I'm watching movies from my sofa, I'm not in.
@markchristantaguiam8199 күн бұрын
That would be the finale. Lol... But you'll have to buy it of course. Like a car.
@FVMods9 күн бұрын
@@markchristantaguiam819 You would still have to work as customer service for your AI, with the company that sold it to you as an intermediary. The boss would complain if your AI assistant isn't doing the job properly.
@thomashewitt81048 күн бұрын
At least we have AI watching movies (every single video on the internet) while *we* go to work
@BestTrader-hp2sd4 күн бұрын
@@thomashewitt8104 how can I do that?
@Jolly-Green-Steve9 күн бұрын
5:15 It's not understanding physics; it's understanding how to mimic what physics looks like.
@IceMetalPunk9 күн бұрын
And how do you correctly mimic what physics looks like without understanding physics?
@Jolly-Green-Steve9 күн бұрын
@@IceMetalPunk By having a large library of videos that display natural physics to learn from and mimic. The AI doesn't simulate or understand what is happening on a particle level like a traditional fluid simulation; it is just visually replicating what the videos it was trained on are doing.
@MaximeBret9 күн бұрын
@@IceMetalPunk That's the whole point of AI. They don't understand what they do, they just do what seems to be right. That's what fools everyone.
@SJGsysadmin9 күн бұрын
@@IceMetalPunk My daughter could draw a picture of the sun at 3 years old; that doesn't mean she understands what it is.
@IceMetalPunk9 күн бұрын
@SJGsysadmin She understands what it looks like and how it behaves, because she's seen it. That's the point: understanding is not all or nothing. Most people don't have a 100% comprehensive and accurate understanding of most things, but they still have some understanding of those things. You need to at least understand the aspects you're representing in order to represent it.
@Akiramaster8 күн бұрын
03:11 Why would anyone make that mistake? It doesn't even look real at all: the water level changes and actually rises in the glass the water is pouring out of, then you suddenly have two layers of water in one glass, and the glass that gets water poured into it doesn't even react to the water "entering" it. There are also other artifacts and issues.
@StreetSoulLover9 күн бұрын
Dude, we still don't have good RTX games - it's been 5 years...
@muramasa8709 күн бұрын
Metro Exodus
@albertoalves10639 күн бұрын
Cyberpunk 2077
@retrowrath93749 күн бұрын
Why are you lying? Cyberpunk 2077, Metro Exodus, Alan Wake 2, Star Wars™ Outlaws, Black Myth: Wukong
@ishiddddd47839 күн бұрын
@@retrowrath9374 remove outlaws from that list
@nicknevco2159 күн бұрын
This could make that unnecessary. The RTX cores could run this instead.
@LautaroArino8 күн бұрын
The problem with all AI generation is consistency. Sooner rather than later it effs up without knowing it. What will be the consequence then? If it's a split-second glitch in a video, it might be fine, but what if it empties your Minecraft inventory? What if it changes the landscape behind you just a little bit? AI can do text, music, video, everything, but we still haven't seen it do any of it reliably enough. One part of a song sounds great, but it doesn't create the whole song in a way that makes sense to humans. So, show us how you play Minecraft from start to end without a deal-breaking bug and then we can talk.
@DonC8769 күн бұрын
Pretty sure it's called Gaussian splatting and not Gaijin splatting, or is this a new technique that I haven't heard of yet?
@kovacsattila89937 күн бұрын
5:19 "some real understanding" no, you wrong about this. When you ask a young person who do not understand phisics to imagine fire or a cup filling up, when he do it does he do phisics simulations inside his head? No, because he never learnd anything like that in school yet. His mind mimics and reacreates the experience with past obesvations. AI does the same when creating these things. You don't have to undertand how fire works in order to imagine it, if you seen it before multipe times. The only difference that nobody see when you imagine it, but that is being channeled to a display when an AI imagine.
@borovik87149 күн бұрын
Fantastic things. What about creating the next phase: having an AI shape something like Unreal Engine to make an ultra-realistic image, while another AI tries to guess what's real and what's artificial? Like a competition, a cat-and-mouse game.
@ruok33519 күн бұрын
The problem with trying to achieve this super-realism is that it goes straight into the uncanny valley if everything else isn't up to par. Let's say you do get an entire game with Gaussian splatting, and it's realistic as hell. But the animations are still video-gamey and quick like CoD. It's not gonna look good, it will just be weird. Just like watching a movie at 60 fps. It's not good.
@kelbinhow8 күн бұрын
One of the very first things I noticed and was in awe of with those first AI video generations was actually the physics simulation. Yeah, we can see a lot of artifacts and incorrect things and weird stuff like multiple hands and fingers and facial expressions being all weird and uncanny, but something that still amazes me in those sketchy AI videos is the precision they have when trying to recreate physics, be it light (like subsurface scattering, color bleeding, shadows, reflections, caustics) or physical stuff (like wind, objects being pushed, water flowing, dirt and mud being displaced). And all that stuff gets completely overlooked or simply ignored because we're all focused on having fun, not the tech itself. So it's actually really cool to see this kind of video realizing the same stuff I did, and I really hope that the professionals in the area are realizing this too, in order for them to make use of it and keep improving it.
@samcerulean14129 күн бұрын
I disagree, they're just approximating a visual process, how it looks. There is no understanding; there isn't even anything there in the same respect that visuals usually use polygons, voxels and interacting systems. I don't believe there are any systems being utilised whatsoever; it's just adding the different elements like a surrealist painter would create a painting. I'd love to be wrong; I think this needs more investigation.
@IceMetalPunk9 күн бұрын
The process by which they create the resulting simulation is different, but it's still a simulation. The word "simulation" doesn't have a specific algorithm that must be followed; it just means mimicking the appearance and behavior of something... which it's doing. Also, if you claim there's no understanding here because it's not using polygons and collision systems... does that mean when you imagine objects in your mind, your brain is drawing polys and calculating collisions? Of course not. But your brain can imagine those things because it has an understanding of them. Same here.
@mightyhadi61329 күн бұрын
Yes and no. It's post-processing images or animations, meaning it's copying animated fluids based on data. So it's predetermined by the training data, but it can still look convincing depending on the data and how long it was trained; compared to real-time physics calculations it lacks precision. So it's very useful as an enhancer for real-time physics and particle simulations. AI/ML is great for low-precision calculations done at massive scale.
@AlindBack9 күн бұрын
@@mightyhadi6132 *For now.
@hogandromgool20629 күн бұрын
@@IceMetalPunk What people mean is that it's not a simulation in a scientific sense. It could not be used to gather reliable and repeatable data, purely because of the non-deterministic nature of machine learning and LLMs. It is an approximation of a simulation, a prediction engine rather than an actual "scientific" or deterministic simulation.
@SimulationSeries4 күн бұрын
just found your channel. talking about everything I’m currently excited about. subbed. keep up the joyful work! 🙌
@stubot-vrveteran9 күн бұрын
09:05 - I'm pretty sure AAA studios will mess it up: it will get monopolised and tied up in monetization because of their pure greed and treatment of consumers. I feel the way this will take off will be through the modding community. Targeted effects as a mod, focused only on what the user/author aims them at, would be insane. I'm hooked. Thank you for showing this tech off. As a veteran gamer I would use it on all the classic games I've played over the last 30 years, and I would also use this for PCVR games :)
@gargoyled_drake9 күн бұрын
they can't. AI is open source. 🤷♀
@dwarfed76959 күн бұрын
Can this tech be applied not only to photorealistic styles but also to more stylized art designs? Suppose it is used to maintain a stylized approach but applied to add detail, physics, and depth to a game? Curious to see what other directions this can be taken in aside from the photorealistic approach.
@Uhfgood9 күн бұрын
It will get so good, you won't be able to tell. You'll also not need to watch something someone or some team created; you'll just ask the AI to make you a movie, and it will do so without much input other than what you want to see... or with as much input as you wish. This is why Hollywood and artists are freaking out. And it is simulating real-world physics. True, it looks at existing video or images, but then it recreates what it sees. Same with games. You'll ask it to generate a Doom-style FPS involving robots on a space station in the future with 90s-style chunky graphics technology, and it will just generate it as you play.
@piotrek76339 күн бұрын
no
@Andytlp9 күн бұрын
I'm guessing 5 more years before you'll be able to generate a movie from your own script, using the actor faces and voices you want.
@gargoyled_drake9 күн бұрын
@@Andytlp There are a lot of "that depends" when we are talking specifically about the next 5 years, with everything happening around the world right now. Any sort of power dip or price rise on electricity or pretty much anything, and AI development will be slowed down a whole lot. If we truly want AI to be our future, we need to figure out how to create practically infinite free energy/power.
@luke83249 күн бұрын
Probably worth explicitly noting that these effects are all done in post (both the ones on game footage and the ones on editing footage). The amazing thing to me is that it is possible to put this as a processing layer between the game's rendering output and the screen. So eventually you could be graphically processing basic shapes at 480p while the player is looking at a photorealistic representation, with all the particle FX etc. done by AI in an in-between layer.
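[Editor's note] The layering being described is essentially a per-frame post-processing loop: the engine renders cheap geometry at low resolution, a model rewrites each frame toward photorealism, and only then does it hit the screen. A rough sketch of that loop; every function name here is a hypothetical placeholder rather than any real engine or model API.

```python
# Sketch of an AI post-processing layer sitting between the renderer and the display.
import numpy as np

def render_low_res(game_state: dict) -> np.ndarray:
    """Engine output: basic shapes at low resolution (e.g. 480p RGB)."""
    return np.zeros((480, 854, 3), dtype=np.uint8)

def ai_enhance(frame: np.ndarray, prev_enhanced=None) -> np.ndarray:
    """Placeholder for an image-to-image model; conditioning on the previously
    enhanced frame is one common way to reduce flicker between frames."""
    return frame  # a real model would return a photoreal reinterpretation here

def present(frame: np.ndarray) -> None:
    pass  # hand the frame to the swap chain / display

game_state: dict = {}
prev = None
for _ in range(3):                      # per-frame loop
    raw = render_low_res(game_state)    # cheap geometry from the engine
    enhanced = ai_enhance(raw, prev)    # AI layer between engine and screen
    present(enhanced)
    prev = enhanced
```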
@RemiStardust9 күн бұрын
7:47 That's where you're wrong. The AI in fact doesn't understand physics. There is no "understanding". It works by latent-space "magic". It simply guesses what the next frame could look like using the billions of data points that are the result of insane amounts of downsampled (latent-space) information from other images and videos.
@colt15969 күн бұрын
Which means that AI is inadvertently creating its own physics understanding. The same way a caveman tumbles off a cliff: he doesn't understand physics, but he just demonstrated it.
@gon74389 күн бұрын
The thing is, 100% of video games out there are bad at physics, like a knife going through a character's hand. AI can just make the game creation process faster.
@Bluedrake429 күн бұрын
I keep seeing people saying "it doesn't understand physics, it just watched billions of videos and gained the ability to predict how physical objects behave based on what it watched." I'm sorry, my man... but what exactly do you think "understanding" is? 😅
@_Huffy9 күн бұрын
@@Bluedrake42 This implementation of AI could be useful for generating effects, but I can't imagine it would be very helpful for updating the game in response to changes in an object's data state. Using your examples: container A pours water into container B. The effect can be AI-generated, but the AI isn't going to be calculating the volume of water and updating the game's data. You can also use it for a fire effect, but it doesn't understand how the object on fire will change over time, so it won't tell the game engine that a house is falling apart and the model needs to be recreated. I would imagine it's much easier for games to just handle that physics themselves and know that it will always work how the game needs it 🤷♂
@RemiStardust9 күн бұрын
@@Bluedrake42 This is an interesting thing to discuss, because it takes more than one sentence. A child understands the concept of throwing something because it's something they have personal experience with. They live in a body, in a physical world. So do animals. And this understanding of physics is deeply ingrained. When ChatGPT correctly spits out a definition for throwing an object, there is no mind behind it. "It" doesn't "know" what throwing is. It just grabs the zeros and ones that make up the binary values for the ASCII text that is readable to humans, and humans can understand the meaning behind those words. But the machine is only using proximity matches from the latent-space "cloud". The machine doesn't "understand" what throwing is. It can only conjure up the correct definition and even paraphrase it by using further proximity matches to find different words encoded with the same meaning. A dog watching another dog run behind a couch from the right will expect the dog to appear on the other side of the couch, and the AI would correctly "assume" the same. But the dog, without using words, knows that a dog is to be expected on the other side of the couch; the AI lacks that *actual understanding* part. I'm still trying to wrap my head around it all. It's kind of like a straight-A student who memorized the definitions and can parrot them back without really understanding the meaning.
@talonthorn6 күн бұрын
No. They don't understand how physics works. They just simulate whatever information they have been given. It should be possible to break the AI's "physics" by finding the right situation. You just have to be more clever than the AI.
@johngddr528810 күн бұрын
5:16 This is not really true. It'll give you an approximation of what the result of physics looks like, but it's not simulating or understanding it. It's a statistical approximation of fire, and of water pouring, learned from videos. Already in a ton of your examples you can see it's not behaving like real physics: the water doesn't have surface tension, the volume poured doesn't match up, etc. The problem here is: how did they train it, and with whose data did they train for those effects? Did they do it illegally, using VFX libraries without the consent of the owners? These are the real questions, tbh, because I'm not going to use an effect that screws over an adjacent artist's work, like VFX. This AI shader stuff is gimmicky, because when everyone starts to use it out of a lack of art direction and just for the ease of it, it's going to make everyone's games look and feel the same. There's a reason a ton of games with clear art direction look amazing to this day, even from 20 years ago. Even then, it really is just a gimmick, and you can't copyright those parts of your game anyway. Idk why anyone thinks that slapping an Instagram filter on your game is a great idea, tbh, because it's not in the long run. Nor is it ethical, considering these generative models are made by screwing over millions/billions of creators and regular people, devaluing their work and claiming to own it all.
@Bluedrake429 күн бұрын
Creating a statistical approximation of physics is simulating physics.
@johngddr52889 күн бұрын
@@Bluedrake42 True, just not a good method to go about it.
@IceMetalPunk9 күн бұрын
@@johngddr5288 If you close your eyes and imagine a fire... your brain is doing a statistical approximation of fire physics. If that's "not simulating nor understanding it", are you claiming that human brains don't simulate physics (through imagination) nor understand it, either?
@hogandromgool20629 күн бұрын
@@IceMetalPunk You misunderstand. Your brain is not simulating in the same sense a computer does: you cannot summon the exact same image every time, it shifts slightly, and I will explain a little bit of why. Your subconscious acts very much like an LLM. It's locked in a dark box with no concept of smell, touch, taste, sight, hearing or physical sensation. What your subconscious knows is what your conscious mind tells it or plays to it while you're asleep. This is why summoning photorealistic images is thought to be impossible: your subconscious mind (the thing connected to your mind's eye and emotions) is doing exactly what you say, approximating physics and stimuli. You might have noticed that these approximations suck. Quite so. They are never accurate, nothing tastes right (if it tastes like anything at all), pain doesn't really exist in dreams, nor do most stimuli, and the visual landscape is not correct and shifts wildly, while the whole time your brain does not realize there is anything wrong with said simulation. Did you know the human brain cannot create faces it has never seen? If you see a face in your dream, you have seen that face before. The difference is determinism. Your dreams are fractured and skewed, along with your mind's eye, because your subconscious has never actually experienced any of the variables of the real world first-hand. They can change at any time. The same goes for current AI deployments: they don't actually have set variables, so things can change at any point. This means that while they can make really nice graphics and relatively reliable physics, this can only be produced for known outcomes. A real simulation can simulate unknown outcomes accurately, because the whole idea of a true simulation in the scientific sense is to be as close an approximation as possible. Such a simulation should also yield repeatable, predictable results.
@icarusjumped27199 күн бұрын
I've been so focused on the possibilities of narrative and NPC AI that I hadn't considered the graphical potential.
@CathrineMacNiel9 күн бұрын
3:22 Does it, though? I don't see the top glass being depleted or the bottom one filling up, just a meaningless stream between two glasses.
@Senvae9 күн бұрын
That effect did not look very convincing to me either. In a very AI way, there were some missed connections between the level of water in the top glass and the density of the stream going down. But I am sure there will be continuous improvements to AI models which may one day iron that out. We're just not quite there yet.
@ULTRAOutdoorsman9 күн бұрын
Indeed, it looked hilariously bad. I guess the AI was trained on the moon.
@ULTRAOutdoorsman9 күн бұрын
@@Senvae They may as well just model it on a particulate basis at that point in 2030+ when that actually looks good. I've seen better water physics in multiple games from a decade or more ago (e.g. Hydrophobia).
@CathrineMacNiel8 күн бұрын
@@ULTRAOutdoorsman well, Hydrophobia was more like a water physics demo with a game attached to it; but damn was it impressive.
@jamesrasmussen92819 күн бұрын
In a couple years when this post-effect can be applied to old PlayStation games and stuff, that'll be nuts. Real time remastering of all your nostalgic titles. It'll be a fascinating time.
@ryndaman28559 күн бұрын
Ok, now we know why this technology will never be used in AAA games: they can't sell the old games for $70.
@shoguevara9 күн бұрын
I think there's a typo, it should be "Gaussian splatting", like in "Gauss" - the mathematician
@liquidcobalt8 күн бұрын
I was wondering if "Gaijin splatting" was a new method, until i heard him try to say "Gaussian".
@shoguevara7 күн бұрын
@liquidcobalt lol
@Brutality4you9 күн бұрын
Can’t wait for this coming to vr games
@JeffRusch9 күн бұрын
Why did you write "Gaijin" instead of "gaussian" in your timestamps?
@-Jakob-8 күн бұрын
they might be AI generated.
@Lowraith8 күн бұрын
That's what happens when you mispronounce gaussian as "gawzyn" instead of "gow-zee-inn".
@andrewcramer92006 күн бұрын
@@Lowraith Glad I'm not the only one thrown off by that.
@MyAmazingUsername9 күн бұрын
I love that you are pushing img2img tech to realtime gaming. Intel did it 3 years ago with GTA but nobody did anything after that. ❤
@TeleologicalConsistency8 күн бұрын
I think in the future games can be toned down a lot graphics-wise but optimized to work with AI post-processing to result in overall photorealistic gameplay with minimal impact to performance.
@Xetrill8 күн бұрын
No, the models don't _understand_ physics, which is kinda the point. They don't have to. The calculated outcomes are very close to actual physics simulations, with vastly fewer compute resources needed. But because of that, they are also easy to break, as you have shown. It can pour liquid into another glass, but it doesn't _move_ the liquid (yet); it copies it.
@Preludedraw9 күн бұрын
So I'm totally blind about this, but what will be the necessary skill for visual department jobs in the years to come? Programming? 3D artist? AI whisperer?
@periurban9 күн бұрын
The AI understands nothing! It simply has a very comprehensive record of what materials look like when they are doing different things. In your fluid-from-glass-to-glass example, the AI didn't adjust the volume of the water in either glass. There isn't anything in the software that is capable of understanding.
@MrLanceHeartnet9 күн бұрын
yet...
@AmineOuldKaci9 күн бұрын
Define understanding...lol
@IceMetalPunk9 күн бұрын
That's... just plain incorrect. You don't know how these work, do you? Because it doesn't have a "comprehensive record" of anything. After training is complete, the training data is no longer available to the model. It's not just a big database of lookup data; that wouldn't be machine learning, that would be at best an expert system. Two different things.
@owenswifter73879 күн бұрын
I think you are right. Currently, this technology needs to be used in conjunction with other systems to handle complex behaviors like hydrodynamics, chemical interactions, or thermal mechanics. It isn't exclusively trained on physically based simulations, and the fire effect, in particular, looked more like composited After Effects footage than a physically simulated VDB effect. Notably, it only occasionally responded to finger movements or changes in speed/trajectory. For example, at the 1:01 mark, instead of trailing to the right, the fire shoots straight out from his finger. The effect is also rendered at a lower frame rate than the original video, as evident from frame-by-frame analysis, which makes it harder to judge its accuracy.
@7satsu9 күн бұрын
game studios finna never use skyboxes again
@loneventhorizon9 күн бұрын
I don't think it knows anything about physics; it's only trained on images and video. If the AI was trained to make fire in a 3D simulation, then yeah, obviously. But it's just seen a billion pictures and videos of fire, and in that media it's seen how the fire behaves, so it just copies it. Imo, anyway. It's an interesting idea to think about. Either way it's cool af.
@gargoyled_drake9 күн бұрын
Well, isn't a simulation copying the behaviour of what you are simulating? So it definitely has to be somewhere in between the two ideas, at least.
@georgepal91549 күн бұрын
It knows a distilled form of the physics required to generate images realistically because it is copying real physics from real images. It can probably make some really good guesses most of the time, and fuck up terribly the rest.
@robwillmarsh9 күн бұрын
@@georgepal9154 it's just images, that's not that hard to understand
@IceMetalPunk9 күн бұрын
I dunno. Machine learning models can take a few 2D photos and create a full 3D scene/models out of that. With that in mind, what's the difference between "copying what fire looks like and behaves like from any angle" and simulating fire?
@xxzenonionnex76589 күн бұрын
@@IceMetalPunk the simulation mimics the exact characteristics of something, while AI just copies the visual aspect of that thing.
@Bossman207-g7x9 күн бұрын
No. AI doesn't understand physics any more than an artist, or more specifically an animator, does... at least in this context. An animator would know that in their scene the ball hits the glass, the glass breaks, and it shatters following gravity based on the scene. So the AI would know that too, I'd think. It can just redraw it again so much faster when the scene changes or is altered. So rather than truly dynamic physics, it's more like applied physics, really, really fast.
@maxrs17089 күн бұрын
It's impressive, but I would describe these as visual hallucinations, not simulations. Just like the Minecraft demo, this is just generating the next frame of video based on the previous one, making a prediction with the help of all the millions of GB of training footage. It's a cool trick, but personally I don't see it as that much of a revolutionary breakthrough, as the game isn't really playable; there's no persistence. I just don't see how predicting frames of video in real time would ever be better than what we already have, which is GPUs doing the actual rendering of the graphics based on the game engine's code, especially when we take into account the processing power required. I would love to be proven wrong in the future, though.
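[Editor's note] The "predicting the next frame from the previous one" point can be made concrete: in an autoregressive rollout, each output is fed back in as the next input, so any small per-step error is re-ingested and compounds, which is exactly the drift and lack of persistence seen in the Minecraft demo. A toy numerical illustration, no real video model involved, just a slightly-off next-state predictor:

```python
# Toy autoregressive rollout: feeding predictions back in makes small errors compound.

def true_next(state: float) -> float:
    return 0.999 * state          # the "real" dynamics

def predicted_next(state: float) -> float:
    return 1.001 * state          # a model whose one-step prediction is only ~0.2% off

real = pred = 1.0
for step in range(1, 201):
    real = true_next(real)
    pred = predicted_next(pred)   # each prediction is conditioned on the previous prediction
    if step % 50 == 0:
        print(f"step {step:3d}: true={real:.3f}  predicted={pred:.3f}  gap={abs(real - pred):.3f}")
# Each single step looks almost perfect, but the rollout drifts further and further from reality.
```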
@03chrisv9 күн бұрын
You will probably be proven wrong as time goes on. AI in its current form is still relatively primitive, though who's to say what it'll be capable of in 10, 20, or 30 years.
@madrooky13989 күн бұрын
I would say the next step is some form of AI-assisted rendering, kinda like how DLSS already works, just on steroids. So the game engine creates the game world at rather low quality without using many resources, and AI provides the high-res textures and dynamic effects. This could also allow changing content dynamically, and I would assume that would already be a huge step forward in gaming. Combine that with AI-assisted narration and voice acting, and no playthrough will look like another.
@tomnations50789 күн бұрын
CPG Grey suggested we start using the term “confabulations” rather than “hallucinations,” since even ‘hallucinations’ is a misleading analogy to something human brain does. Even using that word, AI doesn’t ‘hallucinate’ mistakes… it ONLY hallucinates. That’s what it’s doing all the time.
@TheAlastairBrown9 күн бұрын
Yes, but then you train an AI model that converts visual hallucinations into actual simulation data, the same way you can take a 2D photo and turn it into a 3D depth map. Take bouncing a basketball: the AI is trained on basketballs bouncing in videos; if you convert that to a 3D model by depth-mapping the frames, you're now in a physics simulation. New models are insanely more complex and multimodal; they'll also have simulation data natively trained into them. We're only just broaching AI video. I have no idea if they're being trained on physics simulation data of fluid dynamics etc., but it would make sense. Multimodal inputs allow for multimodal outputs.
@IceMetalPunk9 күн бұрын
The temporal decoherence isn't an inherent limitation of next-frame prediction; it's just a common property of many current next-frame prediction models. The only way to properly predict the next frame is to understand the relationships between frames, and those relationships... are typically physics. With enough training data of high enough quality, and a large enough model, the error rate can be diminished to the point of hyperrealistic video. We just need the resources to catch up to that, is all. As for the processing power, this is also temporary. Optimizations and better consumer hardware are happening all the time. I can run an SDXL model with only 8GB of VRAM and get better results than DALL-E 2, which I could never in a million years run on 8GB. And conversely, on the hardware acceleration front, someone's smartwatch can do millions of times more calculations, and do them millions of times faster, than a 1950s wall-sized computer ever could. "It's too intensive/big" has always been, and will always be, a temporary problem for new tech.
@Renji-u4s9 күн бұрын
Gaussian splatting does not solve the issue of materials and how they behave under changing lighting conditions. There are already AI papers that estimate material properties, but it's a bit further out until assets like the ones shown in the video also look good in a night scene, or when it rains and certain surfaces should reflect more light, etc.
@ULTRAOutdoorsman9 күн бұрын
"Sure they do, just contract out an entire studio to take a picture of every single object under every condition that will be in your video game"
@Renji-u4s8 күн бұрын
@@ULTRAOutdoorsman Aka recording a movie? :D
@HAA_Order9 күн бұрын
It's Gaussian splatting, not gaijin splatting. A little correction.
@umadbro44939 күн бұрын
He said Gaussian, or that's what I always understood without knowing it existed
@SunnyOst9 күн бұрын
@@umadbro4493 The chapter name says "Gaijin". Idk if it's meant to be a joke; his pronunciation is a bit funny 😁
@Lowraith8 күн бұрын
@@umadbro4493 Gaussian is pronounced "gow-zee-inn" or "gow-see-inn". He said "gawjyn" about a hundred times.
@BoopyDoopy9 күн бұрын
Industries across the board-gaming, music, fashion, film-are racing to incorporate this technology deeper into their products. It’s transforming the landscape, inching us closer to a world where "The Internet is Dead" becomes less of a warning and more of a reality. Meanwhile, we’re getting swept away, entertained by the elites shaping our world. The end is never the end is never the end… The future ahead looks stranger and more intense than anything we’ve seen.
@MegaTigerII9 күн бұрын
I think in the future everybody will be capable of being a game designer despite having little to no experience with programming.
@piotrek76339 күн бұрын
If anyone can be a game designer then no one is a game designer...
@WalrusWinking9 күн бұрын
Well, giving instructions to AI IS a type of programming. It's like programming, but in English instead of the various coding languages like C++. So if you are competent in the English language you can tell the AI to "Generate a photograph of a T-Rex being hunted by a tribe of humans with Bronze Age weaponry." That is giving a program an instruction, which is programming. Just not using a coding language.
@angelmurchison17319 күн бұрын
@@WalrusWinking as a programmer, who has built basic AI programs, no. Talking to AI is not programming. Skill in one does not transfer to the other. In either direction. Hence, they are different skills.
@WalrusWinking9 күн бұрын
@@angelmurchison1731 It literally IS programming in the English language.
@gargoyled_drake9 күн бұрын
In the future there will be only one game we all load into. And from there we can call up any game we want to play. Like the Matrix white room. Just ask for a game and it will throw you in it.
@davidtsmith332 күн бұрын
That's freaking amazing bro. Just the effects you were doing were mind-blowing.
@DaDa-kf4vp10 күн бұрын
This tech is going to bring us the next gen graphics we were all expecting by now.
@moyasser11769 күн бұрын
Yeah, in 10 years
@beak39639 күн бұрын
With a few clicks, we get half the frame rate for less visual effects
@TeddyLeppard9 күн бұрын
In another 8-12 months you won't be able to tell what's real and what isn't real just by looking at a still or moving image. Video and images will be perfect.
@juskaran9 күн бұрын
bruh the next gen is already here. no need for such stuff. These are just buzz words
@J.A.Z-TheMortal9 күн бұрын
@@juskaran I believe OP means the photorealistic graphics that have been promised since the PlayStation 2 era. Although games do look better than before, they still look like CGI renders. 7:52 Starfield might not be the ideal example because the characters' look may be Bethesda's artistic direction, but the effect still shows what could be possible in the future. A better example might have been sports games, which clearly target photorealistic models and environments and are still not perfect. With this technology, we may finally have games where it's hard to tell whether it's a 3D polygonal render or literally real-world video footage.
@buriedbits60276 күн бұрын
I can only imagine how powerful this AI layer could be when applied to video games, ranging from retro titles from the '80s to PS2-era games. These games are great in their own right, but with this AI layer they could be completely transformed, whether through a hyper-realistic reimagining or other creative effects. This idea blows me away. Wow!
@JonMyers-hf8js9 күн бұрын
You guys just all copy each other
@TheDeconstructivist9 күн бұрын
I actually think this is most compelling for character assets, since facial detail is incredibly hard to animate. You could build a simpler underlying model and do a high-quality AI substitution to create incredibly lifelike characters at potentially much lower GPU cost.
@nathanielacton37689 күн бұрын
I'm a game dev, and I believe this is just another flash in the pan. Hyperrealism is visually satisfying (to me as well), but the games you ACTUALLY like to play are heavily weighted toward player agency. So, for my game I have *intentionally* gone with a non-realistic style. Not pixel art or stylized; something immersive, but something I can mostly manage using shaders and a clipped palette. This is to ensure that time is spent on player actions and, IMHO more importantly, a game AI that is at least human-equivalent. Hyperrealism, IMHO, is a commitment to a bottomless pit of effort that almost never ends up with 'fun' as the focus. But yes, it'll change games forever by driving a wider wedge between the hyperrealistic arms race AAAs engage in and the games you actually enjoy playing.

Here is a thought experiment. Imagine a chess board and your adversary, a human expert. Your pieces are either static meshes, or they have complicated IK-based attack/kill sequences that are fully realistic. Did the visual effort improve or detract from your enjoyment of the game? I imagine the only people who would enjoy the graphics would be the people who don't like playing chess competitively. Next, look at a pro-level esports player. Look at their visual options. Ugly for maximum function. These points do not mean that ugly = better, but they do help define the experience of the player. If you want to play a game where you ooh and ahh at the graphics, chances are that ooh/ahh happens only at the beginning, maybe once. After that, gameplay is what matters. So, if you built a priority table, 'realistic graphics' is only important enough to not create an immersion break. For that, you need consistency, not realism.

Lastly, for work I'm an AI professional. LLMs, or statistical models, are just probability datasets. No physics needed at all; it's entirely 2D, and frankly I think integrations into a 3D render space will very much narrow the scope for this kind of tech. Try getting that water to flow behind volumetric fog from a prior smoke bomb in the scene.
@tyronejohnson4099 күн бұрын
Even basic post-processing injectors like ReShade can access depth buffers and handle complex layering of effects with proper occlusion. Any AI system integrated into a game engine would have access to far more: full scene graphs, material properties, object masks, multiple render passes, and physics states. Your water/fog example actually demonstrates this misunderstanding; proper depth-aware post-processing has been solving exactly these kinds of layering challenges for years.
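As a concrete illustration of the depth-aware layering this comment refers to, here is a minimal sketch (NumPy, toy data, assumed array layouts): an effect is composited only where its depth is closer to the camera than the scene geometry at that pixel, which is the basic occlusion test injectors and engines rely on.

```python
import numpy as np

def composite_with_depth(scene_rgb, scene_depth, effect_rgb, effect_depth, effect_alpha=0.6):
    """All inputs are HxW(x3) arrays; depth is distance from the camera."""
    in_front = effect_depth < scene_depth            # per-pixel occlusion test
    mask = in_front[..., None] * effect_alpha        # broadcast the test over RGB channels
    return scene_rgb * (1 - mask) + effect_rgb * mask

# Toy data: fog at depth 5 over a scene whose right half is a near wall at depth 2.
h, w = 4, 4
scene_rgb, effect_rgb = np.zeros((h, w, 3)), np.ones((h, w, 3))
scene_depth = np.full((h, w), 10.0); scene_depth[:, 2:] = 2.0
effect_depth = np.full((h, w), 5.0)

out = composite_with_depth(scene_rgb, scene_depth, effect_rgb, effect_depth)
print(out[0, 0], out[0, 3])   # fog visible on the left, correctly occluded on the right
```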
@the_jjabberwock9 күн бұрын
Interesting…
@ULTRAOutdoorsman9 күн бұрын
There's also the problem where nothing shown here was anywhere near realistic
@nathanielacton37688 күн бұрын
@@tyronejohnson409 I understand your point and it's valid; however, an engine-integrated solution that has access to this data isn't an enormous step from where we are, and it's a very heavy way to accomplish the same thing, albeit with the modularity to sub in other effects. I'll admit I was looking at this from a purely post-processing perspective... which is what we were shown. I do have a bit of experience wrangling the depth buffer from writing a Laplacian filter for edge highlighting, though I'll admit I haven't used it for much beyond that. My general point, however, is that the graphics arms race is moving at a rapid pace, while the discussions in, for example, r/GameDesign show that most devs are struggling with the elemental mechanics of how to make a game rewarding/fun/satisfying. I'm not even a lover of non-realistic styles; I'm just saying that the overinvestment in this area of game development is solving problems that hardly anyone benefits from, and if anything, graphical fidelity is being used as a replacement for actual innovation.
@desolation18216 күн бұрын
Current AI models have no understanding of physics. They create imagery based on reference footage and image recognition. They know how something is supposed to look and basically make an educated guess at how to turn that reference material into what your prompt asks for.
@Bluedrake426 күн бұрын
"They know how something is supposed to look and make an educated guess" Wow almost like... understanding it.
@desolation18216 күн бұрын
@Bluedrake42 The understanding of a computer algorithm is very different from the way we understand a topic. That is the reason AI video is still so glitchy and messed up. It looks fine at first glance, but the more you look at it, the more issues you spot. And sure, the technology will improve drastically in the future, and your suggested use cases may come true. However, the topic of "understanding" will only start to apply when we move away from specific AI to general AI. Only once AI becomes general will it start to have an understanding of physics. Right now it's image recognition and generation.
@Bluedrake426 күн бұрын
@@desolation1821 So... are you telling me right now, if I told you to sit down with a pen and paper and draw a 100% realistic portrait of another person... you would flawlessly draw that picture with no glitches?
@delmend6 күн бұрын
@@Bluedrake42 Go ahead and ask ChatGPT whether an image post-processing AI showing realistic fire/fluid/... images is considered 'simulation'. Then read and learn. AI is doing image synthesis. A game physics engine is simulating physical state (velocity, energy, friction, ...). The output of a physics simulation is not an image! It is a physical state.
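The state-versus-pixels distinction is easy to show. A minimal sketch (toy integrator and assumed units, not any engine's actual code): the output of a physics step is an updated state, which a renderer could then draw; no image is produced by the simulation itself.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    y: float    # height in metres
    vy: float   # vertical velocity in m/s

def step(ball: Ball, dt: float, g: float = 9.81) -> Ball:
    """Semi-implicit Euler integration of gravity with a floor at y = 0."""
    vy = ball.vy - g * dt
    y = ball.y + vy * dt
    return Ball(y=max(y, 0.0), vy=vy if y > 0 else 0.0)

ball = Ball(y=2.0, vy=0.0)
for _ in range(10):
    ball = step(ball, dt=0.05)
print(ball)   # a physical state (position, velocity), not pixels
```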
@AkaedatheLogtoad8 күн бұрын
No, it doesn’t have to understand anything or simulate anything. It is mimicking it. And yes there is a difference.
@DaT0nkee8 күн бұрын
Also, it only has to do just well enough to fool another finite neural network: the brain.
@gabrielsandstedt8 күн бұрын
You get where things are headed... I am so tired of explaining to people that don't get this. Love this video. Thanks very much I needed it.
@simonrockstream8 күн бұрын
Real artists and real gamers dont want any of this cancer. Be better.
@Sweepy_Joe8 күн бұрын
This picture-to-AI-model tech is basically what Deckard was using in Blade Runner to find clues in photographs. This is AMAZING.
@sam_15_svk9010 күн бұрын
Let's see!
@kelbinhow8 күн бұрын
8:40 This is another thing I kept commenting and thinking to myself whenever one of those new "game to AI" videos began showing up... Yes, those videos are impressive, but the model is only receiving one kind of input, one piece of information, which is the image, so the final result works like a simple overlay that spans the whole screen. Since it doesn't have any other input or additional information about what's going on, the output is all over the place: weird, randomized results. It doesn't matter if it's a single NPC or car or object, the AI keeps trying to generate "new" stuff for that NPC/car/object. For an NPC, their clothing just keeps changing on the fly, constantly transforming, changing colors, adding/removing accessories; with cars it's the same, the headlight shape, the fender, the wheels, etc. keep morphing.

That's what made me realize something: what if they incorporated such an AI into the game's system internally and actually gave it specific commands for whatever needs to be generated, kind of like tags? For example: in front of you, the player, there's a specific car, so the game would create a tag with specific commands to make the AI generate that car and keep it consistent in every frame, keeping a definitive headlight, door shape, wheel, and interior. Same with NPCs... the game would send a command to the AI to identify that entity as, e.g., "Jeffrey", and whatever description the game feeds the AI, the AI would have to meticulously follow, so that "Jeffrey" always looks the same. This way we could completely eliminate those randomizing, ever-changing results and actually keep everything consistent: an NPC always the same, a car model always the same, a building always the same, a shop always the same.
@kelbinhow8 күн бұрын
And yeah, it could also mean we could apply it only to specific parts of the presentation, like only to a face model, only to special effects like rain, or only to the scenery... As long as we actually feed the AI specific "tags" and commands, we could have a VERY powerful tool.
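A rough sketch of the tagging idea described above, with everything here being an assumption for illustration: the engine keeps stable per-entity descriptors and a fixed seed per entity and passes them alongside each frame; `enhance_frame` stands in for a hypothetical model call that accepts such conditioning and is not a real library function.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class EntityTag:
    entity_id: str      # stable across frames, e.g. "npc_jeffrey"
    description: str    # "red jacket, grey beard, round glasses"
    seed: int           # fixed per entity to discourage re-randomising its look

@dataclass
class FrameContext:
    frame_rgb: np.ndarray
    entities: list = field(default_factory=list)

def build_context(frame_rgb, registry):
    tags = [EntityTag(eid, desc, hash(eid) % 2**32) for eid, desc in registry.items()]
    return FrameContext(frame_rgb=frame_rgb, entities=tags)

registry = {"npc_jeffrey": "red jacket, grey beard", "car_01": "blue sedan, round headlights"}
current_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
ctx = build_context(current_frame, registry)
print(ctx.entities[0])
# enhanced = enhance_frame(ctx)   # hypothetical model call that takes per-entity conditioning
```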
@maxrs17087 күн бұрын
@@kelbinhow I agree with you, and this approach sounds the most interesting. However, having second thoughts about it, do we really need this tech? Do we really need to generate all these visuals in real time? Will we reach a point where we have enough compute to reliably do this in real time (character clothes, cars, etc.), with crystal-clear fidelity and without weird morphing or mushy artifacts? Well, let's suppose we do, just for the sake of argument. In that scenario we already have a ton of compute. Why not just render ultra-quality regular 3D assets and environments instead? We clearly already have the compute for it. Those 3D assets could perfectly well be created with AI generation beforehand; I just don't see the point of generating them in real time. Add the need for good stylistic and artistic coherence to that, and you see how the problems begin to stack up.
@FlashyJoer9 күн бұрын
I'm sorry, but if Sarah Morgan looked like that in my game, I would STILL be playing Starfield. 1000+ hours, guaranteed. :D
@cjayseven72629 күн бұрын
I was thinking the same thing lol
@vonbraunprimarch9 күн бұрын
No.
@rexsceleratorum16329 күн бұрын
As a woke puritan, this is very morally reprehensible to me. Let me join Sweet Baby Inc and try to pray the Male Gaze away
@VelcroSnake939 күн бұрын
I mean, better graphics wouldn't make the game actually good to play...
@ShowTheReal7 күн бұрын
All of this AI tech combined with VR will be the true future of gaming.
@Bluedrake428 күн бұрын
Guys… if you all are gonna say “it doesn’t understand physics” but then say “it just watched hundreds of thousands of videos where it learned how to predict the behavior of physical objects from the content that it watched” then I don’t know how to have a rational conversation with you
@melmartinez70028 күн бұрын
Well, if you train a learning model on cannonball flights, repeatedly telling it the truth about how far the ball goes when fired at 45 degrees with a variety of initial velocities, it will learn that and be able to predict how far the ball will go when fired at 45 degrees at a given initial velocity. That's just learning and interpolation. But that doesn't mean it will know how to predict how far the ball will go when the cannon is tilted to 70 degrees. For that you need to do the actual physics. Now, an AI engine can be taught to refer to the physics, but it isn't necessary for it to do so when it's just predicting within the ranges of the datasets used to teach it. From your video, it clearly learned what fluid 'looks' like when poured from one glass to another. But it also clearly didn't learn that mass is conserved.
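The cannonball point can be made with a few lines of code. A minimal sketch (illustrative numbers, a quadratic fit standing in for "a learning model"): a model fit only on 45-degree shots interpolates new velocities fine but has no basis for a new launch angle, whereas the physics formula R = v^2 * sin(2*theta) / g generalises immediately.

```python
import numpy as np

g = 9.81
def true_range(v, theta_deg):
    return v**2 * np.sin(np.radians(2 * theta_deg)) / g

# "Training data": ranges at 45 degrees for a spread of muzzle velocities.
v_train = np.linspace(20, 100, 50)
r_train = true_range(v_train, 45)

# A purely data-driven fit (quadratic in v) learns the 45-degree mapping well.
coeffs = np.polyfit(v_train, r_train, deg=2)
predict = lambda v: np.polyval(coeffs, v)

print(predict(60), true_range(60, 45))   # ~367 m vs ~367 m: interpolation works
print(predict(60), true_range(60, 70))   # ~367 m vs ~236 m: wrong at an unseen angle
```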
@sirianrune1988 күн бұрын
Difference between imitating something and understanding it.
@vakho308 күн бұрын
The machine has no concept of "understanding" anything. It is just a human word that is used for it. You know nothing about this topic and brag about it like a professor. You are not even a beginner at this point. Go read and read more.
@chessshyrecat8 күн бұрын
It didn't learn to predict the behavior of physical objects. It only has data on pixel patterns that humans assigned words like 'water' to. You give machine learning much more credit than it deserves. Also, software that does physics simulation doesn't understand physics either. It is just calculating the math that someone who knows the physics put into the system. Because that's all computers are: really fancy calculators.
@tzav8 күн бұрын
@@Bluedrake42 It doesn't predict anything. It fools you into thinking it does, but it just shows you things that look like how objects should behave. They are not actually going to follow physical laws. You need to understand the limitations of the things you talk about, because right now you seem to underestimate the actual value of a physics simulation if you think you can just teach AI to do it better.
@O_Obsidious8 күн бұрын
I believe the next step for this technology is integrating it into the game engine, so it can take in real data and context from the game scene, where nothing is "smeary" or being guessed; instead, all that context and data is already supplied to the AI. It would fix things like only having limited context windows to generate with; it might fix visual artifacts and allow more realistic results. That way we DON'T need physics models built in, and it just uses all the pre-computed values from the real in-game scene.
@Lippeth8 күн бұрын
I wish more studios would focus less on realism (which never looks good anyway) and more on creative ways to make games more engaging. This machine learning bs misses the entire point of why people play video games. I get that you're impressed by some of the effects, but you are lying to yourself if you think it actually looks good, realistic, or think it will improve any aspect of video games, let alone be the future of it. The hubris.
@1Jman4203659 күн бұрын
I come from the '80s and '90s, when you were in a smoke-filled bowling alley arcade playing MK 1 and Street Fighter 2 side by side with your fellow opponents. If I had seen any game from today back then, I wouldn't have been able to play those old games, because the realism I'd seen would have broken my immersion in the game worlds I was playing in the '90s. Games have progressed so far it's crazy, and I see this photorealism being the next stage. It's going to be mind-blowing if you're a nerd gamer who loves to get into these worlds. It's a playground for us all; it's going to be amazing.
@yahdood60159 күн бұрын
AI video models are trained off real world footage, and the real world runs on perfect, flawless physics. AI doesn’t understand physics any more than a video camera does.
@hogandromgool20629 күн бұрын
Yes, I'm noticing a few people in the comments confusing functionally accurate with functionally applicable. AI "simulation" is non-deterministic, whereas a real simulation that yields usable scientific results is deterministic. There are a few people in the comments very angry over this misconception, including the video creator. Bless their wee souls; I've worked with AI for nearly 5 years now and I once had AI fever too. Until quite recently I actually wasn't sure whether these systems were to some degree sentient, but I dropped that, as it's been obvious to me for a while that they're not. I just wanted it to be true.
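A tiny illustration of the determinism distinction made here (toy code, not a real engine or sampler): a physics step with the same inputs always returns the same state, while a sampling-based generator returns something different on every draw unless the seed is pinned.

```python
import numpy as np

def physics_step(x, v, dt=0.01, g=9.81):
    v = v - g * dt
    return x + v * dt, v

print(physics_step(1.0, 0.0) == physics_step(1.0, 0.0))   # True, every time

rng = np.random.default_rng()
sample = lambda: rng.normal(loc=0.5, scale=0.1)            # stand-in for a generative draw
print(sample() == sample())                                # almost surely False
```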
@Tencreed9 күн бұрын
If a monkey can use a tool by repeating what he saw another monkey do, I'm gonna consider that the monkey can use a tool.
@Emulator_Crossing9 күн бұрын
In 10 years, when AI is much more stable and running in real time in games, this thing will be truly amazing, especially in VR. It will also be creepy to watch photorealistic dead people or relatives talking with you, as the AI will take their face and body from photos and videos.
@renanmonteirobarbosa81299 күн бұрын
You talk too much but give too little useful information. It is just repetitive mumbling about the img2img service you are trying to market.
@fluffybunny70899 күн бұрын
This is really impressive technology. From the video I get the feeling that AI will help with tedious tasks like creating meshes. Is there a world in which the game uses AI in real time to generate more than dialogue, or would having an AI generate visual data be too computationally intensive?
@TechnoMinarchist9 күн бұрын
We actually do know that these diffusion models create internal models of what they're generating as they generate it. For example, 2D AI art diffusion models like Stable Diffusion have a 3D representation of the final image they're making in their "space" that lets them work out how the image should look. LLMs do the same thing, constructing a general model of the real world in their "space". I would not be surprised if the video diffusion models are doing the same sort of thing with physics.
@IceMetalPunk9 күн бұрын
Almost certainly. Every interpretability study on Transformer-based models (I count diffusion models in that since they're usually Transformer-guided; and of course, there are diffusion Transformers as well) has shown they have internal world models. To think that a similar or identical architecture wouldn't, just because it has a different modality, wouldn't make much sense.
@Pidalin8 күн бұрын
"This next-gen technology will change games forever..." I am hearing this for at least 20 years. 😀
@TheAdeOfSpades9 күн бұрын
The more accessible this becomes, the more flooded the market will become, drowning out top titles, filmmakers, studios, and actors. Hollywood will perish, game studios will perish, AI will just kill the world we know, and I hate that. We will all live in our boxes and there will be no infrastructure.
@gumi_twylit26059 күн бұрын
oh you are just scared
@willfullyinformed9 күн бұрын
This isn't an "AI" problem, it's a systemic problem, an economics problem, Gov. hindsight, a "nothing is changing and everything is remaining archaic" problem. Human made art, concepts/ideas, infrastructure, will always exist, there will always be a want/passion for the hand made, and for organization. You are thinking in terms of the current archaic foundational systems where everyone is having to literally survive by making green paper. Capitalism, jobs, and other economic "isms" are never permanent hence all of human history. These systems have to change and evolve so that we can thrive - to not evolve these systems is to technologically suppress and over regulate our entire existence. AI will push us forward (with proper-transparent oversight) and allow everyone to partake with their own creativity. We all have incredible creativity in many facets, ideas/visions for movies, infrastructure, tech, games, etc., but can't bring them to fruition because money, or the lack of a skill/interest in programming, very niche/specific avenues, etc. AI will even the playing field, but obviously people who are stuck in their bird cage of ever-unchanging, and/or want to keep their monopolies on ideas/businesses/etc. don't want this technology as they will lose power/control over their markets. To keep this fair to everyone, transitional systems are a NECESSITY, UBI, survival stepping stones, so that people can stop living in fear of losing their JERBS and focus on more important life decisions.
@IceMetalPunk9 күн бұрын
"If we lower barriers to entry, then elitists won't take everyone's money anymore, and I think that would destroy the world." Please explain to me how that's *not* a correct interpretation of what you said.
@D3athlyV1sag39 күн бұрын
It'll still need people to man the guns. If anything, it'll give anyone what they've wanted in a game or so. To me, that's pretty awesome.
@The92Ghost8 күн бұрын
Very educational and well-prepared video, with snippets and cutscenes to show it. And I want to point out that this is just the basic level; when you go deeper, things look way more realistic, and it is just the beginning. As he said in the video, this technology will be widely used within the next couple of years, and it will help video games get made faster, since it will save thousands of hours of work for a lot of engineers and developers.
@MrTophes9 күн бұрын
As an artist, my issue with AI is that it's ripping off existing art. So "Minecraft not running on Minecraft" becomes a spotty grey area where we are ripping off entire engines instead of just art.
@Maegnas999 күн бұрын
Exactly. Every one of these 'super realistic' AI filters of people he showed was surely trained on images and videos scraped from the internet without consent.
@TeddyLeppard9 күн бұрын
Non-AI real human artists do the same thing these systems do: They synthesize sources of imagery based on their experience (dataset) to create something new. Therefore, no one is being "ripped off".
@EclecticSundries8 күн бұрын
That is not how AI works; it doesn't rip off existing art, no more so than you would by touring an art museum. AI art generators are trained on vast datasets containing millions of images and their descriptions. This training allows the AI to learn patterns, styles, and aesthetics from existing works. However, the AI does not memorize or store these images; instead, it learns general characteristics. When a user inputs a text prompt, the AI uses its training to generate a new image based on that description. It synthesizes elements learned during training to create something original rather than copying any specific artwork.
@MrTophes2 күн бұрын
@@EclecticSundries Where did those images it's trained on come from? AI did not pluck them from its ass. It cannot make ANYTHING original, only ape what is fed in. It's an algorithm which apes art styles, drawing styles, colour styles. Someone made them. ... Who made the styles? Artists. You may be someone who thinks that, but unfortunately many people feed these AI algorithms with other people's work without permission. That is fact. ArtStation tried it, to much kickback. Adobe can get away with it as it owns millions and millions of pieces of stock art and photography collated over the years, which someone made to be used open source, free. But it was still created by someone. It's not magic.
@EclecticSundries2 күн бұрын
@ AI doesn’t simply copy or ‘ape’ existing work; it learns patterns, structures, and techniques from data, similar to how humans learn by studying art or styles. It generates new outputs by combining and applying these learned concepts in novel ways, rather than reproducing exact replicas. As for the training data, it’s arguably used under the principles of fair use, as the purpose is transformative: it enables the creation of entirely new works rather than reproducing or competing directly with the originals. This is still a legal gray area, but it’s important to note that AI training aims to innovate, not plagiarize.
@ndifrekeokpo46139 күн бұрын
I believe in this because it will allow us to go beyond static objects to stuff that reacts to tons of inputs in real time: the effects, the environments, everything. We should move away from things that are baked into the engine and into the game (lighting, shadows, environments), toward stuff that is generated live and can get more realistic and more complex than if someone decided to create it manually.
@MrGTAmodsgerman5 күн бұрын
The key thing with AI is that it allows certain things to be done at the bigger-picture level. For example, photo and video restoration tools used to be just algorithms that calculated inconsistencies and such for a specific task, while AI can do the same thing but respect the input you give it. With SUPIR, the newest and best image restoration tool so far, you can give the input image a prompt saying what it is, using your background knowledge, and based on that it knows the back story it needs to restore the image. Otherwise you would just let the AI guess, or the algorithm would just do its general thing instead of understanding the concept of the image. With these photogrammetry-type scans it will be the same: it will understand a surface based on the Fresnel reflection that shows throughout the whole image set, not just a single image, to recreate the surface. Like when I look at a glossy surface, I can guess in my mind what that surface would look like if it were just matte, which is a huge problem for old photogrammetry and a lot of 3D scanners. Same with low-quality phone pictures, where it doesn't need that super tiny dot in the texture to tell whether the surface is round or flat. It understands the bigger picture, as we humans can.
@tzeffsmainchannel7 күн бұрын
Blue-Drake!! *HOW DID YOU BEND THE SPOON???* "There's no spoon."
@Siledas9 күн бұрын
Imagine what you could achieve with a game engine that gives object states directly to the AI post-processor, rather than have it guess based purely on context. I reckon we're around the corner from a wholly new paradigm... assuming we don't nuke ourselves out of existence in the meantime.
@niktodt13 күн бұрын
That Starfield facial animation is a great example of garbage in, garbage out. Even AI can't fix those cursed animations.
@ArmidianKnight9 күн бұрын
Now.. imagine Skyrim in VR with an AI post processor making everything look realistic. **chef's kiss**
@knightmaremedia77957 күн бұрын
The post process plugin is amazing actually. So when are you creating a mod for Skyrim? That thing would take off almost immediately.
@downey66669 күн бұрын
An amazing point that hammers home how big a deal it is that physics is being simulated by what is essentially a probability matrix. It reminds me of a lecture (which can be found on YT) by Richard Feynman where he teaches the fundamentals of how to think like a physicist. The two examples he gives show how you can learn a LOT about a cow by thinking of it as a sphere, or simply by estimation, where he discusses the thought process of estimating the number of piano tuners in London.
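For readers who haven't seen the estimation example mentioned here, a worked Fermi-style estimate looks like this. Every number below is an illustrative assumption, not a sourced statistic; the point is that rough factors multiply into an order-of-magnitude answer.

```python
population       = 9_000_000   # people in a large city (assumed)
people_per_home  = 2.5         # assumed household size
piano_fraction   = 1 / 20      # assumed share of households owning a piano
tunings_per_year = 1           # assumed tunings per piano per year
jobs_per_day     = 4           # assumed tunings one tuner completes per working day
working_days     = 250         # assumed working days per year

pianos = population / people_per_home * piano_fraction
demand = pianos * tunings_per_year                # tunings needed per year
supply_per_tuner = jobs_per_day * working_days    # tunings one tuner can do per year
print(round(demand / supply_per_tuner))           # ~180 tuners: an order-of-magnitude estimate
```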
@schizofritoКүн бұрын
This reminds me of the technique used in the Siren series. But with today's tech it could become the greatest path video games can take, because 3D models will never be realistic enough.
@VorticalGab09 күн бұрын
I'm sorry to break it to you, and I thought the exact same as you: a lot of AI devs also thought video generators have some form of physics understanding, but impressively they don't. They just predict what the next frame is going to look like. I can't remember the paper right now, but they already tested Sora: they trained it with a bunch of videos of a red ball moving left to right and back. They changed the color of the ball, expecting the AI to still make it move left to right, but instead it just stayed on the left, as if the AI wasn't focusing on the movement but on other stuff like color and shape. I sure hope we get a world simulation model one day.
@Grytzen8 күн бұрын
The most impressive video/technique I've seen of AI being applied to enhance realism is the GTA video from ISL and collaborators. They pass the G-buffers on as tensors to the AI model, where each pixel on the screen is given an ID. It is very stable and convincing. The point of that technique is that you wouldn't need to pre-render to a high standard at all before the frame is passed on to an AI model as a post-processor.
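A rough sketch of the input-side idea described above (shapes and channel choices are assumptions for illustration, not the paper's exact setup): instead of handing an enhancement network only the final frame, the engine's G-buffers and a per-pixel ID/semantic map are stacked into one spatially aligned tensor, so the model knows what each pixel actually is. `enhancement_net` is a stand-in, not a real library call.

```python
import numpy as np

H, W = 540, 960
frame     = np.random.rand(H, W, 3).astype(np.float32)    # rendered RGB frame
albedo    = np.random.rand(H, W, 3).astype(np.float32)    # G-buffer: base colour
normals   = np.random.rand(H, W, 3).astype(np.float32)    # G-buffer: surface normals
depth     = np.random.rand(H, W, 1).astype(np.float32)    # G-buffer: depth
object_id = np.random.randint(0, 64, (H, W, 1)).astype(np.float32) / 63.0  # normalised per-pixel ID map

model_input = np.concatenate([frame, albedo, normals, depth, object_id], axis=-1)
print(model_input.shape)   # (540, 960, 11): one aligned conditioning tensor per frame
# enhanced = enhancement_net(model_input)   # hypothetical image-to-image network
```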