The ONLY texture a game NEEDS [UE4, valid for UE5]

73,985 views

Visual Tech Art

1 day ago

Comments: 285
@D3FA1T1 2 years ago
"the normal map is very important! lets scrap it" proper game development right there
@VisualTechArt 2 years ago
I'm glad that someone got it ahahahaa
@reliquary1267 2 years ago
It's GENIUS actually, unless you just don't get it
@Johndiasparra 2 years ago
Proper technical artist**
@gehtsiegarnixan 1 year ago
Then proceeds to recreate it using bilinear interpolation with 4 extra texture samples, making the improved material way more expensive than the original. Still a very interesting process
@falkreon 2 years ago
Your artifacts aren't coming from texture compression. They're coming from sampling immediate neighbors. You can sample more neighbors or just slide your points out so that the hardware texture filtering samples more points for you. And you're doing a heck of a lot of work to arrive at the built-in normalize node. Just normalize the vector after you scale it.
@VisualTechArt 2 years ago
The neighbours are already bilinearly interpolated, and since the original texture was a jpg recompressed to BC5, you can see why I say that the artefacts are due to that :) Normalize does exactly the same thing I did, internally. If I was using a normalize node, that would have meant doing one more dot product for the condition check; this way I'm recycling it in my normalization!
@falkreon 2 years ago
@@VisualTechArt What are you talking about? Normalize usually decomposes to a length (via pythag on hardware), a divide, and a saturate; your divide-by-zero check happens for free. And the box blur artifact is not from the compression, it's from your unintended box blur. Placing your extra samples 1.5px away instead of 1px will help but not eliminate it, because the most accurate answer is two Gaussian kernels.
@VisualTechArt 2 years ago
@@falkreon You can go on shaderplayground.io and check the ISA breakdown yourself: the instructions generated by a normalize and by what I did are identical! :D I'll check the thing you're saying about the kernel and see if I missed something, thanks.
@falkreon 2 years ago
@@VisualTechArt No, I get what you're saying: you save three multiplies, which are cheap, while still doing the square root, divide, and saturate, which are the expensive parts, and throwing an IF node on top of that. This is going to become optimized SPIR-V, not GLSL. Use the normalize node so that it's easier to maintain.
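A minimal HLSL sketch of the two variants being debated in this thread (a hypothetical reconstruction, not the video's actual node output):

    // Built-in: typically compiles to dot, rsqrt, multiply.
    float3 NormalizeBuiltin(float3 v)
    {
        return normalize(v);
    }

    // Hand-rolled: the squared length feeds both the zero check and the scale,
    // so the divide-by-zero guard reuses the dot product instead of adding one.
    float3 NormalizeRecycled(float3 v)
    {
        float lenSq = dot(v, v);
        return lenSq > 0.0 ? v * rsqrt(lenSq) : float3(0.0, 0.0, 1.0);
    }

On most targets both compile to near-identical ISA, which is the point being argued above.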
@falkreon 2 years ago
As an aside here, you're right on target with the memory and streaming bandwidth bottleneck. For Unreal 5 specifically, instead of using a shader to build displacement and normal maps, throw away the normal and displacement maps entirely and use real geometry with Nanite. Performance increases dramatically, not with the switch to Nanite, but with the removal of these "extra" maps, which are trying to approximate what we can finally do for real.
@ShrikeGFX 2 years ago
The thing is that the normal creation works very inconsistently well, and you are basically reverting back to CrazyBump times. What looks good on bark might look really bad on brick. This makes sense if you don't have normal maps available; otherwise all these operations will just cost more performance for less memory
@VisualTechArt 2 years ago
I don't see the issue in tuning up the Normal in UE instead of doing it in Maya or Blender or Designer or ZBrush, etc... Unless you're baking from a highpoly, you're still running a similar filter to make it :) And you can still bake a heightmap instead of a normal and run the filter in the shader anyway... And yes, this moves the "price" from "memory reads" to "vector ALU", which, as I point out in the video, is where the firepower is in the current gen of GPUs :)
@ShrikeGFX 2 years ago
@@VisualTechArt Yes, but I see very few people still doing normals from height. I guess from a baked heightmap it's good, but generally people bake anyway or use ready-made textures with baked or scanned normals, and Megascans only have a low-contrast displacement map in mid greys. But I can see the convenience of just using one texture
@mattiabruni5463 2 years ago
@@VisualTechArt Another problem with your approach is that your generated normal map is losing the highest-frequency details and is practically equivalent to a normal map of half the resolution. So you could have your 4K, 1-channel displacement or a 2K, 2-channel normal map and get the same information (if you don't use actual displacement, that is, so it depends). Interesting method nonetheless, but not really one-size-fits-all
@VisualTechArt 2 years ago
I didn't care about the size of the texture, it wasn't the point of the video! But yes, of course using an oversized texture is bad :)
@gamertech4589 1 year ago
@@VisualTechArt Does it cost FPS or CPU?
@fernandodiaz5867 2 years ago
Wow! I was only doing this with two textures. In the first texture I put diffuse (RGB) and roughness (alpha). In the second texture I use a normal map compressed into two channels (R, G) and derive the normal Z from those two vectors, and I put ambient occlusion in the blue channel and displacement in the alpha. Incredible, with one texture!! Thanks a lot!
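A sketch of that two-texture packing in HLSL (layout and names hypothetical, following the comment above):

    Texture2D BaseRough;   // RGB = diffuse, A = roughness
    Texture2D NormAoDisp;  // RG = normal XY, B = ambient occlusion, A = displacement
    SamplerState Samp;

    void SamplePacked(float2 uv, out float3 baseColor, out float roughness,
                      out float3 normal, out float ao, out float displacement)
    {
        float4 t0 = BaseRough.Sample(Samp, uv);
        float4 t1 = NormAoDisp.Sample(Samp, uv);
        baseColor = t0.rgb;
        roughness = t0.a;
        float2 nxy = t1.rg * 2.0 - 1.0;                 // unpack from [0,1] to [-1,1]
        float nz = sqrt(saturate(1.0 - dot(nxy, nxy))); // derive Z, the same idea as UE's DeriveNormalZ
        normal = float3(nxy, nz);
        ao = t1.b;
        displacement = t1.a;
    }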
@jabadahut50 2 years ago
Tbf your method allows more artistic flexibility with traditional tools and includes the AO map, which is incredibly useful for reducing performance impact since you don't need SSAO. Both are very cool methods of reducing texture bloat. One could combine the two methods: use his method to get roughness and normals from the displacement, and then with a second texture you could have the ambient occlusion, subsurface scattering/mesh thickness map, metalness map, and opacity map.
2 years ago
A normal map is often imported as a Normal map in traditional game engines. I tried what you proposed above about 2 years ago and got weird-looking AO and smoothness back then. Am I doing something wrong, or can you explain a bit more about how you import and use these textures?
@kenalpha3 2 years ago
Did your performance increase or decrease by switching to 2-texture Materials (more instructions)? More textures = more RAM use, correct? But fewer textures with more instructions = higher stress on performance (lower performance if many Materials like this run at once)?
@N00bB1scu1tGaming 2 years ago
@kenalpha3 Sampling 2 packed textures is far less compute-heavy than whatever you want to call this solution. Most GPUs also have built-in architectural optimization to make these additional samples negligible. Just channel pack and atlas; you get the correct results with far less cancer.
@kenalpha3 2 years ago
@@N00bB1scu1tGaming I mean, what gives better performance: 2 textures packed (even with alpha = doubles memory use in UE4?) OR 4 textures (not tightly packed, not all RGBA channels used; but someone said UE4 loads all RGBA channels into memory anyway if just 1 channel is connected?). My texture set is 4 to 5, some are loose. My code is around 490 instructions (for a character skin with advanced effects and Material/texture changes).
@TorQueMoD 2 years ago
Wow, you clearly know a lot about shader programming. Well done. I've never seen a tutorial like this :) Liked and Subbed.
@VisualTechArt 2 years ago
Much appreciated :)
@SumitDasSD 2 years ago
A cool approach to experiment with. Though this technique is inefficient and also can't represent a lot of surface types. You are basically following the old Photoshop process of creating textures for materials from the Albedo. That approach is nice but will never produce better results. Also, because you are calculating them in the shader instead of using textures, it can be calculation-heavy; I feel the trade-off is not worth it. Also, for the Normal, you can use a more detailed height map and then process it with the mesh normals to get a more accurate detail normal.
@DARK_AMBIGUOUS 2 years ago
I love watching videos about optimization. I make games for phones and try to have the best graphics so I need this kind of stuff
@toapyandfriends 2 years ago
😀'amen!
@C_Corpze 2 years ago
I use a texture format called "ARM", which is 3 textures inside one: ambient occlusion = red channel, roughness = green channel, metalness (or a different mask if the material is not metallic) = blue channel. This reduces the number of textures from 3 to 1 and simply uses the RGB values. I use ARM textures almost everywhere because they save a lot of memory and still look really good, and because their unused channels (such as metallic) can hold various types of masks, they can be used for many things. Also, you can reduce the number of texture samplers by setting the sampler nodes to "Shared: Wrap" mode; if you then sample the same texture multiple times it's seen as 1 sample.
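A sketch of reading an ARM texture with one shared sampler in HLSL (hypothetical names; in a raw shader, "shared" simply means one SamplerState reused across textures):

    Texture2D BaseColorTex;
    Texture2D ArmTex;            // R = AO, G = roughness, B = metalness (or a spare mask)
    SamplerState SharedSamp;     // one sampler reused by every texture here

    void SampleMaterial(float2 uv, out float3 baseColor,
                        out float ao, out float roughness, out float metallic)
    {
        baseColor = BaseColorTex.Sample(SharedSamp, uv).rgb;  // same sampler, second texture
        float3 arm = ArmTex.Sample(SharedSamp, uv).rgb;
        ao = arm.r;
        roughness = arm.g;
        metallic = arm.b;
    }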
@karambiatos 1 year ago
wow good job you discovered.... the unreal engine basic documentation....
@jakubklima5193 11 months ago
​@@karambiatos Damn, bruh, do you have self-esteem issues? What he said might actually help someone; there's no need to be toxic and demotivating.
@JohnJohnsonSonOfJohn 3 months ago
It’s seen as one sample but multiple texture lookups. I think you should look at the texture sample stat as a way to gauge memory usage but use the texture lookups stat to see how many samples there are. The stats window could be a little clearer in this regard (imo)
@WoodysAR 2 years ago
Since it is a grayscale texture (and normal maps only need two channels) I'd create a proper normal map from the texture, then put the grayscale texture in the third channel and use an alpha for roughness and metalness. That would give the dimensional normal effect without compromise and still have a single texture. Even more efficient would be to use a single grayscale image for the texture and then a constant variable for roughness and metalness.
@EXpMiNi 2 years ago
Virtual textures/lightmaps/shadows are there so you don't have the VRAM issue; doing this is transferring one issue (which doesn't really exist that much anymore) to another (number of instructions). That being said, I think it's a very interesting approach! I would completely consider this kind of solution if I wanted a very, very light project with tons of reused assets using the same texture set everywhere :) !
@VisualTechArt 2 years ago
Instruction count is even less of an issue at this point in time, I'd argue! But yes, fair point :)
@albarnie1168 2 years ago
Adding an alpha channel doubles the amount of space and memory, because Unreal uses twice the bits per texel for textures with alpha. Also, in photogrammetry the normal and height are not derived directly from colour, but from the 3D point cloud generated from hundreds of images. Regardless, this is a super fun exercise. The best idea for textures like this is to do two samples: base color and normal. Because the blue channel is not needed in the normal, you can put AO, height or roughness in there. A common technique is to put height into the normal, and then have the AoRM texture. Still 3 texture samples, but not so much more space. Alternatively, if you care less about samples, you could do base color, then have height, AO and roughness in a second texture. Same amount of memory as your technique! The stepping issue is due to precision and sample distance, not compression, I think. Compression would show larger blocks with smooth details in them.
@kenalpha3 2 years ago
Can you post the code on BlueprintUE? More textures = more RAM use, correct? But fewer textures with more instructions = higher stress on performance (lower performance if many Materials like this run at once)? And do you pack into the alpha channel or not?
@YourSandbox 2 years ago
Big fan of your channel, sir. I was thinking of a way to have simplified props and character shaders for MetaHuman and Megascans; found a thing to keep in mind. Brilliant
@VisualTechArt 2 years ago
Thanks! :D
@LastIberianLynx_GameDev 2 months ago
Very creative approach. And innovative. I think this can bring benefits in many cases.
@rossbayliss4151 2 years ago
Can anyone tell me what those light-blue BaseUV nodes are at 3:23? My materials always look like spaghetti and they look perfect for me.
@VisualTechArt 2 years ago
I suppose you're referring to Named Reroute Nodes? They're quite handy :D
@StBlueFire 2 years ago
@@VisualTechArt Thank you so much! This has been one of the things I've hated most about materials but now I have a solution.
@schrottiyhd6776 2 years ago
"Speaking of which, would you like to explore more the world of photogrammetry" while moving the camera around the object with the chunky mouse movement 👍 I like it
@saisnice 2 years ago
Very interesting! Could be really useful for stylized/mobile games. Thank you for the video!
@TroublingMink59 2 years ago
Technically, you could just bake in displacement (which normal data is usually derived from) in an external 3D program, and simply put roughness in the alpha channel of the diffuse. To make things extra crazy, you could retain a metal map encoded into the same map as the roughness, at the cost of half-precision roughness, by making metal roughness features use the top 50% of the value range and dielectric roughness the lower 50%. Then just feed this new encoded roughmetalness into a multiply node set to 2, and then a pair of IF nodes. For the roughness output IF node, just have it pass through for values less than 1, and have a roughness-minus-1 version come through for values greater than 1. For the metal IF node, I would just set a 0 float for values less than 1 and a 1 float for values greater than 1. The biggest caveat of this method is that non-binary metal textures would have to be crushed into binary ones. Another caveat is that every mesh using a different scale for the texture would have to be tessellated individually. Another thing worth mentioning is that you would not always need one vertex per pixel on your mesh to retain normal data on the mesh faces if you use an angle-based dissolve on the mesh. But you might need to, and you definitely would need a lot of vertices. This method leans heavily on Nanite to do the heavy lifting, but so does using WPO inputs. This also lets you skip the unreliability and low quality of WPO and derived roughness maps and ignore the vertex shader. It is also an overall simpler shader graph.
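A sketch of the decode step that comment describes, in HLSL (hypothetical; assumes metal areas were authored into the upper half of the channel's range):

    void DecodeRoughMetal(float encoded, out float roughness, out float metallic)
    {
        float x = encoded * 2.0;             // the "multiply node set to 2"
        metallic  = (x > 1.0) ? 1.0 : 0.0;   // binary metal mask (the metal IF node)
        roughness = (x > 1.0) ? x - 1.0 : x; // half-precision roughness either way (the roughness IF node)
    }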
@VisualTechArt 2 years ago
I'm not sure if I got everything you're saying but yes, if you want to really go wild there's much more you can do!
@TriVoxel 2 years ago
I could see a simpler and more performance-friendly version of this being really great for adding small, low resolution models for finer detail, but for more prominent models such as characters, buildings, major set pieces, etc. it would be better to have all the real textures. I think it is more practical to combine things like roughness, height, metallic, into a second texture, and use a separate normal and color map. This gives you three textures as opposed to 5, and is typically much better looking than pulling from 1 texture as you get all the benefits of PBR graphics, with the smaller memory footprint of less textures, but less shader math than your method. While I agree that many modern developers tend to waste performance on poorly compressed textures and using many B/W maps to achieve what could be a single texture, it is important to remember that these "extra" maps exist for a reason, and that reason is to simulate the tiny changes in a surface based on real materials, rather than the older method of just approximating everything with fake or baked-in texture trickery. These tiny surface details cannot truly be achieved for most assets with a technique like this.
@AllExistence 2 years ago
What you built is a very specific shader for a very specific case. You sacrificed alpha and roughness because you didn't need them in this case. But it makes the shader useless for any other texture that needs them. For example, roughness may not match the albedo.
@daveyhintzen53 2 years ago
It's worth noting that in Unreal, using the alpha channel doubles the memory usage of the texture compared to one without alpha. RGB uses DXT1 compression, which is 64 bits per block, while adding an alpha uses 128 bits per block (64 bits for the colour and another 64 for the alpha values). So purely from a memory point of view, a 2K RGBA texture is as expensive as two 2K RGB textures. Spreading it over 2 textures has the benefit of possibly giving you 2 additional channels to play with, though you have fewer bits per channel to work with. Another useful thing to note is that the bit count per channel is different: simplified, you have 16 bits per RGB value, split as 5:6:5 bits, meaning you get better quality out of the green channel.
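To put numbers on that (a worked example, assuming a 2048x2048 texture at mip 0): BC1/DXT1 stores each 4x4 block in 8 bytes, so (2048/4)^2 blocks x 8 bytes = 2 MB; adding alpha (BC3/DXT5) spends another 8 bytes per block on the alpha data, doubling it to 4 MB, the same as two BC1 textures.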
@kenalpha3 2 years ago
Can you post example optimized code on BlueprintUE that only uses RGB, but multiple textures? (So I can see what you pack vs what you calculate.) Ty
@wpwscience4027 1 year ago
Another tip for normal maps: I had good success multiplying a blur constant into my image kernel's texel sampling, so I could play with how much the angle matters. Anything below 4 doesn't seem to overlap much and allows you to fuzz your angles, since sometimes you don't want hyper detail but other times you do. I also added a saturate before the DeriveNormalZ because, depending on what that multiplier is, you can also push in some dead values there by accident. I didn't notice any appreciable performance change, BUT this does speed up my asset creation pipeline, because with a lot of things I can go over to a single CH packed texture instead of a CR and NH, and that means less to do and less to keep track of.
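A sketch of that adjustable kernel in HLSL (hypothetical names; blur scales how far apart the four taps sit, in texels):

    Texture2D HeightTex;
    SamplerState Samp;

    float3 HeightToNormal(float2 uv, float2 texelSize, float blur, float strength)
    {
        float2 d = texelSize * blur;    // blur > 1 widens the kernel and softens angles
        float hL = HeightTex.Sample(Samp, uv - float2(d.x, 0)).r;
        float hR = HeightTex.Sample(Samp, uv + float2(d.x, 0)).r;
        float hD = HeightTex.Sample(Samp, uv - float2(0, d.y)).r;
        float hU = HeightTex.Sample(Samp, uv + float2(0, d.y)).r;
        // Slopes along X/Y; sign conventions depend on your green-channel convention.
        float2 slope = float2(hL - hR, hD - hU) * strength;
        return normalize(float3(slope, 1.0));   // normalize rather than saturating XY, per the reply below
    }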
@VisualTechArt 1 year ago
You mean like doing the Sobel operator for the Normal? That's a solution too! Actually, be careful: if you saturate before the DeriveNormalZ you wipe out the -1 to 0 values, which results in a wrong normal :) Spoiler: I recently had a look at performance with RenderDoc and found out that my material has the same performance as one that uses 2 "classic" RGB textures :) So there's a gain if you're able to use this approach to replace 3 or more, in theory!
@wpwscience4027 1 year ago
@@VisualTechArt Your spoiler tracks well with my experience as my original channel packed mat fit your description. I don't think I introduced any saturation problems as my change was way upstream in the UVs of the normals.
@wpwscience4027 1 year ago
@@VisualTechArt I noticed today Unreal comes packed with a normal-from-heightmap function. You might also check a comparison in perf and quality of pulling the normal out of the RGB vs out of the heightmap, since you have both.
@Catequil 2 years ago
The issue with moving the height map into the alpha channel of the albedo map is that the alpha channel is generally uncompressed, meaning it takes up as much memory as the RGB channels combined. You're better off having two RGB textures than one RGBA texture.
@OverJumpRally 1 year ago
Exactly my thought. Also, if you consider that you can have the Normal map at half the size and the Roughness/Metalness one at 1/4, you end up with way more space than just having those elements on separate textures. But it's true that it could be useful to create a Normal map from Albedo if you have a scanned object, that would be a great scenario.
@N00bB1scu1tGaming 2 years ago
This process saves memory, yes, but trades memory for compute. You can get better results by packing and atlasing to shrink your texture inputs into 2 textures anyhow and more objects at the same time. GPUs continue to gain routine op improvements to make PBR math faster and faster, so skipping these inputs can actually be slower on top of being more compute heavy. Neat experiment, and I do agree there was a loss somewhere for hefty ops in favor of brute force, but this method is not the way.
@asdfghjklmnop850 2 years ago
Our TAs have a Material Template that utilizes just the base color (for optimization purposes, of course) and automatically converts it to height, roughness and normal. But we Environment Artists mostly use their other material template, which uses Base Color + Height, AORM (AO, Roughness, Metalness), and a Normal Map. It just looks better IMO and is easier to control, although our TA recommends atlases for environments to save memory. Btw, the AORM can be set to a lower size to also save memory; it doesn't affect the final output that much.
@VisualTechArt 2 years ago
Interesting! I'd say that if I were able to reliably convert the usual multi-map pipeline to a one-texture one, I'd try to make an automation that converts all the materials to one texture at build time, so artists could work as usual but the game would be converted in the build :) I should actually measure performance first to see if it would be an actual gain, though
@Mireneye 2 years ago
@@VisualTechArt A tool that would atlas textures already applied in a material would be lit!
@sciverzero8197 2 years ago
I notice significant drop in detail and shading clarity on the derived version compared to the normal mapped version. Moreover about not being generalizable... you're absolutely right. MOST game assets won't be able to derive their normal maps because most assets won't be using maps derived from one source. Most assets will be using maps derived directly from a model rather than from texture, and many of these textures will be greatly different in resolution from each other. Normal maps in particular are often 2 to 4 times the resolution of diffuse or height maps, and roughness maps tend to have absolutely catastrophic results when not mapped exactly to the right texel brightness. I've had no end of trouble trying to paint my own roughness maps by eye or using color sampling from other maps, because it just doesn't map in a simple linear way that can be interpreted. The reason these maps are usually baked into stored textures is actually because deriving them at runtime is... a bad idea. You lose a lot of quality (as I noted in the comparison here, though you seem not to have) if done the simple way, and you lose a lot of processing power if you do it the correct way that image processors that actually produce _good_ textures do it. (most texture processors do not produce good textures, which is why some texture bundles are a more expensive than others... more effort, better technique, and not all derived from diffuse... though some are just scams too.) If you really want to cut down on your memory footprint.... you can just not use lighting. Bake your lighting data into the vertex color channel of your objects, derive a general edge modulus value from your normal map to use on the baked lighting, and apply a diffuse texture. If you want height data... sample your diffuse map at about... 1/4 ~1/8 scale and normalize the values, or just bake the correct shape of your models.... into the model. Vertices and polygons are generally less memory intensive than textures, so... unfortunately, having several million polygons in your scene is more efficient on memory than having highly detailed textures OR derived texture information. And usually... the extra polygons aren't needed, because you can get rid of hundreds of thousands, quite readily, for one normal map or parallax map that is significantly lower resolution and can be used across multiple objects, than the footprint of the high resolution mesh. Most mesh details aren't necessary at all, because you won't see their silhouette or deformation ever, and this is the real problem with modern development. No one... is optimizing, because they've been taught that they don't need to anymore. A little work in the front-loaded asset development workflow goes a hell of a long way toward making a game run better in all ways. Taking careful thought to how your assets will perform when making them can avoid the headache of trying to get more performance out of the game you've already put together.
@reliquary1267 2 years ago
The fact that he's doing this with purely math and advanced shader programming knowledge is genius enough for me, regardless of what anyone's opinion might be of the method
@lennytheburger 2 years ago
There is always a tradeoff between compute time and storage (in this case memory); if memory is a concern, calculating many of the maps for a texture is a good solution. Good video
@JasonMitchellofcompsci 2 years ago
I'm seeing an application for AI here, and not even a heavy AI. You've basically correlated aspects of a color map to other maps; those correlations can be developed into models automatically. As you've shown, it doesn't take that large a model to perform the correlation even when done by hand. An AI model could likely do it while being even smaller, and likely consider pixels beyond immediate neighbors to drive useful results. The only area I'm not very familiar with is generating those high-frequency features. My intuition is that it's possible as well, based on what I've seen other people do, but I wouldn't know how specifically to implement it myself. But with that you could compress textures pretty painlessly.
@VisualTechArt 2 years ago
It has been a while now that I had this idea of overfitting an AI to a specific texture and use its graph as shader, actually! Never took the time to test it out though (also because training AIs is so time consuming and boring)
@JasonMitchellofcompsci 2 years ago
@@VisualTechArt I don't know how much you would have to overfit, thus a smaller model. It would not take much processing to fit that: literal seconds of training time. Not all AI has to be this heavy thing; language models, sure, but AI concepts apply to models 20 or 50 weights large as well as 20 million. Considering your current method is practically doing 1:1 with extra steps, a small model is all it takes. Even on CPU that trains in a practical instant.
@VisualTechArt 2 years ago
I think we would need to overfit to keep it really small, like creating a very small network with a bunch of neurons/convolutions that can only do that operation on that specific texture set. I know that for such a small task training doesn't take long, the problem is that training doesn't give you the result you want straight away :D
@arsenal4444 2 years ago
A good way to test this for real-world cases would be to turn it into an asset package or addon that crawls through and modifies all files in a project. Then any project made in the usual PBR style could have a second copy made for testing, with this modification applied to all materials project-wide. Then it's just a matter of running some tests to see the results of running things this way, especially on a realistic-graphics project as opposed to a stylized one. I'd be really interested in the frame times and VRAM usage in such a comparison of a project running on the usual method and on this one.
@VisualTechArt 2 years ago
That would be a proper tech art project! I might give it a chance in the future :D
@arsenal4444 2 years ago
@@VisualTechArt I think it would be a win-win for your channel as well as viewers. It's a bit funny to think about how on the hardware side there's extreme obsession with CPU and GPU spec comparisons, whereas running the same type of test on the software side, which is what this would be a demonstration of, is much rarer. They're both equally part of the end result, but it seems only one gets most of the analysis.
@VisualTechArt 2 years ago
You're right :D And I'd argue that software at the moment is WAY more important than hardware, I think. There are tons of HW tests because they're easy and everybody is able to put a GPU in and run some apps that someone else made
@N00bB1scu1tGaming 2 years ago
To save you the effort: this method is compute-heavy and skips a lot of micro-architecture optimizations. You are trading VRAM for compute. While there is definitely a need and push for more efficient ways to handle PBR input, you will get better results packing your PBR maps into 2 packed textures and letting the GPU run the proper math ops with its architecture optimizations.
@arsenal4444 2 years ago
@@N00bB1scu1tGaming I think in most cases you'd be right. But I don't know if 'most cases' would be closer to 51% or 99.9%, so if it were possible to approximately test this project-wide without too much hassle, that would be ideal (if that's not possible then I guess this was all just an interesting 'what-if?'). The point being: if a project, in its total volume of assets, is weighted too unevenly towards either compute or VRAM use, as opposed to being optimally balanced, testing it in a project-wide optimizer may be a working solution. If it does come out more performant, from there it would just be a matter of testing to clean up any errors after modifying everything.
@SatikCZE 2 years ago
Would love to see some benchmark to see how it performs
@VisualTechArt 2 years ago
Me too ahahahah, I think I'll do a stress test in the future :)
@SatikCZE 2 years ago
@@VisualTechArt would be great :)
@AFE-GmdG 2 years ago
Very interesting technique. I wonder how much calculation per pixel is too heavy. I guess it's a race between memory usage and calculation time and depends on the current situation. It may be better to simply reduce a 4K texture to 1K or 2K to reduce the memory footprint and make use of an ORM texture.
@VisualTechArt 2 years ago
Whatever texture size you use, using fewer textures (of the same size) is always a gain in memory!
@chillfactory2149 2 years ago
Not a very relevant question, but what is the source of your pfp?
@Nerthexx 1 year ago
The only textures you need are albedo and some kind of spatial information, depth or height, if you know whether it's metal or non-metal by default. Think of how real-life objects work. You can pack this information into one RGBA texture; other maps can be derived at runtime. This is the basic long-running argument of "processing" vs "precalculation".
@82FGDT 2 years ago
embedding textures in the alpha channel is my life for the past year. In the Source Engine (the engine for half life 2, portal, CSGO, etc if anyone didn't know already) you have to embed roughness into the normal map's alpha or else you can't have both at the same time
@roadtoenviromentartist 1 year ago
One question, Master: at minute 5:59, calculating the slope (derivative in X and Y of the neighbour pixels)... would it be fast at this step to use DDX and DDY? I think you could avoid repacking the normals and normalizing them. Thanks. :)
@VisualTechArt 1 year ago
Not sure I understood your message honestly 😅
@mwjvideos 1 year ago
I am not very good at the shading process, but after all these years of development I understood one thing: whatever you do, do not ever mess with the normal map texture.
@Itsme-wt2gu 1 year ago
Can you add parallax occlusion instead of displacement?
@VisualTechArt 1 year ago
Yes
@mb.3d671 2 years ago
Really good explanation, thank you
@SenEmChannel 2 years ago
I'm developing a VR game, and roughness, specular and ambient occlusion don't stand out on solid materials like rock or brick, so I dumped them all. I only use diffuse and normal and it looks good, combined with custom data, optimized texture sizes, optimized UVs, optimized vertex counts, optimized LODs, optimized mip maps, optimized culling, etc.
@cocinando3d 1 year ago
Your videos are amazing, pure useful content
@Sweenus987 2 years ago
For the normal calculation, could you do a single small pass at blurring it to help with the steps?
@VisualTechArt 2 years ago
Yes, one better way to calculate it would be a 3x3 Sobel filter, which is actually the derivative after a Gaussian blur :) It would need 8 samples though (it also considers the neighbours on the diagonals)
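A sketch of that 3x3 Sobel variant in HLSL (hypothetical names, same height texture idea as the sketch further up; the centre texel has zero weight in both kernels):

    Texture2D HeightTex;
    SamplerState Samp;

    float3 SobelNormal(float2 uv, float2 texel, float strength)
    {
        float tl = HeightTex.Sample(Samp, uv + texel * float2(-1, -1)).r;
        float  l = HeightTex.Sample(Samp, uv + texel * float2(-1,  0)).r;
        float bl = HeightTex.Sample(Samp, uv + texel * float2(-1,  1)).r;
        float  t = HeightTex.Sample(Samp, uv + texel * float2( 0, -1)).r;
        float  b = HeightTex.Sample(Samp, uv + texel * float2( 0,  1)).r;
        float tr = HeightTex.Sample(Samp, uv + texel * float2( 1, -1)).r;
        float  r = HeightTex.Sample(Samp, uv + texel * float2( 1,  0)).r;
        float br = HeightTex.Sample(Samp, uv + texel * float2( 1,  1)).r;
        // Sobel kernels: diagonal taps weighted 1, edge-adjacent taps weighted 2.
        float gx = (tl + 2.0 * l + bl) - (tr + 2.0 * r + br);
        float gy = (tl + 2.0 * t + tr) - (bl + 2.0 * b + br);
        return normalize(float3(float2(gx, gy) * strength, 1.0));
    }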
@nicolashernanhoyosrodrigue762 1 year ago
Thanks a lot for your wisdom tutorial!
@Mehrdad995 2 years ago
Should be titled "How to unnecessarily hyper complicate a shading process to achieve a similar result in an unoptimized way" or "How to have 10x shading complexity and bottleneck instead of 1x memory bottleneck" Brilliant 👍
@jackstack2136 2 years ago
Agreed, I struggle to find any value from this video other than "I know what I'm doing so I tied my hands behind my back and did it some more"
@Mehrdad995 2 years ago
@@jackstack2136 No doubt you are well-educated on shading; I just wanted to share my point of view in a humorous way. Sorry if it sounded offensive, I absolutely didn't mean it to. 🙏
@tech-bore8839 1 year ago
@@Mehrdad995 To be fair the bottlenecking is an important caveat to mention, especially if people are going to use this technique. I think people would like to know these things before going through all the hassle of setting it up.
@Mehrdad995 1 year ago
@@tech-bore8839 Exactly. Lower-level optimizations can be postponed to the final steps of development, but things like this are the kind of approach where changing them later usually requires re-doing things from scratch. Good point.
@Derjyn 1 year ago
There are many scenarios where this approach would be (and HAS been) useful. Reducing memory but putting more strain on ALU might be necessary depending on what platform(s) you are targeting. Industry experience would have likely had you posting a very different type of comment, but alas... here we are. I used a similar technique on more than one occasion for gamejams that had file size constraints. There, that's another good reason to be aware of this technique. I worked on a major project once, and when tasked with some optimization passes, we found several key areas where we could obtain noticeable gains. One of them was reducing VRAM usage by utilizing a similar technique for nearly 60% of the materials being utilized. A batch script and a couple hours of elbow grease later, and our VRAM budget had some more breathing room. Many hobbyists that lack a lot of experience tend to run with the flock of sheep, and when they see something different, can't retrieve the creative engineering mind juice to find a good use for an approach/solution. Don't do that. That holds you back. Store these useful things away, because odds are there will be a point in time when it proves to be useful.
@Mireneye 4 months ago
Ever since you posted this video I've been thinking about these color pickers, sort of thinking about what could be a way to automate that part. Like finding the brightest and darkest pixel (just for precision purposes), then creating something like a histogram and sampling, say, the two most common colors. Hah... maybe it's easier to just blur the texture colors into vertex color and bake it into the mesh. But then you'll pay the price of all the specific cool stuff you can do with vertex color :/ Also, a year on and the 4090 is still pretty hot ^-^
@VisualTechArt 4 months ago
I don't have any specific solution for you; a few ideas, but nothing proven :) It would definitely be useful to make the shader automatic and not require any further input per mesh
@TomSwogger 2 years ago
Is this available for download somewhere? I don't think I'm savvy enough to create this, but I'd like to play around with it!
@VisualTechArt 2 years ago
I wasn't planning to upload it as it's not a versatile material (didn't take the time to add parameters etc..), but I may do that in the future!
@Potatinized 2 years ago
The method is inaccurate for all the other textures, including the heightmap, which is supposed to have a higher bit depth than the normal textures we're using for color maps. But will this solve the infamous VRAM limitation issues? Because a small inaccuracy that enables more stuff we can use in a scene is subjectively better.
@plasid2 2 years ago
Finally, the next video from my master
@VisualTechArt 2 years ago
@Starlingstudio 2 years ago
This is amazing, thank you
@penkimat 2 years ago
Great quality video. Well done.
@VisualTechArt 2 years ago
Thanks!
@lorenzomanini1017 2 years ago
Nice approach! I also experimented with a different way of dealing with masks for color overlaying, in order to pack a max of 44 different masks into a single texture, instead of the classic 4 you can get by using only the RGBA channels. Maybe we can have a chitchat on that if you're interested in making a video about it to share the knowledge
@VisualTechArt 2 years ago
Definitely interested in that! Were you manually assigning bits for that? You can join my Discord Channel and we can talk there if you fancy :)
@kenalpha3 2 years ago
I'm also interested in learning optimized masks. Can you post the code on BlueprintUE in the meantime (a 4x color mask is OK)?
@multikillgames 2 years ago
The one on the left looks more real, maybe it's the lighting, or something. The one on the right looks kind of off? Oh found out why. I wouldn't do this but I like the idea. Plus you can highly optimize textures already anyways. But it could be a good decision.
@Goochen 2 years ago
Out of interest, how would you go about optimising your textures instead?
@poly_elina 2 years ago
@@Goochen Channel packing is one way to reduce the amount of texture files: "Channel packing means using different grayscale images in each of a texture's image channels... Red, Green, Blue, and optionally Alpha. These channels are usually used to represent traditional RGB color data, plus Alpha transparency. However each channel is really just a grayscale image, so different types of image data can be stored in them. " From Polycount wiki
@MeatFloat 2 years ago
Hey! You could actually subtract the vectors by 1, then sign the result, then lerp with the signed result between your divided result and the default mult100 to avoid the if statement. ;D
@VisualTechArt 2 years ago
True! But I went for the IF because it doesn't actually get compiled as an actual branch, and it was more straightforward to follow, I think :)
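For reference, a sketch of the branchless select being discussed (hypothetical values; as noted, the IF node typically compiles to the same ternary anyway):

    // mask is 1 when lenSq > 0, else 0; avoids an explicit IF node.
    float3 PickBranchless(float lenSq, float3 divided, float3 fallback)
    {
        float mask = saturate(sign(lenSq));
        return lerp(fallback, divided, mask);   // equivalent to: lenSq > 0 ? divided : fallback
    }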
@theluc1f3r93 2 years ago
I use only 1 texture as both bump and normal map in Unity, in many event games and small apps for phones etc., and I always bake them in 3D + in Unity. It was way more efficient, but in Unreal it looks way better (compared to the standard shader in Unity, not others like Uber etc.).
@3diec811 2 years ago
Awesome explanation! Is this more expensive in performance than using the normal map?
@VisualTechArt 2 years ago
It probably is, but if the usage of the cache is as efficient as I think, it may actually not be that different; I want to profile it in the future :)
@maybebix 2 years ago
Wow, very interesting material! 👍 But in theory, is it possible to pack multiple grayscale maps into one channel and split them in a shader? I saw something like that in a Bungie presentation from GDC, where they packed 7 params into 4 channels
@VisualTechArt 2 years ago
Ah, like packing two maps in one channel by doing bit operations? Yes you can do it, I never personally tried. The downside would be having a map with a way lower value range, so if you need it for maps that are very blocky and mostly uniform colours it would be a smart way to compress them!
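A sketch of that kind of bit-packing in HLSL (hypothetical; each mask drops to 4 bits / 16 levels, and block compression degrades it further, hence the caveat about blocky, mostly uniform maps):

    float PackTwoMasks(float a, float b)
    {
        float hi = floor(saturate(a) * 15.0);   // upper 4 bits
        float lo = floor(saturate(b) * 15.0);   // lower 4 bits
        return (hi * 16.0 + lo) / 255.0;        // stored as one 8-bit channel
    }

    void UnpackTwoMasks(float packed, out float a, out float b)
    {
        float v = floor(packed * 255.0 + 0.5);  // round back to the integer byte value
        a = floor(v / 16.0) / 15.0;
        b = fmod(v, 16.0) / 15.0;
    }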
@johnsarthole 2 years ago
I've shipped a game where we did that. It will work, precision is an issue - especially with the lower quality compression methods - but you get some of that back from texture filtering.
@mncr13 7 months ago
@maybebix do you have the link to that video by any chance? Thanks!
@maybebix 7 months ago
@@mncr13 Sure, it was called "Translating Art into Technology: Physically Inspired Shading in 'Destiny 2'" by Alexis Haraux and Nate Hawbaker. You can find it on the GDC Vault site
@cad97 2 years ago
The biggest downside to this approach is IIUC going to be that it needs to use its own material. If all of your megascan materials share the same material and just vary by what textures they're using, it's typically going to be easier on the GPU to draw multiple actors using different material instances than it is with fully different materials. As with all things, there's a tradeoff to be made - if you're strapped for VRAM, deriving normals from the heightmap texture will certainly help.
@mindped 1 year ago
I made a material with paint chipping on the edges. I did this by using vertex color and a texture mask to randomize the edge vertex color so it's uneven. Is there a way to generate a normal map for the edge of the paint?
@VisualTechArt 1 year ago
Well, you can generate a normal from the texture mask like I did here with the height... But you can't look at data coming from vertices in that way, so it's a bit tricky; you have to come up with some heuristics based on your use case :)
@KittenKatja 2 years ago
This video didn't make me realize anything, but it made me relive an old idea I had with transparent pictures. Is it possible to remove all white from the picture, and translate it into alpha channel? This way, if there's a white background, the transparent picture will appear to be completely normal. If the background weren't white, the texture itself would look rather dark, or deprived of any light. I also would like to do this with any kind of color, not just white.
@VisualTechArt 2 years ago
You would need to implement a proper Chroma Keying, like the stuff they use in cinema all the time to remove greenscreens
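A sketch of the simplest version of that white-key in HLSL (a hypothetical un-matting step, not a UE feature): treat each pixel as a foreground composited over white and solve back for colour and alpha.

    float4 KeyOutWhite(float3 c)
    {
        // c = fg * a + white * (1 - a); picking a = 1 - min(c) makes fg's darkest channel 0.
        float a = 1.0 - min(c.r, min(c.g, c.b));
        float3 fg = (a > 0.0) ? (c - (1.0 - a)) / a : float3(0.0, 0.0, 0.0);
        return float4(saturate(fg), a);
    }

Keying out a colour other than white works the same way, just measuring distance to that key colour instead.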
@KittenKatja 2 years ago
@@VisualTechArt There are some chroma keys in paint.NET, two default, one custom. The two default ones are the magic wand and one of the effects. The custom one is made to remove the white/black background of an object and leave the shadow it casts intact, along with see-through areas, like on a magnifying glass. But it leaves like 10% white in the pixels. Does Photoshop have something like that?
@cepryn8222 2 years ago
Maybe I don't understand exactly how channel packing works in terms of optimization compared to this method, but wouldn't a channel-packed texture be a similar performance gain while using a much faster workflow? If we, let's say, use a channel-packed texture with diffuse, normal and roughness (or anything that's needed), you can basically just plug in the texture, add a fraction of the instructions you use to the material, and the work is done. Please correct me if I'm wrong, and thanks for the awesome work :)
@VisualTechArt 2 years ago
This video was more of an academic experiment :) To understand if an approach like this may be worth we should test it in a much wider context (what's the production pipeline, system requirements, actual performance measurements, etc)
@sc4r3crow28 2 years ago
Interesting. I was not thinking that calculating textures in memory would be better than sampling a texture... but it makes sense. Unfortunately I would not want to lose roughness textures. But would it be good to use the same textures across multiple materials? If, for example, I have the same roughness texture I put on multiple materials. Currently I use AO, Roughness and Metal in one RGB texture per material. But would it be a benefit to split this and reuse the same roughness texture across materials?
@VisualTechArt 2 years ago
Having a single-channel texture is generally a waste... And yes, deleting the Roughness is a risky thing, it really depends on the assets. But you could have cases where you don't need the Base Color, maybe, so you get 3 channels to play with :D The texture in this case should be more flexible in what it can contain, I think (speaking of a hypothetical game that takes this approach as its production pipeline).
@artemg9753 2 years ago
What will you do if there are some contrasting patterns on the texture, and/or materials with different properties? Rhetorical question.)
@VisualTechArt 2 years ago
I wouldn't go for this approach or I would change the texture I'm using to store different kind of data :D
@0805slawek 2 years ago
Why is it not working for me? I analyzed every second of the video but can't find the mistake in my nodes; my normal map is flat. Could you upload this material somewhere please?
@VisualTechArt 2 years ago
I wasn't planning to upload it as it's not a versatile material (didn't take the time to add parameters etc..), but you can join my Discord channel and we can try to make it work together :)
@benceblazsovics9123 2 years ago
First of all, splendid work! Love it!
@VisualTechArt 2 years ago
I don't use Blender that much but... Doesn't it have the equivalent of UE's Custom Node for materials, where you can type in your HLSL code?
@Itsme-wt2gu 1 year ago
Can you list the draw calls of both?
@VisualTechArt 1 year ago
I'll do a separate video where I profile several things I did at once :) But spoiler: I already did a check in RenderDoc on this a few days ago; turns out that my solution performs around 25% better than the reference material as you see it in the video, while it's basically the same if I pack the reference material's textures into 2. I shared the timings on my Discord a few days ago :D
@Kaboom1212Gaming 2 years ago
Is there a particular reason you didn't use the "height to normal" node instead of all of the custom node setup in the shader graph?
@VisualTechArt 2 years ago
Yes, both to explain the concept behind this function and because I like the output of that function less; it's an even harsher approximation of the normal than mine :)
@Kaboom1212Gaming 2 years ago
@@VisualTechArt I see, very interesting. I will give your approach a go next time, it seems useful in a few ways!
@FishMan1nsk 1 year ago
Hey, a question about this method of generating normal maps: can it be used for generating a normal map between two blending materials? Let's say I have metal and a paint on it, added using vertex paint. Is it possible to add a normal map on the edges, like in Painter, using this method? Or maybe you know another method? This is probably a good topic for a video, btw.
@VisualTechArt 1 year ago
I think it can be done :) but you have to consider that since the transition comes out of the intersection of two maps, to obtain the adjacent pixels you'll have to repeat that multiple times too. Worth having a try though, good idea!
@Kio_Kurashi 2 years ago
For the Roughness section, aren't you having to store the values that you're calculating from the original texture in a separate memory instance from the texture? Isn't that essentially having two textures in memory? Transient though one might be.
@VisualTechArt 2 years ago
In GPUs you can say (a lot of approximation here, just to get the idea across) that every pixel computes independently, from scratch, every frame. That means that you don't have access to other screen pixels in the same frame and you don't "remember" anything you did in the previous frame. So once you calculate the Roughness you're not saving "a texture"; you're just telling the shader how you want the surface to react to light for that frame, and that's it :)
@Kio_Kurashi 2 years ago
@@VisualTechArt Ah okay, Thanks for the clarification!
@plasid2 2 years ago
Could you use your magic to make a flow-map-based height map, like lava flowing from a mountain?
@VisualTechArt 2 years ago
Nice idea! I'm gonna add that to the list :D
@sarahlynn7807 4 months ago
Pretty good. The only real thing I really see is that it's slightly lighter.
@Psyda 2 years ago
While yes, this makes the files smaller, when running the game procedural textures still blow up to somewhat comparable memory usage.
@nonchip 2 years ago
"do they look the same?" my eyes: ummno? "good!" OY!
@cmds.learning7426 2 years ago
Amazing! I will pay for your tutorial
@falseanimatronicsstudio6371 5 days ago
Is there a "named" node thing in UE4? Can't find one
@VisualTechArt 4 days ago
I don't remember in which version they were added, look for Named Reroute Node
@falseanimatronicsstudio6371 4 days ago
@@VisualTechArt Yeah, I use version 4.16 and it was added in 4.27
@wpwscience4027 1 year ago
Ideas on how to use this to fake or rederive the ambient occlusion map would be neat.
@VisualTechArt 1 year ago
You can definitely do some cavity, the AO is a bit more complex :) I may try one day though
@wpwscience4027 1 year ago
@@VisualTechArt Adding a bump to the UVs looks nice and is cheap since you already have the heightmap. I've spent the evening exploring building something for the AO. As of yet I have settled on using that heightmap kernel to calculate a proxy for the openness to light by getting the volume of the cone it makes with the center height pixel. This yields a map of how sharp the cavities are but it looks pretty different than a regular AO map. I feel like I could get something better using the same information and the hillshade calculation that gets used in GIS and other landscape mapping applications, but that requires picking a direction and height away from the texture. I feel like 12 degrees (goldenhour) and 315 (northeast) for alt and azimuth would look nice for textures laid flat in the xy. For vertical objects that would put the light behind the viewer and to the upper left. I think that's passable but it feels wrong to just pick a shadow, but that's what I'm going to try next anyways.
@VisualTechArt 1 year ago
@@wpwscience4027 I think the only issue is that to calculate AO you would be forced to use quite a big kernel, as its extent always goes beyond the first pixel in terms of distance, but a simple cavity map can be computed for sure :)
@btarg1 2 years ago
This could save a lot of storage space in larger games which would otherwise have many textures, nice
@VisualTechArt 2 years ago
It's nice to see that every once in a while someone gets your point xD
@ls.c.5682 2 years ago
This reminds me of a project I did as a hobbyist, before I got into the industry, where I used heightmap-generated terrain, so of course I had to calculate the normals in the shader. However, like any engineering problem this has massive tradeoffs. 4 texture samples per pixel? That could be a lot of bandwidth; granted, there might be some values in the GPU cache lines depending on the tiling mode of the textures, but I'd be curious to run this through something like PIX to see the overall cost. I also wonder about the VALU cost of all the calculations per pixel across shaders running on a GPU unit. With block compression of normal maps and other textures to help with memory, I'm not sure if this would be a net win. I could be wrong, but I need to see metrics. Creative solution though, and good for thinking originally
@VisualTechArt 2 years ago
I'm quite confident that Cache Misses would be fairly low and ALUs wouldn't be causing a bottleneck! But yes, giving it a run on PIX would give the answer
@jakubklima5193 11 months ago
Nice idea, but is it even worth it now that we are using virtual textures pretty much everywhere? It also needs manual adjustment for pretty much every material. I wonder if this workflow would actually pass, and if this method is widely used in any projects. Maybe for something like mobile it would be more beneficial?
@VisualTechArt 11 months ago
Of course it's not to be taken for granted that something like this would fit a project; to be honest it was more of an experiment I wanted to make :) I personally see some areas of application for it, but I wouldn't base an entire project on this
@jakubklima5193 11 months ago
@@VisualTechArt Ah got ya. Cool stuff. Thanks for sharing 😃
@y.h.lee.5288 2 years ago
So with this method you can derive each of the individual textures from a single texture. Amazing.
@Hellwalker855 2 years ago
Try multiplying the texel size by -2 and 2 instead of -1 and 1 to reduce the staircase effect.
@VisualTechArt 2 years ago
That would also blur the normal map though. I did test it, but I decided to go with what you see in the video :) Also because that would increase the texture cache misses by fetching pixels that are further apart!
@multiupgame 2 years ago
Isn't such a material hard on the GPU? Runtime calculation... I'll have to test later 🤔
@VisualTechArt 2 years ago
You need a lot of ALU work to make a GPU feel it :D My gut says that's not much of an issue, but if you run some tests, definitely let me know please!
@multiupgame 2 years ago
@@VisualTechArt OK, in Discord, if I don't forget 😅
@chasingdaydreams2788 2 years ago
Is there a derivative node in UE? If so, you can derive the exact same normal map from the bump accurately. Right now your normal conversion isn't as accurate as it can be.
@VisualTechArt 2 years ago
The derivatives are screen space; it's a bit long and difficult to explain here, but if you try that you'll see that the result is pretty bad, actually! Plus with them you can only use 3 samples instead of 4, which also has more drawbacks (in terms of output quality) :)
@EclyseGame 2 years ago
You are very smart, take my sub. Incredible value tutorial, thank you
@VisualTechArt 2 years ago
Thanks!
@sc4r3crow28 2 years ago
You said RGBA = 2x RGB when compressed... is that true? Or did you mean it's just bigger by 1/3?
@VisualTechArt 2 years ago
It's true and you can check it yourself :D If you add the Alpha channel to a texture it doubles in size (on the other hand that channel is the one that gets the least amount of compression artefacts and has the best quality among all 4)
@sc4r3crow28 2 years ago
@@VisualTechArt OK, thank you, that is good to know
@sc4r3crow28 2 years ago
@@VisualTechArt Sorry, I have another question... if I have an RGBA texture and sample only the R channel, it will always load the whole RGBA texture, right?
@cedric7751 2 years ago
@@sc4r3crow28 Textures are compressed in blocks of 4x4 texels. For the RGB, the 2 most "extreme" color values are saved for each compression block (2 colors x 16 bits per color) and 2 new values are interpolated between those 2 extremes to form a 2-bit indexed color table (4 colors: the extreme colors and the 2 interpolated values). Each of the 16 texels of the block then indexes one of those 4 colors, for a total of 16 texels x 2-bit index = 32 bits, plus the 32 bits from the 2 reference colors = 64 bits or 8 bytes per compression block. For the alpha, the 2 reference values only have 8 bits of depth, but 6 new values are interpolated to form a 3-bit indexed table (8 values: the extreme values and the 6 interpolated ones). So we have 16 texels x 3-bit index + 2 reference values x 8 bits = 64 bits or 8 bytes of data per compression block for the alpha. This is why adding an alpha doubles the size of a compressed texture when using the BC format, which is the default in Unreal. This is no longer true with other formats like PVRTC (PowerVR mobile architecture) or console-specific formats. As a side note, the 3 channels of the RGB are stored as a single 16-bit value. Since 16 is not divisible by 3, the red channel is stored as a 5-bit value, the green channel as 6 bits and the blue channel as 5 bits, for a total of 16. The reason the green channel receives more bits of data is that the human eye is more sensitive to shades of green.
@VisualTechArt
@VisualTechArt 2 жыл бұрын
Yes
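A quick sketch of the block-compression math from the explanation above, using the common BC1/BC3 (DXT1/DXT5) figures; the function name is illustrative:

    // BC1 (RGB): 8 bytes per 4x4 block; BC3 (RGBA): 16 bytes per block.
    uint CompressedBytes(uint width, uint height, bool hasAlpha)
    {
        uint blocks = (width / 4u) * (height / 4u);
        return blocks * (hasAlpha ? 16u : 8u);
    }

    // 1024x1024 BC1 -> 65536 blocks * 8 bytes  = 512 KB (top mip only)
    // 1024x1024 BC3 -> 65536 blocks * 16 bytes = 1 MB: the alpha doubles it.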
@guybrush3000
@guybrush3000 2 жыл бұрын
You might've saved some texture samples, but you made the shader massively more processing-intensive. Sampling a texture is much faster than this. Is saving the VRAM worth taxing the fill rate like this? I would never recommend that anyone make something like this.
@romank9121
@romank9121 2 жыл бұрын
Is calculating in the shader in real time more performant than loading textures?
@VisualTechArt
@VisualTechArt 2 жыл бұрын
There's a threshold :) I don't think I'm passing it with this shader, but doing a performance test would be great
@toapyandfriends
@toapyandfriends 2 жыл бұрын
Will this cut down on the CPU usage or GPU usage that a game needs to run? If so, can you put a link under this video, or at least under this comment, to other videos you made that have this level of scientific efficiency genius, so I could grow in your light and become a UE5 scientist too! 👊😎 'kapaw
@VisualTechArt
@VisualTechArt 2 жыл бұрын
Ahahahahaha! Well, about the CPU I don't have an answer, to be honest; maybe if you applied this approach to the entirety of a game then yes, you would be requesting fewer textures and making smaller draw calls? Don't know, I'll look into that. I'd say go to my channel page and watch everything! :D But you may be especially interested in the Voronoi ones and the Grass Animation? Start from those ;)
@3DWithLairdWT
@3DWithLairdWT 2 жыл бұрын
Would it not be more effective to just take the min between vector {1, 1, 1} and the derived normal? If statements are costly
@VisualTechArt
@VisualTechArt 2 жыл бұрын
These "node IFs" are not usually compiled as actual branches, but as ternary operators, which at the end of the day is like doing a mask with the Min, as you suggested. To completely avoid any doubts I usually don't use them, but for clarity in the video I decided to, this time :D
@whyismynametaken123
@whyismynametaken123 2 жыл бұрын
The "expensive" part of if statements comes from them running all the code from each potential result so it depends on what your outputs are. If result A is 200 instructions and result B is 200 instructions then it will always end up running 400 instructions and thus in that case it would be very expensive. On the other hand if result A is the number 0 and result B is number 1 then it's very cheap. You can output the material's HLSL code to look over. It will take a bit to decifer what's happening the first time you do it due to how UE optimizes your material graph, but after that it's fairly straight forward to read. [EDIT: The optimizer will re-use the result of a block of code multiple times if it doesn't change .. I think my above explanation will lead people to think that isn't the case. I should probably just go to sleep instead of inserting myself into tech discussions lol]
@VisualTechArt
@VisualTechArt 2 жыл бұрын
As far as the GPU is concerned (and as far as I know), running both branches and discarding one result is cheaper than an actual branch where you first check the condition and then run only one, as long as the total instruction count doesn't outweigh the cost of the threads losing sync and having to stall in a wait state. A real branch is usually worth it when it avoids texture fetches inside the if statement, for example.
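To make the select-versus-branch point concrete, a small hedged sketch (the function names are illustrative, not from the video):

    // What a material-graph "If" typically compiles to: both sides are
    // evaluated and a conditional select picks one, so it behaves like
    // masking rather than real divergence.
    float3 SafeNormalizeTernary(float3 v)
    {
        float len = length(v);
        return (len > 0.0) ? (v / len) : float3(0.0, 0.0, 1.0);
    }

    // Equivalent without any compare: clamp the length away from zero.
    float3 SafeNormalizeEps(float3 v)
    {
        return v / max(length(v), 1e-6);
    }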
@Jetravard
@Jetravard 2 жыл бұрын
The one on the left has much more granular detail.
@Mittzys
@Mittzys 2 жыл бұрын
How computationally expensive is this? I would assume it's a bit of a trade off, less VRAM usage but more processing time?
@VisualTechArt
@VisualTechArt 2 жыл бұрын
It is a trade-off :D I didn't do a performance check, but I wouldn't worry too much about VALU performance here, to be honest.
@stefanguiton
@stefanguiton 2 жыл бұрын
Excellent
@tylergorzney8499
@tylergorzney8499 2 жыл бұрын
I think this is a great exercise, but not very useful. This is a super heavy shader just for a basic PBR material. Taking this and adding in even more shader effects makes it very hard to work with, and very, very expensive compared to just using 2 texture samples, given your "complex" math and the many extra texture samples. From my knowledge, texture samples are very slow. I think a better method would be using texture arrays with small (512) textures with high tiling, plus texture maps that define the geometry features, such as AO and cavity (which can be combined in a single channel and separated in-shader), an edge map, a curvature map, etc. You get a much cheaper shader, smaller texture sizes and higher detail, and you can make a shader where you can rotate a mesh and the textures will render appropriately, so a single mesh rotated will appear like a different object.
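As a sketch of the texture-array idea above (all names and the exact channel packing are illustrative assumptions, not the commenter's code):

    // Small, highly tiled layers in a Texture2DArray, plus one packed mask
    // texture split in-shader (e.g. R = AO, G = cavity, B = edge wear).
    Texture2DArray DetailArray;
    Texture2D      MaskTex;
    SamplerState   Samp;

    float3 SampleDetail(float2 UV, float LayerIndex, float Tiling)
    {
        return DetailArray.Sample(Samp, float3(UV * Tiling, LayerIndex)).rgb;
    }

    float3 SampleMasks(float2 UV) // returns (AO, cavity, edge wear)
    {
        return MaskTex.Sample(Samp, UV).rgb;
    }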
@GDPROD
@GDPROD 2 жыл бұрын
Very interesting. I have a personal question: are you Italian?
@VisualTechArt
@VisualTechArt 2 жыл бұрын
@TheT0N14
@TheT0N14 Жыл бұрын
This is not very useful in the case of photogrammetry, because you can capture the normal map and roughness. You'd need a darkroom for that; if you need to capture something outside, you use a tent or just scan at night. For the normal map you need a small metal ball and a light source that you can move. And software, of course: Substance Designer or Details Capture | Photometric Stereo. For roughness you take two pictures without moving the camera, one picture as usual and the other with a polarising filter on the camera and a polariser on the light source. This gives you two images, with and without glare; the one without glare will be our albedo. Now we turn both images into greyscale and compute the difference between them, and we get something similar to roughness.
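The two-photo roughness trick described above could be sketched like this; it is offline image processing, written as HLSL only for consistency with the other sketches, and the remap to roughness is a rough, scene-dependent approximation (all names are illustrative):

    // Unpolarized photo = diffuse + specular; cross-polarized = diffuse only.
    // Their grayscale difference isolates the glare, which loosely tracks
    // how smooth (low-roughness) the surface is at that point.
    float Luma(float3 c) { return dot(c, float3(0.299, 0.587, 0.114)); }

    float RoughnessFromPolarizedPair(float3 unpolarized, float3 crossPolarized)
    {
        float specular = saturate(Luma(unpolarized) - Luma(crossPolarized));
        return saturate(1.0 - specular); // strong glare -> smoother surface
    }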
@Annyxel
@Annyxel 2 жыл бұрын
The real proof here is to make a small landscape of some sort, with a town and a forest. After doing it normally, copy it and make one with your method, then put them to the test: which has better frames and results?
@VisualTechArt
@VisualTechArt 2 жыл бұрын
Yes, I want to do it at some point
@zxcaaq
@zxcaaq 2 жыл бұрын
Can someone please write the GLSL or HLSL code for this?
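Not the video's exact graph, but a minimal self-contained HLSL sketch of the general idea (RGB = base color, A = height, normal rebuilt in-shader from the alpha channel; all names are illustrative):

    void SampleMasterTexture(Texture2D Tex, SamplerState Samp, float2 UV,
                             float2 Texel, float Strength,
                             out float3 BaseColor, out float3 Normal)
    {
        BaseColor = Tex.Sample(Samp, UV).rgb;

        // Four extra taps on the height (alpha) channel.
        float hL = Tex.Sample(Samp, UV - float2(Texel.x, 0.0)).a;
        float hR = Tex.Sample(Samp, UV + float2(Texel.x, 0.0)).a;
        float hD = Tex.Sample(Samp, UV - float2(0.0, Texel.y)).a;
        float hU = Tex.Sample(Samp, UV + float2(0.0, Texel.y)).a;

        float2 slope = float2(hR - hL, hU - hD) * 0.5;
        Normal = normalize(float3(-slope * Strength, 1.0));
    }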
@JordiTheViking
@JordiTheViking 2 жыл бұрын
Putting it into Alpha makes the engine consider it as 2 textures pretty much
@chosen_oNEO
@chosen_oNEO 2 жыл бұрын
But will that really help performance? 🤔🤔 idk man… I'd say just save time and go the old-fashioned way.
@andrewsneacker1256
@andrewsneacker1256 9 ай бұрын
It's a good technique, but if you need a specific normal map with specific features, it's not the way to go at all.
@ivanm.612
@ivanm.612 2 жыл бұрын
The normal map is 100% needed for games in UE4. In Blender I make a character with 7 million tris, and after retopology and adding the normal map I keep the same detail with only 20K tris. I don't know about UE5, because of Lumen etc.
@VisualTechArt
@VisualTechArt 2 жыл бұрын
Don't disagree for assets that need ad-hoc normals! Even though I would try to bake a heightmap instead and see what comes out of that :D I'll be trying for sure.
@ivanm.612
@ivanm.612 2 жыл бұрын
@@VisualTechArt That will not work in Blender, because I bake from the multires modifier, which can only bake a normal map or displacement 😄. But there is a program called Materialize; it is 100% free and can create AO, height, smoothness and normal maps. You just need the diffuse and one other map to generate all the others.
@alessiobertolacci5280
@alessiobertolacci5280 2 жыл бұрын
where are you from?
@VisualTechArt
@VisualTechArt 2 жыл бұрын
Italia :D
@allalongthewatchtwer
@allalongthewatchtwer Жыл бұрын
Very, very useful, thank you :D