What's interesting is that when testing on Linux I get this strange "The Vulkan spec states: commandBuffer must not be in the recording or pending state" error, which I can't get to disappear. The app and resizing are functioning, and I wanted to ask if I should be bothered. (Tested on Ubuntu 24.04 WSL, built using g++ just after the "Updating drawFrame()" section, with both fixes from the pinned comment implemented.)
@mananbhardwaj3976 2 days ago
Please continue this series. Finding this type of series is like finding a four-leaf clover. I understand it's tiring to create videos, so take your time, but please do continue it. We are still waiting for the shader tutorial.
@mananbhardwaj3976 3 days ago
Vulkan, the struct heaven. Gives me joy every second.
@Iyht 7 days ago
Why didn't you use an R-tree?
@mohammadalaaelghamry8010 7 days ago
Great resource. Thank you.
@NunTheLass 9 days ago
Nice job! So getting it exactly right is impossible for even 3 particles, right? No matter how small you make the steps, there will always be leakage, because all the interactions in between are being skipped. That math just doesn't exist, no matter how super your supercomputer. At least, that's how I thought it works.
@mostafaghobashy2724 9 days ago
Guys, I have been a C/OpenGL dev for a long time. I wanted to upgrade to Vulkan, so I started writing C++. I noticed that readFile returns a vector of char. Why can't it return an std::string?
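For anyone wondering the same thing: it could return a std::string, but the file being read is compiled SPIR-V bytecode, not text, so std::vector<char> communicates "raw bytes" better and avoids any implied null-termination. A minimal sketch of what such a binary reader typically looks like; the tutorial's actual code may differ slightly:

#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<char> readFile(const std::string &filepath) {
  // std::ios::ate opens at the end so tellg() reports the file size;
  // std::ios::binary disables newline translation, which would corrupt SPIR-V
  std::ifstream file{filepath, std::ios::ate | std::ios::binary};
  if (!file.is_open()) {
    throw std::runtime_error("failed to open file: " + filepath);
  }
  size_t fileSize = static_cast<size_t>(file.tellg());
  std::vector<char> buffer(fileSize);
  file.seekg(0);                       // jump back to the beginning
  file.read(buffer.data(), fileSize);  // read the whole file in one call
  return buffer;
}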
@rajat346 11 days ago
Can somebody explain this in an easy way? What exactly is the need for the homogeneous coordinate system? Why do we need an extra dimension?
@NisalDilshan 12 days ago
Good Content!🤩
@haniissa1990 17 days ago
At 15:40 I open the window and the triangle starts very fast, but when I resize the window the triangle becomes slower. Is this normal?
@robbeflot5428 18 days ago
When I try to compile the shaders (on Windows) I get the following error:

Error: no binary generation requested (e.g., -V) (use -h for usage)

I can't seem to figure out how to solve this problem.
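That message comes from glslangValidator, which only parses and validates by default; the -V flag is what requests SPIR-V binary output under Vulkan semantics. Assuming your shaders are compiled with glslangValidator (a glslc-based setup prints different errors), the invocation would look something like this, with the filenames being placeholders for your own:

glslangValidator -V simple_shader.vert -o simple_shader.vert.spv
glslangValidator -V simple_shader.frag -o simple_shader.frag.spv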
@muchas_gracias 19 days ago
We're still waiting for the texture tutorial and other stuff 😇
@VFusioN69 22 days ago
Can anybody help me set up on Mac?
@axmaz_lazy 23 days ago
Makes it look like a galaxy, or a cluster in fact. A real genius implementing that kind of optimization. Insane how you can run that many at once. From how you run it, I feel like the clock speed of the CPU divided by 1000 equals the number of particles.
@christiancaliendo6875 23 days ago
I keep getting a linker tools error even though I copied the entire files. It keeps failing to run if I build with the device.
@limeisgaming 27 days ago
Great series. I was wondering whether some future episode might cover mesh/task shaders or using HLSL instead of GLSL.
@kotleni 27 days ago
I love you.
@m.z6610 28 days ago
But doesn't it mean that people playing this game with your API key will actively burn your money? It's unsustainable to release a game like this.
@BrendanGalea 28 days ago
Ya, I would not recommend doing it with your own API key if you were doing a game like this. Probably the way to go would be hosting your own machines running some open-weights LLM and then charging a monthly subscription fee or something. Probably would also want to fine-tune your own model and do something similar to how Apple is doing their AI, where you have a simple LLM running locally on the user's device for easier requests, and then only use API calls and the larger-weights model for more complicated functionality.
@martinmaters 28 days ago
Fantastic content... thanks so much! To get it to work on my Mac I had to add something a bit hacky to createInstance(). Perhaps there is a better way, but this fixed it anyway:

createInfo.pApplicationInfo = &appInfo;
createInfo.flags |= VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR; /* MJM ADDED */

auto extensions = getRequiredExtensions();
extensions.emplace_back("VK_KHR_portability_enumeration"); /* MJM ADDED */
@flocela a month ago
I guess I do like the shorter videos.
@tylersage4750 a month ago
I just discovered this channel and I finally got through episodes 0 and 1 over two days. I haven't programmed anything in 6 years. Kinda rusty on the fundamentals, but I think I'm following along okay so far. Definitely gonna need to study the basics of C++ while I'm doing this series.
@GraphiceNerd a month ago
Hey, thank you! You helped me create my own engine :D
@yante7 a month ago
ah youtube compression had a fun time with this one
@haniissa1990 a month ago
I have this error:

Present mode: Mailbox
make: *** [Makefile:63: test] Segmentation fault
@haniissa1990 27 days ago
Fix it => createInfo.oldSwapchain = oldSwapChain ? oldSwapChain->swapChain : VK_NULL_HANDLE;
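Context for the fix: the crash happens on the very first swap chain creation, when there is no old swap chain to hand over yet, so dereferencing the null pointer segfaults. A rough sketch of where the guard lives, assuming the series' LveSwapChain naming (treat the identifiers as illustrative):

// in createSwapChain(), while filling out VkSwapchainCreateInfoKHR:
// guard the initial creation, where no previous swap chain exists yet
createInfo.oldSwapchain =
    oldSwapChain == nullptr ? VK_NULL_HANDLE : oldSwapChain->swapChain;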
@JayAether a month ago
Quite interesting that despite the implementation of classical mechanics being flawed/incomplete, we can still recognize the forms and shapes of galaxies nonetheless.
@BiggestDuckster a month ago
my laptop is heating up watching this video lol
@flocela a month ago
At the end of the last video, you sort of said that in the createRenderPass() method we're passing in a VkSubpassDescription subpass with .pColorAttachments and .pDepthStencilAttachment to ultimately create the render pass. The render pass is the blueprint of information for the framebuffer. When did we pass color and depthStencil information to the framebuffer? Is it in createCommandBuffers() with clearValues{}, then clearValues[0].color, clearValues[1].depthStencil? That just clears the values; when did we set the values? Thanks, still just a beginner here.
@mokshsurya1681 a month ago
Can someone explain to me, from the basics, image vs object and its relation to 3D geometry? It seems like there should be some connection, according to the video.
@rockoman100 a month ago
this makes no sense to me whatsoever
@Trooperos90 a month ago
n1
@TarcisioN-S a month ago
I want to change the skybox; how do I do this?
@golxzn_channel a month ago
Seems like window resizing support costs quite a lot if we move the command buffer recording to the render function :c I don't really want to follow those changes, but it seems the future changes won't be compatible with the previous ones.
@NemwazGames a month ago
By far the best tutorial series I've ever watched! It taught me so much, thank you! I really hope you return to making tutorials again; they are incredible.
@1337erBoards 2 months ago
I'm not sure if this is correct, but I spent a serious 2.5 hours wrapping my head around the matrices section and wanted to save anyone else curious about the details the hassle of putting this together, if they didn't want to but still want to know what's going on. Sorry in advance if this is unintelligible. Also note that glm is in column-major format (this caused me some headache to figure out).

Thank you for your great series, but for those writing out the matrices: remember that vec3 is a column vector, and the dot product of two column vectors, where u, v, and w are the normalized basis vectors along their respective x, y, and z axes (not the real x, y, z, but like a virtual camera's x, y, and z), will result in grabbing the scale, and therefore the translational magnitude, of the position, where you can assume that the position is a vector of p.x, p.y, and p.z.

The T at the end is the transpose, because it's a column vector with RowxCol (MxN) equal to 3x1, not a row vector, which it looks like in text. The values here are with respect to the camera view; u.x, u.y, u.z, v.x, etc. should technically be here, but this is for explanation purposes.

Column vector(u) = [1, 0, 0]T
Column vector(v) = [0, 1, 0]T
Column vector(w) = [0, 0, 1]T
Column vector(position) = [p.x, p.y, p.z]T

This is why you get -p.x, -p.y, and -p.z for this code (at least from the camera's perspective, but it still works when replacing the above orthonormal vectors with their respective u.x, u.y, u.z, v.x, etc.):

m_viewMatrix[3][0] = -glm::dot(u, position);
m_viewMatrix[3][1] = -glm::dot(v, position);
m_viewMatrix[3][2] = -glm::dot(w, position);

Altogether, the Rotation, Translation, and View matrices are:

Rotation matrix (R):
u.x, u.y, u.z, 0
v.x, v.y, v.z, 0
w.x, w.y, w.z, 0
0, 0, 0, 1

Translation matrix (T):
1, 0, 0, -p.x
0, 1, 0, -p.y
0, 0, 1, -p.z
0, 0, 0, 1

View matrix (V): V = R * T
u.x, u.y, u.z, -dot(u, position)
v.x, v.y, v.z, -dot(v, position)
w.x, w.y, w.z, -dot(w, position)
0, 0, 0, 1

In the code, you negate the value to translate back to the origin (or think of it like traversing to position x, y, z via the position vector, and then you negate the values to travel backwards, where the arrows lead you back to where you came from). Also, the normalize is omitted from v because it is implied, since w and u are already orthogonal to each other and facing their respective coordinates. If you took the cross of u, w backwards instead, you would get the opposite direction when using the right-hand rule, so order matters.
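For reference, here is the same construction written out in code. A minimal sketch using glm, assuming u, v, and w are the camera's orthonormal basis vectors as in the comment above; keep in mind glm::mat4 is indexed [column][row]:

#include <glm/glm.hpp>

glm::mat4 makeViewMatrix(const glm::vec3 &position, const glm::vec3 &u,
                         const glm::vec3 &v, const glm::vec3 &w) {
  glm::mat4 view{1.f};
  // rotation part: the camera basis vectors form the rows (the transpose of R)
  view[0][0] = u.x; view[1][0] = u.y; view[2][0] = u.z;
  view[0][1] = v.x; view[1][1] = v.y; view[2][1] = v.z;
  view[0][2] = w.x; view[1][2] = w.y; view[2][2] = w.z;
  // translation part: V = R * T folds the translation into -dot(basis, position)
  view[3][0] = -glm::dot(u, position);
  view[3][1] = -glm::dot(v, position);
  view[3][2] = -glm::dot(w, position);
  return view;
}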
@gokulpranesh289 2 months ago
Great Tutorial! Please continue this!!
@PP-ss3zf 2 months ago
In this video, I don't understand how duplicating a vertex position twice allows the face to be colored in one color.
@semikim1432 2 months ago
Hi, I have been following your tutorial lectures well, but I have a question. I exported the Suzanne (monkey) model from Blender as an OBJ file and rendered it, but it is rendered upside down. The OBJ file you provided renders correctly, but why is this happening? I suspect it is due to the difference between the OpenGL viewport and the Vulkan viewport. How did you solve this issue?
@BrendanGalea 2 months ago
I think when exporting there are some settings in blender you can set. Can’t remember off the top of my head though
@Quasar-q7t 2 months ago
What if you used that engine to simulate millions of particles, but combined it with Particle Life? Also, 8:53 is a clear example of conservation of momentum and how galaxies start spinning.
@sardinhunt 2 months ago
Is there a way to talk to you about it? I couldn't find your email or any related contact info.
@PP-ss3zf 2 months ago
I was initially confused about the perspective matrix involving the orthographic matrix, since we didn't do any calculation for perspective involving ortho in the code; then I realised it was already done for us :D Would it be fair to say the following?
1. There are TWO types of projection matrices here: orthographic and perspective.
2. There is ONE type of transformation matrix here: the perspective transform matrix.
3. We can use the orthographic projection matrix on its own, OR we can combine it with the perspective transformation matrix to create the perspective projection matrix.
@BrendanGalea 2 months ago
Yup you got it!
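In symbols, point 3 is the composition below; a sketch assuming the convention built up in the video (near plane at z = n, far plane at z = f), with exact signs depending on handedness and the API's depth range:

\[
P_{\text{perspective}} = M_{\text{ortho}} \, M_{\text{persp}}, \qquad
M_{\text{persp}} =
\begin{pmatrix}
n & 0 & 0 & 0 \\
0 & n & 0 & 0 \\
0 & 0 & n+f & -fn \\
0 & 0 & 1 & 0
\end{pmatrix}
\]

M_persp squishes the view frustum into the orthographic viewing box (points on the near plane stay put, points on the far plane keep their depth), and M_ortho then maps that box to the canonical volume.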
@PP-ss3zf 2 months ago
@@BrendanGalea Thanks Brendan, loving the series; I have learned so much already thanks to you. I have one more question, about rotations. You have provided us with a lovely way of controlling camera rotation using the Tait-Bryan angles and keyboard input. After some research I found that using the Tait-Bryan angles also prevents the gimbal lock problem, since they are not re-using an axis. However, this got me thinking: if gimbal lock is not an issue, which people normally suggest quaternions to solve, then is there any need for quaternions with regards to world-to-camera rotation? I'm thinking, if Tait-Bryan angles don't suffer gimbal lock, then why do I need quaternions? Just a bit confused about why they would be needed if Tait-Bryan solves the problem they are also solving. *edit* Turns out the Tait-Bryan angles also have gimbal lock, when the middle axis is rotated ±π/2.
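A small sketch of the two options in glm, since this comes up a lot; the 1.5-radian pitch clamp mirrors what the series' keyboard controller does, but all names here are illustrative:

#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include <glm/gtc/quaternion.hpp>

// Tait-Bryan route: fine for an FPS-style camera as long as pitch never reaches ±pi/2
glm::vec3 applyTaitBryanInput(glm::vec3 rotation, float pitchInput, float yawInput) {
  rotation.x = glm::clamp(rotation.x + pitchInput, -1.5f, 1.5f);  // keep pitch clear of the singularity
  rotation.y = glm::mod(rotation.y + yawInput, glm::two_pi<float>());
  return rotation;
}

// Quaternion route: incremental composition never gimbal-locks, which matters
// once you want unrestricted orientation (e.g. a free-flying or space camera)
glm::quat applyQuaternionInput(glm::quat orientation, float pitchInput, float yawInput) {
  orientation = glm::angleAxis(yawInput, glm::vec3{0.f, -1.f, 0.f}) * orientation;  // yaw about world up
  return orientation * glm::angleAxis(pitchInput, glm::vec3{1.f, 0.f, 0.f});       // pitch about local right
}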
@lagmaster102 2 months ago
Awesome series, tysm! I finally reached the end. Would love to see textures covered in the event you return to this tutorial series 🎉
@artahir123 2 months ago
Is this the last video of the series?
@TheThorMalleuson 2 months ago
The indices were not creating the cube. I had to change them to:

ModelBuilder.Indices = {
    0, 1, 2, 0, 3, 1,
    6, 5, 4, 5, 7, 4,
    10, 9, 8, 9, 11, 8,
    12, 13, 14, 12, 15, 13,
    18, 17, 16, 17, 19, 16,
    20, 21, 22, 20, 23, 21
};
@TheThorMalleuson 2 months ago
I had to reorder the vertices in the cube for the Right, Top, and Nose faces for it to render correctly.
@tgc517 2 months ago
1. Why do I need an orthographic volume?
2. Why do I need to convert it?
3. Why do I need a matrix?
4. Have you ever coded before?
5. When you go to college but never actually program.
6. You do not need to know this to program a viewport; college jipped you.
@emperor8716 2 months ago
i really just watched this entire video even though it felt like someone was rambling into my ear while i was daydreaming 😂
@GoofySurferSkater 2 months ago
Is this specifically how you project a scene into the Vulkan format?
@BrendanGalea 2 months ago
Yes. The general principles are universal in terms of the process to derive the projection matrix, but the final result will differ depending on the graphics API or your engine's conventions/shaders: matrix * vector or vector * matrix ordering in your shaders, left- vs right-handed coordinate systems, the range of the viewing volume for the API (0 to 1 vs -1 to 1, etc.). It can definitely be frustrating when getting started, when things are flipped or not displaying, and debugging can be painful. Maybe you get lucky throwing in a -1 to flip things and it works out, but ya, that was kind of my whole motivation for going into so much detail in this video in terms of building up the process. Especially if you are following tutorials from different sources, having a good understanding of the first principles of all of this is necessary so you can adjust what the tutorial is doing to work properly for you and your technology's conventions.
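As a concrete example of those convention differences, this is the usual glm setup for Vulkan; a sketch, not code from the series (glm's defaults target OpenGL, so both the depth range and the Y direction need attention):

#define GLM_FORCE_RADIANS
#define GLM_FORCE_DEPTH_ZERO_TO_ONE  // Vulkan clip-space depth is [0, 1]; glm defaults to OpenGL's [-1, 1]
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeVulkanProjection(float fovy, float aspect, float zNear, float zFar) {
  glm::mat4 proj = glm::perspective(fovy, aspect, zNear, zFar);
  proj[1][1] *= -1;  // flip Y: Vulkan's clip-space Y points down, OpenGL's points up
  return proj;
}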
@TheThorMalleuson 2 months ago
I get the following at the end of the video when I build and run:

Validation Layer: Validation Error: [ VUID-VkGraphicsPipelineCreateInfo-renderPass-09028 ] Object 0: handle = 0xee647e0000000009, type = VK_OBJECT_TYPE_RENDER_PASS; | MessageID = 0x4d0c2b9f | vkCreateGraphicsPipelines(): pCreateInfos[0].pDepthStencilState is NULL when rasterization is enabled and subpass 0 uses a depth/stencil attachment. The Vulkan spec states: If renderPass is not VK_NULL_HANDLE, the pipeline is being created with fragment shader state, and subpass uses a depth/stencil attachment, and related dynamic state is not set, pDepthStencilState must be a valid pointer to a valid VkPipelineDepthStencilStateCreateInfo structure (vulkan.lunarg.com/doc/view/1.3.290.0/windows/1.3-extensions/vkspec.html#VUID-VkGraphicsPipelineCreateInfo-renderPass-09028)

I went back through this tutorial again, but I didn't find anything that I missed. Any help is appreciated.
@TheThorMalleuson 2 months ago
It occurs in the lve_pipeline.cpp code in the call to vkCreateGraphicsPipelines(..)
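That validation message means the pipeline is being created against a render pass that has a depth attachment while pCreateInfos[0].pDepthStencilState is null, so the depth-stencil struct was most likely never filled in or never wired into the create info. A sketch of the missing piece; the surrounding names follow the series' pipeline config style but are illustrative:

VkPipelineDepthStencilStateCreateInfo depthStencilInfo{};
depthStencilInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
depthStencilInfo.depthTestEnable = VK_TRUE;        // test fragments against the depth buffer
depthStencilInfo.depthWriteEnable = VK_TRUE;       // and write their depth when they pass
depthStencilInfo.depthCompareOp = VK_COMPARE_OP_LESS;
depthStencilInfo.depthBoundsTestEnable = VK_FALSE;
depthStencilInfo.stencilTestEnable = VK_FALSE;

// later, when filling out VkGraphicsPipelineCreateInfo:
pipelineInfo.pDepthStencilState = &depthStencilInfo;  // leaving this nullptr is exactly what the layer flags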
@writili 2 months ago
Why did you stop this tutorial series? :(
@BrendanGalea 2 months ago
Work unfortunately got too busy. I'm hoping to start it up again. Been doing a bit of work on the engine when I have time. Trying to get a few intermediate features completed and added, then I'll push a branch to the GitHub and go back and start creating tutorials for each part. But it will still probably be a few more months, and I really can't make any promises... sorry 😞
@writili 2 months ago
@BrendanGalea No worries, at least you're still active ;) Keep up the good work! And take your time!
@user-es6wn6pz3e 2 months ago
Dude, sick video! I watched it like a year ago, and since I've been getting into compute shaders and stuff for the past year, I was inspired to try myself. For the first one, I tried doing regular parallelized incrementing, which got me about 50 000 particles at 60 fps. Then, I tried with a version of what you did, where each particle compares itself to a lower resolution texture which represents densities in larger areas, and that got about 1 000 000 particles at 60 fps, but it was a bit inaccurate since I learned you can't reliably increment (texture[xy] += 1) on the GPU, which is obvious in hindsight. So then I did exactly the same method you did in this video, and I just gotta ask, why did you stop at 4 mill? On mine, I was able to get 200 million before even dropping below 60 fps. 4 mill got about 140 fps, and the only thing that prevents me from running more than 260 million is the CPU cache size. This was using a few compute shaders in unity. Still, great video and great execution of the simulation, I'm just genuinely curious why you didn't do more particles.
@BrendanGalea 2 months ago
Oh wow, that's amazing!! Maybe my implementation was not as good as yours, or it's worse hardware; the computer I was running this on only has a GTX 970. I was very curious to know what it would run like on better hardware, but even on something modern I don't know if what I wrote would be capable of 200 million! The other reason is that I implemented things in fragment shaders rather than compute shaders, which probably isn't as efficient.
@user-es6wn6pz3e 2 months ago
@@BrendanGalea Oh wow, in a fragment shader? I'm not even sure how you'd do that in a fragment shader, that's impressive. I'm probably doing quite a few things that were literally impossible to you then. One thing I think is saving a lot in time, is that I'm going through all the particles, and just putting their position directly on a 16K texture as a white dot, then to get the 2K texture, I just add together 8x8 chunks of the 16K texture, then for 0.5K texture, I do 4x4 chunks of the 2K, and so on. That way I can get the chunk data very quickly, only incrementing each particle once. Then, when I'm comparing the chunks, I just put the delta V directly into the g and b channels of the chunk textures, and go through the particles again to apply it, so it's only going through each particle twice to do everything it needs to do. I don't think stuff like this is possible at all in a fragment shader, but I think it saves a lot of time. I'm on a 3060 though, which is probably a pretty big factor too. If you haven't learned compute shaders yet, btw, I highly recommend. They're actually very intuitive, and feel much more modern than fragment/vertex shaders. They just sound scary.
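For readers curious about the reduction being described: each coarse cell sums a fixed-size block of the finer density grid. A minimal CPU sketch in C++ under that assumption (the commenter's version runs as a compute shader on textures; the gather structure is what makes it GPU-safe, since naive per-particle increments would need integer atomics like imageAtomicAdd):

#include <vector>

// Collapse each factor x factor block of the fine grid into one coarse cell,
// e.g. 16384 -> 2048 per side with factor = 8, as in the comment above.
std::vector<float> downsampleDensity(const std::vector<float> &fine,
                                     int fineW, int fineH, int factor) {
  const int coarseW = fineW / factor;
  const int coarseH = fineH / factor;
  std::vector<float> coarse(static_cast<size_t>(coarseW) * coarseH);
  for (int cy = 0; cy < coarseH; ++cy) {
    for (int cx = 0; cx < coarseW; ++cx) {
      float sum = 0.f;
      // gather the block: one thread per coarse texel in the GPU version,
      // so no two threads ever write the same output cell
      for (int dy = 0; dy < factor; ++dy) {
        for (int dx = 0; dx < factor; ++dx) {
          sum += fine[static_cast<size_t>(cy * factor + dy) * fineW + (cx * factor + dx)];
        }
      }
      coarse[static_cast<size_t>(cy) * coarseW + cx] = sum;
    }
  }
  return coarse;
}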