So… got any more comedy for me to look at? 👇 Also, don't forget you can try everything Brilliant has to offer, free, for a full 30 days: visit brilliant.org/TheCherno. You'll also get 20% off an annual premium subscription.
@shafiullahptm9096 ай бұрын
bro i really love your videos can you pls make a c++ one shot video pls
@Silencer13376 ай бұрын
I'm interested to learn how you would cap the framerate when vsync is off. I've always looked for alternatives to sleep() because it likes to oversleep, but never found anything.
@heavymetalmixer916 ай бұрын
Given that you're using the standard library in this video I'd like to ask: As a game engine dev what's your opinion on the standard library? Most game devs out there tend to avoid it but I'm not sure why.
@theo-dr2dz6 ай бұрын
@@heavymetalmixer91 Standard library design and implementations are optimised for correctness and generality. That can be suboptimal for performance. For example, the standard library calendar implementation is designed to get leap seconds right. That will probably not be relevant for games, but it will never be completely free. Also, the standard library uses exceptions quite extensively, and exceptions create some unpredictability in timing. So, if you really need ultimate performance and every CPU cycle counts, like in AAA games, high-frequency trading and that kind of application, creating some kind of custom implementation of the standard library (or some kind of alternative to it) can be worth the effort. But generally C++ code is very fast, even without doing all kinds of optimisation tricks. I would say the standard library implementations in leading compilers are fine, except in really cutting-edge performance-critical situations.
@Brahvim6 ай бұрын
@@heavymetalmixer91 I don't know as much as other people around here, but I like to think the reason is that there are edge cases for them to learn, it takes up space wherever it's used, and it may use a few `virtual`s here and there, I think... so mostly because it's a library whose implementation they don't know a lot about! It _does_ make life easier once one gets into its mindset, though.
@dhjerth5 ай бұрын
I am a Python programmer and this is how I would solve it:

    import os
    import sys
    import time
    # All done, Python takes 5 minutes to start
@madking35 ай бұрын
I usually create a list with 500 random numbers and sort it with bubble sort; that gives me 5 min best case
@jongeduard5 ай бұрын
Yeah, but let's also talk about performance in Python and how you want to compare it to anything like C, C++ or Rust.
@thuan-jinkee99455 ай бұрын
Hahahah
@MunyuShizumi5 ай бұрын
@@jongeduard whoosh
@iritesh5 ай бұрын
@@jongeduard wooosh
@christopherweeks896 ай бұрын
Remember: this is the stuff we’re training our AI on
@monad_tcp6 ай бұрын
job security for humans
@enzi.6 ай бұрын
@@monad_tcp 😂😂
@Avighna6 ай бұрын
💀☠️💀☠️💀
@platin21486 ай бұрын
It doesn't matter, as LLMs have inherent fuzziness from being statistical models.
@codinghuman99546 ай бұрын
good
@AJMansfield15 ай бұрын
As a firmware engineer, my first instinct was "set the alarm peripheral to trigger an interrupt handler"
@jamesblack27195 ай бұрын
That was my thought also, but I come at it from a C background and his approach just didn't seem elegant. It seems overly complicated on something that is rather simple to do. Shame AI will be trained on this approach.
@cpK054L5 ай бұрын
Wtf is an alarm peripheral? Did you mean Timer?
@AJMansfield15 ай бұрын
@@cpK054L on a system with a free-running continuously-increasing system clock, you set the alarm register to generate an interrupt when that system clock reaches the set value - in this case, you'd take the current time, add two minutes worth of clock ticks to that value, and set the alarm to that value.
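A rough sketch of the alarm-register idea in C++, assuming a hypothetical MCU with a free-running 32-bit tick counter and a compare/alarm register; the register addresses, tick rate and IRQ handler name below are all made up for illustration and differ on every real chip.

    #include <cstdint>

    // Hypothetical memory-mapped registers (addresses are placeholders).
    volatile std::uint32_t* const TIMER_COUNT = reinterpret_cast<std::uint32_t*>(0x40001000); // free-running counter
    volatile std::uint32_t* const TIMER_ALARM = reinterpret_cast<std::uint32_t*>(0x40001004); // compare/alarm register

    constexpr std::uint32_t TICKS_PER_SECOND = 1000; // assumed 1 kHz tick

    void arm_alarm(std::uint32_t seconds_from_now)
    {
        // Interrupt fires when the counter reaches this value (counter wrap-around ignored here).
        *TIMER_ALARM = *TIMER_COUNT + seconds_from_now * TICKS_PER_SECOND;
    }

    extern "C" void TIMER_ALARM_IRQHandler() // vector name is vendor-specific
    {
        // Interrupt context: keep it short, e.g. set a flag that the main loop checks.
    }

No thread is consumed and nothing polls; the CPU can sleep or do other work until the interrupt fires.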
@3xtrusi0n5 ай бұрын
@@cpK054L MCUs have hardware timers that you can use without consuming a thread. Depending on the CPU and the type of timer implemented (in hardware), you can have it trigger a hardware interrupt which will then kick off a given task/instruction. It's a peripheral alarm, because it is a peripheral on the hardware/MCU. You can also call it a timer; either name means the same thing. Alarm would indicate you are 'counting down' and timer would indicate you are 'counting up'.
@cpK054L5 ай бұрын
@3xtrusi0n I've never heard it called an alarm. Also, timers don't have "counters" from what I've seen... they only have flag bits. The ISR just waits for the flag to be raised, then you must reset it, otherwise it doesn't work the next cycle
@TwistedForHire5 ай бұрын
Funny. I am an office application engineer and my first thought on looking at your code was "noooooo!!!" We try to use as few resources as possible, and a 5ms constant loop is "terrible" for battery life. It's funny how people from different coding worlds approach a problem differently. My first instinct was much closer to the sleep/wait implementation (though I wouldn't waste an entire thread just to wait).
@Brenden.smith.9215 ай бұрын
I was thinking the same thing. I would've had a thread sleeping and then doing whatever needs to be done after the sleep timeout using a callback. If there was a need to share data with the main thread and I didn't want to do safe multi threading I'd use a signal to interrupt the main thread (unless it was something that wasn't very important, unless, unless, unless). Looping over and over like that and sleeping for 10ms is the exact same solution as the second guy except he slept for 1s which is what was laughed at, but it's fundamentally the same solution. Just a lot sloppier.
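A minimal sketch of that sleep-then-callback idea with a detached std::thread; the function name and the demo sleep in main are just for illustration, and cancellation and thread-safety of the callback are left to the caller.

    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <thread>

    // Fire `callback` once after `delay`, on a background thread.
    void call_after(std::chrono::milliseconds delay, std::function<void()> callback)
    {
        std::thread([delay, cb = std::move(callback)] {
            std::this_thread::sleep_for(delay);
            cb(); // runs on the background thread, so it must be thread-safe
        }).detach();
    }

    int main()
    {
        call_after(std::chrono::minutes(5), [] { std::cout << "5 minutes are up\n"; });
        // Keep the process alive; a detached thread dies silently if main() returns first.
        std::this_thread::sleep_for(std::chrono::minutes(6));
    }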
@wi1h5 ай бұрын
@@Brenden.smith.921 as for your second point, it's not the same. the "game loop" solution presented is off from the final by at most 5 ms, the second solution from the thread is off by (loop processing time required) * (loop iterations, in that case 300)
@RepChris5 ай бұрын
As with anything "engineering" (to clarify: coding and CS has a lot of stuff that's sitting in the fuzzy zone between science and engineering, not trying to knock your status as an engineer), there isnt one "best" solution, even just by cost and development time being in the picture. In a game engine the (relatively) minuscule overhead doesn't matter since youre doing a lot of other stuff per frame/simulation step which is way way more costly, and the inaccuracy youre going to get is probably a nonissue since a game generally doesn't need a 5 minute timer to be accurate down to the millisecond. So the time spend thinking about a better solution and implementing it is going to be better spent working on something more important. Completely different picture for something that needs to be very accurate, or actually power/compute efficient (which games certainly are not in any capacity, at least 99+% of them)
@youtubehandlesux5 ай бұрын
Me writing a video game and trying to make it stable up to 300 fps: A whopping 5ms??? In this economy???
@livinghypocrite52895 ай бұрын
Yeah, coming from yet another background, I immediately caught other stuff. Just reading the original problem, my immediate question was: how accurate does the timer need to be? I constantly have to explain to people that I can't give them millisecond accuracy on an operating system that isn't a real-time OS. So, I saw the Sleep solution and my immediate reflex was: that isn't going to be accurate, because a Sleep tells the OS to sleep at least that amount of time, so the OS can decide to wake my application at a later time. Could be fine, but this depends on how accurate the timer needs to be. Also, when seeing the recursive function, I noticed the stack usage of that solution, but also the problem that a loop is simply faster than a recursive function, because a function call has overhead; building that stack takes CPU time, so simply by calling the function recursively the timer will get more inaccurate, without even looking at how long the stuff executed while running the timer takes.
@systemhalodark6 ай бұрын
Trolling is a art; Topnik1 is a true artist.
@mabciapayne166 ай бұрын
an* ( ͡° ͜ʖ ͡°) And I don't think he made a bad code on purpose.
@херзнаетгражданинЕбеньграда6 ай бұрын
@@mabciapayne16 trolling is art, and @systemhalodark is an true artist
@mabciapayne166 ай бұрын
@@херзнаетгражданинЕбеньграда You should really learn English articles, my dear friend ( ͡° ͜ʖ ͡°)
@mabciapayne166 ай бұрын
@@херзнаетгражданинЕбеньграда a true artist* ( ͡° ͜ʖ ͡°)
@benhetland5766 ай бұрын
And top it off with a recursive call _#seconds_ deep instead of iterating, just to increase the chance of stack overflow on long waits, I assume.
@akashpatikkaljnanesh6 ай бұрын
You want your users to hate you? Tell the user in the console to set a timer for 5 minutes, wait for them to press space and start the timer, and press space to finish it. :)
@no_name47966 ай бұрын
Just have the user manually update the timer at this point...
@HassanIQ7776 ай бұрын
just have the user manually write the code
@dandymcgee6 ай бұрын
just have the user go touch grass, then they won't need a timer.
@akashpatikkaljnanesh6 ай бұрын
@@dandymcgee Wonderful idea
@DasHeino20105 ай бұрын
Just have the user prompt ChatGPT! :3
@add-iv5 ай бұрын
sleep doesn't take any CPU resources during the sleep time, since the thread will be put into the pending queue (on most OSes). Periodically checking will consume CPU time, even if it is minimal, and is a very game-engine-like solution.
@nerdError0XF5 ай бұрын
Isn't creating a thread expensive by itself?
@tylisirn5 ай бұрын
@@nerdError0XF Sleep isn't creating any threads, it puts the calling thread to sleep.
@nerdError0XF5 ай бұрын
@@tylisirn okay, makes sense
@Abc-jq4oz5 ай бұрын
So who checks the OS’s pending queue then? And how often?
@tylisirn5 ай бұрын
@@Abc-jq4oz The OS's task scheduler does in conjunction with hardware. The scheduler maintains a priority queue which has all tasks organized by priority and when they need to wake up. When a task finishes its timeslice the scheduler looks at the next task in the priority queue and if it's ready to execute, it executes it. If the next task is not ready to execute the OS sets a hardware timer to raise an interrupt when the next task is scheduled to run and puts the CPU into low power sleep state (usually ACPI state C1 (halted) or C2 (stopped clocks), these days even C3 state (deep sleep) is used for ultra low power computing when on battery power; in C3 state most of the CPU core is powered down and caches are allowed to go stale requiring cache refresh when the CPU reactivates). The hardware interrupt wakes up the CPU at the scheduled time.
@cubemaster12986 ай бұрын
I am not trying to defend topnik1's code in the video, it is pretty bad indeed, BUT I am pretty sure it is not going to be 300 stack frames deep. From the looks of it, it is a tail-recursive function, so any major compiler (e.g. clang) will do tail call optimization.
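For reference, the tail-recursive shape being discussed is roughly this; whether it actually becomes a loop is up to the optimizer (tail-call optimization is typical at -O2 in clang/gcc but not guaranteed by the standard).

    #include <chrono>
    #include <iostream>
    #include <thread>

    void countdown(int seconds_left)
    {
        if (seconds_left <= 0) {
            std::cout << "done\n";
            return;
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
        countdown(seconds_left - 1); // tail call: optimizers commonly turn this into a jump
    }

    int main()
    {
        countdown(300); // 5 minutes
    }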
@JFMHunter5 ай бұрын
This should be higher
@JuniorDjjrMixMods5 ай бұрын
But then you would be expecting for the compiler to fix a problem that shouldn't exist...
@MeMe-gm9di5 ай бұрын
@@JuniorDjjrMixMods Tail Call Optimization is often required to write certain algorithms "pretty", so it's often guaranteed.
@Kazyek6 ай бұрын
Good video overall, but the part about precision at 15:21 is a bit lacking. To be honest, precision is most likely not very important when sleeping for 5 minutes, but the overall takeaway about how sleep works is a bit wrong. Sleep will sleep for *AT LEAST* the time specified, but could sleep for quite a bit longer depending on other tasks' CPU utilization, the HPET (High Precision Event Timer) used by the system (or not, some systems might not even have one), the OS's timer resolution settings, the virtual timer resolution thing that Windows does on laptops for power saving where it will actually stretch the resolution, etc etc... Therefore, when very high precision is desired (for example, a frame limiter in a game, to have smooth frame pacing), you don't want to sleep all the way, but rather sleep for a significant portion of the time and busy-loop at the end. This fundamental misunderstanding of how sleeping works is why so many games have built-in frame limiters with absolutely garbage frame pacing, and why you get a much smoother experience by disabling them and using something like RTSS's frame limiter instead.
@Kazyek6 ай бұрын
And by "quite a bit longer", I mean that on a windows laptop in default configuration, a sleep(1ms) might sleep for over 15ms sometimes!
@Fs3i5 ай бұрын
Yeah, “make something happen at x time” is a hard problem, and really hard (near impossible) to write in a portable fashion
@shadowpenguin34825 ай бұрын
When I was younger I was always surprised how sleeping for 0ms is much slower than sleeping for 1 ms
@JohnRunyon5 ай бұрын
You can get pre-empted anyway. If you need to guarantee it'll happen at an exact moment then you should be using an RTOS. Thankfully you almost never actually need to guarantee that. A frame limiter should be maintaining an average, not using a constant delay, and then it won't even matter if the OS delays you for 15ms. Btw, a 15ms jitter is completely and totally unnoticeable.
@TheArtikae3 ай бұрын
@@JohnRunyon Bro, that's a whole ass frame. Two if you're running at 144 Hz.
@hi11711728 күн бұрын
There is what I would argue is a better way than any of the methods described here that does not require async, does not require threads, etc. It's plain, simple, single-threaded, and it still allows your program to do other stuff while the timer runs. What you do is use the setitimer function from C, and then register a signal handler for SIGALRM. In your signal handler, you unset the timer if you are done, or you can keep it there if you want it to continue for the next 5 minutes.
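Roughly what that looks like with POSIX setitimer/SIGALRM (Linux/macOS only, error handling omitted); only async-signal-safe work belongs in the handler, so it just sets a flag here.

    #include <csignal>
    #include <cstdio>
    #include <sys/time.h>
    #include <unistd.h>

    volatile sig_atomic_t timer_fired = 0;

    extern "C" void on_alarm(int) { timer_fired = 1; } // async-signal-safe: just set a flag

    int main()
    {
        std::signal(SIGALRM, on_alarm);

        itimerval tv{};
        tv.it_value.tv_sec = 5 * 60; // first expiry in 5 minutes
        tv.it_interval.tv_sec = 0;   // one-shot; set to 5 * 60 to repeat every 5 minutes
        setitimer(ITIMER_REAL, &tv, nullptr);

        while (!timer_fired) {
            pause(); // or do real work here; the signal interrupts us when time is up
        }
        std::puts("5 minutes elapsed");
    }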
@asteriskman6 ай бұрын
"Train the AI using the entire internet, it will contain all of human knowledge." The AI: "derp, but with extraordinary confidence"
@Pablo360able5 ай бұрын
this explains so much
@scowell5 ай бұрын
In embedded land we have real timers! Talk about accurate... sub-nanosecond is easily doable. Overhead? It's a peripheral! Ignore it until it interrupts you.... or have it actually trigger an output without bothering you if you really need that accuracy. Love timers.
@JohnSmith-pn2vl5 ай бұрын
time is everything
@gonun695 ай бұрын
They are great but you better have the datasheet and calculator ready to figure out how you need to set them up.
@RepChris5 ай бұрын
@@gonun69 that's the case for pretty much everything embedded
@muschgathloosia58755 ай бұрын
@@gonun69 I can't imagine you would ever not have the datasheet ready
@scowell5 ай бұрын
@@gonun69 Exactly... gets easier when using a PLL to run the clock... I do this for syncing to video.
@brawldude26566 ай бұрын
I recently made a discord bot. The task was giving every user a cooldown timer. At first glance it may seem like an insane task, but once you realise time just goes on, you don't have to do any computation in the meantime. You can just compare start and end whenever the user needs to be updated. And this is what many village/base-building games do with their player base. For example, say you need a building that takes 3 days to build. When the player is online you can just update every second, but when the player is offline you can store the end date and compare against it when the player logs in again, or when someone interacts with that user.
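In C++ that "store the end time, compare only when asked" idea is just a couple of lines (the struct and member names are made up):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    struct Cooldown {
        Clock::time_point ready_at{}; // when the cooldown expires

        void start(std::chrono::seconds duration) { ready_at = Clock::now() + duration; }
        bool ready() const { return Clock::now() >= ready_at; } // evaluated lazily, only when someone asks
    };

For offline progress you would persist a wall-clock timestamp (e.g. system_clock) instead, since a steady clock doesn't survive restarts.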
@theairaccumulator71446 ай бұрын
Duuh like if you can't figure this out you really shouldn't touch an ide
@boycefenn6 ай бұрын
@theairaccumulator7144 asshole alert!
@brawldude26566 ай бұрын
@@theairaccumulator7144 there are many people who can't even get close to figuring this out I'm not even kidding
@Brahvim6 ай бұрын
Lazy-loading, pretty much, right?! Nicely used as always! Some things are okay to do right before their consequences are needed...
@Brahvim6 ай бұрын
@@theairaccumulator7144 Don't act so, please...
@0xkleo6 ай бұрын
I'd never think i would ever spend 20 minutes to watch a 11 year old post about a 5 minute timer. but i learned something Edit: 350 likes??? Damn i must be famous
@monkeywrench41665 ай бұрын
He doesn't look 11 year old tbh
@driz63533 ай бұрын
@@monkeywrench4166 11 year old *post*
@rogercruz15475 ай бұрын
25 years ago when I started coding I took setTimeout and setInterval in ActionScript for granted; I was 8. Now I was thinking of a thread with a loop and events that trigger callbacks on other threads, depending on timers you set, which would mimic that behaviour, but when you mentioned Promises I realized it would be way easier to open a thread for each timer and just sleep...
@KieranDevvs6 ай бұрын
The best solution for this is asynchronous execution. That way you can decide how the execution is performed, i.e. on the same thread or on a separate thread, and when the execution / timer is complete, you can decide if you want to rejoin the execution context (thread) back to main and take the perf hit, or run your logic on the background thread without any perf hit. You get all the benefits, i.e. you don't need to worry about thread safety, and it's fully configurable in how you want it to run.
@phusicus_4045 ай бұрын
Wonderful, how to do it in C++?
@KieranDevvs5 ай бұрын
@@phusicus_404 std::async? I thought that was pretty obvious.
@phusicus_4045 ай бұрын
@@KieranDevvs he used this in his code, you use it in other way then
@KieranDevvs5 ай бұрын
@@phusicus_404 Nope, the way shown in the video is correct more or less. The thread sleeping is bad, but apart from a few fixes, the general premise is there. If you put the thread to sleep and don't use a state machine to allow the thread to return, you block the main thread in async cases where you only use one thread (mainly in cases where you're using a UI).
@w花b19 күн бұрын
What about C
@kuhluhOG5 ай бұрын
17:15 Btw, small nitpick for the C++14 users (and above): move your callback into the lambda capture, because if the callback is an object with a defined operator() (like a lambda), there could be big-ish members (like with a lambda capture).
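Concretely, something along these lines (a sketch based on the video's async version, with an init-capture moving the callback):

    #include <chrono>
    #include <functional>
    #include <future>
    #include <thread>
    #include <utility>

    std::future<void> start_timer(std::chrono::milliseconds duration, std::function<void()> callback)
    {
        // C++14 init-capture: move the callback into the lambda instead of copying it,
        // which matters if the callable carries big captured state.
        return std::async(std::launch::async,
            [duration, cb = std::move(callback)]() {
                std::this_thread::sleep_for(duration);
                cb();
            });
    }

Note that the returned future's destructor blocks until the task finishes, so it has to be kept alive somewhere.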
@peterjansen48265 ай бұрын
A game-developer who cautions to not use OS-dependent libraries. Music in my Linux-gaming ears. 😉
@andersonklein35876 ай бұрын
I'm surprised no one brought up interrupts, I don't know about modern C++, but I've seen in old school assembly this concept of setting a "flag" that interrupts execution and calls/executes a function before handing back the CPU.
@MrHaggyy6 ай бұрын
On embedded devices, this works like a charm as dozens of timers are running all your peripherals. So you pick one of them and derive a logic for all the other timed events
@sopadebronha6 ай бұрын
This was literally the first thing that came to my mind. I think it's the instinctive solution for a firmware programmer.
@sinom6 ай бұрын
I'm not an embedded programmer so I might just not know something, but afaik the C++ stl doesn't provide any device agnostic way of handling interrupts, so anything you do with interrupts will always be hardware dependent and non portable. If you are using some specific microcontroller and don't care about portability then interrupts would probably be a good way of handling the problem.
@fullaccess26456 ай бұрын
If I want to run the callback on the main thread, could interrupts avoid the while loop that checks for the task queue?
@sopadebronha6 ай бұрын
@@fullaccess2645 That's the whole point of interrupts.
@valseedian5 ай бұрын
Haven't watched for even 1 second, but the answer is a thread that sleeps for nearly 5 min, then handles the last few ms until the time is reached, then calls a callback or sets a flag. When I was making my scratch GUI system in C++ I had to solve the timer issue, so I wrote a whole scheduler and event handler subsystem.
@mike2000175 ай бұрын
Coming from POSIX land, where anything interesting has a pollfd (file-descriptor) at the bottom of it, event loops consist of something that gathers all the interesting events and then calling "poll" on their pollfd's (or calling "epoll" or "select"). So, in that world, a timer like this is either implemented via a timerfd (you tell the kernel to create a "file" and trigger it at a specific time) or by simply setting the timeout for the poll call to the earliest wake-up time among your active timers (personally, I prefer that, gives more control). No messing around with threads. Coroutines are another way to do the same thing (coroutines are syntactic sugar on top of the same mechanisms).
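For anyone curious, the timerfd flavour is roughly this (Linux-only, error handling omitted); in a real program the pollfd would just sit in the same poll set as your sockets.

    #include <cstdint>
    #include <cstdio>
    #include <poll.h>
    #include <sys/timerfd.h>
    #include <unistd.h>

    int main()
    {
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);

        itimerspec spec{};
        spec.it_value.tv_sec = 5 * 60; // fire once, 5 minutes from now
        timerfd_settime(tfd, 0, &spec, nullptr);

        pollfd pfd{tfd, POLLIN, 0};
        for (;;) {
            poll(&pfd, 1, -1); // sleeps in the kernel, no busy-waiting
            if (pfd.revents & POLLIN) {
                std::uint64_t expirations;
                read(tfd, &expirations, sizeof expirations); // must read to clear the event
                std::puts("timer fired");
                break;
            }
        }
        close(tfd);
    }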
@TryboBike6 ай бұрын
This threaded timer has a subtle bug. If 'work' performed during the timer duration takes longer than the timer itself, then after the timer concludes its scheduled work will need to wait for the 'join', thus delaying the execution by more than the 5 minutes. On the flip side, moving the 'timer' callback to the timer thread will require the work of main and 'timer' to be concurrent, which brings its own set of problems. Frankly, having any sort of 'delayed' execution done in a single thread while stuff is happening during the wait period is a pretty difficult problem to tackle. Unless it is something like a game, where there is a game loop, or an event-driven application. But even then, depending on the resolution of the loop, the wait period might be very, very different from what was specified.
@delta32445 ай бұрын
That's not what thread::join() does. thread::join() has _no effect_ on the thread corresponding to the std::thread it is called on. It only affects the thread which calls thread::join(), by making it block until the std::thread which .join() was called on finishes. Without thread::join() at the end of main(), the code following the timer would fail to run if main ended before the timer did. That's why it exists. To reiterate: it does not tell the timed thread to do any work. It tells the main thread to wait for the timed thread's work to finish before ending the program. The timed thread does work on its own, once the OS wakes it up (which will happen sometime after the sleep duration).
@Templarfreak2 ай бұрын
The best way to handle a timer: if you can ever avoid calculating the timer yourself, you should. What do I mean? If you ever have to try to calculate the current time that the timer has run for, or how much longer the timer has to run, then your timer *will always* be more inaccurate compared to when you are *not* doing that, because you will always be using time to calculate that timer's progress, and that will change when you check whether the timer has completed. Not by a lot, but at best you will have a different and more insidious version of an off-by-one error that can cause problems that are very difficult to debug. So the solution you have in this video is very good on the basis that it avoids that problem. There are other useful features to have for generalized timers (pausing/unpausing, getting remaining time, getting current time, having more than one callback, whether to repeat the timer or not, etc.), but this covers the absolute basic necessities to get the timer working and functioning as one would typically expect, and that is good in my book
@Reneg9736 ай бұрын
... And then you notice that your 5sec timer needs 5.03sec on your first PC. On the second it takes 5.1s and after some debugging you find out the OS moved the thread onto an E core and that your thread priority was not high enough. Would be nice to extend this video to handle more details. Like higher+highest accuracy or lower+lowest CPU usage.
@dozogАй бұрын
Please let 2013 know about this issue.😂
@htpc002Weirdhouse25 күн бұрын
@@dozog Nah, 2013 is dealing with clocks that jump forwards and backwards as they're moved from core to core.
@htpc002Weirdhouse25 күн бұрын
Once had to write a Bayesian model to determine the inverse function for the sleep-like function to (gradually) determine what delay to request to actually get the desired delay.
@deltamico4 күн бұрын
Why not just use exponential checks
@soonts26 күн бұрын
Unlike game engines, well-written applications do not have a loop which runs every frame. They sleep most of the time and only wake up when they need to do something. On Windows, see SetTimer function and WM_TIMER message. On iOS and Mac, see Timer class. On Linux, timerfd_create function to create the timer and poll() to sleep, ideally integrate into the same poll which waits for data from x-window socket. Pretty sure none of that stuff is available in the C++ standard library.
@alexanderheim969019 күн бұрын
Create a thread with a binary heap holding deadlines and a condition variable. Pop the binary heap in a loop and block on the condvar for at most the time until the earliest deadline. If a new deadline is inserted, just notify the condvar. Add additional logic for repeating timers and for re-pushing deadlines that aren't finished yet but got interrupted by a smaller, newer deadline.
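A condensed sketch of that scheduler: one worker thread, a min-heap of deadlines, and a condition variable for wakeups; repeating timers and the "re-push preempted deadlines" logic are left out.

    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class TimerScheduler {
        using Clock = std::chrono::steady_clock;
        struct Entry {
            Clock::time_point when;
            std::function<void()> fn;
            bool operator>(const Entry& o) const { return when > o.when; }
        };

        std::priority_queue<Entry, std::vector<Entry>, std::greater<>> queue_; // min-heap by deadline
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread worker_{[this] { run(); }}; // last member, so everything it uses is already initialized

        void run() {
            std::unique_lock lock(m_);
            while (!stop_) {
                if (queue_.empty()) {
                    cv_.wait(lock);                          // nothing scheduled: sleep until notified
                } else if (Clock::now() >= queue_.top().when) {
                    auto fn = queue_.top().fn;               // earliest deadline reached: run it
                    queue_.pop();
                    lock.unlock();
                    fn();
                    lock.lock();
                } else {
                    cv_.wait_until(lock, queue_.top().when); // sleep until the earliest deadline or a notify
                }
            }
        }

    public:
        void schedule(std::chrono::milliseconds delay, std::function<void()> fn) {
            {
                std::lock_guard lock(m_);
                queue_.push({Clock::now() + delay, std::move(fn)});
            }
            cv_.notify_one(); // a new (possibly earlier) deadline: wake the worker to re-check
        }

        ~TimerScheduler() {
            { std::lock_guard lock(m_); stop_ = true; }
            cv_.notify_one();
            worker_.join();
        }
    };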
@virkony6 ай бұрын
9:21 In that case tail call elimination should fire, unless there were stack allocations done in "dowhatuwantinmeantime". So it effectively turns into a jump to the beginning of the function.
@motbus36 ай бұрын
Fork Execve bash -c sleep 5
@yoshi3146 ай бұрын
isn't that 5 seconds wait?
@sadhlife6 ай бұрын
sleep 300
@ProtossOP6 ай бұрын
@@yoshi314 easy fix, just multiply by 60
@Pritam2526 ай бұрын
MS Windows be like:
@no_name47966 ай бұрын
Or bash -c sleep 300 on linux...
@oleksandrpozniak6 ай бұрын
As an embedded developer I like to use SIGALRM and a handler when I'm sure that I'll only need one timer at a time. If I need several timers I use timer_create, aka Linux timers.
@jdrisselАй бұрын
I like the classes in the Qt library, but that is awfully heavyweight if all you are using is a timer. My preference would be to use a mutex to signal between two threads, and use one thread for the timer and the other for the work. The work thread gets the mutex, does some work and releases it (if it doesn't get the mutex, the timer has expired). The timer thread checks the time and then loops, possibly using sleep or usleep if there is a lot of time left. When the timer expires it begins trying to grab the mutex; if it succeeds, the thread terminates. When the working thread fails to get the mutex, it cleans up (waits on the timer thread, destroys the mutex) and does whatever needs to happen when the timer expires, though possibly not in that order. This should work on almost any platform or OS. In general, I would sleep in the timer loop for 1 second if there is more than 10 seconds left, then 100ms until 1 second is left, then, assuming your processor is fast enough, 10ms until 0.1 second is left, etc., until I am down to some minimum time, at which point we just spin until the time is up.
@not_herobrine37526 ай бұрын
My way would include obtaining a timestamp at the beginning, then checking every iteration of the application loop whether the elapsed time is greater than or equal to the target duration, then doing whatever if that condition is true
@ruix6 ай бұрын
This is also what I thought
@harald4gameАй бұрын
One point is missing: if you are already using a specific environment that has timers, I really recommend using those. In a program using the native Windows API, with a message loop and a window, use SetTimer and the WM_TIMER event. In a Qt application use the timer facilities provided by Qt, e.g. QTimer::singleShot. In MFC use CWnd::SetTimer and the OnTimer(..) message handler. In a console application the std sleep_for is fine. And definitely, if more jobs are needed, don't create a thread per job (thread count is limited). Instead create a single worker thread together with a producer/consumer pattern. Also, never use time() or any localized or adjustable time for these kinds of timeouts.
@radumotrescu38324 ай бұрын
I think this is one of the best situations where Asio (also packaged in Boost) actually makes sense if you are planning to do this kind of thing multiple times in a project. If you have to run multiple callbacks on a repeating and variable timer, and you have to handle IO in general, slapping an Asio io_context and a few steady timers is super easy and extremely reliable. You also get nice functionality like early cancelation, error code checking and other things that make it nice for production.
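For reference, the Asio version really is about this short (Boost flavour shown; a standalone-Asio build differs only in namespaces).

    #include <boost/asio.hpp>
    #include <chrono>
    #include <iostream>

    int main()
    {
        boost::asio::io_context io;

        boost::asio::steady_timer timer(io, std::chrono::minutes(5));
        timer.async_wait([](const boost::system::error_code& ec) {
            if (!ec) // ec is set if the timer was cancelled early
                std::cout << "5 minutes are up\n";
        });

        io.run(); // other async work (sockets, more timers) can share the same io_context
    }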
@mikefochtman71645 ай бұрын
We had to run code in 'real-time' in the sense of training simulators. This means we had to perform a lot of calculations, then do I/O interfacing with the student's control panels in a way that the student couldn't tell the difference between the simulator and the actual control room. So update the I/O with new calculation results at LEAST every 250 ms. I know sounds slow by gaming standards, but we did a LOT of physics calculations for an entire power plant. So we set up what had to be done in each 'frame' and used a repeating interrupt timer configuration. A frame ran doing calcs and I/O then sleeps until the next interrupt. If we occasionally 'miss' an interrupt because the calcs took too long, we had to 'catch up' the next frame. (one way to do this was the interrupt service routine increment a simple frame_counter and main loop checks if we 'missed' an incremental step) For time delays, we simply did a counter in the main code that would count up to 'x' value because we knew each time the code executed it was 'delta-time' step since last execution. So for 5 minutes at a frame time of 250 ms, simply count up to 1200. This was a few years back, but you can see it's similar to your 'game engine' concept.
@jamesmackinnon61085 ай бұрын
I remember when I was first starting programming I learned visual basic script (Why I chose that I have no idea), and I was looking up how to wait for a period of time and ended up on a forum that said the way to set a timer was to ping google, figure out how long that took, and then divide the time you want to wait by the length of the ping and ping google that amount of times.
@tunk_2ton1685 ай бұрын
I also have chosen this path. I chose vbs because it doesn't require much. Literally just open notepad and you are good to go and its easy to learn. What did you move onto from that?
@jongeduard5 ай бұрын
Yeah, people can really think in too simple ways about such a thing, but that forum thread was really bad. LOL. As someone with years of experience in several programming languages, especially C# professionally but for example also Rust nowadays (it is my favorite now), I can only say that the modern type of async code at the end of the video was obviously the solution I was thinking about immediately, even though I didn't exactly know the modern C++ implementation for async code. But this is how this kind of thing is generally done in modern programming. Many languages do very similar things. It's all related to programming experience, too: if you have done enough concurrent and parallel programming, it gradually becomes far more natural to think that way.
@sub-harmonik6 ай бұрын
Generally the extensible way is to maintain a priority queue that contains time values and callbacks. Every loop, poll the first element of the priority queue and keep removing until its time value > current time. That way you can have as many timers as you like. Things get way more complex if you need accurate sleep without spinning, though. You pretty much need to get into platform-specific APIs as well as setting certain thread priorities / interrupt rates. Recent Windows has pretty weird and relatively undocumented timer handling.
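A sketch of that polled variant, checked once per iteration of an existing main loop (the update/render/running names in the usage comment are placeholders):

    #include <chrono>
    #include <functional>
    #include <queue>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct Timer {
        Clock::time_point when;
        std::function<void()> callback;
        bool operator>(const Timer& o) const { return when > o.when; }
    };

    // Min-heap: the earliest deadline is always on top.
    std::priority_queue<Timer, std::vector<Timer>, std::greater<>> timers;

    void poll_timers()
    {
        auto now = Clock::now();
        while (!timers.empty() && timers.top().when <= now) {
            auto cb = timers.top().callback;
            timers.pop();
            cb(); // runs on the main thread, right inside the loop
        }
    }

    // In the main/game loop:
    //   timers.push({Clock::now() + std::chrono::minutes(5), [] { /* ... */ }});
    //   while (running) { update(); render(); poll_timers(); }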
@FastRomanianGypsies6 ай бұрын
Now pause it. Get the amount of time left. Modify the time left. Change the callback. Allow for async callbacks. Allow for long-running callbacks that continue through power disruptions and errors by writing to disk. And by writing to disk, include the callback with its payload as a serialized file so that when the timer resumes, it doesn't require resetting the callback. Allow for greater precision by decreasing the sleep at the end of the loop as the timer nears completion. There's a lot more to creating a useful timer than just calling sleep on a separate thread, and implementing a timer with all the aforementioned features is quite the challenge, but in many cases absolutely necessary to handle real-world business logic.
@sebibence025 ай бұрын
Timing in CS is an artform, basically an optimization between precision and CPU usage. The best approach is to go with the lowest level hardware interrupts and register a callback on the interrupt event. In higher level code the more precise timing you want, more frequently you need to schedule your timer thread which will lead to higher CPU usage. If you optimize to have lower CPU usage, the thread will be scheduled less often, therefore decreasing precision (the thread won't be able to check for elapsed time as frequently). Considering this the == approach in one of the replies is a huge mistake, because it is guaranteed that the timer never will be exactly equal due to the operating system's added thread scheduling overhead. Even with hardware interrupts there will be a thread swap operation losing some time until the instruction pointer is set to the callback method. Good stuff
@sumikomei6 ай бұрын
at first glance I totally didn't read "using namespace std::cherno_literals;"
@ADAM-qd9bi6 ай бұрын
I’ve always thought of us, and used to always misspell it with “cherno” 😭
@satibel5 ай бұрын
note that doing what you did with the system_clock or high_resolution_clock (in case it's not steady) instead of steady_clock can work most of the time, but you'll get issues when the time changes due to daylight savings or such, and you can accidentally get a one hour and 5 min timer
@delta32445 ай бұрын
or a zero minute timer, for that matter
@robwalker46535 ай бұрын
For the first idea example you showed I would have just calculated now + 5 mins when the timer is created, store that time as target time. Check in loop if current time is greater or equal to the target time, if so, the timer has triggered. Rather than casting a duration of one time minus the other each loop.
@szirsp5 ай бұрын
20:00 My use cases of timers usually involve programming the interrupt controller, setting up HW timers or RTC alarms in microcontrollers... setting up "sleep"/standby/poweroff states What different worlds we live in :)
@pastasawce6 ай бұрын
Yeah def getting into thread pool territory. Would love to see more on this.
@xlerb22865 ай бұрын
Just shows that nothing is simple. What type of app are you working with? Do you need the thread to remain alive while the timer is running? Do you care about multi-platform? How much accuracy do you need? How important is it that code have low processing overhead? And the list goes on. (And that recursive example is going to keep me awake tonight, it takes a special type of person to write code like that)
@aakashgupta62856 ай бұрын
As an embedded engineer, I would just use a built-in timer interrupt, which should be available for all platforms, although not portable.
@thelimatheou4 ай бұрын
A fascinating historical snapshot of the Indian application development process. Thanks!
@siddy.g61464 ай бұрын
What makes it Indian?
@thelimatheou4 ай бұрын
@@siddy.g6146 copy, paste and iteration of code on stack exchange...
@thelimatheou4 ай бұрын
@@siddy.g6146 copying and pasting crappy code from stack exchange
@thelimatheou4 ай бұрын
@@siddy.g6146 copy/paste/stealing code from forums
@justsomeguy6336Ай бұрын
@@siddy.g6146 Indian code is infamous for being atrociously bad
@woobilicious.6 ай бұрын
I was thinking about the "busy wait" issue you end up with in game loops, especially if you need to serialize timers / handle game saves when the user quits, and I came up with, storing all your deferred functions in a heap/priority queue, and then just check the head of the queue, and sleep for that amount of time, if you have a DSL, you could potentially have your code look like "bad" code that just calls sleep(), but really it's just a co-routine that yields the CPU.
@dawre31246 ай бұрын
If you need to wait for an accurate amount of time in a performance-critical multi-threaded environment, as briefly mentioned in the video, keep in mind that sleep functions are not accurate (I would assume async cannot fix this). With more threads than CPU cores, the amount a sleep oversleeps tends to go up too. For full accuracy, empty loops are the only way I know; for something reasonable, reduce the sleep time and use an empty loop afterwards. When I had problems with this I split the sleep into multiple calls (I felt like shorter sleeps are more accurate). I used something like this (C):

    void my_sleep_fast(const int64_t target_t)
    {
        int64_t time_diff;
        int64_t temp_sleep;

        time_diff = target_t - get_microseconds_time();
        temp_sleep = time_diff - (time_diff >> 3);
        while (temp_sleep > SLEEP_TOLERANCE_FAST)
        {
            usleep(temp_sleep);
            time_diff = target_t - get_microseconds_time();
            temp_sleep = time_diff - (time_diff >> 3);
        }
        usleep(SLEEP_TOLERANCE_FAST);
    }
@Tuniwutzi5 ай бұрын
It's interesting I never thought about how involved the simple question "how to delay code execution by X time" actually is. I usually work on stuff that is IO heavy and focuses on processing events as they come in (ie: a button was pressed, a socket received data, a cable was connected, ...). More often than not I already have an event loop, for example based on file handles and epoll/select. So my first instinct for a non-blocking timer was: create a timerfd and put it into the existing event loop. This video made me realize that I've never considered how many things become more straightforward if you're running a simulation that has one thread continuously running anyway.
@dennissdigitaldump8619Ай бұрын
Really it comes down to accuracy. If it's "tell me in 5 minutes", ms accuracy is probably too much resource use. Versus: the rocket needs to launch in 5 minutes. Different techniques for each case. In the extreme, a separate thread that syncs & tests against an atomic clock, calibrates the system interrupt to the atomic clock, does some calculations & sets an assembly interrupt. BTW I had to do this once.
@rikschaaf4 сағат бұрын
If you have a game loop and that game loop gets called often enough to provide a good enough resolution for your timers and you have data that can be processed in such a game loop, then your solution would indeed be viable, but do you see how it does have those 3 requirements? For a game, that's probably fine, because it (preferably) updates at least 60x per second and the processing in between is essentially just there to calculate the next frame based on the given inputs. Not all programs have a game loop though and you might not want to introduce one, just to be able to create a timer. In that case, the multi-threading solution is completely fine, as long as the thing you want to activate after the timer runs out is thread-safe.
@lurgee17066 ай бұрын
sleep() is great until you realize you can't cancel your timer and notify the user about it right away, so if you do need to handle cancellations (either manual or due to the process' shutdown), you're screwed. So:
* If you want a delay in the current thread, just use condition_variable::wait_for (see the sketch below).
* If you want it to be executed asynchronously, either spawn a thread yourself or spawn an std::async task (which may very well spawn a thread under the hood anyway) and, again, wait on a condvar.
* If you want your solution to be generic and scalable, you're bound to end up with some kind of scheduler, so you either use a library like boost asio (whose timers do use a scheduler under the hood), or write one yourself.
As "simple" as that. Frankly, seeing how easy it is to do the same thing in other languages like C#, coming back to C++ is just painful.
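A sketch of the cancellable flavour mentioned in the first bullet (single timer, no scheduler):

    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool cancelled = false;

    void timer_thread()
    {
        std::unique_lock lock(m);
        // Wakes up either when 5 minutes pass or when someone sets `cancelled` and notifies.
        if (cv.wait_for(lock, std::chrono::minutes(5), [] { return cancelled; }))
            std::cout << "timer cancelled\n";
        else
            std::cout << "5 minutes are up\n";
    }

    void cancel()
    {
        { std::lock_guard lock(m); cancelled = true; }
        cv.notify_all();
    }

    int main()
    {
        std::thread t(timer_thread);
        // cancel(); // e.g. on shutdown, the timer returns immediately instead of blocking for 5 minutes
        t.join();
    }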
@DerHerrLatz5 ай бұрын
Thank you for pointing out the obvious (since nobody else does). Would be nice to have an event loop or main loop in the standard library. But it would probably not work if you don't have an OS to provide the underlying functionality.
@BitTheByte26 күн бұрын
C# garbage collector kept eating my timers ;~;
@ДмитрийКовальчук-р9и5 ай бұрын
That's a nice video! And what I like the most is that you seem to be one of the very few people I know who actually use the steady_clock for timers and stopwatches, which is, by the way, the intended application of this tool. The vast majority resort to high_resolution_clock and then run around in panic when their system time gets updated. And man, is it a pain to search for the root of such a bug, because it's really hard to reproduce on your own machine and the behaviour just seems random. By the way, any implementation of sleep only guarantees that you sleep for at least the timespan, or at least until the point in time. There is actually no upper limit on how much time can pass after that. PS I guess the so-called expert wanted to do something similar to the main loop concept, with a step of one second instead of the display frequency, but just messed it up so badly that he ended up with recursive calls. As for your point in the video, I've seen a lot of samples of custom games where the wait at the end of the loop completely forgot to account for the time spent actually running the game.
@leedanilek51915 ай бұрын
Yeah... i don't think "most applications" behave like games with a loop that runs at 60hz. At least I've never worked with one, from iOS to CLI tools to backend to database to analytics tools. Game development is a special kind of inefficient
@J.D-g8.15 ай бұрын
A sleep function is literally a timer, unless it's very accurate in embedded systems, in which case it can be no-ops tuned to clock cycles. But a human-time-scale sleep func just needs a basic start_time = sys_time; if (sys_time - start_time > x) ...
@sebastianconde13416 ай бұрын
Coming from a C background I would actually use an alarm :) Sort of like this:

    #include <signal.h>
    #include <unistd.h>

    void handler(int s)
    {
        /* Whatever you want */
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);

        /* Set up the alarm for 5 minutes... */
        alarm(5*60);

        /* Rest of your code... */
    }

This way, your code (the process executing it) will be interrupted 5 minutes after the alarm() call was made. You can keep doing work until then. When the interruption comes (from a SIGALRM signal) your code will execute the handler function.
@harold27186 ай бұрын
TBH I really don't like all those "sleep"-based solutions, which (1) consume an entire thread just to have it do nothing, and (2) make the actual waited time depend on when the kernel decides to schedule the thread after the sleep runs out, depending on CPU load at the time etc. (at a scale of 5 minutes that's not very important, but still, it's a fundamentally inaccurate approach). To be fair to the people who suggested it, C++ doesn't really give us the tools to actually build a timer. (but Windows does, so I guess we're back to #include <windows.h> after all)
@ashton79816 ай бұрын
The waited time depending on the scheduling can be mitigated by using std::this_thread::sleep_until instead of std::this_thread::sleep_for. So instead of going off after the thread has been running for 5 min, it'll go off the first time it's scheduled after the 5 min mark.
@anon_y_mousse6 ай бұрын
With a hosted environment, you can't depend on a timer to more than a few milliseconds of resolution anyway. This isn't an unhosted realtime OS that most people will be using this for. Also, there are better OS's than Windows that someone should be using if they don't enjoy having their data stolen and sometimes erroneously deleted by a piracy checking algorithm.
@sub-harmonik6 ай бұрын
if your timer is 5 minutes it shouldn't matter too much. It's when you get down to much shorter intervals that it starts to matter
@XiremaXesirin4 ай бұрын
16:11 I do have my own Code Review thoughts. 😉 Specifically: I would create a time_point object at line 12, before we call std::async, which is the current time + the duration the user specified, and then inside the std::async call I would use this_thread::sleep_until instead of sleep_for. This way, you account for any possible delays in the execution of the lambda function. std::async is not _technically required_ to start execution of the provided functor right away in a new thread, even when the std::launch::async option is provided. It might be delayed if another functor is running and the thread pool is exhausted. So by determining "this is when the thread should awaken" preemptively, you make it more likely that the time the user provides will end up being accurate. Of course, the real solution is using boost::asio::steady_timer with a dedicated executor, which lets you cut the code down to only like 3 lines, but I guess the requirement was to use only vanilla C++, so...
@pschichtel5 ай бұрын
Regarding the "300 stack frames" comment on the recursive function... there is a thing called tail call optimization, which C++ compilers have apparently been doing for a while, that optimizes this into a loop. There are quite a few people who think more in recursion than in iteration, especially in a functional context. The async vs thread thing is nitpicking for the sake of it. There is really no advantage to be had _here_ by using async instead of just directly spawning a thread: you don't gain control, you don't gain performance, you are just obscuring the fact that a thread is spawned and suspended by wrapping it up in async. And when this async stuff gets put into a context where it might be scheduled onto a thread pool, now you have a thread from the pool blocked for 5 minutes. From game engines you are probably used to cooperative multitasking, which could have been an interesting spin, and the one solution being bashed from the forum actually describes the idea of cooperative multitasking, albeit with some problems.
@abraxas26585 ай бұрын
19:34 If I wanted it to happen on the main thread, I'd probably have a game loop (as you showed) but with an integrated event system. This would be implemented as a min-heap with the time it should be called at as the value being sorted on. Then all timers could be checked with a single comparison. (If the lowest time has not been reached, all the others are guaranteed not to have been reached.) At this point though, we are very close to a full game engine core haha
@stonebubbleprivatАй бұрын
A while loop is a bad idea, as it uses many resources. Sleeping instead of checking every 5ms gives other threads time to run, and our thread doesn't get throttled by the scheduler. The scheduler puts threads that use all their processing time in a lower-priority queue and prioritizes I/O-bound threads that give up the CPU early. By checking constantly we waste our limited processing time. A thread that waits five minutes has a high priority and is therefore likely to be called exactly after 5 minutes.
@Xudmud5 ай бұрын
I know I've done a similar thing using Boost (boost::asio::deadline_timer() and then used boost::posix_time::seconds() to get the timer value), and that had worked for me, plus kept it asynchronous so it wouldn't hold up the rest of the system. (Of course part of that was having to use c++0x, I'm sure there's a better way to do it, but had to work with what I had)
@uNiels_Heart3 ай бұрын
In your example code you already go std::chrono all the way (including your literals), so your duration_cast is redundant and just clutters the comparison up unnecessarily (it's easier to read without it). Comparing any duration with any other duration (of course I'm talking about values of the type duration rather than an unadorned number) will work as intuitively expected even if they carry with them a different unit. Or I guess I should say *because* each of them carries with it its unit.
@Chriva6 ай бұрын
Condition signals are probably something you want with huge delays like that. Especially if you want to exit cleanly without waiting forever
@ccgarciab5 ай бұрын
Do you mean std::condition_variable?
@Chriva5 ай бұрын
@@ccgarciab That would also work but it's really finicky to use with non-static bools (ie it's hard to spin up several instances of the same thing)
@ccgarciab5 ай бұрын
@@Chriva what's the name of the API that you're referring in your original comment then?
@ciCCapROSTi3 ай бұрын
Yeah, my first thought was the same, just a bit more complex. Implementing a component that can handle any amount of timer requests, and run it with the game loop just as any other component. And it calls the callbacks for all the timers expired on the frame. Probably needs a priority queue or more likely a sorted vector. Has the advantage of calling the callbacks on the main thread.
@JuniorDjjrMixMods5 ай бұрын
I've been coding for more than a decade in gta3script (the proprietary scripting language that Rockstar Games uses from GTA 2 to GTA V, maybe GTA VI too), and it's just this:

    SCRIPT_START
    {
        WHILE timera < 300000
            WAIT 0
        ENDWHILE
        PRINT_STRING_NOW "ok" 1000
    }
    SCRIPT_END

Or just WAIT 300000, but that would basically be a Sleep. PRINT_STRING_NOW would be PRINT_NOW (for translation support), but I'm using the modding variant for this example. The "NOW" is high priority, doesn't matter. Another detail: old GTAs like GTA SA have a bug and need a NOP or some other command at the start of the script before any WHILE. But I like how big game companies simplified this.
@hmm5470Ай бұрын
In Go this is very easy with time.Timer:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        timer1 := time.NewTimer(2 * time.Second)
        <-timer1.C
        fmt.Println("Timer 1 fired")
    }
@IncompleteTheory2 ай бұрын
Never time your loops using any variation of sleep(duration), because this always results in a drift determined by the amount of stuff you run in your loop. Always look for an OS or language construct that sends you a signal, or runs your callback, at specified intervals. Game engines usually give you some kind of frame-rate synchronisation.
@Ozzymand5 ай бұрын
never knew (nor did i think to check) if async and promises exist in c++ after using them in JS. Awesome
@FodaseGoogreorio-h7v3 ай бұрын
void timerCallback(TimerArg_t timerArg) { timer::Timer *timerInternal = (timer::Timer*)timerArg; if(timer == *timerInternal) { std::cout attach(timerCallback, timer); timer->start(); while(true) if(timer.running() == false) break; delete timer; } Try figure out this and make it happans.
@sayo93946 ай бұрын
This is a great video 👏 I vote Yes for more videos of this format
@sviatoslavberezhnyi10595 ай бұрын
When I was at university in 2006, I had a lab about a timer. I don't remember exactly how I solved it, but the computer has a built-in timer that fires 18.2 times per second. I remember that I wrote the program in C with some assembly language inlined, which copied the existing interrupt handler from a certain vector and then replaced it with my own. My handler was executed 18.2 times per second, and in it I decremented the timer value that the user entered; when the timer completed, I sent a certain byte to port 61h (I may be wrong about the details) to make the speaker on the motherboard beep, which signaled that the timer was over. Then I restored the interrupt handler I had copied earlier. I used C only so the user could enter the timer and to display a success message after it completed. That's the story)
@Evilanious6 ай бұрын
I think the questions I'd like to see answered here are not 'how to do it in c++', but rather, how does the computer clock work. How do you call it? How do you keep it counting while doing other stuff? The library I'll end up using isn't the most important. Though I guess if you need to solve this very specific problem it's time consuming to take that step back.
@yabastacode77194 ай бұрын
My idea is to use the observer design pattern to watch for the thread to finish. When the thread finishes, it sends a signal to all subscribed objects to execute their functions (slots). I was inspired by Qt and its QTimer class, which is implemented using the observer design pattern. I am not sure if it should be single- or multi-threaded though. I need to write code to figure it out
@nenomius11486 ай бұрын
Cherno was wandering the internet, saw a forum, took a look inside, and burst into flames.
@phusicus_4045 ай бұрын
🤡
@anon_y_mousse6 ай бұрын
If you only have a few timers then the best way, assuming that cross platform is considered better than platform specific, would be to take the current time, add the timer amount and use that as the end trigger for that timer. Then it's a simple matter of checking in the main loop whether you've reached the target time or beyond. That's basically the way a coroutine would work too, if we're talking about the original working method and not the unholy hidden thread garbage that is usually used for async code these days. One of the things I love which they added with C++11 was UDL's. So adding 5min to a time is pretty easy and downright enjoyable now. I just wish they'd add that to C, especially since they added constexpr with C23.
@reddragonflyxx6576 ай бұрын
If you have a lot of timers you can put them all in a priority queue (sorted by earliest end time) and just check for/remove/process any finished timers from the front of the queue in your main loop.
@anon_y_mousse6 ай бұрын
@@reddragonflyxx657 As long as we're talking about a dozen or so, then yep. Once you get into the couple of dozen and above range, you might want to consider multithreading.
@reddragonflyxx6576 ай бұрын
@@anon_y_mousse Why? You can check if there's an expired timer in constant time and add/remove timers from the queue in log(n) time (per timer). If you have lots of timers going off, need to do a lot when they do, and can't wait for that in your main loop, multithreading is a good idea. Otherwise, based on "On Modern Hardware the Min-Max Heap beats a Binary Heap", you can expect a priority queue to take ~100 ns to pop a timer with ~100k entries in the queue.
@anon_y_mousse5 ай бұрын
@@reddragonflyxx657 If you've got modern hardware, then that's fine, but you should aim for the most efficient methods always, because you might not always get to target modern hardware. Although, hopefully you wouldn't need so many timers as to clog the main loop, especially on lower powered devices. Maybe I'm just used to working on devices with speeds measured Hz.
@reddragonflyxx6575 ай бұрын
@@anon_y_mousse What hardware is slow enough for a heap to be too slow, but also supports multithreading? I think this solution would be excellent on a lot of embedded platforms, with reasonable tuning for cache/branch prediction/memory performance if performance is critical. That article should apply to the last decade or two of PCs at least.
@TheEdmaster876 ай бұрын
Timers are easy, especially on hardware with different CPUs and MCUs. Some even have their own libraries for this; for others you can set up a function to do it. It really depends on what type of timer you need and for what. The most important thing is not to block other code that is supposed to run in the "background" while the timer runs.
@56a8d3f55 ай бұрын
Futures returned by std::async can't be destroyed while the task is still running (the destructor blocks), so usually there's no need to check the status just to 'make sure it doesn't get destroyed before the thread finishes' 17:35
@ender-gaming5 ай бұрын
I don't code in C++, I mostly do PowerShell scripting, but when I saw this I thought of a simple while loop like your final solution with a running timer. Though I'm interested why you used 'while (true)' instead of 'while (status != std::future_status::ready)'. I will say timing code is always an interesting challenge with deep rabbit holes, at least in the languages I've played with; usually built-in functions have some noticeable overhead.
@vloudster6 ай бұрын
Great video. You should do more videos like this where you are looking at fundamental things like timers etc. The video was funny in relation to the code suggestions in the forum but also educational when you explained them and presented your professional solution.
@lukiluke92956 ай бұрын
Wow your first Video on multithreading and you introduced async, threads, sleep and context. I was actually looking for a video on the topic of multithreading this morning - couldn't find one and now here it is, just a little bit more complex ^^
@akashpatikkaljnanesh5 ай бұрын
This isn't his first video on multithreading I believe
@johnmckown12674 ай бұрын
Interesting. At 71, I've finished my professional learning time. But I continue to learn. Helps keep the brain functioning.
@KeyYUV5 ай бұрын
This really makes me appreciate the convenience of QTimer::singleShot(Duration msec, Functor &&functor). Implementing the event loop manually is such a pain.
@Yulenka-5 ай бұрын
The simplest way to fix the last solution is to just move the finish-code after "timer.join()". Boom. This does exactly what was asked and doesn't actively waste CPU time (& battery) if there is no more work to be done (compared to the game loop approach which is constantly spinning and checking).
@trbry.6 ай бұрын
love this kind of content almost as much as your other content, be it hazel coding reviews and more
@casdf75 ай бұрын
the while loop approach only works if you want to do something repeatedly. If you really want to do only one thing you need a thread obviously.
@JkCxn6 ай бұрын
You can put your timer-finished code inside the if (status == ready) block or after the loop and then your timer class is responsible for fewer things
@rafazieba99826 ай бұрын
Those are two different ways to build timers. 1) You have a loop (UI loop for example) and you need your code to execute on that thread. 2) You need something to be done in some time and you don't care about the thread it runs on. Relying on a UI loop for timers is only marginally different from relying on "windows.h" unless you are doing it for a UI specific functionality (update an animation).
@TurtleKwitty6 ай бұрын
its VERY different if you care about being available on other platforms at all
@mcawesome97055 ай бұрын
Among my first thoughts was something like:

    std::chrono::time_point<std::chrono::high_resolution_clock> stop =
        std::chrono::high_resolution_clock::now() + std::chrono::minutes(5);

    while (std::chrono::high_resolution_clock::now() < stop) {
        // do stuff
    }
    // do other stuff

For most things™, this should be fine, but it's worth noting that it won't interrupt whatever it's in the middle of doing when 5 minutes pass.
@shikyokira30656 ай бұрын
After coding in C# for few years, its quite natural for me to just think of using async based on the description because of how integrated async is with the language. In C++, there are so many ways to do it, but it doesn't mean async isn't one of the best options for it.
@inulloo6 ай бұрын
Your analysis and explanation were very helpful.
@bentomoАй бұрын
Hardware engineers invented timers in CPUs so you don't have to worry about code blowing up your timer code. Starting a timer and checking it later is the way to go as you did. I'm assuming the future library compiles into the special purpose instructions that enable and check the CPU hardware timer.
@danielmccann297913 күн бұрын
Tbh I think the sleep approach is good; just use a separate thread to either run a one-time piece of code or change a state variable for the main thread after sleeping is done. You could even make it more advanced by having a callback-based version or a flag-change version.
@danielmccann297913 күн бұрын
The idea of using sleep like this is to get away from wasting resources by polling the system time or internal time.
@petrikillos5 ай бұрын
I don't hold myself too highly in regards to my coding capabilities, but damn the second I read the first response and the recursive timer function I started laughing so hard I cried.
@oschonrock5 ай бұрын
consider avoiding std::function and the likely heap allocation... use a template <typename Callback> parameter instead.
@theo-dr2dz6 ай бұрын
The problem is of course that the original question is way too vague. A vague question will receive useless answers. A lot more information and context is needed.
1- Is portability an issue? If the code is full of Windows API and DirectX code anyway, there is no need to make this bit platform-independent. If you only care about one platform, platform APIs probably have facilities for this kind of problem. If you care about two platforms, you could implement both and put a little abstraction layer on top.
2- Is there some kind of loop that periodically calls some kind of update() function? In that case just checking every frame whether the elapsed time is greater than or equal to the delay time is the easiest solution. However, there are two issues with this. First, the real delay will be somewhere between the specified delay and the specified delay plus the frame time. This may or may not be acceptable. Second, the scheduled task is added to one frame, so if the execution time of the scheduled task is not small compared to the frame time, this might cause a noticeable hiccup.
3- If the scheduled task is completely independent of ongoing processing, multithreading seems to be the way to go. If the scheduled task and ongoing processing both write to the same memory, you're in for all kinds of concurrency issues.
4- If it is a pure fire-and-forget task that is performed just for a side effect (like playing a sound), I would look into std::async.
5- Does the task have function arguments? Does the timer have to be generalised for any number of arguments of any type? Enter fiddling with variadic templates and std::forward (a sketch follows below).
6- Does the task return a value that has to be collected by the main thread? You will need a future for that. And at some point, the main thread will have to wait for the task to finish.
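On point 5, a generalised fire-and-forget timer over any callable and argument list might look like this sketch (C++17, one detached thread per timer, so it doesn't scale to many timers):

    #include <chrono>
    #include <thread>
    #include <tuple>
    #include <utility>

    template <class F, class... Args>
    void set_timer(std::chrono::milliseconds delay, F&& f, Args&&... args)
    {
        // Capture the callable and its arguments by value (moved in), then invoke after the delay.
        std::thread([delay,
                     fn = std::forward<F>(f),
                     tup = std::make_tuple(std::forward<Args>(args)...)]() mutable {
            std::this_thread::sleep_for(delay);
            std::apply(std::move(fn), std::move(tup));
        }).detach();
    }

    // Usage: set_timer(std::chrono::minutes(5), [](int code) { /* ... */ }, 42);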
@AndrewRedW5 ай бұрын
Inexperienced people writing funny code - my favourite form of entertainment :D
@techpriest47875 ай бұрын
My approach to this problem would be to use a node-systems application kernel, because you need a robust kernel for a game engine anyway. Unlike an entity-component-systems kernel, my own kernel just uses nodes, like Godot: a single piece of data instead of a group called entities. The kernel would take a function, like an ECS in the Bevy engine does, and use it as a system. But it would not execute per tick or frame; it would execute on demand, like an event. All I need in addition is to build a timer into the kernel that waits before it executes that system event. The kernel would provide the data to the event system in a thread-safe manner and execute it multithreaded, like an ECS kernel would. Another note: I'm considering executing all event systems before the ticked systems. That way there is no need to put the system into an algorithmic order like the ticked/per-frame systems, which tell the kernel in which order the systems are to be executed. Bevy, as an ECS, requires this order. My NS kernel just does it more cleanly than Bevy ECS, because Bevy also needs a special parameter for events for some reason.