
Re: [fltk.coredev] Re: RFC: unify timer callback handling on all platforms


Re: Re: RFC: unify timer callback handling on all platforms (Bill Spitzak, Jul 09, 2021)
 
I vaguely remember that repeat_timeout, if the calculated remaining time was zero or negative, would punt and instead act like add_timeout. My feeling was that if a program was too slow, the alternative of just calling the callback immediately would keep the timeouts running continuously. There was certainly no testing as to whether this was the correct solution or not.
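To make that concrete, a minimal sketch of that fallback could look like this; the names are invented for illustration and this is not FLTK's actual implementation:

// Hypothetical sketch of the fallback described above.
struct TimerEntry {
  double due;                        // absolute time the last timeout was due
};

void repeat_timeout_sketch(TimerEntry &t, double delay, double now) {
  double next = t.due + delay;       // stay on the nominal time grid
  if (next <= now)
    t.due = now + delay;             // too late: punt, behave like add_timeout()
  else
    t.due = next;                    // normal case: no drift accumulates
}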


On Fri, Jul 9, 2021 at 7:14 AM Albrecht Schlosser <AlbrechtS.fltk@online.de> wrote:
On 7/9/21 3:25 PM Manolo wrote:
On Friday, July 9, 2021 at 14:31:35 UTC+2, Albrecht Schlosser wrote:
What do FLTK developers believe should be the priority of Fl::repeat_timeout()?

My personal opinion is that the next timeout should be scheduled as soon as possible if the calculated "next" timeout has already passed (if I understood your question).

The timeout scheduled by Fl::repeat_timeout() should be triggered as exactly as possible at the point in time where the last (current) timeout should have been triggered, plus the delay given as the argument, to "allow for more accurate timing", as the docs put it.

In other words: the above-described sequence of n timeouts should not "drift away", as it probably does on Windows in our current implementation because no correction is applied. I had planned to write a demo program for this anyway; I'll do it shortly and post it here.
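A minimal sketch of such a drift-measuring demo might look like the following; the 100 ms period, the 50 iterations and the use of std::chrono as the reference clock are assumptions for illustration, not necessarily what the actual demo will use:

#include <FL/Fl.H>
#include <chrono>
#include <cstdio>

static const double period = 0.1;   // 100 ms between iterations (assumed)
static const int    runs   = 50;    // number of iterations to measure (assumed)
static int iteration = 0;
static std::chrono::steady_clock::time_point start;

static void timer_cb(void *) {
  ++iteration;
  double elapsed = std::chrono::duration<double>(
      std::chrono::steady_clock::now() - start).count();
  double nominal = iteration * period;
  std::printf("iter %3d  elapsed %8.4f  nominal %8.4f  drift %+8.4f\n",
              iteration, elapsed, nominal, elapsed - nominal);
  if (iteration < runs)
    Fl::repeat_timeout(period, timer_cb);   // should not accumulate drift
}

int main() {
  start = std::chrono::steady_clock::now();
  Fl::add_timeout(period, timer_cb);
  while (iteration < runs)
    Fl::wait();                             // process timeouts without a window
  return 0;
}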

When the timeout is just a little bit late, say by delta, the correct solution is clear: schedule the next timeout at now + delay - delta.
My question arises when the timeout is very late, by more than the delay between successive timeouts. What should be done in that situation?
Either:
- skip one iteration, because its time is already over, and schedule the next iteration; or
- play two iterations with no delay in between.
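
A small sketch contrasting these two options (function names invented for illustration; due is the absolute time the current timeout was scheduled for):

double next_due_skip(double due, double delay, double now) {
  double next = due + delay;
  while (next <= now)
    next += delay;                   // drop iterations whose time is already over
  return next;
}

double next_due_catch_up(double due, double delay, double now) {
  double next = due + delay;
  return (next <= now) ? now : next; // a late iteration fires as soon as possible
}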

We do not know what exactly happened in such a case, i.e. where in the application the system spent so much time that the next timer iteration had already passed by "now" (the time when Fl::repeat_timeout() is called). To see what I'm contemplating, consider this timer callback pseudo code:

void timer_cb(void *) {
    some_stuff();                         // do the application's work first
    Fl::repeat_timeout(delta, timer_cb);  // then schedule the next iteration
}


First question: is this order useful/correct, or should repeat_timeout() be called early in the callback?

Maybe the user program can only determine whether to call repeat_timeout() after doing some calculations, which cost some time. That said, I believe that this is legitimate user code.

Second question: did some_stuff() spend too much time, or was it other system load (or event processing) that prevented the timer callback from running in time?

If it was some other event handling or another thread, if some_stuff() is really short, and if the next iteration doesn't suffer from a similar system load, then some_stuff() may indeed be called twice with little or practically no delay in between. But how can we know this, and how can we know what the author of the application anticipated?

OTOH, if the application code is in the opposite order:

void timer_cb(void *) {
  Fl::repeat_timeout(delta, timer_cb);  // schedule the next iteration first
  some_stuff();                         // then do the application's work
}


What would happen if timer_cb() is called too late by 0.6*delta and some_stuff() needs 0.5*delta to execute? We'd already have scheduled the timer with a delay of 0.4*delta, but it will be executed after a minimum of 1.1*delta. And it will be executed (not skipped). The next iteration would then follow 0.9*delta later.
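A tiny, purely illustrative simulation of this second ordering under these assumptions (nominal period delta, some_stuff() taking 0.5*delta, the first callback starting 0.6*delta late) reproduces these numbers; it is not FLTK code:

#include <cstdio>

int main() {
  const double delta = 1.0;        // nominal period
  const double work  = 0.5;        // duration of some_stuff()
  double due = 0.0;                // time the current iteration was due
  double now = 0.6 * delta;        // the first callback starts this late

  for (int i = 1; i <= 4; ++i) {
    std::printf("iteration %d runs at %.1f*delta (due at %.1f*delta)\n",
                i, now, due);
    due += delta;                  // repeat_timeout(): keep the nominal grid
    now += work;                   // some_stuff() blocks the event loop
    if (now < due) now = due;      // otherwise the next timeout fires late
  }
  return 0;
}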

If the same timing happens in the first callback model, the delay would already be at 1.1*delta when Fl::repeat_timeout() gets called, and the macOS code would silently skip one iteration, whereas the Unix code would execute the next iteration as soon as possible.

I hope you (all) can follow me; it's difficult to describe what I'm thinking. Given these two examples, I believe that the Unix approach is more consistent.

That all said, however, we're talking about real border cases with very high timeout frequencies, where the overall application and system load is higher than the application can process correctly. In that case we're in the realm of undefined behavior, and every solution would be possible. In the Unix code the normal timer loop would be turned into something like an idle callback, and that's likely what is to be expected. If this happens, there must be something wrong with the application anyway. Skipping timer callbacks would IMHO not be a proper solution.
