On 7/9/21 7:25 PM Ian MacArthur wrote:
On 9 Jul 2021, at 17:54, Bill Spitzak wrote:
I vaguely remember that repeat_timeout, if the calculated remaining time was zero or negative, would punt and instead act like add_timeout. My feeling was that if a program was too slow, calling the callback immediately would make it run timeouts continuously. There was certainly no testing as to whether this was the correct solution or not.
The current solution is clearly to deliver the next timeout as soon as possible. The relevant statement is:
time += missed_timeout_by; if (time < -.05) time = 0;
After this statement the timeout is queued with delta == time (0).
However, my tests seem to indicate that something's going awry with the calculation of missed_timeout_by. There are some strange effects which I'm still investigating.
But, to me at least, it sounds like it probably is.
I don't know yet what exactly is happening with the current code.
But I imagine that a user program that's not able to process the
timeouts fast enough would be entering a loop, similar to an idle
callback. Skipping single repeat_timeout()'s would lead to a lower effective timer rate.
And it does (in parts), see my test program logs below.
The crux, like Bill said, is that if you are running so slowly that you miss the timeout, then trying to “fill in” all the missing timeouts is only going to make matters worse, I imagine...
Sure, as I said above, that's what I'd expect. This is a program
error but wouldn't it be easier to diagnose the error if FLTK would
not "try to help" and skip particular timer callbacks?
I'm attaching a test program as announced: timer2.cxx.
The following constants describe the test case:
const double delay = 0.500; // timer delay
const int timeouts = 8; // number of timeouts to be tested
const int load1 = 600; // simulated workload in ms before Fl::repeat_timeout()
Fl::repeat_timeout(delay) is called in a timer callback after a
simulated workload of 600 ms duration which is clearly longer than
the timer delay. The test is repeated 8 times. The outcome is
"interesting": deterministic but different on all three major
platforms. Note that I tested Windows in cross-compiler mode on my
Linux box, but hopefully this does not matter.
Linux:
Tick -2 at 50.5994
Tick -1 at 51.1001
Tick 0 at 51.6000, delay = 0.5000
Tick 1 at 52.7004, delta = 1.1004, total = 1.1004, average =
Tick 2 at 53.3009, delta = 0.6004, total = 1.7009, average =
Tick 3 at 54.4017, delta = 1.1009, total = 2.8017, average =
Tick 4 at 55.0020, delta = 0.6003, total = 3.4020, average =
Tick 5 at 56.1030, delta = 1.1010, total = 4.5030, average =
Tick 6 at 56.7033, delta = 0.6003, total = 5.1033, average =
Tick 7 at 57.8042, delta = 1.1009, total = 6.2042, average =
Tick 8 at 58.4046, delta = 0.6003, total = 6.8046, average =
Windows:
$ wine bin/test/timer2.exe
Tick -2 at 4.7490
Tick -1 at 5.2510
Tick 0 at 5.7520, delay = 0.5000
Tick 1 at 6.8550, delta = 1.1030, total = 1.1030, average =
Tick 2 at 7.9570, delta = 1.1020, total = 2.2050, average =
Tick 3 at 9.0600, delta = 1.1030, total = 3.3080, average =
Tick 4 at 10.1630, delta = 1.1030, total = 4.4110, average =
Tick 5 at 11.2660, delta = 1.1030, total = 5.5140, average =
Tick 6 at 12.3700, delta = 1.1040, total = 6.6180, average =
Tick 7 at 13.4730, delta = 1.1030, total = 7.7210, average =
Tick 8 at 14.5760, delta = 1.1030, total = 8.8240, average =
macOS:
Tick -2 at 0.4407
Tick -1 at 0.9397
Tick 0 at 1.4412, delay = 0.5000
Tick 1 at 2.4445, delta = 1.0032, total = 1.0032, average =
Tick 2 at 3.4420, delta = 0.9975, total = 2.0008, average =
Tick 3 at 4.4445, delta = 1.0025, total = 3.0033, average =
Tick 4 at 5.4402, delta = 0.9957, total = 3.9990, average =
Tick 5 at 6.4444, delta = 1.0042, total = 5.0032, average =
Tick 6 at 7.4398, delta = 0.9953, total = 5.9986, average =
Tick 7 at 8.4445, delta = 1.0047, total = 7.0032, average =
Tick 8 at 9.4445, delta = 1.0000, total = 8.0033, average =
(1) Linux: The effective delay alternates between 1.1 and 0.6
seconds (reproducibly). This is certainly not as designed and very
likely a bug in the calculation and handling ((not) resetting?) of missed_timeout_by. I'm investigating...
The average is closest to the intended delay: ~0.85 sec.
(2) Windows: there's reproducibly no correction, the effective delay
is always ~1.1 + x seconds, hence the average is also ~1.1 sec.
(3) macOS: the effective delay is ~1.0 seconds, as Manolo described:
2 * 0.5 = 1.0 sec. The average of 1.0 sec is twice the intended delay.
That is: three platforms -- three different implementations -- three different results.
Sure, these are border cases, but I see that Wayland and Android are
other candidates for having their own implementations. This should be avoided.
What we IMHO need to do is:
(a) define, describe, and eventually document the "correct" timer behavior,
(b) unify all platforms by providing a platform-agnostic common implementation.
The discussion here is good for solving (a), and I'm striving to do (b), which should use an algorithm defined by (a) and can be modified in one place for all current and future platforms.
Please feel free to use my test program with other cases and report
your findings. The constants at the top of the program may be
modified as you need. A better test program would have a GUI to
modify the test params, but I don't know whether I'll ever do that.
And there will likely be completely different test scenarios...
You received this message because you are subscribed to the Google Groups "fltk.coredev" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/fltkcoredev/b82068ec-a035-40ef-d5f1-ae2774d88c9a%40online.de.