Descriptive Problem Summary:
Album of all the 514 tests I have done over the past ~5 days:
https://imgur.com/a/x85x41c
I don't think SendMaps cost would naturally look like this. It looks very similar to the problem I reported before, where map_cpu was rounded to increments of 0.125, except with slight offsets between the "levels":
https://imgur.com/a/fJ07MNf
Here is the graph I made when I first tried out 514 testing, before the rounding issue was fixed:
Expected Results:
world.map_cpu measurements to have continuous values over >1 hour tests
Actual Results:
map_cpu measurements act almost as if they're being rounded to certain intervals, with small offsets.
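The stair-step pattern described above can be reproduced with a simple quantizer. This is only a hypothetical illustration of the reported artifact; the 0.125 step and per-level offsets come from the report, not from any knowledge of BYOND's internals:

```python
# Hypothetical illustration: values snapped down to 0.125-wide steps,
# optionally shifted by a small per-level offset. This mimics the
# reported graphs; it is NOT how BYOND computes map_cpu.
def quantize(value, step=0.125, offset=0.0):
    """Round value down to the nearest multiple of step, then shift by offset."""
    return (value // step) * step + offset

samples = [0.31, 0.33, 0.36, 0.44, 0.47]
print([quantize(v) for v in samples])  # [0.25, 0.25, 0.25, 0.375, 0.375]
```

Continuous input values collapse onto a few discrete "levels", which is exactly the banding visible in the linked album.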
ID:2656363
Feb 18 2021, 4:13 pm
Feb 18 2021, 8:00 pm
It's conceivable there's some slight rounding going on in there at some level, but it's really hard to say where. I can tell you for sure that the timers involved are a lot more sensitive now: I switched from using our timelib.Now() function to timelib.Ticks(), which tries to get a reasonably accurate time measurement down to a resolution of 100 ns (one millionth of a standard BYOND tick).
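For scale, a standard BYOND tick (world.tick_lag = 1) lasts 100 ms, so a 100 ns timer resolution works out to one millionth of a tick, as stated above. A quick check of that arithmetic:

```python
# Express the timelib.Ticks() resolution (100 ns, per the post above)
# as a fraction of one standard 100 ms BYOND tick.
TICK_NS = 100 * 1_000_000   # 100 ms expressed in nanoseconds
RESOLUTION_NS = 100         # stated timer resolution, 100 ns

fraction = RESOLUTION_NS / TICK_NS
print(fraction)             # 1e-06, i.e. one millionth of a tick
```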
I've heard that the map_cpu measurement from any particular tick could be averaged across this tick plus the last x ticks; is that true? And if the measurements aren't "raw" data, could something past the timing phase be the issue?
Yes, world.map_cpu, just like world.cpu, is averaged over several ticks; it is not a strict snapshot of the most recent tick. Although it would be possible to make that raw data available in the future, I don't really see a use case for it. We already have world.tick_usage, which measures how much of the current tick has been used, and that's what's actually needed to make fine adjustments.
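The averaging described here can be sketched as a rolling mean over the last N per-tick measurements. The window size and weighting are assumptions for illustration; the thread does not specify what BYOND actually uses:

```python
from collections import deque

class RollingCpu:
    """Rolling mean over the last N per-tick costs.
    A guess at the general shape of the smoothing behind world.map_cpu,
    not BYOND's actual implementation."""
    def __init__(self, window=4):
        self.samples = deque(maxlen=window)  # keeps only the last N ticks

    def record(self, tick_cost):
        """Record one tick's cost and return the smoothed value."""
        self.samples.append(tick_cost)
        return sum(self.samples) / len(self.samples)

avg = RollingCpu(window=4)
print([avg.record(c) for c in [0.0, 0.0, 0.0, 1.0, 1.0]])
# [0.0, 0.0, 0.0, 0.25, 0.5] -- a single expensive tick only moves
# the reported value gradually, which is why it can't serve as a
# per-tick snapshot the way world.tick_usage does.
```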