Hey all, I was wondering if anyone has ever written a guide on how to profile the "world" you are hosting.
Dream Daemon gives you the option to start profiling and keep track of which procs are run and such.
I personally don't know what all the information means.

What do the following things mean:
Self CPU
Total CPU
Realtime
Calls (Well, quite obvious but still had to add it to the list)

How does checking the "Average" box affect these things, and which guidelines should be followed when trying to figure out what the numbers mean?
Do they need to be low or high, are there any limits, etc.?

Hope someone with experience can write a little article about this or clarify things for the people who would like to check how their game is running.

Thanks in advance.
Fint wrote:
What do the following things mean:
Self CPU
Total CPU
Realtime

Each of those represents the total time, in seconds, taken by the procs. Self CPU is the time spent in the proc itself, not counting the procs it calls. Total CPU is the time taken by a proc and everything it calls, so a main proc that calls a bunch of other procs (like a level generator, for instance) will show a big discrepancy between the two. Real time is the amount of actual wall-clock time taken, which may differ a bit from the other time values if some internal routine takes a while. There is a known issue where certain procs will sometimes report an extraordinarily high realtime value; this can be ignored.
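
For example (a minimal made-up sketch, not from any real project; the proc names are hypothetical), a generator proc that does little work itself but calls a worker proc in a loop will show a small Self CPU and a much larger Total CPU:

    proc/generate_level()
        // the loop itself is cheap, so Self CPU stays small
        for(var/i = 1 to 100)
            place_room()

    proc/place_room()
        // pretend this is the real work; its time counts toward its own
        // Self CPU and toward generate_level()'s Total CPU
        var/total = 0
        for(var/j = 1 to 10000)
            total += j
        return total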

How does checking the "Average" box affect these things, and which guidelines should be followed when trying to figure out what the numbers mean?

When you're looking at averages, the times you see are the average time per call. You can get the same numbers by taking the totals and dividing by the number of calls. This is important because if a proc is using up a lot of total time but is called a great deal, it may already be as efficient as it's going to get. (Or it can point to something you can optimize by doing more work per call and cutting out some redundant calculations.)
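
For example, if a proc shows 3 seconds of total CPU over 1500 calls, the average view will show 0.002 seconds per call; the proc is cheap per call, and the total is only high because it's called so often.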

Do they need to be low or high, are there any limits, etc.?

When profiling, lower times are always better. The ideal case is that your program takes no time at all to run and its speed is only constrained by server ticks. The profiler tells you which procs take the longest to run, which is where you should focus your optimization efforts.

In Runt, I used profiling to tell me that loading data from text files using savefile.ImportText() was too slow, so I switched to using raw savefiles instead, which was faster. In SotS II, the profiler helped me pick the best routines for the Stickster's AI so it wouldn't take forever on the calculations. And it was invaluable in Incursion for pointing out which parts of the map generator were taking the longest.
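
To give a rough idea of that kind of switch (a simplified sketch, not Runt's actual code; the proc names, file names, and "data" key are made up):

    // slower: parse a text dump into a savefile, then read from it
    proc/load_text(fname)
        var/savefile/S = new()
        S.ImportText("/", file(fname))
        var/list/data
        S["data"] >> data
        return data

    // faster: read a binary savefile directly, with no text parsing step
    proc/load_binary(fname)
        var/savefile/S = new(fname)
        var/list/data
        S["data"] >> data
        return data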

Lummox JR
My most notable optimisations lately have involved caching data and eliminating redundant security checks. I frequently "bulletproof" my code, which is to say that I set up my procedures so that if an improper value slips in that can still be converted into a proper one, the proc performs the conversion automatically before continuing. This is excellent for a procedure that is called manually, but when a procedure is called in a long chain spawned off from a global loop, the redundant validation calls of all the similar procedures really add up.
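
A quick sketch of what I mean by bulletproofing (the health var and the set_health() proc are made up for illustration, not from my actual code):

    mob/var/health = 100

    // bulletproofed setter: improper arguments are converted or rejected
    // before any real work is done
    proc/set_health(mob/M, amount)
        if(!istype(M)) return
        if(!isnum(amount)) amount = text2num("[amount]")
        if(isnull(amount)) return
        M.health = round(amount)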

When this becomes an issue, my usual step is to identify those procs and see whether I can import their lines directly into the calling function that requires optimisation, with a prominent comment so I know what I need to update. I can then remove all of the redundant validation calls, because the value only needs to be validated once.
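
Roughly, using the same made-up set_health() example from the sketch above:

    // before: set_health() re-validates its arguments on every single call
    proc/regen_loop()
        for(var/mob/M in world)
            set_health(M, M.health + 1)

    // after: the loop already guarantees a valid mob and a numeric amount,
    // so the copied line skips the checks entirely
    proc/regen_loop_fast()
        for(var/mob/M in world)
            // --- inlined from set_health(); keep in sync with that proc ---
            M.health = round(M.health + 1)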