Well, I'm running into some logistical planning snags on Hedgerow Hall that have to do with the maximum map size that BYOND can handle. I searched the forum for mentions of world size limits before I settled on a framework to use for the overworld map and the dynamically generated burrows and other instance-layers.
Unfortunately, I seem to be running into practical limits of my hardware that are lower than that. Dream Maker will not compile a .dmb with more than 20-30 z layers (depending on which of my available machines I use) without choking and dying, and DD/DS reports insufficient memory when adding new z layers at runtime after about 40-50 of them.
Now, the runtime limit I had based my planning on was 1024x1024x200, referenced in multiple forum posts. Based on this, I had planned for an expansive overworld consisting of a 7x7 grid of 1,000x1,000 square maps (49 z layers), plus an "underground" and "interior" map to go with each of those (147 z layers in all), leaving far more z layers for dynamic burrow generation than I would ever logically need.
I can scale the world down; my plans were simply based on making optimal use of the space that seemed to be available. My question is: is that 1024x1024x200 limit just based on what the tester's machine was able to support? Does it depend on the memory of the machine running the world? Is there a way to estimate how much memory a world of that size would need?
Unless by "now enforced" you mean a very recently pushed update, I'm pretty certain it's actually not... my experiments with a map generator suggest that DS/DD will let you keep adding turfs to the world until it eventually crashes or fails with an "insufficient memory" error. Off the top of my head, some of those tests got up to 1000x1000x40+, which is in excess of 40 million turfs that all theoretically exist, though I can't say I actually did anything with them.
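The loop amounted to something like this (a sketch of the idea, not my actual generator; the proc name, dimensions, and logging are purely illustrative):

```dm
// Sketch of the kind of stress test described above; the proc name,
// dimensions, and logging are illustrative, not the real generator.
/proc/StressTestZ()
    world.maxx = 1000
    world.maxy = 1000
    // Keep adding z layers until the server crashes or reports
    // insufficient memory; each step adds another million turfs.
    while(world.maxz < 200)
        world.maxz++
        world.log << "z=[world.maxz] ([world.maxx * world.maxy * world.maxz] turfs)"
        sleep(1)  // give the server a tick to settle between expansions
```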
Treating ~16 million turfs as a practical ceiling will be helpful, I think.
Lexy got those numbers from my tests on the subject.
I managed to increase the size of the map to 1024x1024x200 at runtime and everything seemed to work on my system. However, I'd crash at the slightest change to the map after that. Of course, my tests were almost a year ago, and not at all extensive.
I'm not sure where those dimensions came from originally, but they aren't correct. Turf IDs realistically can't go past 3 bytes (after that, the \ref macro gets confused), which caps you at 2^24 = 16,777,216 turfs; 1024x1024x16 works out to exactly that, so it's the absolute maximum you'd be working with.
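To make the 3-byte point concrete, refs round-trip through text like this (the proc is illustrative; \ref and locate() are the real pieces):

```dm
// Tiny demo of the \ref round trip; the ref string packs a 1-byte type
// code with a 3-byte ID, which is why turf IDs can't exceed 2^24.
/proc/RefDemo(turf/T)
    var/r = "\ref[T]"          // turf encoded as a ref string
    var/turf/back = locate(r)  // locate() can resolve a ref string
    world.log << "[r] -> [back == T ? "same turf" : "lookup failed"]"
```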
For memory size, turfs on the map currently take up 4 bytes each, not counting any var/content info, which is stored separately for any turfs deemed "interesting". Most of those 4 bytes are wasted at present--it used to be 2 bytes, but there's now a 1-bit flag that says whether the turf has contents, overhangs, or animation. (This flag speeds up loops that check turf contents; I deemed the doubled size a worthy tradeoff, plus 4-byte alignment is probably nicer to the CPU anyway.) The 2-byte value is an internal ID, shared by many turfs, that's used to look up the turf's type, appearance, area, etc.
A 1024x1024x16 map with no contents or vars would take up 64MB of memory (16,777,216 turfs at 4 bytes each), not including the cell structs that hold the type/appearance/area info (at least 20 bytes for each unique case).
I'm guessing it was based on an individual test, only it didn't take practical considerations into account, like actually trying to work with those high-end turfs. Realistically anything past 2^24 (~16.7 million) turfs is not going to work well, and IIRC it's now enforced.
The bulk of the memory would be strictly from the main array of 4-byte values, so you could start with turfs*4 as an estimate. Add uniques*20 for unique type/appearance/area combinations. Any turf with contents or vars (that is, vars differing from the type's defaults) will have structs with that info, carried in linked lists; that's a hard number to calculate, but it should not be very big if most turfs are inert.
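If you want to ballpark it in code, the arithmetic is simple enough (a sketch; the proc name and the per-"interesting"-turf byte count are my own guesses, since those linked-list structs don't have a single fixed size):

```dm
// Rough memory estimator following the figures above. The proc name and
// the bytes_each default are assumptions for illustration only.
/proc/EstimateMapMemory(turfs, uniques, interesting = 0, bytes_each = 64)
    var/bytes = turfs * 4              // main array: 4 bytes per turf
    bytes += uniques * 20              // ~20 bytes per unique type/appearance/area combo
    bytes += interesting * bytes_each  // extra structs for contents/changed vars
    // DM numbers are 32-bit floats, so treat results past ~16.7M as approximate.
    return bytes

// e.g. EstimateMapMemory(1024 * 1024 * 16, 1000) comes out around 67.1
// million bytes--the ~64MB quoted above for a mostly inert map.
```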
For an expansive world my recommendation would be to use smaller maps and swap them in and out dynamically. (E.g. with a server holding 50 people, you might actually only need 30 maps at any given time, maybe a little more if you want to hold onto them for a bit longer for smoother transitions.) My SwapMaps library can handle that, although I believe pif_MapLoader is actually a little more advanced in some areas.
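Stripped of library specifics, the core bookkeeping of any swap scheme is just a pool of reusable z levels; a minimal sketch (these names are illustrative, not SwapMaps' actual API):

```dm
// Minimal z-level pooling sketch; names are illustrative, not SwapMaps'
// API. Saving and loading the map contents is the library's job.
var/list/free_z = list()

/proc/AllocateZ()
    // Reuse a retired z level if one is available...
    if(free_z.len)
        var/z = free_z[free_z.len]
        free_z.len--
        return z
    // ...otherwise grow the world by one layer; recycling retired
    // layers keeps world.maxz from growing without bound.
    world.maxz++
    return world.maxz

/proc/ReleaseZ(z)
    // Caller should clear or save the level before retiring it.
    free_z += z
```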
(One of my all-time dream features is connecting maps on different z layers, so you could split a map into much smaller segments and it would still look seamless, allowing for more dynamic loading. I've had concepts for how to do that but never a good time to move forward with an implementation.)