In a VecStorage, while the memory is indeed on the heap, components are stored in contiguous space. This is what makes the storage cache-efficient and fast.
Heap allocated memory has two main drawbacks:
- Allocating is slow: finding where to put your block of memory takes a while, so allocations should be avoided where possible.
- It tends to be fragmented: if you are unlucky, the order in which you free and allocate memory blocks leaves gaps in the heap. For example, if you allocate two 8-byte buffers, free the first one, and then allocate a 12-byte buffer, the 12 bytes cannot fit in the freed slot; they have to go at "the end" of the heap, leaving an 8-byte hole of wasted space (until something 8 bytes or smaller is allocated there).

RAM is still much slower than the CPU, so to save time on memory access, when the CPU needs a specific address, it actually loads a whole block of nearby RAM into the CPU cache for quick access. This means that if you access two memory addresses close to one another, the second is already in the cache (you avoided a cache miss). But if your memory is very fragmented, as it tends to be in object-oriented game architectures, related data is scattered across the heap, and you pay for a lot of slow cache misses.
In specs, we try to reduce cache misses and heap sparsity by using storages like VecStorage, which allocate contiguous regions of memory holding components of the same type (components that are likely to be read and written at the same time). Note that there are even better strategies; in fact, some are being studied for integration into Amethyst (but they have no bearing on your problem).
So to answer your last question: yes, there is a difference, because depending on the strategy, memory on the heap can have different costs.
So to solve your specific problem, I think you are on the right track. In the context of dynamic components, you can group them by their size rather than their actual meaning (as long as you keep them in different storages, of course). However, if you want to implement VecStorage for dynamic components, I think std vectors are not quite flexible enough for your use case: you want to store byte arrays of arbitrary sizes, which I believe Vec cannot do. Maybe you can look for a crate that implements this, or write one of your own. If you choose the second path, I recommend copying the std's code and modifying it so that instead of using the size of the type for allocation, it uses a dynamic parameter you specify. (Keep in mind the std is dual-licensed under MIT/Apache-2.0, but I don't think that would be an issue in our case.)