Thanks a lot for working on the physics crate! Having an integrated way to handle physics is really important, so your work is very welcome.
I have a few notes:
How often does a developer really need to switch or update physics engines, and do they benefit much from a wrapper like this? (I don’t know, I never professionally developed games, so an answer would be great for my understanding.) Are the differences big enough (license, performance, feature set, …) to warrant implementing this additional abstraction? Who would maintain all the engine-adapter crates?
You are confusing me. What are the specs dependencies for, if we cannot even use them to run one system before another? Why do we need a special method here?
While I think the naming is funny, there is already an opinion about amethyst_rendy, which I want to reiterate and second. The naming aside, I think that all sub-modules of Amethyst should clearly describe what they are in their official project name. What is a “Rendy”? What is a “Phythyst”? Without knowledge of the names, people new to the project are left guessing. Using funky names does not help with clarity at the project level. This is why I vote for “amethyst_physics” - you can keep your name as an internal name, though.
Is it possible to define different kinds of physics behavior on entities? For example is it possible to disable physics? How can I define that an entity is cloth? How does your crate handle special capabilities, like water and hair simulation, which might not be provided by all engines? Do you have anything planned?
Being able to switch physics engines, especially for projects where the physics engine plays an important role, is fundamental.
As I said above, during development it can happen that a physics engine doesn’t behave as expected. Or you may want to use a special soft-body feature of PhysX, which is not the physics engine you are using; and at that point, the work needed to integrate PhysX is so huge that you will be in trouble.
An even worse scenario happens with an update.
Indeed, an update such as the one I mentioned above (NPhysics, v0.11 to v0.12) would have been a major breaking change for Amethyst.
It would force all users (a company, or a lone-wolf developer) that have already implemented a huge part of their game not to update to the next Amethyst version, since updating would mean rewriting too large a part of the game; so they are forced to keep using an old version.
Or the opposite can happen: the physics engine update is not integrated for a really long period, leaving an important update like that one behind.
Also, constraining a game engine to a particular physics engine is not a good idea.
Indeed, some games require specific physics engines, and being able to integrate one easily is an important factor when selecting which game engine to use.
I don’t think integrating through the abstraction, rather than integrating directly, will impact performance in a noticeable way.
Especially considering that at a certain point you need to adopt the same solutions you would have adopted by implementing it through the abstraction layer.
And from my experience I can say that performance problems don’t derive from the abstraction layer.
To give an example: when I integrated the Bullet physics engine into Godot, not only did all the open physics issues get closed, but performance also improved, even though both engines were using the same abstraction layer!
And here is the proof: https://www.youtube.com/watch?v=4X4GRbmfSnc
So I think that the benefits of the abstraction layer are more important in the long term than the bit of frame time lost to it.
The physics engine stepping must be performed multiple times each frame to be frame-rate agnostic, and with it all the Systems that control the physics objects in the scene.
Basically, the Batch is a second dispatcher that is used to step the physics and all its Systems.
With the with_pre_physics() function, you register a System that is always executed before a physics step.
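A minimal sketch of this batch idea, with all names (PhysicsBatch, with_pre_physics, the toy World) being illustrative assumptions rather than the actual Phythyst API: systems registered as pre-physics run once per physics sub-step, not once per frame.

```rust
// Sketch only: illustrative names, not the real Phythyst API.
// Pre-physics systems run before EVERY sub-step, so a 30 Hz frame
// with a 60 Hz physics rate executes them twice.

type System = fn(&mut World);

struct World {
    velocity: f32,
    position: f32,
}

struct PhysicsBatch {
    pre_physics: Vec<System>,
    sub_step_dt: f32,
}

impl PhysicsBatch {
    fn with_pre_physics(mut self, s: System) -> Self {
        self.pre_physics.push(s);
        self
    }

    /// Runs as many fixed sub-steps as fit in the frame delta,
    /// executing every pre-physics system before each sub-step.
    fn run(&self, world: &mut World, frame_dt: f32) {
        let steps = (frame_dt / self.sub_step_dt).round() as u32;
        for _ in 0..steps {
            for system in &self.pre_physics {
                system(world);
            }
            // Stand-in for the physics engine's own step.
            world.position += world.velocity * self.sub_step_dt;
        }
    }
}

// A toy pre-physics system; assumes a 1/60 s sub-step.
fn apply_gravity(w: &mut World) {
    w.velocity += -9.81 * (1.0 / 60.0);
}

fn main() {
    let batch = PhysicsBatch { pre_physics: Vec::new(), sub_step_dt: 1.0 / 60.0 }
        .with_pre_physics(apply_gravity);
    let mut world = World { velocity: 0.0, position: 10.0 };
    // One 30 Hz frame contains two 60 Hz physics sub-steps.
    batch.run(&mut world, 1.0 / 30.0);
    println!("position after one frame: {:.4}", world.position);
}
```

The point of the sketch is only the call structure: the registered system executes per sub-step, which is what makes the result independent of the frame rate.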
Usually small projects don’t need it, but it’s a key factor during game engine selection for all the projects that have to use a specific physics engine.
For example, UE4 uses PhysX without any abstraction layer. And this is a shame: if you want to use UE4 to take advantage of its awesome rendering capabilities, you can’t if you need another physics engine, because replacing it requires too much work; to the point that it’s easier to write your own renderer on another game engine which uses the physics engine that you need.
However, this is only one part of the benefits.
Regarding the ECS-oriented physics engine, I imagine that it will not directly use the main Amethyst dispatcher, because it would be too difficult to integrate into a game; also, Amethyst would become too dependent on this physics engine.
On the other hand, the physics engine could then not be used anywhere else, and this is a shame.
So the ECS-oriented physics engine will have its own dispatcher, with its own storages, and so on.
During the integration phase, you will use an opaque ID to refer to a certain rigid body, exactly as is done in Phythyst.
So I think this is not a disadvantage for an ECS-oriented physics engine, considering also that Phythyst leaves every backend free to manage memory as it prefers.
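To make the opaque-ID point concrete, here is a minimal sketch (all names are hypothetical, not the real Phythyst types): game code only ever holds a handle, while the backend keeps the actual body data in whatever storage it prefers, a hash map here, component storages in an ECS backend.

```rust
// Sketch only: illustrative names, not the real Phythyst API.
use std::collections::HashMap;

/// Opaque handle the game stores as a component; reveals nothing
/// about how the backend lays out its memory.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct RigidBodyHandle(u32);

/// A toy backend keeping bodies in a hash map. An ECS-oriented
/// backend could keep them in its own storages instead, without
/// any change visible to the game code.
#[derive(Default)]
struct MapBackend {
    next_id: u32,
    masses: HashMap<RigidBodyHandle, f32>,
}

impl MapBackend {
    fn create_body(&mut self, mass: f32) -> RigidBodyHandle {
        let handle = RigidBodyHandle(self.next_id);
        self.next_id += 1;
        self.masses.insert(handle, mass);
        handle
    }

    fn body_mass(&self, handle: RigidBodyHandle) -> Option<f32> {
        self.masses.get(&handle).copied()
    }
}

fn main() {
    let mut backend = MapBackend::default();
    // The game only ever sees the handle, never the internals.
    let body = backend.create_body(2.5);
    assert_eq!(backend.body_mass(body), Some(2.5));
    println!("mass = {:?}", backend.body_mass(body));
}
```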
So isn’t Phythyst just hiding these integrations for Amethyst behind an extra layer of abstraction? Your complaint about UE4 seems to be that you don’t want to integrate, for example, Bullet on your own. Well, if someone has integrated Bullet for Amethyst, what benefit would a wrapper over that integration have over the direct API integration, which doesn’t obscure the specifics of Bullet? Phythyst still requires someone to actually implement each of these APIs anyway, so it doesn’t seem to fix the problem you talk about, imo.
I don’t understand at all. Amethyst provides a fixed update where integration, along with any other time-dependent systems, could happen at a fixed interval. What’s missing from that feature for this? And if there are shortcomings in the fixed update implementation, why not improve that instead of sidestepping it?
EDIT: Previous discussion about improving timers for fixed_update to cover more use-cases, including dilation, rubber banding in networked scenarios, the issue of hierarchical clocks, etc, can be found on Discord here
Well, I’m not the kind of person who complains about having to integrate things (:
That said, let’s work through an example to understand what it means to swap the physics engine in an engine like UE4.
In UE4 there are a lot of things that use a ray trace (the audio actors, editor utilities, all the physics objects, …), but let’s suppose that it’s used only 5 times.
To integrate just one API, even the simplest ray_trace API, the work to replace it (which is not just a find-and-replace in the IDE) must be done 5 times.
PhysX, Bullet, and any other physics engine have hundreds of APIs like that one, and the job of replacing them requires at least 5 times more effort.
Once that part is done, you still can’t compile the engine, and you haven’t even done 20% of the actual work.
So, blindly, you first have to understand every part of the engine (which is not a trivial task with an engine like UE4), and once that’s done you can start to integrate the RigidBody in the five different spots… because, yes, UE4 doesn’t use an abstraction layer that would avoid writing the same code over and over.
Now consider that from engine to engine the exposed APIs, and the way to use them, differ; and integrating one into an environment that is designed around one particular physics engine is not as easy a task as you are saying.
Once everything in the engine is replaced, you still need to convert your 2 or 3 games to use the new physics engine.
A description can never give an idea of how tough a job like this is, and I’m sure that you are underestimating the work involved.
Phythyst allows many things.
Implementation hiding, support for many backends, mistake avoidance, zero update cost, and more are features which deserve to be recognized and can’t fall under the description of merely “just hiding these integrations”.
However, let me mention other things that Phythyst does:
Synchronize the transforms
Provide simple-to-use APIs to interact with physics objects
Step the physics engine and all its Systems
Sync rigid bodies and shapes
Sync areas and shapes
Allow shape sharing
Provide a way to fetch area overlap events
Provide a way to register physics Systems
Provide a way to have many back ends
Automatic Component garbage collection
Each of these is meant to be simple to use, and to guide the developer in implementing the physics backend properly.
This has to be recognized, otherwise the discussion is not constructive.
Only stepping the physics engine is not enough; for this reason I didn’t use that function.
If you think about it a bit more, you can see that the behavior of each physics object also depends on the Systems that apply forces to it, the System that moves the nearby kinematic objects, the one that changes the joint forces, etc., and all these Systems must be executed along with the physics step. For this reason the Batch mechanism got implemented.
These are the kinds of mistakes that I was mentioning in the first post; an abstraction layer prevents them by default.
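One such mistake can be shown numerically. The sketch below (all names and numbers are illustrative, not from any real engine) compares applying a continuous force once per frame against once per physics sub-step: the per-frame version gives a result that depends on the frame rate.

```rust
// Sketch: why force-applying systems must run per physics sub-step.
// 10 N acting on a 1 kg body for 1 s should always end at ~10 m/s.

fn simulate(frame_dt: f32, sub_step_dt: f32, force_per_substep: bool) -> f32 {
    let total_time = 1.0;
    let force = 10.0; // N, on a 1 kg body
    let mut velocity = 0.0;
    let mut t = 0.0;
    while t < total_time - 1e-6 {
        let sub_steps = (frame_dt / sub_step_dt).round() as u32;
        if !force_per_substep {
            // Wrong: one sub-step's worth of impulse, applied once per frame.
            velocity += force * sub_step_dt;
        }
        for _ in 0..sub_steps {
            if force_per_substep {
                // Correct: the impulse is applied before every sub-step.
                velocity += force * sub_step_dt;
            }
        }
        t += frame_dt;
    }
    velocity
}

fn main() {
    // Correct version: ~10 m/s regardless of the frame rate.
    let correct_30 = simulate(1.0 / 30.0, 1.0 / 60.0, true);
    let correct_60 = simulate(1.0 / 60.0, 1.0 / 60.0, true);
    // Wrong version at 30 fps: only ~5 m/s, half the impulse is lost.
    let wrong_30 = simulate(1.0 / 30.0, 1.0 / 60.0, false);
    println!("{correct_30:.2} {correct_60:.2} {wrong_30:.2}");
}
```

Running the force systems inside the batch, once per sub-step, is what keeps the correct versions in agreement; the wrong version silently changes behavior whenever the frame rate changes.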
However, my point was different.
I’m saying that an ECS-oriented physics engine directly integrated inside Amethyst is not a good thing for either of them:
Impossible to integrate a different physics engine in Amethyst.
Impossible to integrate the ECS-oriented physics engine in another game engine.
All the Amethyst default functionalities (like the kinematic character, audio spatialization, physics particle effects, etc.) would depend on a physics engine which is impossible to replace, due to its uniqueness.
However, let me say that the ECS physics engine’s performance is not achieved because it runs on the same game engine dispatcher.
Also, I think that the main bottleneck of any physics engine is not the scripts/systems that use it.
So there is no real benefit in integrating it directly into Amethyst.
Instead, let me add the benefits that a new ECS physics engine gains from an abstraction layer.
Let’s suppose that a new engine like that is ready for production in 1 or 2 years, and performs 4 times better than any other.
All the developers that are using NPhysics / Bullet / whatever will want to switch to your engine, since it performs 4 times better. But if it requires fully rewriting the game, they will never use it, because it isn’t worth the extra work.
So I don’t see Phythyst as a threat to your work in any way.
Yes, the execution of code pertinent to, say, force application requires a certain order and even at times repetition within that order (a great case is Runge–Kutta integration, where there are commonly four increments within a step), but you can just build the dispatchers you need based on this and execute them in the order necessary in the fixed update, repeatedly executing dispatchers where necessary for sub-steps. Another possibility is simply using generic-parameter or struct-field Fn closures to provide applicator routines to be executed within stepping. Here I still don’t see the problem; you’ll have to illustrate it in concrete terms, because everything you’ve described can be done perfectly fine in fixed_update as it stands.
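For readers unfamiliar with the "four increments within a step" remark, here is a minimal classical RK4 step (standalone math, not any engine's API): in the dispatcher picture, each increment would correspond to one run of the force-evaluation systems within a single fixed-update sub-step.

```rust
// Classic fourth-order Runge–Kutta step for dx/dt = f(t, x).
// Each of k1..k4 is one "increment": one evaluation of the forces
// at a slightly different point within the same step.
fn rk4_step(f: impl Fn(f64, f64) -> f64, t: f64, x: f64, h: f64) -> f64 {
    let k1 = f(t, x);
    let k2 = f(t + h / 2.0, x + h * k1 / 2.0);
    let k3 = f(t + h / 2.0, x + h * k2 / 2.0);
    let k4 = f(t + h, x + h * k3);
    x + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
}

fn main() {
    // Sanity check: integrate dx/dt = x from x(0) = 1 to t = 1.
    // The exact answer is e ≈ 2.71828.
    let (mut x, mut t, h) = (1.0, 0.0, 0.1);
    for _ in 0..10 {
        x = rk4_step(|_, x| x, t, x, h);
        t += h;
    }
    println!("x(1) ≈ {:.5}", x);
}
```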
I understand that physics engines have many APIs. And although implementing those APIs in an integration, which you just explained, is very different from performing a migration between physics engines, in the use-case thread I’ve never heard anyone write that they need to be able to swap out physics engines midway through developing a game. In this way it seems like a solution looking for a problem to me. It makes every single user (most of whom would really never care to switch physics engines) pay up front the cost of the few users who find it necessary to completely switch physics engines midway through the development of a project. A great example of when this happened was the short period in Amethyst where everything relied on being able to change the numeric type used for transforms. 99% of users suffered for an API change that didn’t even work that well for the 1% of users who would benefit from it, only because we didn’t consider the most common case as the primary one, and edge cases as problems that must be solved within the constraints of that common case.
Speaking more broadly on this abstraction, Amethyst’s core design philosophies include the philosophy of data-oriented design, and this type of abstraction directly contradicts that primary philosophy in that it intends to treat the software of physics engines as a platform, giving greater credence to the code that interfaces with a physics engine than to the data required to solve physics problems. For most users, the actual problem behind bad physics performance is most often not the physics engine itself, but their own mistakes in utilizing that physics engine. How many users don’t actually understand the costs of the computations they intend to perform with their physics engine, and of the APIs they call in that physics engine? This sort of abstraction only works to make these things worse, and if we are to make this a part that lives under the core amethyst crate, we should also abandon advertising data-oriented design as a principle of the engine. I do think this library could perhaps be useful for testing the waters of certain physics engines before committing to one in a project, but given this I don’t think it should live in the main Amethyst repository.
No threat taken; I’m only discussing here the question of merging this abstraction into the main Amethyst repository. Personally, I’d like to see it under the Amethyst org as its own crate, where I think it could serve a good role for those who need to test between different physics engines quickly, or who for some reason predict that in the future they will completely change their physics engine. If a physics API is to live as a core part of Amethyst, I believe it should be built around the philosophy of the engine as a whole, as a single concrete physics engine with plain-struct data orientation, swappable integrators, and self-evident costs. This work would most likely need to begin with collision, by providing geometric queries across the engine, such as in the renderer and the spatial audio code.
Basically, the Batch System is the proper way of doing it. The proposed solution seems like a workaround, so I don’t really see your point here.
The Batch is a System that can run in parallel with any other System, and all its inner Systems can also run in parallel.
You can choose where to put this system in the dispatcher, in full ECS style.
And the user is not forced to create any weird game state (which doesn’t respect the ECS philosophy at all).
Notice Phythyst’s simplicity.
The thing is that this is not the only benefit that the abstraction layer gives.
All the features exposed before are necessary to build a game engine that is able to serve many purposes and to create any type of game.
All the examples brought up are far from comparable to this case.
Ok, this part is a bit weird.
The data is processed by an algorithm. This algorithm can be a collision detection algorithm, or the solver.
It’s not possible in any way to implement a feature like water physics (or whatever) by just submitting the data.
Following your reasoning, even rendy would invalidate the data-oriented pattern, because the meshes, the textures, and the materials are stored in components.
However, until now I haven’t read a strong complaint about the design that I chose.
Rather, my opinion is that this abstraction layer will point in the correct direction for getting a multi-threaded physics engine implementation.
This is very interesting! For my team, as we design Arsenal, it could be very useful to be able to trivially support other Physics engines. We have already gotten an issue asking whether we would be able to support PhysX in Arsenal, and I was not sure that we would want to for various reasons, but if we could easily support swapping out the Physics engine it would be a big deal for users of Arsenal.
The cases where someone might switch a physics engine are numerous. It is possible, even likely, to be well advanced into development when discovering that the physics engine you use is limited in a specific way, which is non-obvious (cannot be known before reaching that point of development), non-common (other people aren’t interested in fixing it, or don’t understand the problem), and non-trivial, or even impossible to fix.
If you have an abstraction layer, you have the option of:
changing the back-end
adding a different physics engine for the particular interaction that you’re requiring, and running it concurrently when needed. This additional physics engine might be one you develop yourself specifically for this interaction, or a 3rd party one.
If you do not have an abstraction layer, you have the option of:
changing the whole architecture of your game, or at least re-writing a whole part of it. With a bit of luck, a chunk can be refactored automatically, but probably no more than 50%.
Should one want to do anything a little bit advanced with off-the-shelf physics engines, this is a common situation.
Apart from being able to switch, or to run different physics engines concurrently and have tools acting on both, as @kabergstrom says, it also allows third parties to develop further plugins, improvements, and tools, which will de facto work for all users of Amethyst, regardless of which physics backend they choose.
It also means that if I need to develop my own engine for whatever reason, I have fewer questions to ask myself about how to integrate it with the rest of Amethyst, as I have a clear path forward; I know my external interface in advance.
Lastly, it means newcomers, as in the @zicklag example, need to learn one API only, and are free to choose and experiment with whichever backend they prefer, without having to wade through (too much) documentation every time.
This makes for a much simpler ecosystem, where knowledge sharing and code reuse are promoted.
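The "one API, many backends" point can be sketched in a few lines. Everything here is hypothetical (toy trait, toy backends; real backends would wrap nphysics, PhysX, and so on): all code below the trait is written once and runs unchanged against either backend, so switching is a one-line change where the world is constructed.

```rust
// Sketch only: illustrative names, not a real physics abstraction API.
trait PhysicsWorld {
    fn name(&self) -> &'static str;
    /// Casts a ray along +x from `origin`; returns the hit distance.
    fn ray_cast(&self, origin: f32) -> Option<f32>;
}

/// Toy backend A: a single static wall at x = 5.0.
struct BackendA;
impl PhysicsWorld for BackendA {
    fn name(&self) -> &'static str { "backend-a" }
    fn ray_cast(&self, origin: f32) -> Option<f32> {
        let wall = 5.0;
        if origin <= wall { Some(wall - origin) } else { None }
    }
}

/// Toy backend B: the same world modelled with different internals.
struct BackendB { walls: Vec<f32> }
impl PhysicsWorld for BackendB {
    fn name(&self) -> &'static str { "backend-b" }
    fn ray_cast(&self, origin: f32) -> Option<f32> {
        // Nearest wall at or beyond the ray origin.
        self.walls
            .iter()
            .filter(|&&w| w >= origin)
            .map(|&w| w - origin)
            .fold(None, |best, d| match best {
                Some(b) if b <= d => Some(b),
                _ => Some(d),
            })
    }
}

/// "Game code" or third-party tooling: knows only the trait, so it
/// works for every backend, which is the knowledge-sharing point above.
fn distance_to_wall(world: &dyn PhysicsWorld, origin: f32) -> Option<f32> {
    world.ray_cast(origin)
}

fn main() {
    let backends: Vec<Box<dyn PhysicsWorld>> =
        vec![Box::new(BackendA), Box::new(BackendB { walls: vec![5.0, 9.0] })];
    for b in &backends {
        // Same call, same result, regardless of the backend behind it.
        println!("{}: {:?}", b.name(), distance_to_wall(b.as_ref(), 2.0));
    }
}
```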