Improving Float ergonomics (Proposal)

(Marco Fruhwirth) #1

Problem

The current state is not optimal for new users, because it introduces boilerplate in many areas.

Examples:

  • ControlTagPrefab uses Float::from
  • examples/pong/systems/bounce.rs
if (ball_y <= Float::from(ball.radius) && ball.velocity[1] < 0.0)
    || (ball_y >= Float::from(ARENA_HEIGHT - ball.radius) && ball.velocity[1] > 0.0)
...
let paddle_x = paddle_transform.translation().x - (paddle.width * 0.5).into();
let paddle_y = paddle_transform.translation().y - (paddle.height * 0.5).into();
  • examples/pong/systems/winner.rs
    uses let did_hit = if ball_x <= Float::from(ball.radius)
  • examples/pong_tutorial_05/systems/winner.rs
    uses let did_hit = if ball_x.as_f32() <= ball.radius

A partial solution

Most of the weird in-calculation conversions can be avoided if the underlying data is implemented as Float instead of f32. In my opinion, that should be done for everything that is related to transforms. In the pong example that would be the ball radius, the paddle dimensions and the ball velocity.
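For the pong example, that could look roughly like the sketch below. The field names are illustrative, not the actual example code, and the Float path is assumed to be the one from amethyst_core:

use amethyst_core::float::Float;

pub struct Ball {
    pub velocity: [Float; 2], // stored as Float instead of f32
    pub radius: Float,
}

pub struct Paddle {
    pub width: Float,
    pub height: Float,
}

// With the fields already Float, bounce.rs can compare them against the
// transform's translation directly, without wrapping each value in Float::from.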

However, we are still left with the usage of constants. Especially while prototyping, it can be annoying to convert constants to Float just so that we can multiply values together. Another problem that remains is Time::delta_seconds. Maybe it makes sense to provide Time::delta_seconds_as_time?

Proposal

First, some assumptions, so that we are on the same page. Before discussing the proposed solution, please check whether you hold the same assumptions; if not, let’s discuss the assumptions before discussing the solution.

Assumptions

  • The game developer is only concerned with either using the float64 feature or not using it.
  • The f64 Transform support is exotic and not something most of the userbase is concerned with.
  • In general, friction for users should be as low as possible.
  • Users needing the f64 transform should be aware of floating-point precision arithmetic.
  • Users not needing the f64 don’t have to be aware, at their own peril.
  • Using f32 is preferable to f64, due to memory and performance considerations, if precision doesn’t matter.
  • One compilation should be sufficient to verify the correctness of the code.

Implementation

Extend the traits on Float so that they also take f32.

  • PartialEq<f32> for Float and PartialEq<Float> for f32
  • PartialOrd<f32> for Float and PartialOrd<Float> for f32
  • AbsDiffEq<f32> for Float and AbsDiffEq<Float> for f32
  • UlpsEq<f32> for Float and UlpsEq<Float> for f32
  • RelativeEq<f32> for Float and RelativeEq<Float> for f32
  • Add<f32> for Float and Add<Float> for f32
  • Sub<f32> for Float and Sub<Float> for f32
  • Mul<f32> for Float and Mul<Float> for f32
  • Div<f32> for Float and Div<Float> for f32
  • Rem<f32> for Float and Rem<Float> for f32
  • AddAssign<f32> for Float and AddAssign<Float> for f32
  • SubAssign<f32> for Float and SubAssign<Float> for f32
  • MulAssign<f32> for Float and MulAssign<Float> for f32
  • DivAssign<f32> for Float and DivAssign<Float> for f32
  • RemAssign<f32> for Float and RemAssign<Float> for f32

For normal users these will be a no-op; for float64 users, the f32 will be converted to f64 before the operation is applied.

This will enable !float64 users to mostly ignore Float. They can write let did_hit = if ball_x <= ball.radius again without worrying about floating-point precision.
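Sketched out, one pair of these impls could look roughly like the following. Float and Inner here are illustrative definitions, not the actual Amethyst ones:

use std::ops::Add;

#[cfg(feature = "float64")]
type Inner = f64;
#[cfg(not(feature = "float64"))]
type Inner = f32;

#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Float(Inner);

// Float + f32: a plain f32 addition without float64, an f32 -> f64 widening with it.
impl Add<f32> for Float {
    type Output = Float;
    fn add(self, rhs: f32) -> Float {
        Float(self.0 + rhs as Inner)
    }
}

// f32 + Float, so the operands can appear in either order.
impl Add<Float> for f32 {
    type Output = Float;
    fn add(self, rhs: Float) -> Float {
        Float(self as Inner + rhs.0)
    }
}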

If it compiles with the float64 flag, it also compiles without it. I’m not sure if the other way around holds too, as there might be some weird corner cases…

Problems

  • What is the type of na::zero() in translation().x + na::zero()? The compiler doesn’t know, so you need to specify it (as either na::zero::<Float>() or na::zero::<f32>()).
  • Consider
let a: f32 = f32::MAX; // result of an earlier computation
translation().x += a * 2.0;

With float64, this should be a valid operation; however, the intermediate result a * 2.0 is computed in f32 (where it overflows) before being converted to f64.
In my opinion this indicates carelessness on the part of the float64 user, as they should always be aware of overflow when working with f32 in an f64 environment.
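A small standalone illustration of that pitfall (plain f32/f64 here, no Float involved):

fn main() {
    let a: f32 = f32::MAX;
    let in_f32 = a * 2.0;                   // overflows to +inf, computed entirely in f32
    let widened_late = f64::from(in_f32);   // widening afterwards keeps the +inf
    let widened_first = f64::from(a) * 2.0; // converting first stays finite (~6.8e38)
    println!("{} {} {}", in_f32, widened_late, widened_first);
}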

Open questions

  • Can Floats be used in prefabs?
  • Are there any other problems/concerns?
  • Does it make sense to also implement the traits for f64?
  • I’m new to Rust, so maybe I’m missing something obvious; sorry if that is the case and you read through all of this :stuck_out_tongue:
(Paweł Grabarz) #2

I think this is a bit of shaky ground here. Rust defaults to f64 if you leave your precision ambiguous; working against that requires a lot of care. It’s hard to define when precision doesn’t matter, so the pragmatic choice of higher precision is something that works more reliably.

I like the idea, but I don’t like the f32 preference here. I generally don’t agree that people using f64 should suffer the consequences of that choice. Whatever we do, I’d say we should keep this “symmetric”, i.e. implement those traits for f32 or f64 depending on the flag. The drawback is obviously that compilation will differ. BUT:

This is true, but only for the engine, because…

So, we need to preserve flag-independent correctness in the engine code, but in user code we really care about ease of use first. That means we have to somehow implement the above traits in such a way that it’s impossible to use them in the engine itself. The thing is, I’m not sure how to achieve that. We can’t simply “macro it out” on the API, because there is one more point that has to be considered: as an engine library writer who has to uphold the same correctness guarantees, you must be able to interact with the Float type directly. It must be part of the public API.
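To make the “symmetric” alternative concrete, a minimal sketch with illustrative names (and it does not yet address keeping these impls out of engine code):

use std::ops::Add;

#[cfg(feature = "float64")]
pub struct Float(pub f64);
#[cfg(not(feature = "float64"))]
pub struct Float(pub f32);

// With float64 enabled, mixed arithmetic only accepts f64 on the right-hand side...
#[cfg(feature = "float64")]
impl Add<f64> for Float {
    type Output = Float;
    fn add(self, rhs: f64) -> Float {
        Float(self.0 + rhs)
    }
}

// ...and without it, only f32 - so user code has to match the flag it is built with.
#[cfg(not(feature = "float64"))]
impl Add<f32> for Float {
    type Output = Float;
    fn add(self, rhs: f32) -> Float {
        Float(self.0 + rhs)
    }
}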

tl;dr,

I don’t agree with those

  • Users needing the f64 transform should be aware of floating-point precision arithmetic.
  • Users not needing the f64 don’t have to be aware, at their own peril.

IMO we should preserve “feature parity” between f32 and f64 code as much as possible. Ideally that means both groups of users don’t really have to be aware, just set up once and forget.

Extra Assumptions

  • The Float type must be in the public API, because library writers care about correctness in both cases. Basically, libraries should be written like the engine code itself.
  • User code doesn’t need the “flag independent correctness” guarantee, as long as the engine itself holds it. You “set it up once and forget”. There we care about ergonomics first.
(Andrea Catania) #3

I would really appreciate it if someone could explain why having something like the below is not possible.

#[cfg(feature = "float64")]
type Float = f64;
#[cfg(not(feature = "float64"))]
type Float = f32;

#[test]
fn check_float() {
    let a: Float = 2.0;
    let b: Float = 5.0;
    let c: Float = 10.0;
    assert_eq!(a * b, c);
}

We could consider changing whatever makes the above code not doable and then improve Float. As it is now, Float is a bit too tricky and also not so useful, since if I want to compile the engine with double precision (or single), I would like the entire engine and also my code to respect it.

For the most part I’m referring to physics, because there it will be necessary to understand how to handle the float properly.

(Marco Fruhwirth) #4

The current Float implementation is under the assumption that only Transform switches between f32 and f64.
It is not a goal to use f64 everywhere. In that case a type alias might have worked, as you’d want to avoid using f32 and f64 directly.

Atm Float needs to coexist with f32 and f64, and there are boundaries where conversion needs to happen (e.g. the renderer needs f32).

I will modify my proposal with the feedback I got from @Frizi as soon as I have some time. I’m confident it is possible to have a better API for Float.

(Andrea Catania) #5

The goal should be to have consistency across the entire engine.

What’s the point of having the timer as f32 and the transform as f64? Or of a custom system X which uses f32 to manipulate the Transforms?

Also, using a struct to represent the float forces conversions between the physics transforms, which use f32 (or f64), and the Float.

It is common to have a typedef that changes depending on the compilation flag, and which is used in all parts of the engine where both types are allowed.

So a user can just use the typedef Float instead of f32 in their custom systems, and thus be able to change the float precision at any point of development without any extra effort.

So to me, the assumption that only the Transform needs to be able to change its precision should be better clarified.

(Jasper) #6

As a relatively new user to amethyst who just tried to upgrade their project to the latest version of amethyst to take advantage of newer features, my time was mostly spent doing one of two things.

  • The first half was spent trying to figure out how to switch to the Rendy renderer.
  • The second half was spent scrolling side to side in the console trying to figure out why nothing would build (example below), which was then followed by giving up and changing every usage of f32 and f64 in my project to Float, because it felt like too much of a pain to deal with otherwise.
error[E0277]: cannot add `nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>>` to `nalgebra::base::matrix::Matrix<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>>`
  --> src\systems\important_system.rs:32:58
   |
32 |             let target = future_position + direction;
   |                                          ^ no implementation for `nalgebra::base::matrix::Matrix<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>> + nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>>`
   |
   = help: the trait `std::ops::Add<nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<f32, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>>>` is not implemented for `nalgebra::base::matrix::Matrix<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1, nalgebra::base::array_storage::ArrayStorage<amethyst_core::float::Float, nalgebra::base::dimension::U3, nalgebra::base::dimension::U1>>`

My thoughts as an inexperienced user:

  • If I need to use f32::from(x), Float::from(x), or x.into() anywhere when doing arithmetic, something is being done wrong. What used to be readable code is now complete gibberish.
  • If something should be using f32, make it use f32. If it might use both, use generics. If everyone agrees that Transform should be using f32, just do that and don’t force everyone to use a new type. Especially when it involves such a fundamental part of amethyst.
  • Losing precision is bad, but should I really care? Was that precision going to be used in the first place? Will there be a noticeable drop in quality that justifies obfuscating my code base?
  • Other crates don’t use Float. If it is used everywhere in amethyst, it becomes a pain when I need to do anything else. As pointed out by @marot, it introduces a lot of boilerplate to the point that Float seems much more harmful than it is useful to the end user. Sure it makes some things better inside amethyst, but very few of those benefits actually reach me.
  • Float was the most painful part of moving from 0.10 to the latest version of amethyst.
  • I am completely fine with needing to convert between f32 and f64 as needed in my code.

TL; DR

As a new-ish user that does not completely understand the issues related to this decision, I found Float to be a massive pain to work with. While the suggested ergonomics improvements would make it slightly better, I don’t see them being able to outweigh the issues.

(Andrea Catania) #7

I just want to clarify that I don’t mean to put Float, as it is now, everywhere in the engine where the precision is expected to change.

Rather, I mean to have Float as a type alias of f32 or f64 (depending on the case) and thus keep it simple to use.

For example, the Bullet physics engine has a float type alias called btScalar, and the Godot game engine has one called Real.
Even with different names, they are still floating-point variables and can interact without much effort.

type Real = f32;
type BtScalar = f32;
type Float = f32;

let x: Real = 1.0;
let y: BtScalar = 10.0;
let z: Float = 100.0;

let result = x * y * z;

Here the rust playground to see it in action: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=4040b04f551dbbdd979e39b9cfe6d497

So having a type alias could allow it to be used in the parts of the engine that can benefit from double precision (like Transform, Timers, and so on) without much effort.

@thelockj your post highlights exactly what I mean when I say that physics interaction will be much more difficult, since you constantly need to convert values or use functions like add and sum that perform the conversion internally.
Also, let me say that having double precision only on the transforms is not enough.

(Marco Fruhwirth) #8

I’ll try to answer as well as I can. My information is based on what I remember from a conversation with the core team.

Using f64 everywhere will have a performance cost and a memory cost. Afaik f64 takes twice the amount of memory. Since a lot of the ECS performance is due to memory alignment, I guess that is a strong argument against using f64 everywhere.

The goal of the f64 transform is a little bit hidden - I’m not sure if it is even documented… The f64 transform should allow ‘big’ worlds, where coordinates might be greater than the f32 range. I don’t know how this is supposed to work, because the render system uses (and will keep using) f32. So there needs to be some loss-free conversion from the f64 coordinates to f32 - maybe using an origin offset?
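Purely to illustrate the origin-offset idea mentioned above (this is not how Amethyst does or plans to do it), a sketch:

// World coordinates stay in f64; only the camera-relative offset is narrowed to f32.
fn to_render_space(world_pos: [f64; 3], camera_pos: [f64; 3]) -> [f32; 3] {
    [
        (world_pos[0] - camera_pos[0]) as f32, // small relative offsets survive the
        (world_pos[1] - camera_pos[1]) as f32, // narrowing even when the absolute
        (world_pos[2] - camera_pos[2]) as f32, // coordinates exceed f32 precision
    ]
}

fn main() {
    let camera = [10_000_000.0, 0.0, 0.0];
    let object = [10_000_001.5, 2.0, 0.0];
    println!("{:?}", to_render_space(object, camera)); // [1.5, 2.0, 0.0]
}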

Right now that would be Float::from(f32value), but this can be simplified using a macro f!(f32value). A conversion from f32 to f64 also needs some boilerplate: f32value as f64.
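Whether or not such a macro exists yet, it could be as simple as the following sketch, assuming Float implements From<f32>:

// Hypothetical convenience macro; only a sketch, not something Amethyst ships.
macro_rules! f {
    ($value:expr) => {
        Float::from($value)
    };
}

// let did_hit = ball_x <= f!(ball.radius);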

I think that if the goal were to have the same float across the entire engine, a typedef would work. But that is not the case.

I agree that this needs more documentation.

If you define type BtScalar = f64, the code does not compile anymore. This means the code would need to be tested twice, to check that it compiles both with type BtScalar = f32 and with type BtScalar = f64. This introduces significant overhead for CI and for developers.
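A small illustration of that, reusing the aliases from the playground snippet above but flipping one of them to f64:

type Real = f32;
type BtScalar = f64; // was f32 in the playground example

fn main() {
    let x: Real = 1.0;
    let y: BtScalar = 10.0;
    // let broken = x * y; // does not compile: no implementation for `f32 * f64`
    let fixed = f64::from(x) * y; // an explicit widening is required instead
    println!("{}", fixed);
}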

Hopefully physics will be integrated into Amethyst. In that case, it might also use Float.

I’m not sure where it would be needed. Physics maybe, but that doesn’t exist yet in amethyst.


I think the situation can be improved to a point where, for most users, explicit conversion only needs to be done on initialization of Float and on assignment. If we have some shortcuts like f! to do this, it is fine for me personally. However, it makes onboarding harder…

(Thomas Schaller) #9

A simple typedef doesn’t work, because it will break the compilation as soon as the float64 feature is enabled. That is because enabling a feature would change the public API, which should never be the case. Features must be additive, i.e. API can only be added.

I personally would prefer generics for all this, but I don’t know all the aspects of the change. I suspect @Frizi removed the generics since they don’t play well with trait objects and dynamic deserialization.

(Kae) #10

I think users are OK with the public API changing when selecting the float64 feature, as it fundamentally is not an additive change.
As far as I can tell, features are the only way to conditionally compile the engine in various ways and I personally don’t think we should adhere to the guidelines for features in this case.

(Thomas Schaller) #11

Well it’s not only about the guidelines. As soon as you’re using an extension to Amethyst - let’s say amethyst-imgui - the crate would fail to compile since it still passes f32 to the Amethyst API.

(Marco Fruhwirth) #12

Everything in the ecosystem would be required to use the typedef. It would basically replace f32.
But let us stop this discussion, as there are no plans to introduce a global float type.

(Andrea Catania) #13

The thing is that compiling only the Transform with f64 is not enough, because if I combine a transform with the Timer, or any other f32 parameter, my transformation is no longer precise.

I think that whoever compiles the engine with f64 needs accuracy rather than performance. So we should find a way (that possibly doesn’t bloat the code) to unify the floats in the engine and really double the precision of the simulation.

Also, I think that physics engines use floats for their internal transforms, so it would be nice if the Amethyst float and the physics engine float were the same, to facilitate copying.

(Marco Fruhwirth) #14

You will only lose precision if you convert to f32 first, and you should never do that. It is perfectly fine to convert the f32 timer to an f64 Float and multiply it with an f64 Float.
That is something my proposal would make easier.
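In code, that is the difference between the following two lines (a sketch with plain f32/f64 stand-ins, not the actual Amethyst API):

fn main() {
    let delta_seconds: f32 = 0.016;  // what the f32 timer hands out
    let velocity: f64 = 120.0;       // stand-in for an f64 Float value

    // Convert the timer value up front, then do all arithmetic in f64.
    let displacement = velocity * f64::from(delta_seconds);

    // Converting the f64 side *down* to f32 instead is what loses precision.
    let lossy = (velocity as f32) * delta_seconds;
    println!("{} vs {}", displacement, lossy);
}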

(Kae) #15

Only if the library is not properly using the type alias in its own code.

1 Like
(Kae) #16

The feature in question is semantically different though. The scene positions are optionally f64, but rendering doesn’t need to be, and other things like particle systems or physics would only need to care when converting between world space and local space. Maybe Float should be named WorldFloat to make this obvious.

I don’t think the entire ecosystem needs to support f64 compilation out of the box, because it actually requires some serious thought, and I think a type alias in the public API will be sufficient to indicate the intention.

(Kae) #17

It’s fine to use f32 for local space computations like particle simulations, rendering, animation and (some parts of) physics. The problem is when you do world space computations in f32.

(Andrea Catania) #18

You will only lose precision if you convert to f32 first, and you should never do that. It is perfectly fine to convert the f32 timer to an f64 Float and multiply it with an f64 Float.
That is something my proposal would make easier.

Casting an f32 value to f64 doesn’t increase the precision of that value. So the result of any mathematical expression using an f64 that was previously an f32 is, by definition, not fully precise.
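A small demonstration of that point (nothing Amethyst-specific):

fn main() {
    let as_f64: f64 = 0.1;                // the f64 approximation of 0.1
    let as_f32: f32 = 0.1;                // the coarser f32 approximation
    let widened: f64 = f64::from(as_f32); // widening does not add the lost digits back
    println!("{:.20}", as_f64);  // 0.10000000000000000555
    println!("{:.20}", widened); // 0.10000000149011611938
}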

(Marco Fruhwirth) #19

@torkleyy included this part in his post that started this sub-discussion:

I think that if the goal were to have the same float across the entire engine,

so I thought we were discussing that case, sorry for the confusion.
I think we are on the same page here.

(Andrea Catania) #20

This would only make the code even more bloated. You don’t need to have a different float type for each usage of the float.

Not the entire engine, but the sensitive parts that make the simulation really precise. Otherwise it doesn’t make sense to have f64 Transforms at all.