Obiwanus's long link time feedback

(Andrey Lesnikov) #1

A thread to express my concerns about the @rust_gamedev ecosystem (potentially also about the wider Rust ecosystem, but so far I have only looked at the former):

First of all, I like Rust. Cargo is also a great tool that is extremely easy and fun to use. 1/7

However, if you look at a project of a decent size (e.g. @AmethystEngine) you will see that it consists of 400 crates of dependencies, each of which is compiled separately. As a result, changing a single line of your code results in ~20s of linking by the default linker. 2/7

Granted, you could try using a more efficient linker that might reduce the waiting time to ~5s, which is much better but still a lot, since this is how long it takes to link the most basic project you could start with (see the linker config sketch after this thread). 3/7

This web of dependencies that a single library can drag into your project reminds me heavily of the NPM ecosystem. Although Cargo won’t suffer from the infamous left-pad issue, it doesn’t discourage people from mindlessly adding dependencies for trivial things. 4/7

The fact that people seem to be OK with it is what worries me. Of course, it doesn’t look particularly bad when you come from C++, but overall, is this the right direction to go in? 5/7

In 5 seconds it should be possible to compile several million lines of code while doing much more than a C++ compiler does (cc @Jonathan_Blow), but what we have instead is a basic project with 400 dependencies that takes that long just to link. 6/7

This all makes me sad, and I’m even thinking of dropping the language altogether if this is the Rust way of doing things, since using the language but not using the ecosystem doesn’t seem right. 7/7
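For reference on the linker point in 3/7: switching Cargo to a faster linker is usually just a couple of lines of configuration. A minimal sketch, assuming LLVM’s lld is installed and an x86_64 Linux GNU target (adjust the target triple for other platforms):

```toml
# .cargo/config.toml (plain .cargo/config on older toolchains)
# Ask rustc to link via lld instead of the default system linker.
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```

This doesn’t shrink the dependency graph at all, but it is an easy first step toward the ~5s figure mentioned above.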

(Alec Thilenius) #2

+1 for compilation times.

Rust compilation times from a cold cache are truly miserable. It’s an LLVM language though, and C/C++ compilation times have been a problem forever. There are ways around it: for example, tup monitors the FS during build ops to create a very fine-grained transitive dependency DAG, and it can then short-circuit all kinds of build ops without ever touching the FS. If you’re Google and have a data center to throw at it, you can also shard compilation (and use the same kind of caching as tup, assuming the code is mounted in a VFS).
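For readers who haven’t used tup, a toy Tupfile looks roughly like the sketch below (purely illustrative, not from any real project). Each rule declares inputs, a command, and outputs, and tup’s filesystem monitor records what every command actually reads and writes, which is what lets it skip unchanged parts of the DAG on the next build:

```
# Tupfile (illustrative sketch)
# %f = input file(s), %o = declared output(s), %B = input basename
: foreach *.c |> gcc -c %f -o %o |> %B.o
: *.o |> gcc %f -o %o |> app
```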

Compilation times are my #1 complaint about Rust by far, but it’s also a pretty young language. Switching back to C# or GoLang for a bit makes me really grit my teeth the next time I run cargo build.

I’m personally not concerned by the ‘crates are too small’ dependency-explosion problem (though that one seems rather opinion-based?). If compilation/link times were no problem, then -shrug- I wouldn’t care. NPM black holes install/build nice and fast with yarn/parcel these days, so I don’t care about those for the same reason. I might be an outlier there, though…

Me: Not a compiler designer or creator, and mostly not qualified to give opinions :wink: But I also spent a few years at Google, where they do have multi-million-line projects that they build from source.

(Kel) #3

Speaking individually as a member of the team, I’m not OK with the current heavy usage of unnecessary/“ergonomics” dependencies all throughout the tree. I’ve stated my stance on link aggregators before about vendoring code that would otherwise be brought in through microdeps, and I’ll add that Amethyst’s current API re-exposure story isn’t great either, with massive external APIs confusingly blended into Amethyst’s own APIs. One thing I’d really love, though I’m not entirely sure it’s feasible, is making the costly chain of derive proc-macro tooling optional, so that users who would rather wait longer during compilation than write out Component impls can still do so, while users who really need to trim the fat from their builds can get a super big win.
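To make the trade-off concrete, here is a rough sketch, assuming the specs-based ECS that Amethyst re-exports as amethyst::ecs, of what the hand-written alternative to the derive looks like (the Velocity type is just an example):

```rust
use amethyst::ecs::{Component, DenseVecStorage};

// An example component type.
pub struct Velocity {
    pub dx: f32,
    pub dy: f32,
}

// This is roughly what #[derive(Component)] generates; written out by
// hand it is two lines and pulls no proc-macro tooling into the build.
impl Component for Velocity {
    type Storage = DenseVecStorage<Self>;
}
```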

However, I don’t think any of these problems are something we can tackle right this minute. Perhaps in a few months, but there are super big changes going on all throughout Amethyst’s API, and I don’t think we’re in a great place to start auditing dependencies yet. But we could at least start the discussion about the justification process for dependencies, what the process might be for auditing and dealing with things deeper in the tree, and ways we might be able to trim the low-hanging branches in the near future.
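As a very rough sketch of what an opt-out on the Cargo side might look like (the crate, feature, and version names below are purely illustrative, not Amethyst’s actual manifest):

```toml
# Hypothetical Cargo.toml excerpt: the derive crate is an optional
# dependency behind a default-on "derive" feature, so builds that do
# not want it can use --no-default-features and write impls by hand.
[features]
default = ["derive"]
derive = ["amethyst_derive"]

[dependencies]
amethyst_derive = { version = "*", optional = true }
```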

(Kel) #4

Furthermore, on the topic of compilation time bloat, I just saw this mentioned elsewhere and it seems like a fantastic option for managing proc macro compilation times: https://github.com/dtolnay/watt
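For context, the idea behind watt is that the proc-macro implementation is compiled to WebAssembly once and shipped pre-built, so downstream builds only compile a small runtime instead of the whole macro stack. A rough usage sketch based on the project’s README (crate and macro names here are placeholders):

```rust
use proc_macro::TokenStream;
use watt::WasmMacro;

// The macro implementation, precompiled to Wasm and shipped with the crate.
static MACRO: WasmMacro = WasmMacro::new(WASM);
static WASM: &[u8] = include_bytes!("my_macros.wasm");

#[proc_macro_derive(MyDerive)]
pub fn my_derive(input: TokenStream) -> TokenStream {
    // Dispatch into the precompiled Wasm implementation by function name.
    MACRO.proc_macro_derive("my_derive", input)
}
```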

(Kel) #5

Some good insight from @termhn: https://twitter.com/fu5ha/status/1191774828937633793

This used to not bother me, but it has bothered me more and more recently. I think the biggest problem is that, since the dep graph is a tree, seemingly ‘small’ deps often end up indirectly bringing in other large deps, perhaps even ones you already have but at a different version.
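One way to see this on a concrete project is cargo tree (built into recent Cargo; on older toolchains it was the separate cargo-tree plugin), which can list duplicate versions and show who pulls a crate in. some-crate below is a placeholder:

```
# List crates that appear in the graph at more than one version.
$ cargo tree --duplicates

# Invert the tree to see which dependencies pull in a given crate.
$ cargo tree --invert some-crate
```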