In this post I want to give a short overview of a part of the current Rust ecosystem for Computer Graphics and discuss a few concepts and ideas that could make Rust more interesting for this field.
Overview
While Computer Graphics is often associated with 3D rendering (real-time or offline), it covers several subfields, including geometric modeling, animation, rendering, and more.
In particular, real-time 3D rendering is the healthiest subfield, mostly driven by the efforts of the gamedev community, which has resulted in quite a few different libraries:
- winit: de-facto standard Rust library for window handling
- lyon: 2D graphics rendering library with path tessellation
- gfx-rs: focusing on abstracting the open 3D graphics APIs
- glium and luminance: wrapper libraries for OpenGL
- vulkano: higher level Vulkan API wrapper
- amethyst and three-rs: higher level game engines or libraries including 3D graphics rendering
Overall, the ecosystem here is quite mature, even though the mentioned libraries haven’t reached stability yet. The other fields haven’t received that much love so far, but that’s not unique to Rust. For rigid-body simulations, there are currently two competitors:
- nphysics: quite mature physics library built upon nalgebra
- rhusics: more recent library using collision-rs under the hood and compatible with ECS
Besides the (great!) language features offered by Rust itself, how could the current ecosystem be made more attractive for the Computer Graphics community?
1. Math libraries
The basis of Computer Graphics is math, so we need to improve the current math libs, heh!
We currently have access to two great libraries (among others): cgmath and nalgebra. The two cover slightly different use-cases, and both have a place in the ecosystem:
cgmath has a smaller API surface and focuses on the needs of real-time rendering, while nalgebra provides a more generic and richer interface for linear algebra.
Looking at kinematics, rigid-body, or fluid animation, it is often desirable to write an API that is generic over the dimension or generic over functionality (i.e., implements a trait). Quite a few algorithms, such as particle-based fluid simulations, can be implemented for multiple dimensions. Being able to test and visualize an implementation in lower dimensions (2D) and run the final simulation in 3D is a large benefit: no code has to be ported to 3D, which could introduce bugs. nalgebra supports dimensional genericity via MatrixMN and VectorN together with the generic_array crate, but this often results in long where-clauses and can be quite annoying to deal with.
Therefore, the ecosystem could greatly benefit from the (upcoming) implementation of const generics!
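To illustrate why, here is a small sketch of what dimension-generic code could look like with const generics. The `Vector` wrapper type and the kernel are hypothetical, not from any existing crate; the point is that the array length alone carries the dimension, with no allocator or `DimName`-style where-clauses:

```rust
// Hypothetical sketch: the same kernel works for any dimension N,
// with no trait-bound boilerplate beyond the const parameter itself.
#[derive(Clone, Copy)]
struct Vector<const N: usize>([f32; N]);

fn squared_distance<const N: usize>(a: Vector<N>, b: Vector<N>) -> f32 {
    (0..N).map(|i| (a.0[i] - b.0[i]) * (a.0[i] - b.0[i])).sum()
}

fn main() {
    // Test and visualize in 2D...
    let d2 = squared_distance(Vector([0.0, 0.0]), Vector([3.0, 4.0]));
    // ...then run the final simulation in 3D with the same code.
    let d3 = squared_distance(Vector([0.0, 0.0, 0.0]), Vector([1.0, 2.0, 2.0]));
    println!("{} {}", d2, d3); // 25 9
}
```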

2. Rust Shading Language
A long-standing dream: writing your shaders in Rust! But why, actually?
- Make use of programming language features like traits
- Build up a collection of reusable Rust shader crates
- Possible optimization of shader code via the LLVM pipeline
- Shared shader interface between device and host (structs, location values, etc.). Defining a struct once for shader and host code can reduce bookkeeping or remove reflection costs.
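The last point can be sketched concretely. Below is a hypothetical `#[repr(C)]` struct that could serve both as the host-side uniform buffer layout and, with an imagined Rust shading backend, as the shader-side declaration; the field names and std140-style 16-byte padding are illustrative assumptions:

```rust
// Defined once, usable by both host and (hypothetical) shader code.
// Explicit padding mimics std140 alignment rules for vec3 members.
#[repr(C)]
#[derive(Clone, Copy)]
struct Light {
    position: [f32; 3],
    _pad0: f32,
    intensity: [f32; 3],
    _pad1: f32,
}

fn main() {
    // The host can statically verify the layout it uploads,
    // removing the need for runtime reflection.
    assert_eq!(std::mem::size_of::<Light>(), 32);
    println!("Light occupies {} bytes", std::mem::size_of::<Light>());
}
```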
But wait, there is more! With the current convergence of shading models (e.g. BRDFs) between real-time rasterization and raytracing, it would be tempting to share common functions between shaders and raytracing kernels, allowing easier validation tests of approximation models and further integration of raytracers to generate reference images. A CPU rasterizer could easily be built with Rust shaders, improving reference tests.
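As a minimal sketch of such a shared function, consider a Lambertian diffuse term written once in plain Rust. Under the assumption of a working Rust shading backend, the same function could be compiled to SPIR-V for a rasterization shader and to native code for a CPU raytracer or reference renderer (the name and signature are illustrative):

```rust
use std::f32::consts::PI;

// Lambertian diffuse BRDF term, shared between (hypothetical) shader
// code and CPU-side raytracing or reference rendering.
fn lambertian(albedo: [f32; 3], n_dot_l: f32) -> [f32; 3] {
    let k = n_dot_l.max(0.0) / PI;
    [albedo[0] * k, albedo[1] * k, albedo[2] * k]
}

fn main() {
    let out = lambertian([1.0, 0.5, 0.25], 1.0);
    println!("{:?}", out);
}
```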
How could this be implemented? With the release of SPIR-V alongside Vulkan, a new Rust backend could emit SPIR-V code, after a valid language subset has been identified and extended with additional features like textures and shader entry points. During the normal Rust compilation process, code is transformed into multiple different IRs:
AST -> HIR -> MIR -> LLVM IR -> Machine Code
A translator into SPIR-V could be integrated at each of these points, but LLVM IR to SPIR-V might be the most future-proof approach, in the hope of an upcoming LLVM SPIR-V backend and in light of the OpenCL-to-SPIR-V compiler project clspv.
There are a few projects tackling this problem already, but nothing production-ready yet!

3. DSLs and Meta-staging
Computer Graphics is a very computationally intensive field, with algorithms running on different platforms and requiring different optimization mechanisms. On the other end are library users, who aim to optimize their resources. The library API is the bottleneck at which hardware abstraction and user intentions meet. Therefore, domain-specific languages are developed to broaden this bottleneck, preserving more of the semantic meaning of the user code by providing a more expressive API. ebb, Simit and Spire are examples of recent DSLs in the Computer Graphics area. One of the core features enabling DSLs is meta-staging, where the code runs through multiple compilation stages which may generate new code. An interesting approach for adding meta-staging functionality to languages is called language virtualization:
The Scala LMS library enhances the Scala language with meta-staging functionalities by ‘virtualizing’ language features. It exposes interfaces for overriding the behavior of language features like
if cond {
    // ..
} else {
    // ..
}
by rewriting the staging code as function calls:
ifThenElse(...)
This alone wouldn't be very useful; the approach only shows its true strength with the introduction of a Rep&lt;T&gt; type, which overrides the default language construct implementations.
The difference between Rep<T> and T is the time at which these will be evaluated. Values of type T will be evaluated during the staging process (compile-time). Rep<T> will be translated to a corresponding value T at compile time and evaluated at run-time, allowing us to generate Rust code from Rust code!
During the staging process a graph representation of the generated code will be built, which can be translated into platform specific code. Additionally, it should be possible to define new representation types, containing domain specific information, allowing for domain specific transformation steps on the graph representation.
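The core of this idea can be sketched in plain Rust with operator overloading. In the following hypothetical example (no existing crate is assumed), arithmetic on a `Rep` value builds an expression graph instead of computing a result; a real backend would then optimize the graph and emit platform-specific code, while here we merely interpret it:

```rust
use std::ops::{Add, Mul};

// The staged representation: an expression graph instead of a value.
enum Expr {
    Lit(f32),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Minimal stand-in for Rep<T>, specialized to f32 for brevity.
struct Rep(Expr);

impl Add for Rep {
    type Output = Rep;
    fn add(self, rhs: Rep) -> Rep {
        Rep(Expr::Add(Box::new(self.0), Box::new(rhs.0)))
    }
}

impl Mul for Rep {
    type Output = Rep;
    fn mul(self, rhs: Rep) -> Rep {
        Rep(Expr::Mul(Box::new(self.0), Box::new(rhs.0)))
    }
}

// Stand-in for the final stage: a real backend would emit optimized
// Rust or SPIR-V here; we simply interpret the graph.
fn eval(e: &Expr) -> f32 {
    match e {
        Expr::Lit(v) => *v,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // Ordinary-looking arithmetic records the computation as a graph.
    let staged = Rep(Expr::Lit(2.0)) * Rep(Expr::Lit(3.0)) + Rep(Expr::Lit(1.0));
    println!("{}", eval(&staged.0)); // 7
}
```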
Let's look at one of the advantages in a small example for particle simulations:
// library interface
struct Particle { /* .. */ }

fn foo(particles: &mut [Particle]) {
    for particle in particles.iter_mut() {
        // do calculation
    }
}

fn boo(particles: &mut [Particle]) {
    for particle in particles.iter_mut() {
        // do other calculation
    }
}

// User code
fn main() {
    let mut particles: Vec<Particle> = Vec::new();
    foo(&mut particles);
    boo(&mut particles);
}
Our library exposes two functions, foo and boo, operating on the same set of particles. Unfortunately, we lose the semantics of particle iteration at the API surface level: if both calculations are independent, we could fuse them into one loop. On the other hand, the toy API above hides the implementation details of the underlying platform. For CPUs we may implement the for-loop via parallel iterators using rayon.
Via meta-staging we could override the particle for-loop, generate a special graph node, apply loop fusion on the graph, and emit efficient platform-specific code, without interfering with the actual library API.
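For illustration, here is a hand-written sketch of the code such a staging backend could emit after fusing the two particle loops into a single pass. The `Particle` fields and the two calculations (gravity and position integration) are illustrative assumptions, not part of the API above:

```rust
struct Particle {
    position: f32,
    velocity: f32,
}

// What the backend could emit: foo's and boo's loop bodies fused into
// one traversal over the particle set.
fn fused_step(particles: &mut [Particle], dt: f32) {
    for p in particles.iter_mut() {
        // calculation from foo: integrate gravity
        p.velocity += -9.81 * dt;
        // other calculation from boo: integrate position
        p.position += p.velocity * dt;
    }
}

fn main() {
    let mut particles = vec![Particle { position: 0.0, velocity: 0.0 }];
    fused_step(&mut particles, 0.1);
    println!("{} {}", particles[0].position, particles[0].velocity);
}
```

Fusing the loops keeps each particle in cache for both calculations instead of traversing the whole set twice.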
For the implementation of a meta-staging library, at least two nightly features are needed: procedural macros for functions and specialization.
It has been shown that language virtualization can also be useful for other projects like efficient SQL query builders.
Conclusion
These are the building blocks and research topics for 'Rust + Computer Graphics = ❤️' that I'm currently interested in and that hopefully will be realized in the future. There are other high-priority topics like Graphical User Interfaces, but they have been discussed elsewhere already and cover a larger area of fields :)
Edit #1:
Fix description of rhusics as pointed out by /u/torkleyy.