
Cross-Compiling Your Project in Rust

February 13, 2024


Wrangling software build systems, toolchains, and operating systems is anything but simple. Many platform engineering teams can attest that reliably building software across a variety of environments is difficult.

Here at Tangram Vision, as we gear up to release the cross-platform software development kit (SDK) for our [HiFi 3D Sensor](https://www.tangramvision.com/pre-order), we thought we might dive into cross-compilation, how we build for a variety of platforms without unlimited resources, and examine some of the challenges in trying to ship code to a variety of platforms and CPU architectures.

💡 Disclaimer: This post is going to assume some basic knowledge of Rust, C compilers, and cross-compilation. It may be a bit more involved technically than some of our content, but the nature of cross-compilation lends itself to a minimum level of complexity.

# Motivation

The first question might be: why would you want to cross-compile? Especially when testing and shipping a cross-platform SDK, it makes a lot of sense to just have the hardware to actually build and run the software itself — after all, how many possible combinations of hardware architecture + operating system could that be?

Well, it can be a lot! Even if you only want to ship your software to a handful of the most popular targets, the list of target triplets stacks up fast:

- `x86_64-unknown-linux-gnu`
- `x86_64-unknown-linux-musl`
- `aarch64-unknown-linux-gnu`
- `aarch64-unknown-linux-musl`
- `x86_64-apple-darwin`
- `aarch64-apple-darwin`
- `x86_64-pc-windows-gnu`
- `x86_64-pc-windows-msvc`
- `aarch64-pc-windows-msvc`
- etc.

Having a separate machine for each triplet (virtual or physical) is certainly possible; however, this starts to build a matrix of architecture (x86_64, aarch64, arm64, armv7, etc.), vendor (unknown, pc, apple), and kernel / libc (linux-gnu, linux-musl, darwin, windows-msvc, etc.) that can be costly to maintain and operate across.

## Cross-Compilation for Embedded Development

A common example where cross-compilation makes a lot of sense is when developing for some kind of embedded or compute-limited platform. This is actually a common problem in robotics: oftentimes robotic deployments run on a very different CPU or operating system than the one used for development! A few examples we’ve seen in the wild:

- Developing software on an M1-series (arm64 / arm64e / aarch64) Macbook, but deploying to a robot running x86_64 Linux.
- Developing software on an x86_64 Linux machine, but deploying to an aarch64 microprocessor
   - Likewise, to any aarch64 single-board-computer (SBC) like the Raspberry Pi, NVIDIA Jetson, etc.
- Developing software on an x86_64 Intel Macbook, but deploying to a robot running x86_64 or aarch64 Linux

Having the flexibility to work on whatever machine is most comfortable to you while also being able to deploy to a lower-powered machine (or one on an entirely different OS) is a significant advantage for your team. Especially in the case of deployments to smaller boards, it can be very frustrating to wait for something like a Jetson or Raspberry Pi to build thousands to hundreds of thousands of lines of software. A decent, high-powered, desktop-class machine is much more efficient for such a task.

## Cross-Compilation for CI / CD

Automated builds and deployment are the norm in software today. While manually managing a separate machine for each target triplet is an arduous task, most CI systems today (GitLab CI, GitHub Actions, Travis CI, etc.) can make the automation and orchestration of such machines as easy as adding a config file to your repository.

But even then, the costs for these separate machines are not equal! While many CI/CD systems can automate running code on different architectures and operating systems, it can still be expensive (this time in real-world dollars) to operate. This is because most CI/CD systems rely either on shared runners that their host organization manages, or dispatch to systems that you must run and maintain yourself. In most common configurations, CI systems run on top of x86_64 machines, often running Linux or some open-source operating system.

For many use-cases, this doesn’t pose a problem at all! However, the minute you want a macOS machine in CI, or some kind of semi-powerful aarch64 machine, you’ll run into a wall of small limitations in what’s available or what you can manage. Some limitations that Tangram Vision has discovered, for example:

- [macOS runners on GitLab](https://docs.gitlab.com/ee/ci/runners/saas/macos_saas_runner.html) cannot run custom Docker images
- [Windows runners on GitLab](https://docs.gitlab.com/ee/ci/runners/saas/windows_saas_runner.html) are much slower than Linux runners, and can sometimes introduce breaking changes that make previously-passing pipelines fail.
- ARM and `aarch64`  runners may not be in the same physical data-center as x86 runners
   - Extra fees due to data ingress / egress to share final build artefacts
   - Delayed start-up and processing times compared to common x86 runners
   - Payment to separate providers must be managed

Needless to say, being able to cross-compile code on the cheapest and fastest machine available in the cloud is often desirable. While it might still be necessary to run the final build artefacts on different platforms or machines in your CI pipelines, one can limit the costs associated with running long jobs on these machines by cross-compiling your software on a cheap, fast x86_64 Linux machine prior to shipping the final build artefacts to be tested.

# Cross-Compiling with Rust

Tangram Vision primarily [writes software in Rust](https://www.tangramvision.com/blog/why-rust-for-robots). One nice benefit of Rust is how easy it is to switch targets, and how much tooling is available to make cross-compiled builds consistent. For most Rust-only crates (crates that don’t bind to C / system libraries), you can just run:

```shell
$ cargo build --target=aarch64-unknown-linux-gnu
```

And boom: thanks to `rustc`  being backed by LLVM, you now have everything you need to build for your target (specifically, `aarch64-unknown-linux-gnu`). However, this does come with a handful of trade-offs:

- We still have no way to execute our “cross-compiled” binary or library.
   - This means we can’t run tests on that platform
- This doesn’t work for *every* target triplet. For example, linker options on Apple platforms will be different than your local linker binary might accept.

So while Rust does do pretty much all the work in switching targets right out of the gate, there’s still a little more we need in order to work effectively while cross-compiling our software.

💡 For the remaining sections, we’ll assume, for the sake of simplicity, that we’re cross-compiling from an x86_64 Linux machine to other target triplets.

## Using `cross`

The [`cross` project](https://github.com/cross-rs/cross) aims to be a zero-setup means of cross-compiling and executing a Rust project. By default, Rust operates on top of LLVM, which makes the management of most of the toolchains, code generation, etc. a breeze. Cross adds to this by:

- Running all the above toolchains in an appropriately configured Docker or Podman container, so that dependent C / system libraries are automatically built correctly.
   - One can even bring their own Docker images if needed.
- Adding the ability to execute the final artefacts (e.g. tests) under QEMU emulation for that platform / architecture.

💡 QEMU has some limitations and can sometimes contain bugs for certain architectures or platforms (cross acknowledges as much in their docs); however, this can reduce the burden of needing to procure and configure an environment on several machines when you’re starting out.

What’s more, the difference in building a Rust project with `cargo` vs. with `cross` is as simple as swapping the commands:

```shell
$ cross test --target=aarch64-unknown-linux-gnu
```

So for a “standard” Rust project building with `cargo`, it is as simple as swapping out `cargo` for `cross` for whatever commands you’re used to using to build or test Rust projects.
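As mentioned above, you can also point `cross` at your own Docker image via a `Cross.toml` file at the root of your project. A minimal sketch (the image name below is a hypothetical placeholder for your own registry and tag):

```toml
# Cross.toml: use a custom build image for one target triplet.
# The image name is a placeholder; substitute your own registry/tag.
[target.aarch64-unknown-linux-gnu]
image = "registry.example.com/custom-cross:aarch64"
```

This is handy when your build needs extra system libraries pre-installed that the stock `cross` images don’t carry.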

## What about macOS?

Looking closely at [`cross`’s supported targets list](https://github.com/cross-rs/cross?tab=readme-ov-file#supported-targets), one will notice that while it does support images for Linux, Windows, and a few other architectures, it does not include Docker images for macOS. Cross-compiling for Apple targets can be challenging in the best of times — not because macOS requires vastly different assumptions as an operating system, but because cross-compiling and linking for macOS often requires the system libraries shipped with Xcode.

Unfortunately, Apple’s licensing terms for macOS don’t really allow for distributing a Docker image that runs macOS, nor would something like QEMU be able to emulate Apple’s platform specifically. This leads us back to the original problem prior to `cross`: how does one set up the correct toolchains to generate, compile, and link code for a different platform? Rust *mostly* does the correct thing, as do most `-sys` crates, but Apple targets tend to be problematic because default versions of `clang` on Linux don’t support the same flags that `clang` on Apple does. Some example errors you might see just by trying it right off the bat:

```text
= note: cc: error: unrecognized command-line option '-arch'

= note: clang: warning: argument unused during compilation: '-arch x86_64' [-Wunused-command-line-argument]
        /usr/bin/ld: Error: unable to disambiguate: -dead_strip (did you mean --dead_strip ?)
        clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

The secret here is that one needs a specific version of `clang` built from one of Apple’s supported versions in Xcode. The easiest way we’ve seen this accomplished is through the [`osxcross`](https://github.com/tpoechtrager/osxcross) project on GitHub. This project provides an easy way to generate a toolchain on your local system that can be used for any C, C++, or Rust cross-compilation. The steps are:

1. Download `Xcode_15.2.xip` (or whatever version of the macOS SDK you wish to target) from [Apple’s website](https://developer.apple.com/download/).
2. Install `clang`, `make`, `libssl-devel`, `lzma-devel` and `libxml2-devel` on your local Linux system.
3. Run the `./tools/gen_sdk_package_pbzx.sh` script from osxcross on the `Xcode_15.2.xip`  package you downloaded. This will create two tarballs (`MacOSX14.2.sdk.tar.xz` and `MacOSX14.sdk.tar.xz`).
4. Move the two tarballs created above to the `tarballs/` folder in your local clone of the repo.
5. Run `TARGET_DIR=/usr/local/osxcross SDK_VERSION=14 UNATTENDED=1 ./build.sh` in the repository directory with the appropriate system / root permissions.
6. Add `/usr/local/osxcross/bin` to your `PATH`.
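Condensed into a single shell session, the steps above look roughly like the following (paths and the Xcode / SDK versions match the example above; adjust them for the SDK you actually downloaded):

```shell
$ git clone https://github.com/tpoechtrager/osxcross.git && cd osxcross
$ ./tools/gen_sdk_package_pbzx.sh ~/Downloads/Xcode_15.2.xip
$ mv MacOSX14*.sdk.tar.xz tarballs/
$ sudo TARGET_DIR=/usr/local/osxcross SDK_VERSION=14 UNATTENDED=1 ./build.sh
$ export PATH="/usr/local/osxcross/bin:$PATH"
```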

And voila! This provides a full set of tools in `/usr/local/osxcross/bin` that will allow for linking macOS code on a Linux system.

To finish setting the correct linker for macOS targets permanently, one can add the following snippet to their Cargo configuration (either at `$HOME/.cargo/config.toml` or in a local `.cargo/config.toml` in your project repository):

```toml
[target.x86_64-apple-darwin]
linker = "/usr/local/osxcross/bin/x86_64-apple-darwin23-clang"
ar = "/usr/local/osxcross/bin/x86_64-apple-darwin23-ar"

[target.aarch64-apple-darwin]
linker = "/usr/local/osxcross/bin/aarch64-apple-darwin23-clang"
ar = "/usr/local/osxcross/bin/aarch64-apple-darwin23-ar"
```

This doesn’t allow us to execute tests or compiled binaries magically as `cross` does for its supported platforms, but it does mean that `cargo build` or `cross build` will work correctly and generate the appropriate cross-compiled outputs.
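One quick sanity check that the right kind of artefact was produced is the `file` utility, which reports the architecture and binary format of the output (the crate and artefact names here are placeholders):

```shell
$ cargo build --target=aarch64-apple-darwin
$ file target/aarch64-apple-darwin/debug/libmycrate.dylib
```

If the toolchain is wired up correctly, `file` should report a Mach-O binary for the target architecture rather than an ELF binary for your host.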

## Managing System Libraries

`cross` provides a number of examples and documentation about how to work with common build and packaging systems such as [Meson, vcpkg, or Conan](https://github.com/cross-rs/cross/wiki/Recipes#vcpkg-meson-and-conan). For the most part, if `cross` provides a Docker image for your target triplet, you probably don’t have to worry too much about how it’s going to compile or link to system libraries, as long as they’re present in the Docker image that `cross` is building with.

Some Rust crates do vendor their own versions of the system library they’re wrapping. This can be useful when you want to ensure either:

1. That a specific version of the library is wrapped.
2. That you’re statically linking against all of your dependencies, since you don’t know what kind of system you’re shipping to or how you might ship those dependencies separately from your own code.
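For instance, some `-sys` crates expose vendoring behind a Cargo feature flag; the `openssl` crate’s `vendored` feature is a well-known example (check your own dependency’s documentation for its equivalent):

```toml
# Cargo.toml: build and statically link the wrapped C library from
# source, instead of searching the (cross-compiled) system for it.
[dependencies]
openssl = { version = "0.10", features = ["vendored"] }
```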

This can sometimes mean that you have to do some extra work to make sure that the underlying system library is being built with the correct toolchain and project configuration. Despite `cross` mostly making cross-compilation look like magic, there can still be some rough edges depending on what kinds of crates you’re relying on. Moreover, as we saw above there are no supported Docker images for macOS targets with `cross`, which means that cross-compiling the appropriate system libraries is necessary in addition to the Rust code and Rust-only crates themselves.

One (rather undocumented) edge-case we at Tangram Vision have run into is cross-compiling vendored CMake libraries in crates. While `cross` by default installs `cmake` in its supported Docker images, we needed to solve the problem of setting up the correct compiler and toolchains with `cmake` for macOS targets. Since `osxcross` is installed, it should be easy, right?

Fortunately, we found that it was actually quite easy. First we define a CMake toolchain file, as follows:


```cmake
set(CMAKE_SYSTEM_NAME Darwin)
set(triple x86_64-apple-darwin)

set(CMAKE_C_COMPILER x86_64-apple-darwin23-clang)
set(CMAKE_CXX_COMPILER x86_64-apple-darwin23-clang++)
```
Assuming we placed this at `toolchains/x86_64-apple-darwin.cmake`, we can easily invoke `cargo` to build any vendored system library built with CMake:

```shell
$ export TARGET_TRIPLET=x86_64-apple-darwin
$ CMAKE_TOOLCHAIN_FILE="toolchains/${TARGET_TRIPLET}.cmake" cargo build --target="$TARGET_TRIPLET"
```

As a result, we can directly link to vendored system libraries (that use CMake) without even touching a Docker image.

# Common Pitfalls

## Cargo Config

One trouble with using `cross` is that it will still resolve your Cargo config the same way that `cargo` might under regular circumstances, but this can sometimes lead to hard-to-understand errors. For example, it is common to wrap `rustc` using `sccache`, often set in the `$HOME/.cargo/config.toml` file. This config is “global” in that it doesn’t apply to a single repository. When wrapping `rustc` this can actually be a useful feature, so that the use of something like `sccache` isn’t committed to the repository and thus doesn’t affect development if your environment isn’t set up for it. The file might look like:

```shell
$ less $HOME/.cargo/config.toml
[build]
rustc-wrapper = "sccache"
```

Unfortunately, if you try to run a `cross` command, it will by default run inside of a Docker image which likely does not have `sccache` installed. The obvious way to work around this is to just comment out or remove your global configuration entirely, but `cross` also provides [documentation on enabling `sccache` inside cross-compiled builds](https://github.com/cross-rs/cross/wiki/Recipes#sccache).
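The gist of that recipe is to use a build image that has `sccache` installed, then forward the relevant environment variables into the container via `Cross.toml`. A sketch, assuming your image already contains `sccache`:

```toml
# Cross.toml: forward the wrapper configuration into the build container.
# This only helps if sccache actually exists inside the image being used.
[build.env]
passthrough = ["RUSTC_WRAPPER", "SCCACHE_DIR", "SCCACHE_CACHE_SIZE"]
```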

It is worth reducing the amount of custom configuration living outside of your project, to avoid surprises when your environment differs from that of a fresh user working with your repository. Optimizations such as `sccache` in your global Cargo config can and will lead to confusing error messages and add complexity to the entire build system, so either commit that configuration to the repository itself if you believe it to be of value, or disable it to keep workflows consistent across your team.

## Tools and Toolchain Management

If you’ve installed Rust on your system using [Rustup](https://rustup.rs), re-configuring the setup above can sometimes be a bit of a burden every time you get a new machine (or conversely, every time a new developer joins your project). One easy way to tell Rustup to automatically download and install the supported target triplets for your system is to provide a `rust-toolchain.toml` file in the root of your repository:

```toml
[toolchain]
channel = "1.75.0"
components = ["clippy", "rustfmt", "rust-std"]
targets = [
    # Linux Targets
    # Apple Targets
    # Etc...
]
profile = "default"
```

This provides a clear and easy way to reference what targets are supported, but also makes it easy to always use a consistent toolchain for compiling and cross-compiling to different targets. The Rust ecosystem is often known for moving fast to upgrade to new Rust versions, but changing toolchains constantly can be a much greater hassle when you have to target many different target triplets.

We recommend tracking a `rust-toolchain.toml` file like the above in your repository when working on a project with many different targets. Even if you set `channel = "stable"` to use the most recent “stable” release of Rust, you’ll appreciate not being reminded to `rustup target add $TARGET_TRIPLET` every time you forget to add all your targets back when the stable channel updates.

## Build Scripts (`build.rs`)

If your project uses a `build.rs` script in one of your crates, it can be tempting to try to automate some of the above steps in that script prior to building the crate source itself. However, the use of `build.rs` can sometimes lead to headaches as the script tries to be “too clever” about what steps it needs to take on which platforms.

Remember: cross-compilation settings need to be set at the top level, for all artefacts and dependencies in a given build. Settings in a `build.rs` script apply only to the crate that the `build.rs` lives inside, not to any dependencies or dependent crates in your dependency graph.
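When a build script genuinely does need target-specific logic, the safest pattern is to branch on the `TARGET` environment variable that Cargo sets for build scripts (the triple being built *for*), never on the host. A minimal sketch; the `-dead_strip` flag is just an illustrative Apple-specific linker option:

```rust
// Minimal build.rs sketch: branch on the triple being built *for*,
// not on the host the build happens to run on.
fn target_triple() -> String {
    // Cargo sets TARGET for build scripts; fall back for standalone runs.
    std::env::var("TARGET").unwrap_or_else(|_| "x86_64-unknown-linux-gnu".into())
}

fn main() {
    let target = target_triple();
    // Re-export the triple so the crate can read it via env!("BUILD_TARGET").
    println!("cargo:rustc-env=BUILD_TARGET={target}");
    if target.contains("apple-darwin") {
        // Illustrative: pass an Apple-specific linker flag only when
        // actually targeting an Apple platform.
        println!("cargo:rustc-link-arg=-dead_strip");
    }
}
```

Keeping the branches this small (and keyed on the target triple alone) avoids the “clever” per-platform logic that tends to break under cross-compilation.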

It is generally preferable, especially when dealing with cross-compilation, to focus on simplicity. For this reason, we advocate avoiding complex `build.rs` scripts that try to automate some part of the Cargo build process based on the target triplet or environment. It’s often not worth the trouble!

# Conclusion

We’ve now covered a brief overview of why one might want to cross-compile, as well as some tips and tricks for managing cross-platform development with cross compilation tooling in Rust. This can be a tricky problem to solve, and is one where canned, magical solutions don’t always work or aren’t always problem-free themselves.

Tangram Vision today knows that we live in a world full of many architectures (x86_64, aarch64, i686, you name it), as well as many operating systems (Linux, macOS, etc.). As we build our cross-platform SDK for our [HiFi 3D Sensor](https://www.tangramvision.com/pre-order), we know that cross-compilation is only the first step in being able to support a device that can truly run anywhere. Follow us on LinkedIn and join our mailing list if you’re interested in how we build, deploy, and operate perception sensors over fleets of devices in challenging and diverse environments.
