
Why Async Rust Still Feels Like a Prototype

Huma Shazia · 5 May 2026 at 8:23 pm · 5 min read

Key Takeaways

Source: Hacker News: Best
  • Async Rust generates significantly more MIR code than synchronous equivalents, creating measurable binary bloat
  • The problem is less visible on servers but critical for embedded systems where every byte matters
  • A new Rust Project Goal seeks funding to address async bloat at the compiler level

The Zero-Cost Abstraction That Isn't

Dion, an embedded software engineer at Tweede Golf, has published a detailed critique of async Rust's current state. His argument is blunt: async Rust never left MVP status, and the "zero-cost abstraction" promise remains marketing material for embedded developers.

The core issue is binary bloat. When you write async code, the Rust compiler transforms it into a state machine. This transformation happens during a MIR (Mid-level Intermediate Representation) pass before the code reaches LLVM. The problem? The generated code is far larger than equivalent synchronous code.

Dion provides a concrete example. A simple async function that awaits two calls to another async function generates 360 lines of MIR. The non-async version of the same logic? Just 23 lines.

360 vs 23 lines: MIR output for the async versus sync versions of the same simple function

Why Embedded Developers Feel the Pain First

On desktop and server systems, this bloat gets buried under available memory and compute power. You won't notice an extra few kilobytes when you have gigabytes to spare. Embedded systems are different. Every byte of binary size counts when you're targeting a microcontroller with 256KB of flash.

This creates an awkward situation. Async Rust's executor-agnostic design means you can write code that runs on huge servers and tiny microcontrollers. In theory, this is elegant. In practice, the abstraction costs enough that embedded developers must choose between async's ergonomics and their size constraints.

Inside the State Machine

To understand the bloat, you need to understand what the compiler generates. Each await point in your async function becomes a state in an enum. The compiler also adds states for starting, completion, and panic handling.

Dion breaks down the generated CoroutineLayout for his example:

  • Unresumed: the starting state before first poll
  • Returned: the completion state
  • Panicked: the panic handling state
  • Suspend0: at first await point, storing the first future
  • Suspend1: at second await point, storing the first result and second future

Each of these states must track the data it needs. Each transition between states requires code. The compiler generates all of this, and while later optimization passes trim some fat, the foundation is inherently larger than the synchronous alternative.

Existing Work and Its Limits

Dion acknowledges that parts of the problem are already on the project's radar. An open pull request (PR 135527) addresses futures growing larger than necessary and the copying overhead that introduces. But the PR tackles symptoms, not the root cause.

In a previous blog post, Dion outlined workarounds that developers can apply when writing async code. These help, but they're band-aids. They require developers to understand the compiler's internals well enough to avoid triggering bad code generation. That's not a sustainable solution for a language feature that's supposed to be a first-class citizen.

A Project Goal for Compiler-Level Fixes

Dion has submitted a Rust Project Goal to address async bloat at the compiler level. Project Goals are the Rust project's mechanism for prioritizing significant efforts that require sustained attention and resources.

The goal would translate the workarounds from his first blog post into compiler optimizations. Instead of requiring developers to write their async code in specific ways to avoid bloat, the compiler would generate efficient code by default.

Dion is seeking funding to support this effort. Compiler work is time-intensive, and the Rust project relies on a mix of volunteer contributors and sponsored development to advance these kinds of initiatives.


Logicity's Take

What This Means for Rust Adoption

Rust has positioned itself as the safe systems language. It's found traction in operating systems, browsers, and increasingly in embedded systems. But embedded adoption depends on predictable, minimal resource usage. Async is increasingly the default pattern for concurrent code. If async remains bloated, embedded Rust developers face a fragmented ecosystem where they can't use async libraries without paying a size penalty.

The alternative is maintaining separate sync and async versions of libraries, which fragments the ecosystem and doubles maintenance burden. Nobody wants that outcome.

Frequently Asked Questions

What is async bloat in Rust?

Async bloat refers to the larger-than-expected binary size and MIR code generated when compiling async Rust code compared to equivalent synchronous code.

Why does async Rust generate more code than sync Rust?

The compiler transforms async functions into state machines with multiple variants for each await point, plus states for start, completion, and panic handling. This transformation adds significant overhead.

Does async bloat affect all Rust projects?

All async Rust projects experience some bloat, but it's most problematic for embedded systems where memory and binary size are severely constrained.

What is a Rust Project Goal?

Project Goals are the Rust project's formal mechanism for prioritizing significant efforts that need sustained attention, resources, and coordination across the project.



Source: Hacker News: Best / Dion


Huma Shazia

Senior AI & Tech Writer
