Farewell to Portable Assembly: I've Been Running Swift on MCUs for Seven Years

By 2025, Swift’s progress in the embedded field is officially on the right track. But although Swift was born with a cross-platform vision, it still has a long way to go before it can crack the tough nut that is embedded development and claim a meaningful share of it.

In fact, long before Apple officially introduced support for embedded systems, Andy Liu and his MadMachine team had already been deep-diving into this field for years.

Unlike the mainstream focus on “squeezing every last drop of resource out of constrained hardware,” Andy keenly foresaw the rapid increase in hardware performance and the continuous drop in costs. He believed that in increasingly complex development scenarios, Swift’s modern language features would offer a massive advantage. Consequently, when he began building his own hardware and Swift implementation years ago, he chose a philosophy and technical roadmap distinct from the community mainstream.

I have specially invited Andy to share his hands-on experience in Swift embedded development over the past few years. This serves both as a valuable historical record and a different perspective for the community to consider.

Over the past two years, Andy has traveled to many countries as a digital nomad. Recently, he has come up with new ideas for smart hardware; I look forward to seeing his new works soon.

Hi everyone. I’m Andy Liu. I’m delighted to accept Fatbobman’s invitation to look back at the journey my team and I have taken in Swift embedded development.

The following content is specifically targeted at Microcontrollers (MCUs)—hardware systems without an MMU (Memory Management Unit) that cannot run a conventional Linux OS. We will primarily focus on the software logic.

Why Use a New Language for Embedded Development?

Over the past decade, I’ve tinkered with all sorts of hardware geek projects: from bare metal and RTOS to Linux-based embedded hardware. Without exception, they were all developed in C. C is often called “Portable Assembly,” a name that speaks to its extreme proximity to hardware. An experienced embedded developer can look at a snippet of C code and visualize the corresponding assembly structure. For hardware products where cost control is squeezed down to every single byte (of RAM and ROM), C or C-mixed-with-assembly is almost the only viable development method.

For pure application developers, C has a steep entry barrier if they lack a clear understanding of low-level hardware systems—particularly memory usage and management. I believe this is a “feature” (or perhaps the “fault”) of C itself. If you understand the underlying hardware, C is an incredibly simple and clear language; after all, The C Programming Language managed to explain the entire ruleset in just over 200 pages.

If C is so concise, why am I dissatisfied? I’ve self-reflected and wondered whether it was just my own issue. Despite nearly 20 years of working with C, I still can’t remember the operator-precedence rules that are staples of exams; I resentfully add extra parentheses just to make sure my code doesn’t break. Furthermore, paradigms that have been mainstream since the 1980s, such as Object-Oriented Programming and the MVC and MVVM architectures, remain out of reach in standard C development.

Some might argue that I simply haven’t mastered pointers, the essence of C. True: a developer fluent in tongue-twisters like “pointers to pointers to pointers” can manually implement any high-level language feature in C, and there is no shortage of books teaching you how to construct modern software architectures that way. I think those books are great, but seeing pointers flying everywhere still makes my heart skip a beat. In safety-critical fields (like automotive), handwriting C code is even discouraged or outright banned in favor of Model-Based Design built on domain-specific languages: engineers model algorithms in software like MATLAB, the final C code is auto-generated, and engineers are usually forbidden from modifying it manually.

Take this snippet of less than 10 lines, for example; I stared at it for three minutes just to find the issue:

C
#include <stdio.h>

int main(void)
{
    int x = 10;             /* Assign a value to variable x */
    int *p = &x;            /* Take the address of variable x */
    x = *p++ + ++*p;        /* Do some weird stuff */
    printf("x = %d\n", x);  /* Print the result */
    return 0;
}

Result: x = 10 (Note: this code invokes undefined behavior. *p++ advances the pointer while ++*p increments what it points to, and the two operations are unsequenced relative to each other, so the printed value depends entirely on the specific compiler implementation.)
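For contrast, here is a minimal Swift sketch of the same territory (my illustration, not MadMachine code): Swift removed the ++ and -- operators back in Swift 3, and pointer arithmetic exists only behind the explicit Unsafe APIs, so this class of puzzle cannot be written by accident.

Swift
// Swift has no ++, and pointers never hide inside ordinary arithmetic.
var x = 10
withUnsafeMutablePointer(to: &x) { p in
    // Advancing a pointer must be spelled out, so an out-of-bounds
    // access is visible right at the call site:
    let past = p.advanced(by: 1)  // the address just past x
    _ = past                      // dereferencing it would be plainly unsafe
}
print("x = \(x)")  // x = 10; nothing mutates x behind your back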

Gripes aside, I must admit that in fields where hardware costs are razor-thin and task logic is relatively simple, C will likely remain the primary solution for decades to come. In fields requiring strong user interaction and the reuse of existing application ecosystems, open-source Android is one of the few viable options (e.g., Smart TVs, In-Vehicle Infotainment).

However, even in fields where ecosystem reuse isn’t required, if the software complexity is high, moving to Linux, Android, or even Windows seems like the only alternative. This leads to an exponential increase in hardware cost, system complexity (including the OS and runtime environment), and the difficulty of ensuring long-term safe operation.

Starting in 2018, many traditional MCU manufacturers began releasing extremely powerful products. MCU frequencies jumped from dozens of MHz to hundreds of MHz or even GHz. Thinking back to the PC I played with in middle school in 1999, a top-tier Intel Pentium was only 500MHz—and PC programming was already incredibly diverse back then. Despite having such powerful MCUs, we were still forced to operate them through C. The hardware foundation for complex applications was ready, but a suitable universal software development framework was nowhere to be found. Thus, I began my journey of “reinventing the wheel.”

Why Choose Swift?

Starting in 2018, I began paying close attention to modern MCU development methods. I realized I wasn’t alone; many embedded developers felt the drawbacks of traditional methods and were attempting various innovations. A wave of new solutions emerged: Arduino (C++), MicroPython (Python), TinyGo (Go), Espruino (JavaScript), Meadow (.NET/C#), and more.

Let’s look at the pros and cons of the most popular ones, Arduino and MicroPython:

Arduino

  • Pros: It lowered the entry barrier for embedded hardware development to the absolute minimum, acting as the cornerstone of the electronic maker movement. It integrates driver code and examples into a minimalist IDE. Thanks to its mature community, almost every hardware module supplier (Adafruit, Sparkfun, Seeedstudio) provides ready-to-use Arduino drivers and tutorials.
  • Cons: It is still based on C++. Once the code scale expands, all the traditional C++ pitfalls become even more apparent. Many developers use Arduino to test sensors or prototypes, but as soon as the project reaches a certain scale, they usually revert to traditional development paths.

MicroPython

  • Pros: Aside from Arduino, MicroPython is the most famous open-source language project in the maker community. It inherits Python’s simplicity and clarity—you can control hardware with just a text editor and a few lines of code.
  • Cons: Interpreted execution leads to extremely low efficiency. Parts requiring high performance must still be implemented in C by the hardware vendor, leaving only a Python API layer for the user. Look at Adafruit’s CircuitPython: 88% of the code is still C. Low-level developers are still stuck with C, and maintenance costs soar over time (just look at the overall complexity of CircuitPython). The Python API is primarily used for education.

After comparing various options, I identified the criteria a programming language must meet to have the potential to become an industrial-grade embedded language:

  • A compiled, statically-typed universal language with no VM (compiled directly to machine code).
  • No Garbage Collection (GC) to ensure consistent and predictable runtime states.
  • Guaranteed safety while maintaining execution efficiency (capable of system-level development).
  • Backed by a stable maintenance team with little risk of disappearing overnight.

At the time, only two languages met these constraints: Swift (backed by Apple) and Rust (backed by Mozilla).

Their visions were remarkably similar back then. Swift’s syntax was incredibly elegant and easy to learn. Its downside was the perception that it was strictly an “Apple language” rather than a universal one (a hurdle Swift still struggles to clear today; few developers outside the Apple ecosystem truly understand its vision). There was almost no one trying Swift in the embedded space. Rust was the opposite: its Mozilla background was less prominent, its community was highly active, and people were already experimenting with it on MCUs (e.g., Rust Embedded). Its downside was the notoriously steep learning curve and cryptic syntax that scared many away.

After much deliberation, as someone who couldn’t even remember the full rules of C syntax, I was naturally drawn to the elegance and modern features of Swift.

The Journey

Phase 1: Modifying the Compiler

The compiler is a complex piece of software that developers use every day but which remains somewhat mysterious. Its essence is translation: turning one set of symbols (source code) into another (assembly, bytecode, etc.). Traditional toolchains are typically built for a single target; with GCC, for example, compiling for both ARM and x64 means installing two separate GCC toolchains. Modern languages, however, are mostly built on the LLVM framework, Clang, Swift, and Rust included. LLVM maintains a standard Intermediate Representation (“LLVM IR”). Language developers only need to translate their code into IR (the “Frontend”), while LLVM and hardware teams maintain “Backends” that translate IR into machine code for the various architectures.

[Figure: LLVM architecture]
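To make the division of labor concrete, here is a hand-simplified sketch (real swiftc output carries mangled names and overflow checks, so treat this as illustration only):

Swift
// The frontend's job in miniature. Given this Swift function:
func add(_ a: Int32, _ b: Int32) -> Int32 {
    a &+ b  // wrapping add, which maps to a single IR instruction
}
// swiftc -emit-ir hands LLVM roughly the following IR (simplified):
//
//   define i32 @add(i32 %a, i32 %b) {
//     %sum = add i32 %a, %b
//     ret i32 %sum
//   }
//
// An ARM backend lowers that IR to Cortex assembly; an x86 backend
// lowers the very same IR to x86. The frontend never needs to know.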

The MCU I chose was built on the ARM Cortex architecture, which already had a mature LLVM backend. My task was simply to build the missing link: Swift Frontend -> LLVM IR -> Cortex Assembly. Thanks to LLVM’s modern architecture, it took me about two to three months and fewer than 300 modified lines of code to verify the entire toolchain.

Specifically:

Component                      Implementation   Modification
Swift Compiler (Front/Back)    C++              < 100 lines
Swift Standard Library         Swift            0 lines
Swift Runtime                  C++              < 200 lines

The modifications to the compiler itself were minimal. More time was spent on the Swift Runtime, which handles low-level tasks like ARC (Automatic Reference Counting), Metadata management, and memory allocation. Because of how Metadata worked, the Swift compiler at the time could not strip unused code during the static linking phase: unused code (mainly from the Standard Library) was bundled into the final binary. Even a simple print function resulted in a binary larger than 2MB, the Flash limit of most mid-range MCUs. (When Apple officially announced Embedded Swift at WWDC 2024, they did massive work on Metadata to shrink binary sizes, though some Swift features had to be sacrificed in “embedded mode”.)
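To illustrate that trade-off, here is a small sketch based on the published Embedded Swift restrictions (exact diagnostics depend on the toolchain): existential types need runtime metadata and are rejected, while generics specialize away at compile time and cost nothing.

Swift
protocol Sensor {
    func read() -> Int
}

struct Thermometer: Sensor {
    func read() -> Int { 42 }
}

// Existentials require runtime metadata, so embedded mode rejects them:
// let anySensor: any Sensor = Thermometer()  // error under Embedded Swift

// Generics are fine: they are fully specialized at compile time,
// so no metadata has to survive into the binary.
func sample<S: Sensor>(_ sensor: S) -> Int {
    sensor.read()
}

let reading = sample(Thermometer())  // compiles to a direct call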

[Figure: In February 2019, the modified Swift compiler generated its first MCU-ready code.]

[Figure: The development environment at the time, though simple, was exciting.]

Phase 2: Building a “Swift Version” of Arduino

Once the Swift toolchain was running on the MCU, it felt like opening a door to a new world. But that world was a barren desert. No one knew what kind of fruits this new world could bear.

To lower the barrier to entry and push the hardware to market, a clear and easy-to-understand Hardware Abstraction Layer (HAL) API was essential. Knowing my own limits in terms of energy and resources, I never intended to build everything from scratch. During this period, I found two partners. We looked at almost every modern embedded framework and RTOS and eventually chose Zephyr, a rising star, as our underlying system to shield us from the differences between MCU vendors. This allowed us to avoid wrestling with tedious chip register details and eased future porting efforts.

Referencing the APIs of Arduino, MicroPython, and ARM Mbed, we launched our own SwiftIO framework. It abstracts common chip-level peripheral operations into standard Swift APIs.
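To give a feel for the result, here is the classic blink program, written from memory of the MadExamples demos (pin identifiers depend on the specific board):

Swift
import SwiftIO   // the hardware-abstraction APIs
import MadBoard  // pin identifiers for the specific board

// Drive the onboard blue LED.
let led = DigitalOut(Id.BLUE)

while true {
    led.toggle()    // flip the output level
    sleep(ms: 500)  // SwiftIO's blocking millisecond delay
}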

[Figure: SwiftIO Maker Kit]

By early 2020, our first hardware, the SwiftIO Board, and the SwiftIO Maker Kit were ready.

Phase 3: Pandemic and “Building Behind Closed Doors”

Hardware development usually follows a cycle: Prototype (debugging) -> Small Batch (production tuning) -> Mass Production. In June 2020, our product launched, and by July, our small batch of several hundred units had sold out. But then the unthinkable happened: the global supply chain crisis meant we couldn’t procure core components. That halt lasted three years.

During the pandemic, while waiting for the supply chain to recover, we stayed busy. One partner focused on optimizing the Zephyr integration, keeping up with new iterations and improving compatibility. Another partner continuously added driver code for various hardware modules to our MadDrivers and MadExamples libraries. We also completed all documentation, tutorials, and case videos, integrating them into our docs page.
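For a sense of what those drivers look like in use, here is a sketch that reads a temperature sensor over I2C. The driver and method names are written from memory of the MadDrivers repo and may differ in detail:

Swift
import SwiftIO
import MadBoard
import SHT3x  // a humidity/temperature sensor driver from MadDrivers

let i2c = I2C(Id.I2C0)   // the bus the sensor module is wired to
let sensor = SHT3x(i2c)

while true {
    let celsius = sensor.readCelsius()
    print("Temperature: \(celsius) C")
    sleep(ms: 1000)
}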

This state continued until April 2024, when our new development boards and kits finally returned to the shelves. Looking back, while the pandemic was an objective factor, this kind of long-term “optimization” without market feedback is a trap every startup team must strive to avoid.

Fatbobman’s Note

As Andy mentioned, MadMachine finally resumed supply in April 2024. I was among the first to receive the new SwiftIO Playground Kit. After experiencing it firsthand, I wrote a post titled Developing Embedded Applications with Swift.

A New Milestone

In June 2024, at WWDC, the official Swift team announced native support for embedded development. The community reaction was electric; it was a firm step toward making good on Swift’s billing as a “high-performance system programming language.”

Over a year later, I personally feel that the official Swift team has also entered a phase of “optimization without feedback” in the embedded direction. While the official team has the resources to toil silently for years, I look at the progress of Swift on Windows and the recent stirrings on Android and can’t help but feel: no matter how perfect you build a tool, long-term operation without market (developer) feedback is a massive drain on morale both inside and outside the team. I think this is why “only Swift developers know Swift is a universal language.”

Gripes aside, the official support is a huge net positive for our project. Previously, whenever macOS or Xcode updated, I would get user feedback that our custom compiler had broken. With official support, we can finally stop chasing compiler versions and shift our focus back to creating real-world use cases and products. Our original intention remains the same: to use the elegant language of Swift to do truly interesting things in the embedded field.

About the Author

Andy Liu is the creator of the MadMachine project. He loves tinkering with interesting hardware and software, from building prototypes from scratch to chasing performance and obsessing over details. He enjoys the process of turning quirky ideas into reality. If you have a strange idea, an unusual requirement, or want to discuss a technical roadmap, feel free to reach out to him.
