Issue #124

The Spring Festival Gala, Robots, AI, and LLMs

Cover for Weekly Issue 124

As a television program with over a billion viewers, China Central Television's Spring Festival Gala is undoubtedly an exceptional showcase platform. In this year's Gala, multiple Chinese robotics manufacturers presented their products in various performances, among which Unitree's humanoid robots drew the most attention. During the show, several models of humanoid robot executed a series of highly complex martial arts and dynamic movements. Compared to the more static, stationary displays of last year, the complexity and stability of their movements have taken a significant leap—progress that has drawn attention from global media.

Following the Gala, discussions on social media showed a clear divide. Alongside the amazement at the technological progress, there was no shortage of skeptical voices dismissing the performance as “pre-programmed,” “lacking AI,” or “impractical.” To a certain extent, this reflects the public’s underestimation of the sheer complexity of robotics—especially a lack of awareness regarding the difficulties of motion control, real-time feedback systems, and system-level integration.

One point needs clarification: pre-trained does not equal “record-and-playback.” It is true that humanoid robots currently employ highly orchestrated movement sequences in such performances, but this shares a similar logic with the training of human dancers or athletes. Extensive offline training and rehearsal form the foundation of the movements, but during actual execution, the “body” must still rely on dynamic balance and real-time corrections to cope with real-world disturbances. It is precisely this fault tolerance and real-time repair capability that allows humanoid robots—a naturally unstable bipedal system—to pull off highly dynamic, continuous movements.

Meanwhile, the explosion of Large Language Models (LLMs) in recent years has led many to mistakenly equate LLMs with AI as a whole. In reality, AI, a field with decades of history, encompasses far more than just language understanding. Especially when interacting with the real physical world, specialized models and algorithms—computer vision, path planning, motion control, and reinforcement learning—still see far wider use in industrial and physical systems than LLMs do. In the realm of robotics, the true ceiling of a system's capability is typically set by its perception systems, control systems, and low-latency feedback algorithms, rather than by its language reasoning abilities.

Even if stronger “cognitive abilities” are introduced to humanoid robots in the future, the better path may not necessarily be directly plugging in an LLM. Instead, it will likely involve building World Models that inherently understand the laws of physics, paired with control systems capable of low-latency responses—two areas that happen to be inherent weaknesses of LLMs. The challenges of Embodied AI are fundamentally different from pure text reasoning.

As for the issue of “practicality,” kung fu or dancing indeed hardly correspond directly to real-world job scenarios. However, it is precisely these movements—which demand extreme balance, coordination, and dynamic response—that provide the perfect validation ground for highly complex and unstable systems like humanoid robots. They function as engineering stress tests, demonstrating the maturity of mechanical design, electronic control, and algorithmic integration, rather than proving short-term commercial viability.

Personally, I remain cautious about the future market size for humanoid robots. There is often a significant chasm between technological breakthroughs and widespread commercial adoption. Nevertheless, judging by the magnitude of progress showcased at this year's Gala, it is reasonable to conclude that within the next decade, the integration of robots or smart machines into our daily work and living environments will no longer be mere sci-fi imagination. Whether you like "robots" or not, the trajectory of technological evolution is unmistakably clear: we will eventually need to coexist with them.

As for the apocalyptic scenario of “robots enslaving humanity,” I’m not worried about that for now. My more realistic concern is this: if they encounter a bug at work and swing a punch at me, I genuinely couldn’t take the hit.

Recent Recommendations

How to Migrate to @Observable Without Breaking Your App

As more apps raise their minimum deployment target to iOS 17, @Observable is replacing ObservableObject as the new state management foundation. However, when a project has deeply relied on ObservableObject + @Published, migration is far from a simple macro substitution. Pawel Kozielecki draws on a real-world migration experience to systematically walk through the correct usage of property wrappers in the new system — using @State for lifecycle management, @Bindable for two-way bindings, and plain properties for read-only access — while highlighting easily overlooked details such as @ObservationIgnored and computed property tracking blind spots. The real challenge of migration has never been syntax; it's truly understanding who owns the view model's lifecycle.
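As a minimal sketch of that ownership mapping — the type and property names here are hypothetical, not taken from Pawel's article:

```swift
import Observation

// Hypothetical view model illustrating the migration described above.
// Before: `class SettingsModel: ObservableObject { @Published var username = "" }`
// After: the @Observable macro tracks stored properties automatically.
@Observable
final class SettingsModel {
    var username = ""            // tracked; no @Published needed
    var isDirty = false          // tracked

    @ObservationIgnored
    var cachedAvatarPath: String?  // explicitly excluded from observation

    // Computed properties are tracked only through the stored
    // properties they read -- a common migration blind spot.
    var displayName: String {
        username.isEmpty ? "Anonymous" : username
    }
}

// In SwiftUI (shown as comments, since this sketch omits the UI layer):
//   @State    private var model = SettingsModel()  // the view owns the lifecycle
//   @Bindable var model: SettingsModel             // two-way bindings
//   let model: SettingsModel                       // read-only access
```

The three wrapper roles map directly to the ownership question the article raises: @State when the view creates and owns the model, @Bindable when it needs to write back, and a plain property when it only reads.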


Testing with Event Streams

Although Swift Testing offers a rich assertion API, in practice you’ll find there’s no single tool that fully corresponds to XCTest’s ability to verify that multiple callbacks are triggered in order (fulfillment + enforceOrder). confirmation requires nesting and cannot directly validate trigger order. Matt Massicotte proposes an approach that better fits Swift’s concurrency model: using AsyncStream to collect events, wrapped in a lightweight EventStream type — yielding event identifiers when callbacks fire, then calling collect at the end to retrieve the full event sequence for comparison against an expected array. As for why not just use a plain array, Matt provides a compelling answer: when @Sendable constraints or inconsistent actor isolation are involved, writing directly to an array creates concurrency safety issues, whereas the AsyncStream-based approach naturally conforms to the concurrency model.


If You’re Not Versioning Your SwiftData Schema, You’re Gambling

SwiftData’s declarative syntax and automatic migration capabilities make it easy to fall into the trap of thinking “the framework will handle everything.” The reality is that once your model structure changes — adding fields, renaming, adjusting relationships — without an explicit schema version and migration plan, you’re left relying on implicit inference. When that inference fails, the result is rarely a graceful migration; more often it’s crashes, data loss, or an app that won’t launch. Mohammad Azam offers direct, pragmatic advice: explicitly declare Schema versions; prepare migration paths for future structural changes; and treat migration design as part of model design, not an afterthought.
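A declaration-only sketch of what explicit versioning can look like (Apple platforms, iOS 17+; all type names here are hypothetical, not from the article):

```swift
import Foundation
import SwiftData

// V1: the shipped schema.
enum NotesSchemaV1: VersionedSchema {
    static var versionIdentifier = Schema.Version(1, 0, 0)
    static var models: [any PersistentModel.Type] { [Note.self] }

    @Model
    final class Note {
        var title: String
        init(title: String) { self.title = title }
    }
}

// V2: adds a field -- a structural change that gets its own version.
enum NotesSchemaV2: VersionedSchema {
    static var versionIdentifier = Schema.Version(2, 0, 0)
    static var models: [any PersistentModel.Type] { [Note.self] }

    @Model
    final class Note {
        var title: String
        var createdAt: Date
        init(title: String, createdAt: Date = .now) {
            self.title = title
            self.createdAt = createdAt
        }
    }
}

// The migration plan makes the V1 -> V2 path explicit instead of
// relying on implicit inference.
enum NotesMigrationPlan: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [NotesSchemaV1.self, NotesSchemaV2.self]
    }
    static var stages: [MigrationStage] {
        // Adding a field with a default value can be a lightweight stage;
        // renames or relationship changes would need a custom stage.
        [.lightweight(fromVersion: NotesSchemaV1.self,
                      toVersion: NotesSchemaV2.self)]
    }
}
```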

This advice applies equally to Core Data. Even when a model is fully compatible with lightweight migration, creating a corresponding model version file for each release (whenever structural changes occur) not only helps track the model’s evolution but also enables clear, controlled rollback when issues arise. Using explicit versioning to govern model evolution is fundamentally about establishing safety boundaries for long-term maintenance.


How to build a simple CLI tool using Swift

There’s an interesting phenomenon in the age of AI Coding: CLI tools are experiencing a renaissance — more and more developers are building CLI tools to power their MCP and Agent workflows. Natascha Fadeeva walks through how to build structured command-line tools using Swift Package Manager and Apple’s official ArgumentParser library: defining root commands and subcommands, handling async network requests, and compiling to a standalone distributable binary. For iOS developers already fluent in Swift, this path is more natural than maintaining a bash or Python script, and easier to evolve alongside the project.
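Natascha's article declares this structure with ArgumentParser's ParsableCommand types; as a dependency-free illustration of the same root-command/subcommand shape, using only the standard library (subcommand names are hypothetical):

```swift
// Stdlib-only sketch of a root command dispatching to subcommands.
// ArgumentParser expresses the same shape declaratively with
// ParsableCommand and CommandConfiguration(subcommands:).
func run(arguments: [String]) -> String {
    // arguments[0] is the executable path; the subcommand follows.
    guard arguments.count > 1 else {
        return "usage: tool <greet|version> [name]"
    }
    switch arguments[1] {
    case "greet":
        // Optional positional argument with a default.
        let name = arguments.count > 2 ? arguments[2] : "world"
        return "Hello, \(name)!"
    case "version":
        return "1.0.0"
    default:
        return "unknown subcommand: \(arguments[1])"
    }
}

print(run(arguments: CommandLine.arguments))
```

The payoff of ArgumentParser over this hand-rolled switch is free `--help` generation, typed option parsing, and validation — which is exactly why it scales better than a bash or Python script as the tool grows.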


As an experienced developer, Joseph Heck observes that as AI becomes capable of executing tasks, generating code, and driving changes autonomously, the developer’s role shifts from “line-by-line implementer” to “path navigator.” The truly scarce skill is no longer coding speed, but navigation — how developers maintain their sense of direction in complex codebases and multi-agent environments. Joseph offers several practical suggestions: always include “ask me about anything ambiguous” in your prompts; have the agent draft a plan and get your approval before implementation; provide deterministic feedback loops (unit tests, compiler errors) that allow the agent to self-correct; and distill frequently reused instructions into Skill files.

Heck avoids amplifying the “AI will disrupt developers” narrative, instead emphasizing a more grounded reality: agentic coding amplifies existing engineering capabilities. If you’re already good at modular design and abstraction, AI will accelerate you. If your sense of boundaries is fuzzy, AI will just create chaos faster.


Setting up a delivery pipeline for your agentic iOS projects

When code generation, modification, and refactoring begin to be driven by agents, is the traditional CI/CD pipeline still sufficient? Donny Wals opens with a real experience: his app crashed mid-workout at the gym, he handed the crash report to an agent for analysis, and by the time he finished training, a PR was waiting; shortly after he merged it, a TestFlight build went out. Around this experience, he systematically outlines how to build a reliable delivery pipeline for agentic iOS projects — one that keeps automated changes controllable, verifiable, and releasable.

The article’s focus isn’t on any specific tool, but on pipeline design itself. Donny emphasizes that code generated by agents is fundamentally “a change that hasn’t been reviewed line by line,” which makes clear quality gates all the more important: automated testing, continuous integration, and the release pipeline must bear ultimate responsibility for delivery. Agents can significantly accelerate implementation, but engineering discipline cannot be relaxed in kind — when velocity increases, control mechanisms become even more critical.


Tracking Token Usage in Foundation Models

Apple’s Foundation Models run on-device with a context window of just 4,096 tokens — once exceeded, the conversation cannot continue. iOS 26.4 introduces token usage tracking APIs to help developers monitor context consumption in real time. Artem Novichkov covers four key metrics: total model context capacity (contextSize), token consumption for Instructions, consumption for individual Prompts, and cumulative usage for the full conversation Transcript. The article also highlights an easily overlooked detail: when a Tool is introduced, its name, description, and argument schema are serialized and counted toward the token budget — the same Instructions jump from 16 to 79 tokens once a Tool is attached. For on-device models, token observability will become essential infrastructure for optimizing the user experience.

Tools

App Store Connect CLI

App Store Connect CLI is an unofficial App Store Connect command-line tool developed by Rudrank Riyam, covering the full release pipeline: TestFlight management, build uploads, code signing, screenshot automation, localization sync, app review submission, notarization, and financial report downloads. The tool was designed from the ground up with agent scenarios in mind and includes dedicated documentation for agent-oriented workflows. If your release pipeline centers on TestFlight, metadata, submission, signing, and CI automation, ASC is worth considering as a lightweight alternative to fastlane.


GRDB 7.10.0: Android, Linux, and Windows Support

GRDB 7.10.0 is a milestone release: it formally introduces support for Android, Linux, and Windows, and adds the ability to use SQLCipher (encrypted databases) via Swift Package Manager — two long-awaited features from the community. This marks a meaningful evolution for Swift’s most mature SQLite wrapper, from an Apple-platform tool into a truly cross-platform data layer solution.

Gwendal Roué notes in the release announcement that because Xcode does not yet support package traits, SwiftPM will still download unused dependencies; until this is resolved, SQLCipher support will continue to require a fork.


Swift System Metrics

Swift System Metrics provides Swift applications — particularly server-side projects — with unified system-level metrics collection: CPU utilization, memory usage, file descriptor counts, and more, exposed through a standardized Metrics interface that integrates with existing monitoring systems such as Prometheus. It is not a standalone monitoring system, but rather an infrastructure component driven by the Swift Server Workgroup, designed to align with the Swift Metrics ecosystem and bring system resource metrics into the same observability stack as application-level metrics. The 1.0 release signals API stability and production readiness. For teams building Swift backend services or investing in Swift observability, this is a foundational piece of the puzzle.

Related Weekly

Subscribe to Fatbobman

Weekly Swift & SwiftUI highlights. Join developers.

Subscribe Now