Issue #125

OpenClaw Is Impressive — I Just Don’t Really Need It (Yet)


On the morning of March 1st, I received a message from OpenClaw. It was part of a scheduled task I had set up after installing it: on the first day of every month, it sends me a summary of the main tasks it executed on my behalf over the past month.

Reading through that rather sparse report, I found myself pausing. At this stage, I don’t think I actually need a personal AI agent. To be honest, if it hadn’t sent that message yesterday, I might have almost forgotten it was even there.

After the OpenClaw project name was finalized, I still couldn’t quite resist the stream of flashy demos filling my social media timeline. I dug out an idle Mac mini M4 and followed the setup guides to get everything running. During the first few days, I spent some time studying how others were using it, trying to see whether it could naturally fit into my own life and workflow.

Eventually, though, I came to realize that—at least given my current workload and habits—many traditional tools that have already matured are more than sufficient for my needs. Even when I do need an agent loop on mobile devices, using more focused, single-purpose tools often results in less configuration overhead and a lower cognitive burden overall.

A year ago, few people would have expected that CLI tools would see a resurgence at a time when GUIs have long been the default. In a similar way, before OpenClaw gained traction, hardly anyone anticipated the sudden emergence of so many OpenClaw-like projects. I have little doubt that most people will eventually have their own personal AI assistants. OpenClaw presents one possible vision of that future—very much from a hacker’s perspective. But what form these assistants will ultimately take, and how they will balance privacy, security, and efficiency, remains an open question.

From one angle, having an intelligent assistant does seem pretty cool. Still, life without one feels just as calm and comfortable. For now, I’ll let 🦞 rest quietly on my mini—until the day I truly need to wake it up.

This Week’s Sponsor

Notepad.exe — A Notepad for Developers

Notepad.exe is a lightweight coding scratchpad for macOS, built for experiments, snippets, and rapid prototyping. Open it, write your code, and run it — no project setup required.

🚀 Download for macOS →

Recent Recommendations

Building Lists: A High-Performance Diffable Data Source Framework

After integrating UICollectionViewDiffableDataSource with TCA, Hesham Salman noticed more than 100 UI stalls per minute. A deeper investigation revealed that NSDiffableDataSourceSnapshot relies heavily on Objective-C’s NSOrderedSet under the hood. In a reactive architecture where state updates frequently, repeated snapshot reconstruction incurs substantial hashing costs and Objective-C bridging overhead. To address this, Hesham built a pure Swift alternative, ListKit. By leveraging ContiguousArray, a two-level diff algorithm, and lazily constructed reverse indices, it makes snapshot construction several hundred times faster.

In imperative frameworks, developers can precisely control refresh timing, so diff performance issues may not be immediately obvious. But as reactive programming increasingly makes its way into UIKit, traditional assumptions and techniques must adapt to new refresh patterns. Yesterday’s “best practice APIs” can quietly become today’s performance bottlenecks.
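The reverse-index idea mentioned above can be sketched roughly like this (a hedged illustration only; this is not ListKit’s actual code, and all type and member names are invented): items live in a ContiguousArray, and an identifier-to-position map is built lazily so that lookups during diffing are O(1) dictionary hits rather than linear scans or NSOrderedSet bridging.

```swift
import Foundation

// Illustrative sketch — not ListKit's real implementation.
struct SectionSnapshot<ID: Hashable> {
    private(set) var identifiers: ContiguousArray<ID> = []
    private var reverseIndex: [ID: Int]?  // built on first lookup

    init() {}

    mutating func append(contentsOf ids: [ID]) {
        identifiers.append(contentsOf: ids)
        reverseIndex = nil  // invalidate; rebuilt lazily on next lookup
    }

    mutating func index(of id: ID) -> Int? {
        if reverseIndex == nil {
            // One O(n) pass; afterwards every lookup is O(1).
            var map = [ID: Int](minimumCapacity: identifiers.count)
            for (position, identifier) in identifiers.enumerated() {
                map[identifier] = position
            }
            reverseIndex = map
        }
        return reverseIndex?[id]
    }
}
```

Building the index lazily means a burst of mutations pays the indexing cost once, on the first lookup, rather than after every append.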


SwiftUI Charts caused major stutter in my app — replacing it with Path fixed everything

There is no doubt that Swift Charts provides a clean declarative API and polished visual output, making it a go-to choice for many developers. Even years after its release, however, it can struggle with large datasets and high-frequency interactions: its reliance on numerous fine-grained view components amplifies SwiftUI’s diffing and layout costs, resulting in noticeable performance degradation. After reassessing his needs, Oscar Berggren replaced Charts with a custom Shape built on Path, completely eliminating drag-related stutters.

As highlighted in the ListKit article above, when high-frequency state updates (such as gesture-driven refreshes) combine with heavyweight view construction (hundreds of LineMark instances), performance bottlenecks become almost inevitable. In such cases, stepping back to lower-level drawing APIs like Path often yields more predictable and stable results.
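The general technique can be sketched as follows (a hedged illustration, not Oscar’s actual code): instead of hundreds of per-point chart views, a single Shape draws the entire series in one path(in:) pass, so a gesture-driven update re-renders only one view.

```swift
import SwiftUI

// Illustrative sketch: one Shape replaces many per-mark views.
struct LineChartShape: Shape {
    var values: [Double]  // assumed normalized to the range 0...1

    func path(in rect: CGRect) -> Path {
        var path = Path()
        guard values.count > 1 else { return path }
        let stepX = rect.width / CGFloat(values.count - 1)
        for (index, value) in values.enumerated() {
            let point = CGPoint(
                x: CGFloat(index) * stepX,
                y: rect.height * (1 - CGFloat(value))  // invert: y grows downward
            )
            if index == 0 {
                path.move(to: point)
            } else {
                path.addLine(to: point)
            }
        }
        return path
    }
}
```

Stroking it with LineChartShape(values: samples).stroke(.blue, lineWidth: 2) collapses the per-mark view hierarchy into a single drawing pass.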


We added an MCP server to our macOS app and learned a lot the hard way

Adding MCP support to a macOS app sounds like just exposing a few more interfaces — until you actually try it. Charidimos Chaintoutis discovered this firsthand while implementing native Swift MCP support for unclutr: the gap between “works in dev” and “users can configure, diagnose, and install it” is wider than expected.

The article details challenges around stdio transport, client handshakes, launcher configuration, and macOS sandbox restrictions—particularly the friction between sandboxing and spawning external processes. These constraints ultimately forced them to disable MCP support in the Mac App Store version and offer it only in the direct-download build. The security model they arrived at is especially instructive: separating read and write tools, requiring explicit deletion calls, enforcing absolute paths, supporting dry runs, and always moving files to Trash rather than permanently deleting them. These are lessons learned the hard way.
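The “move to Trash, never delete” rule can be sketched like this. FileManager.trashItem(at:resultingItemURL:) is a real macOS API; the surrounding tool wrapper (safeDelete and its dryRun flag) is invented here purely for illustration of the security model described above.

```swift
import Foundation

// Illustrative wrapper around a destructive tool call:
// absolute paths only, dry-run support, and Trash instead of deletion.
func safeDelete(path: String, dryRun: Bool) throws -> String {
    // Enforce absolute paths, as the article's security model requires.
    guard path.hasPrefix("/") else {
        throw CocoaError(.fileReadInvalidFileName)
    }
    let url = URL(fileURLWithPath: path)
    if dryRun {
        return "Would move to Trash: \(url.path)"  // report without side effects
    }
    var trashedURL: NSURL?
    // Recoverable by the user, unlike removeItem(at:).
    try FileManager.default.trashItem(at: url, resultingItemURL: &trashedURL)
    return "Moved to Trash: \(trashedURL?.path ?? url.path)"
}
```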


Array expression trailing closures in Swift

In this article, Artem Mirzabekian introduces the accepted Swift Evolution proposal SE-0508. The proposal removes a long-standing syntactic special case: array and dictionary type expressions previously could not be followed directly by trailing closures. With this restriction lifted, building collections using result builders (for example, let items = [String] { "First"; "Second" }) becomes far more natural. It also enables calling callAsFunction directly after array literals, such as ["a", "b", "c"] { $0.uppercased() }.

At first glance, this may seem like a minor syntactic refinement. In reality, it eliminates inconsistencies between collection types and other types, further smoothing Swift’s surface. Language progress does not always come from bold new features—sometimes it comes from patiently sanding down long-standing rough edges.


Mastering Geometry in SwiftUI - GeometryReader, GeometryProxy & onGeometryChange

For a long time, developers relied almost exclusively on GeometryReader to obtain a view’s size or position. However, GeometryReader is itself a layout container that expands to fill all available space. This “greedy” behavior often causes unexpected layout issues for those unfamiliar with its mechanics. In this comprehensive article, Sagar Unagar revisits SwiftUI’s geometry system from an architectural perspective. He compares the traditional GeometryReader + PreferenceKey pattern, the Layout protocol introduced in iOS 16, and the .onGeometryChange modifier introduced in iOS 18. Rather than merely listing APIs, the article explains how geometry fits into SwiftUI’s proposal-driven layout system.

If you approach SwiftUI with a command-style mindset and attempt to “control” layout directly, it will likely feel awkward. But once your mental model aligns with its negotiation-based design, you begin to see that SwiftUI’s expressive ceiling is much higher than it first appears.
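As a quick taste of the newest of those APIs, here is a minimal .onGeometryChange sketch (iOS 18+; the view itself is invented for illustration). It observes a derived Equatable value and fires its action only when that value changes, without GeometryReader’s greedy layout behavior.

```swift
import SwiftUI

// Illustrative example of .onGeometryChange (iOS 18+).
struct MeasuredLabel: View {
    @State private var width: CGFloat = 0

    var body: some View {
        Text("Width: \(Int(width))")
            .onGeometryChange(for: CGFloat.self) { proxy in
                proxy.size.width            // the value to observe
            } action: { newWidth in
                width = newWidth            // called only when the value changes
            }
    }
}
```

Because the modifier attaches to the measured view itself, no extra container is injected into the layout.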

Tools

Xcode Assistant Copilot Server

Xcode 26.3 introduced support for Codex and Claude Code, officially bringing agent capabilities into the IDE workflow. But not every developer uses those services. Developed by Fernando Romiti, Xcode Assistant Copilot Server offers an alternative for GitHub Copilot subscribers. It is a Swift-based local service that translates Xcode’s OpenAI-compatible requests into GitHub Copilot API calls. However, it should not be mistaken for a simple protocol adapter.

In its default mode, it acts as a transparent proxy, forwarding Xcode’s /v1/chat/completions requests to Copilot. Once Agent mode is enabled and MCP is configured, it runs a full local agent loop. When the Copilot model issues tool calls, the server intercepts them, executes the corresponding operations locally (via xcrun mcpbridge or permitted CLI tools), appends the results to the conversation, re-queries the model, and continues this cycle until a final response is produced and returned to Xcode.
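That loop can be sketched in hedged pseudocode as follows. All type and function names here are invented for illustration; they are not the project’s actual API.

```swift
import Foundation

// Illustrative stand-ins for the model's response shape.
struct ToolCall { let id: String; let name: String; let arguments: String }
struct ModelResponse { let text: String; let toolCalls: [ToolCall] }

func runAgentLoop(
    initialMessages: [String],
    complete: ([String]) async throws -> ModelResponse,   // forward to Copilot
    executeLocally: (ToolCall) async throws -> String     // e.g. a local MCP tool
) async throws -> String {
    var messages = initialMessages
    while true {
        let response = try await complete(messages)
        // No tool calls: the model produced its final answer for Xcode.
        guard !response.toolCalls.isEmpty else { return response.text }
        // Execute each requested tool locally, append the results,
        // then loop to re-query the model with the enriched context.
        for call in response.toolCalls {
            let result = try await executeLocally(call)
            messages.append("tool(\(call.name)) -> \(result)")
        }
    }
}
```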


Foundation Models SDK for Python

Foundation Models SDK for Python is a recently released open-source project from Apple. Through a Swift bridge, it enables developers to directly invoke Apple Intelligence foundation models running on macOS (on-device) from a Python environment.

In modern LLM application development, evaluation is critical. Developers need to run large test suites to measure the impact of prompt adjustments and tool-calling strategies. Data-driven analysis of this kind has long been dominated by the Python ecosystem. This SDK fills that gap: developers can export real transcripts (JSON) from Swift, then reproduce on-device inference behavior in Python and perform batch analysis, scoring, clustering, and error attribution as if processing ordinary datasets.

In effect, Apple is signaling a standardized workflow for AI application development: Swift handles on-device integration and user experience, while Python powers offline evaluation and iterative optimization.
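The Swift side of that hand-off might look like the following sketch: serializing a session’s transcript to JSON so the Python SDK can replay and analyze it. This assumes Transcript’s Codable conformance and is illustrative, not taken from Apple’s sample code.

```swift
import Foundation
import FoundationModels  // macOS with Apple Intelligence required

// Illustrative sketch: export a session transcript as JSON for
// offline analysis from Python.
func exportTranscript(of session: LanguageModelSession, to url: URL) throws {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted]
    let data = try encoder.encode(session.transcript)
    try data.write(to: url, options: .atomic)
}
```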


vphone-cli: Running a Real iPhone on Your Mac

When Apple introduced Apple Intelligence in 2024, it also unveiled PCC (Private Cloud Compute), a privacy-focused infrastructure running on Apple Silicon servers. Its significance lies not merely in “offloading AI to the cloud”, but in extending the iPhone’s security model to server-side environments. Apple even released research materials and virtual research environments to allow security researchers to audit PCC nodes locally.

Starting with cloudOS 26, Apple added components related to vphone600ap in PCC firmware. The community quickly took notice. Building on in-depth reverse engineering work by Hyungyu Seo and others, Lakr developed vphone-cli, which operationalizes this virtualization mechanism. By leveraging private APIs from Virtualization.framework, it creates a fully functional virtual iPhone research environment on macOS. Unlike the Xcode Simulator, this setup runs real iOS firmware, executing the full boot chain from start to finish.

What makes this development fascinating is not merely jailbreak research or firmware analysis, but the broader signal it sends: Apple appears to be externalizing the iOS security architecture it has refined for over a decade, extending it to reshape its cloud computing infrastructure.

Related Weekly

Subscribe to Fatbobman

Weekly Swift & SwiftUI highlights. Join developers.

Subscribe Now