It’s been three years since I started using AI tools in earnest. Over these three years, I’ve witnessed AI’s remarkable leaps in capability — and grown increasingly aware of its limits. By now, AI has firmly established itself as an excellent productivity tool. Yet getting it to produce code that truly feels “heartfelt” — code that aligns with my personal style, ideas, and design philosophy — remains a considerable challenge.
Even though mainstream models now routinely offer 1M-token context windows, the information they can carry still falls far short of a human developer’s holistic grasp of a project as it grows. Compounded by attention decay, the truly usable context is actually quite limited. This is why, over the past stretch of time, I’ve published fewer technical articles written for human readers — and produced quite a lot of documentation written for AI instead. Readers often write to ask why my blog has slowed down; this is the reason.
The point of writing all this dedicated documentation isn’t merely to give AI enough context to work with. More importantly, I’m exploring a SwiftUI state pattern of my own design — one that diverges significantly from the prevailing paradigm. This means AI’s vast trove of pretraining data often becomes noise, even active resistance: it keeps unconsciously pulling you back toward the most common conventions. The whole process therefore takes on a kind of tug-of-war rhythm. I use AI to validate ideas rapidly, turning vague intuitions into precise contracts and interfaces; in parallel, I keep refining the documentation, which in turn constrains the AI and steers it toward code that matches my intent.
There’s still some distance to the finish line, but as the code, structure, and documentation iterate together — and especially as my half-formed ideas gradually crystallize and land — I can feel a clear shift: the AI is becoming noticeably more fluent within this project, and its output is converging on what I have in mind.
There are countless ways to arrive at the same UI outcome. What I want is to use various constraints to make AI choose the specific path I’ve laid out. I’m not claiming my own code is flawless — only that I want AI’s output to feel familiar and within my grasp, so I can step in and maintain it without friction.
With clear goals and thorough guidance, AI now implements things tens or even hundreds of times faster than I can. When “efficiency” is no longer the bottleneck, the real question for the next stage is how to make this lightning-fast assistant more heartfelt — more in tune with us.
Recent Recommendations
Q&A: Swift Concurrency - Formatted
This is a transcript of a Swift Concurrency Q&A with Apple engineers, compiled by Anton Gubarenko. In the session, the engineers addressed many of the most commonly misunderstood aspects of Swift’s current concurrency model: from the behavior change behind nonisolated(nonsending), to the boundaries of @concurrent, to Task lifetime and cancellation. Rather than merely adding more knowledge, this feels more like a “semantic recalibration.”
One noteworthy signal is that Swift is moving from “async implies concurrency by default” toward a more conservative default, where concurrency is introduced explicitly only when needed.
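To make that shift concrete, here is a minimal sketch of the two annotations discussed in the session. The ImageLoader type and its methods are my own illustration, not code from the Q&A:

```swift
import Foundation

struct ImageLoader {
    // nonisolated(nonsending): this async function runs in the caller's
    // isolation (e.g. on the MainActor if called from UI code) instead of
    // hopping to the global concurrent executor — concurrency is no longer
    // implied just because the function is async.
    nonisolated(nonsending) func load(from url: URL) async throws -> Data {
        try await URLSession.shared.data(from: url).0
    }

    // @concurrent: the opposite request — explicitly offload this work
    // to the global concurrent executor.
    @concurrent func checksum(of data: Data) async -> Int {
        data.reduce(0) { $0 &+ Int($1) }
    }
}
```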
Immediate tasks in Swift Concurrency explained
Swift 6.2 introduced a subtle but useful new feature: Task.immediate. Unlike a regular Task, an immediate task starts synchronously in the current execution context and continues running until the first actual suspension point. Antoine van der Lee offers a clear explanation of its behavior in this article.
This capability mainly fills a long-standing gap: calling async logic from a synchronous context while still preserving execution order, such as immediately updating state when the actor isolation is already correct. In these scenarios, Task.immediate can avoid the timing mismatch caused by scheduling delay.
Its risk is just as direct: if heavy synchronous work runs before the first suspension point, especially on the MainActor, it can block the current executor and cause visible UI hitches.
Task.immediate only changes when a task starts executing, not the task’s overall lifecycle. In most cases, the regular Task scheduling behavior remains the safer choice.
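A minimal sketch of the timing difference, assuming Swift 6.2. The SearchModel type and fetchRemote are illustrative, not from the article:

```swift
import Foundation
import Observation

@MainActor @Observable
final class SearchModel {
    var query = ""
    var results: [String] = []

    func search() {
        // Task.immediate starts synchronously in the current context.
        // Since we are already on the MainActor, the first assignment below
        // happens before search() returns — a regular Task would instead be
        // scheduled and run slightly later.
        Task.immediate {
            results = ["local match for \(query)"]     // runs immediately
            let remote = try? await fetchRemote(query) // first suspension point
            if let remote { results += remote }
        }
    }

    // Placeholder for a real network call.
    private func fetchRemote(_ q: String) async throws -> [String] { [] }
}
```

The caveat from above applies directly here: any heavy synchronous work placed before that first await would block the MainActor.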
Concurrency Step-by-Step: Designing Protocols
In this article, Matt Massicotte shares a more practically executable approach to protocol design in the context of Swift 6 strict concurrency.
In Swift 6, protocol design has become more difficult than ever. You are no longer just defining methods; you are also defining isolation boundaries. Should the protocol be marked @MainActor? Should it inherit from Sendable? Should its methods be async? Matt points out that many of the “waterfalls of concurrency errors” developers encounter when adopting Swift 6 may look like isolation-domain conflicts on the surface, but are often architectural problems caused by premature abstraction. If you try to design the perfect protocol before the requirements are clear, it is very easy to get “locked in” by concurrency rules.
Matt’s advice is to avoid starting with a protocol. Start with concrete types instead. Let the interface boundaries gradually emerge from real usage, and postpone abstractions that involve isolation domains or context-dependent capabilities. This approach helps avoid falling into the trap of fat protocols and excessive constraints in the concurrency era. Rather than a protocol design guide, it is more like a pragmatic “delay decisions” strategy for Swift 6.
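The progression can be sketched roughly like this; the favorites-store names are my own illustration of the pattern, not Matt’s code:

```swift
// Step 1: start with a concrete type. Its isolation (@MainActor here)
// is a local decision that is cheap to change.
@MainActor
final class DiskFavoritesStore {
    private(set) var ids: Set<String> = []
    func add(_ id: String) { ids.insert(id) }
}

// Step 2: only when a second implementation (a mock, a remote store)
// actually appears does the shared boundary become clear — and only then
// is the isolation requirement baked into a protocol, deliberately.
@MainActor
protocol FavoritesStoring {
    var ids: Set<String> { get }
    func add(_ id: String)
}

extension DiskFavoritesStore: FavoritesStoring {}
```

Had the protocol come first, choices like @MainActor vs. Sendable would have been locked in before any real usage could justify them.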
Automating your Xcode Project
Xcode project files have long been a source of trouble in version control. Starting from this pain point, Leo G Dion introduces practical ways to generate Xcode projects using XcodeGen and Tuist, along with a fairly complete Tuist workflow. The article also walks through the key configuration required for a minimally shippable project: deployment target, App Icon, Privacy Manifest, signing information, and version management, forming a useful automation-oriented “project checklist.”
Although I am currently the only developer on my projects, I have also switched to generating Xcode projects with Tuist. On one hand, these tools provide a higher degree of engineering determinism. On the other hand, their value is further amplified in AI-assisted development: most agents support them well, and when files in the workspace are modified, they can automatically run tuist generate before compiling. Tools like Tuist and XcodeGen are gradually becoming more AI-friendly engineering infrastructure.
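For readers who have not used Tuist, a minimal Project.swift manifest looks roughly like this (names, bundle identifier, and versions are illustrative, not from the article; written against the Tuist 4 manifest API):

```swift
// Project.swift — Tuist regenerates the .xcodeproj from this file,
// so the generated project never needs to live in version control.
import ProjectDescription

let project = Project(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            destinations: .iOS,
            product: .app,
            bundleId: "com.example.myapp",
            deploymentTargets: .iOS("17.0"),
            sources: ["Sources/**"],
            resources: ["Resources/**"]
        )
    ]
)
```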
Appearance Mode Changer
Two days ago, Stewart Lynch celebrated his 75th birthday. In a post, he wrote: “75 years of patches, upgrades, bug fixes, deprecated habits, and surprisingly few fatal errors. Still compiling. Still shipping. Still learning.”
As a well-known video tutorial creator, Stewart has also recently restarted his blog, publishing short tips in written form. This article covers an implementation of appearance mode switching in SwiftUI.
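The core of such an implementation fits in a few lines. This is a generic sketch of the technique, not Stewart’s exact code: persist the user’s choice and apply it with preferredColorScheme, where nil falls back to the system setting:

```swift
import SwiftUI

// RawRepresentable so the choice can be stored via @AppStorage.
enum Appearance: String, CaseIterable {
    case system, light, dark

    var colorScheme: ColorScheme? {
        switch self {
        case .system: nil   // defer to the system appearance
        case .light: .light
        case .dark: .dark
        }
    }
}

struct RootView: View {
    @AppStorage("appearance") private var appearance: Appearance = .system

    var body: some View {
        Picker("Appearance", selection: $appearance) {
            ForEach(Appearance.allCases, id: \.self) { Text($0.rawValue) }
        }
        .preferredColorScheme(appearance.colorScheme)
    }
}
```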
For those of us working in a fast-moving industry that can easily trigger age anxiety, this may be one of the most enviable states a developer can reach. Happy birthday to Stewart, and I hope this “Still” spirit reaches everyone as well.
Six Years Perfecting Maps on watchOS
In this design diary, David Smith looks back on the six-year journey of building the Apple Watch mapping experience for Pedometer++.
The most valuable part of the article is not the implementation itself, but the series of trade-offs behind it: interactions on watchOS must be direct enough, complex configuration is almost unacceptable, and the relationship between map and data requires constant balancing between readability and information density. Even the base map is no longer an off-the-shelf dependency, but something specifically customized for Liquid Glass. His technical choice is also representative: even though MapKit has arrived on watchOS, he still chose a fully custom solution because its configurability and expressiveness remain limited. Behind this decision is not only technical capability, but also a clear sense of product-experience priorities.
This is a classic example of long-term product refinement.
Tools
Kadr: Describing Video Composition with a Swift DSL
Developed by Steliyan Hadzhidenev, Kadr is a Swift-native video composition library that uses a Result Builder DSL to organize AVFoundation’s otherwise scattered concepts — clips, transitions, multiple tracks, filters, overlays, audio, and export workflows — into a declarative API. What makes it worth following is not merely the cleaner syntax, but how it demonstrates the coordination of Swift 6 strict concurrency, Sendable, async/await, and time models such as CMTime in a real media-processing context.
Its companion project, KadrUI, provides a set of SwiftUI-side editing components, including VideoPreview, OverlayHost, multi-track TimelineView, InspectorPanel, and KeyframeEditor. This means the DSL is not limited to export workflows, but can also support core video-editor interactions such as previewing, dragging, trimming, keyframes, and overlay editing.
SwiftVLC: A Modern libVLC Wrapper for SwiftUI
Developed by Omar Albeik, SwiftVLC is a SwiftUI-oriented Swift wrapper around libVLC 4.0. Compared with the traditional VLCKit, it removes the Objective-C middle layer and directly provides an @Observable Player, AsyncStream event streams, typed throws, and a VideoView(player) that can be integrated in a single line. Its value is not just that it is “more Swift,” but that it offers a way to connect low-level multimedia capabilities with the modern Swift concurrency model. If your app needs to handle formats, subtitles, or complex network protocols that AVFoundation does not cover well, libVLC-based solutions remain hard to replace.
Of course, the limitations are equally clear: it requires relatively new system versions, and the underlying libVLC still requires attention to LGPL compliance.