# 112 - When AI Makes "Seeing Is Believing" Impossible

Photo by Skip.tools

Nearly thirty years have passed since I graduated from college. Most of my classmates are now fully absorbed in their careers and families, so our alumni chat stays quiet for long stretches of time. But last Thursday night, an unexpected burst of activity suddenly broke the silence.

A classmate who hadn’t appeared in over a decade rejoined the conversation, saying his family was going through a difficult situation and asking if anyone could help. Almost immediately, doubts were raised—was it really him? Many of my former classmates work in legal professions, and their instincts tend to be finely tuned to anything that feels off.

We went through the usual steps: voice calls, video chats, back-and-forth questions. Still, some remained unconvinced. “AI makes impersonation way too easy now. Video and voice aren’t enough anymore.” It wasn’t until more personal details surfaced—shared memories, old nicknames, inside jokes known only to our class—that everyone finally felt reassured. Once his identity was confirmed, help arrived quickly, and the immediate crisis was resolved.

Strictly speaking, it’s hard to blame anyone for being suspicious. When a single smartphone can “swap faces” or “alter voices,” seeing is believing no longer holds up. Social media is flooded with AI-generated oddities, and our tolerance for the absurd has quietly risen. Whose pet doesn’t talk or cook online these days? And if a UFO actually landed one afternoon, people might not be surprised at all—our threshold for the unknown has been reshaped.

This shift has brought a new kind of anxiety:

We once worried about not getting information fast enough or broadly enough; now we worry about whether the information is even real. “Trusted sources” have become a scarce commodity.

And no, AI shouldn’t shoulder the blame alone. It remains a tool. The ones exploiting it for deception are still people—the tactics have simply evolved, and the cost of lying has dropped dramatically.

Against this backdrop, rebuilding truth and trust becomes increasingly difficult. Perhaps we’ll need to “fight fire with fire”: digital signatures, trusted timestamps, blockchain-based verification. None of these are perfect solutions, but they may be meaningful paths forward.

My mother used to tell me when I was young: “Doing good is like climbing uphill; doing harm is like falling off a cliff.”

The same is true of trust—infinitely harder to rebuild than to destroy, which is precisely what makes it precious.

This Week’s Sponsor

Need to debug HTTPS on your iPhone?

Try Proxyman! The best-in-class macOS app that lets you capture and debug HTTP(S) traffic in a few clicks. Supports iOS devices and the Simulator.

🚀 Get Started Now →

Recent Recommendations

Deep Dive into iMessage: Behind the Making of an Agent

As Apple ecosystem developers, we often face a subtle paradox: the system possesses powerful capabilities, but these aren’t necessarily exposed to developers through public APIs. iMessage is a prime example—deeply integrated into iOS and macOS as a core communication tool for users, yet it has never provided automation interfaces for developers. LingJueYa, the author of imessage-kit, shares his exploration journey in building this tool. The core challenges almost all stem from the Apple platform itself: parsing timestamps with a 2001 epoch, recovering NSAttributedString content from binary plists, safely accessing resources within macOS’s sandbox system, and working with AppleScript—a venerable automation mechanism.
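
To give a feel for one of those obstacles, here is a minimal sketch of the 2001-epoch conversion. It assumes the raw value comes from a Messages database column; the nanosecond-versus-second heuristic is illustrative, since older macOS versions stored plain seconds while recent ones store nanoseconds.

```swift
import Foundation

// Apple's "reference date" epoch is 2001-01-01 00:00:00 UTC.
// Recent Messages databases store dates as nanoseconds since this epoch;
// older ones used seconds. The magnitude check below is a heuristic.
func dateFromMessagesTimestamp(_ raw: Int64) -> Date {
    let seconds: TimeInterval
    if raw > 1_000_000_000_000 {       // likely nanoseconds
        seconds = TimeInterval(raw) / 1_000_000_000
    } else {                           // likely already seconds
        seconds = TimeInterval(raw)
    }
    return Date(timeIntervalSinceReferenceDate: seconds)
}

// Example: a raw value as it might appear in the database
let date = dateFromMessagesTimestamp(742_000_000_000_000_000)
print(date) // a Date in mid-2024
```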


2025: The Year SwiftUI Died

Jacob Bartlett’s provocatively titled article presents a discussion-worthy perspective: 2025 might mark a turning point for SwiftUI, rather than its peak. His core argument is twofold: Apple has brought @Observable support and the updateProperties() method to UIKit, giving it modern state management capabilities, while the maturation of AI-assisted programming tools has dramatically reduced the cost of writing UIKit code (whereas AI’s grasp of SwiftUI’s declarative paradigm remains weaker).

Apple’s long-term commitment to SwiftUI isn’t going anywhere — especially given its natural advantages in multi-platform adaptation. Meanwhile, the rise of AI has made it easier for many developers who started with SwiftUI to lean on UIKit when they need to fill performance or capability gaps. In practice, the choice for most developers isn’t either-or; it’s using both frameworks together in whatever combination works best.


Automatic Property Observation in UIKit with @Observable

UIKit officially introduced native support for Swift Observation in iOS 26. When you read properties of an @Observable object in updateProperties(), UIKit automatically tracks these dependencies and refreshes the corresponding views on-demand when data changes. Natalia Panferova demonstrates the convenience of this feature for cross-framework data sharing through a practical case mixing UIKit and SwiftUI. The article also introduces the iOS 18 backward compatibility solution: add the UIObservationTrackingEnabled key to Info.plist and place the update logic in viewWillLayoutSubviews() to achieve the same effect.
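
A minimal sketch of the pattern, assuming the iOS 26 API surface described above; the model and view controller here are hypothetical:

```swift
import UIKit
import Observation

@Observable
final class CounterModel {
    var count = 0
}

final class CounterViewController: UIViewController {
    let model = CounterModel()
    private let countLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        countLabel.frame = view.bounds
        view.addSubview(countLabel)
    }

    // iOS 26: UIKit tracks the @Observable properties read here and
    // calls this method again when any of them change.
    override func updateProperties() {
        super.updateProperties()
        countLabel.text = "Count: \(model.count)"
    }
}
```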


How SwiftData Represents AttributedString in Core Data Storage

Although AttributedString itself conforms to Codable, developers previously couldn’t use it directly as a property in SwiftData models. This restriction was finally lifted in iOS 26—Apple clearly opened a “special channel” for it, and now it can be stored directly like basic types such as Int and String. Oleksii Oliinyk, author of DataScout for SwiftData (a SwiftData database analysis app), encountered related crashes while maintaining the tool and took the opportunity to deeply analyze the implementation mechanism behind it.

SwiftData’s ability to allow developers to use Codable-conforming types as model properties is itself a powerful feature. However, its underlying handling may differ from what many people expect. In “Considerations for Using Codable and Enums in SwiftData Models,” I provide a more systematic introduction to SwiftData’s Codable conversion logic and potential pitfalls. Additionally, if you need to save AttributedString before iOS 26, you can refer to this thread on Apple’s developer forums.
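
For illustration, here is what the now-permitted usage looks like, assuming an iOS 26 deployment target (the Note model is hypothetical):

```swift
import Foundation
import SwiftData

// On iOS 26, AttributedString can be stored directly as a SwiftData
// property, just like basic types such as Int and String.
@Model
final class Note {
    var title: String
    var content: AttributedString

    init(title: String, content: AttributedString) {
        self.title = title
        self.content = content
    }
}
```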


Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms

AnyLanguageModel is a unified Swift LLM interface library developed by Mattt. We introduced its core concept in Issue 109. Building on Foundation Models’ API design, the library maintains a familiar developer experience while uniformly supporting multiple model providers, including local models (Core ML, MLX, llama.cpp, Ollama) and cloud models (OpenAI, Anthropic, Google Gemini, etc.), significantly reducing the integration burden across different APIs and execution methods, and making it easier to explore open-source models.

In this article, Mattt further introduces AnyLanguageModel’s design philosophy, cross-backend capability abstraction, and how Swift 6.1 package traits help control dependency size. Notably, while Apple currently doesn’t provide image input capability in Foundation Models, AnyLanguageModel has already added this functionality for models like Claude, enabling vision-language scenarios to work smoothly on Apple platforms.
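
A rough usage sketch: the session-style surface mirrors Foundation Models as described above, but the concrete type names, initializer parameters, and model identifier below are assumptions to verify against the library’s README.

```swift
import Foundation
import AnyLanguageModel

// Assumed API shape, mirroring Foundation Models' session pattern.
// Swap the model value to target a local or cloud backend.
let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
    model: "claude-sonnet-4-5" // illustrative model identifier
)

let session = LanguageModelSession(model: model)
let response = try await session.respond(
    to: "Summarize the benefits of a unified LLM API in one sentence."
)
print(response.content)
```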


Approachable Concurrency and MainActor by Default

Regardless of how things ultimately unfold, the Approachable Concurrency mode introduced in Swift 6.2 is destined to leave a significant mark on Swift’s history. It significantly reduces the mental burden of concurrent programming in certain scenarios, but it has also left many developers feeling “more confused the more it’s explained.”
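
For reference, opting a SwiftPM target into these semantics looks roughly like this; the setting and feature names follow the Swift 6.2 proposals (SE-0466 and SE-0461) and should be checked against your toolchain.

```swift
// swift-tools-version: 6.2
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            swiftSettings: [
                // Run nonisolated async functions on the caller's actor (SE-0461).
                .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
                // Treat unannotated declarations as @MainActor by default (SE-0466).
                .defaultIsolation(MainActor.self),
            ]
        )
    ]
)
```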


How to Build Scalable White-Label iOS Apps: From Multi-Target to Modular Architecture

A White-Label product refers to a flexible, reusable app framework that can be deployed across different clients, with customizable branding and feature configurations (such as a universal restaurant ordering app template). Pawel Kozielecki systematically outlines the evolution of iOS White-Label applications in this comprehensive article, dividing it into three stages: basic branding customization, custom UI/UX, and full modularization. Building on this foundation, he compares three common implementation strategies—multi-target, resource replication, and modular architecture—and points out that as the number of clients and differentiation requirements grow, only modular architecture can truly scale long-term and avoid maintenance chaos. The article also discusses key challenges in scaling white-label projects, including App Store review, code signing, resource and configuration layering, testing, and CI, all with abundant practical details.
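
To give a flavor of the modular strategy (an illustrative sketch, not code from the article), the core app can depend only on a brand configuration protocol, while each client target supplies its own conforming module:

```swift
import SwiftUI

// The core app knows only this protocol; each client target ships a
// small brand module that conforms to it.
protocol BrandConfiguration {
    var appName: String { get }
    var accentColor: Color { get }
    var enablesLoyaltyProgram: Bool { get }
}

// Hypothetical client-specific module.
struct AcmeBrand: BrandConfiguration {
    let appName = "Acme Orders"
    let accentColor = Color.orange
    let enablesLoyaltyProgram = true
}

// Shared UI that renders whatever brand it is given.
struct RootView: View {
    let brand: BrandConfiguration

    var body: some View {
        Text(brand.appName)
            .tint(brand.accentColor)
    }
}
```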

Tools

QuickLayout: Declarative UIKit Layout Library

Since UIKit can now seamlessly integrate Observation, it’s no surprise that writing layout code in SwiftUI style is also possible. QuickLayout, developed by Constantine Fry, provides exactly this capability and has been used by Meta in Instagram’s core features. You can lay out views directly like this:

```swift
import UIKit
import QuickLayout

@QuickLayout
class MyCellView: UIView {

  let titleLabel = UILabel()
  let subtitleLabel = UILabel()

  var body: Layout {
    HStack {
      VStack(alignment: .leading) {
        titleLabel
        Spacer(4)
        subtitleLabel
      }
      Spacer()
    }
    .padding(.horizontal, 16)
    .padding(.vertical, 8)
  }
}
```

UIKit and SwiftUI were never opposites, but rather two UI thinking models that can learn from each other. Currently, SwiftUI’s advantages lie in abstraction and consistency, while high performance, fine-grained control, and toolchain support remain UIKit’s strengths.


SettingsKit

Nearly all apps need a settings interface. While not difficult to write, maintenance costs rise as options multiply. SettingsKit, developed by Aether, was created to address this. It enables SwiftUI developers to quickly build scalable, consistently styled settings interfaces with built-in search capabilities, featuring multiple styles including grouped, card, and sidebar layouts—ideal for medium to large settings modules.


KurrentDB-Swift

Kurrent (formerly EventStoreDB) is a database specifically designed for event storage. It not only saves the current state of a system but also maintains a complete record of every change in its history, making it ideal for scenarios requiring strong traceability, such as finance, logistics, retail, e-commerce, and SaaS. KurrentDB-Swift, developed by Grady Zhuo, is a Swift client library for KurrentDB that supports Swift 6, async/await streaming subscriptions, and event reading, filling a long-standing gap in the Swift ecosystem for mature Event Sourcing tooling.
