Issue #131

Copyright Walls Demolished by Vibe Coding, and the Developer's New Moat


Photo by Nik on Unsplash

Anthropic recently announced that because its latest model, Mythos, has become “too powerful” at cybersecurity work and software vulnerability discovery—reaching a level the company finds unsettling—it has taken the unusually restrained step of not releasing the model publicly. Access is limited to a handful of critical infrastructure companies within Project Glasswing, and ordinary developers cannot reach it through the API either. (Some analysts have pointed out, of course, that this arrangement also conveniently helps prevent model distillation and locks in enterprise-tier customers.) But even with this “beast” kept on a leash for the moment, the coding capabilities of today’s mainstream AI models are already more than enough to make cloning a product trivially easy.

Last week, a developer on Reddit claimed that he had spent a year “reverse-engineering the SwiftUI API” to build an entirely new Swift web framework. The post was fluent and precise in its terminology, and it drew considerable attention. Paul Hudson soon appeared in the comments and called it out: the so-called “independent research” was in fact little more than a string replacement performed on his MIT-licensed open-source project Ignite—down to the point that the original author’s personal, stylistically distinctive code comments had been preserved verbatim. The entire repository was then squashed into a single commit to erase its history, and the license was illicitly changed to the copyleft GPL. A number of developers in the community suspect that the “reverse-engineering SwiftUI” narrative itself was AI-generated as well. More intriguingly, the author in question was actually a major contributor to Ignite himself—when Vibe Coding has driven the cost of “repackaging a project” close to zero, “I was involved in this” can itself become a rhetorical device for blurring the lines of responsibility.

Around the same time, Vibe Island—a polished macOS menu-bar app for monitoring AI coding agents—was pixel-for-pixel cloned shortly after its release. Although the copycat published its code under the banner of an “open-source alternative,” the impact on the original author’s sales and creative motivation was real and significant. Yet even if the author wished to pursue legal action, he would run into a new problem of the times: in both establishing ownership and enforcing his rights, he might need to prove that his work possesses sufficient human originality and account for the extent of AI-generated content involved—otherwise, he would face considerably greater legal uncertainty.

Indeed, the legal walls protecting code are beginning to crumble on the “ownership side” first. Last month, China’s Copyright Protection Centre officially rolled out a new version of its software copyright registration application and accompanying review rules, which explicitly require the filer to make a notarized personal commitment that “no AI has been used to develop the code, author the documentation, or generate the registration materials,” and the review process now focuses on whether the human intellectual contribution clears the originality threshold required by copyright law. Content without substantive human involvement will struggle to obtain registration. Violators may also be placed on a registry of dishonest filers, with consequences tied to their personal credit records.

This trend is converging with the recent direction of case law in Europe and the United States: if a piece of code is primarily “rewritten or recombined” at high speed by an AI responding to prompts, its chances of obtaining copyright protection drop considerably.

We have to face a harsh truth: “I had a brilliant idea and vibe-coded it into existence” is no longer enough to constitute a business moat. The new paradigm we call Vibe Coding has not only reshaped development workflows and dramatically improved efficiency—it has simultaneously shaken the foundational logic of the software copyright system from three directions at once: the bar for ownership has risen, the burden of proof for infringement has grown heavier, and functional cloning has been quietly normalized.

What makes it all the more disheartening is that, controversial as these clone projects may be, they still rack up no small number of stars on GitHub. That suggests that when the cost of getting something is vanishingly low, moral appeals alone can no longer hold back the rush toward “free equivalents.”

Perhaps, as we noted in Issue 120’s discussion of Skip’s move to go open source—in an era where the cost of implementing code is approaching zero and any app can be cloned at any moment by an AI, building behind closed doors and “selling the tool” will only get harder. Forging real connections with users, and turning “the credibility of the maker and the trust of the community” into brand equity that cannot be copied—this, perhaps, is the true core competency and moat for developers in the years ahead.

Recent Recommendations

Swift Blog Carnival: Tiny Languages

The Swift community has launched its first Blog Carnival, with April’s theme being Tiny Languages. Christian Tietze invites developers to write about this topic—custom DSLs, result builders, scripting languages, routing rules… any perspective related to “tiny languages” is welcome. The submission deadline is May 1.

So far, three entries have been published:

  • Matt Massicotte reflects on his journey from Rake to Make, and then to various Swift-based task runners, noting that he still hasn’t found an ideal replacement
  • Chris Liscio shares the design of Capo’s embedded DSL, used to describe keyboard and MIDI bindings, built on Point-Free’s swift-parsing library
  • Nicolas Zinovieff presents an experimental symbolic math DSL, leveraging protocols and operator overloading to make expressions like (1 + 2 * "X") * (3 - "Y") valid Swift code, with lazy evaluation when concrete values are provided—implemented in under 300 lines
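
Zinovieff's code isn't reproduced here, but the core trick (literal conversions plus operator overloading) can be sketched in plain Swift; the `Expr` type and its cases are illustrative, not his actual API:

```swift
// A minimal sketch of the operator-overloading trick described above.
// `Expr` and its cases are illustrative, not Zinovieff's actual API.
indirect enum Expr {
    case constant(Double)
    case variable(String)
    case add(Expr, Expr), sub(Expr, Expr), mul(Expr, Expr)

    // Lazy evaluation: the tree is only resolved once values are supplied.
    func evaluate(_ values: [String: Double] = [:]) -> Double? {
        switch self {
        case .constant(let c): return c
        case .variable(let name): return values[name]
        case .add(let l, let r):
            guard let a = l.evaluate(values), let b = r.evaluate(values) else { return nil }
            return a + b
        case .sub(let l, let r):
            guard let a = l.evaluate(values), let b = r.evaluate(values) else { return nil }
            return a - b
        case .mul(let l, let r):
            guard let a = l.evaluate(values), let b = r.evaluate(values) else { return nil }
            return a * b
        }
    }
}

// Integer and string literals become constants and variables respectively,
// which is what lets `1 + 2 * "X"` type-check as an `Expr`.
extension Expr: ExpressibleByIntegerLiteral, ExpressibleByStringLiteral {
    init(integerLiteral value: Int) { self = .constant(Double(value)) }
    init(stringLiteral value: String) { self = .variable(value) }
}

func + (lhs: Expr, rhs: Expr) -> Expr { .add(lhs, rhs) }
func - (lhs: Expr, rhs: Expr) -> Expr { .sub(lhs, rhs) }
func * (lhs: Expr, rhs: Expr) -> Expr { .mul(lhs, rhs) }

let e: Expr = (1 + 2 * "X") * (3 - "Y")
print(e.evaluate())                      // nil (variables unresolved)
print(e.evaluate(["X": 2, "Y": 1]))      // Optional(10.0)
```

Because `Expr` adopts both literal protocols, Swift's type inference routes the mixed integer/string arithmetic through the custom operators, and evaluation stays lazy until a value dictionary arrives.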

Indicating Selection in macOS Menus Using SwiftUI

SwiftUI provides several components for representing selection, such as Picker, Toggle, and Menu. However, clearly guiding users through choices and accurately reflecting the current selection is not as straightforward as it may seem. Gabriel Theodoropoulos starts from basic Button implementations and gradually evolves toward Picker and Toggle, systematically outlining common approaches and their limitations.

The value of this article lies not in presenting a “single correct solution,” but in reminding developers that SwiftUI’s standard components do not automatically guarantee the best user experience. In practice, you still need to balance system consistency and implementation flexibility.


Building List Replacement in SwiftUI

Choosing between List and ScrollView + LazyStack remains a common challenge for SwiftUI developers. In this article, Majid Jabrayilov rebuilds parts of his CardioBot app using the SwiftUI Container View API (iOS 18+), creating three reusable components—ScrollingSurface, DividedCard, and SectionedSurface—as a replacement for List.

These components closely mirror the usage of List + Section, while eliminating constraints tied to List, such as listRowBackground and listItemTint.

List is not just a “styled LazyVStack”—the two differ fundamentally in architecture, scrolling behavior, integration with navigation containers, and performance on large datasets. For a deeper comparison, see this previous article.
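
Majid's actual components aren't shown here, but a minimal sketch of the underlying technique, using the `ForEach(subviews:)` initializer from the iOS 18 Container View API with invented names and styling, could look like this:

```swift
import SwiftUI

// Illustrative sketch of a List-style container built on the Container View
// API (`ForEach(subviews:)`, iOS 18+). Not Majid's actual ScrollingSurface.
struct PlainSurface<Content: View>: View {
    @ViewBuilder var content: Content

    var body: some View {
        ScrollView {
            LazyVStack(spacing: 8) {
                // Decompose whatever the caller passed into individual
                // subviews, so each row can be decorated independently.
                ForEach(subviews: content) { subview in
                    subview
                        .padding(12)
                        .frame(maxWidth: .infinity, alignment: .leading)
                        .background(.quaternary, in: RoundedRectangle(cornerRadius: 12))
                }
            }
            .padding(.horizontal)
        }
    }
}

// Usage reads much like List, but rows are ordinary views, with no
// List-only modifiers (listRowBackground, listItemTint) involved:
// PlainSurface {
//     Text("Steps")
//     Text("Heart Rate")
// }
```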


AppIntents meet MCP

While many still see AppIntents as a companion to Siri and Shortcuts, Florian Schweizer explores a more forward-looking direction: exposing AppIntents as MCP (Model Context Protocol) tools, enabling LLMs to directly interact with your app’s capabilities.

Based on SwiftMCP, Florian uses macros to build an MCP server and seamlessly maps AppIntents into MCP tools. This allows AI agents to invoke app functionality directly, enabling cross-app automation.

Rumors have suggested since last year that Apple is bringing MCP support into its ecosystem. Perhaps we’ll get answers at WWDC 26 in a couple of months.


Ride the Lightning Air: Building Interactive WidgetKit Widgets

Many developers are misled by WidgetKit documentation, mistakenly treating AppIntentTimelineProvider as the key to interactive widgets. In reality, it is designed for user-configurable widgets (e.g., editing options via long-press), not for interactivity.

The actual foundation for interactive widgets remains TimelineProvider. Wesley Matlock demonstrates the correct approach through a fictional airline widget, combining TimelineProvider, Button(intent:), and App Group shared storage.

The data flow forms a clean loop: User action → Intent execution → State update → Widget reload → UI refresh
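
That loop can be sketched with a minimal intent and view; the type names and App Group identifier below are placeholders, not code from the article:

```swift
import AppIntents
import SwiftUI
import WidgetKit

// Illustrative sketch of the interactive-widget loop. `ToggleFlagIntent`
// and the suite identifier are placeholders, not Wesley Matlock's code.
struct ToggleFlagIntent: AppIntent {
    static var title: LocalizedStringResource = "Toggle Flag"

    // Pure state transition, kept separate so it is easy to test.
    static func nextState(after current: Bool) -> Bool { !current }

    func perform() async throws -> some IntentResult {
        // App Group storage shared between the app and the widget extension.
        let store = UserDefaults(suiteName: "group.example.widget")
        let current = store?.bool(forKey: "flag") ?? false
        store?.set(Self.nextState(after: current), forKey: "flag")
        // Ask WidgetKit to re-run the TimelineProvider with the new state.
        WidgetCenter.shared.reloadAllTimelines()
        return .result()
    }
}

// In the widget's view, Button(intent:) runs the intent in-process,
// without foregrounding the app.
struct FlagWidgetView: View {
    let isOn: Bool
    var body: some View {
        Button(intent: ToggleFlagIntent()) {
            Label(isOn ? "On" : "Off", systemImage: "power")
        }
    }
}
```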


File Storage and iCloud: A Complete View from Local to Cloud

In iOS and macOS development (and usage), file storage is often treated as a basic capability—but it actually defines the lifecycle and behavior of your data.

In Working with files and directories in iOS, Natascha Fadeeva systematically explains the App Sandbox structure and the roles of Documents, Library, and Caches, helping developers understand where different types of data should reside—and how to avoid unnecessary iCloud backups.
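
As a rough illustration of the APIs involved (the file name below is made up), the sandbox lookups and the backup-exclusion flag look like this:

```swift
import Foundation

// Standard sandbox locations: Documents is backed up, Caches is not.
let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]

// For re-downloadable data that must live in Documents anyway, the standard
// way to keep it out of iCloud/device backups is the isExcludedFromBackup
// resource value. The file name here is illustrative.
try? FileManager.default.createDirectory(at: documents, withIntermediateDirectories: true)
var fileURL = documents.appendingPathComponent("downloaded-catalog.json")
_ = FileManager.default.createFile(atPath: fileURL.path, contents: Data("{}".utf8))

var values = URLResourceValues()
values.isExcludedFromBackup = true
try? fileURL.setResourceValues(values)   // no-op outside Apple platforms
```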

Meanwhile, Howard Oakley in Understanding and Testing iCloud explores what happens next: iCloud is not a single service, but a collection of subsystems such as CloudKit, iCloud Drive, and system update services. Different types of data follow different synchronization and backup paths.

File placement is not just an organizational concern—it defines whether data is backed up, synchronized, and how it flows across devices.

As a result, iCloud issues are rarely just about “whether sync is enabled.” They often involve multiple layers, including client state, network conditions, caching behavior, and server-side throttling.


Tools

Bad Dock: Animate Your Dock Icon

This is a “ridiculous yet serious” macOS experiment. Eric Martz uses the public NSDockTile / NSDockTilePlugin APIs to bypass the squircle constraints introduced in Big Sur and render a video stream directly inside a Dock icon.

The implementation is straightforward but well-structured: decoding video with AVAssetReader, reducing frame rate to ~12fps, and managing memory with a ring buffer. The result is a polished technical proof-of-concept built from what initially seems like a playful idea.
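
The ring-buffer part of that pipeline is the most reusable piece; a minimal generic sketch of the technique (not Martz's actual code) might look like:

```swift
// Illustrative fixed-capacity ring buffer of the kind the write-up describes
// for holding decoded frames; not Eric Martz's actual implementation.
struct RingBuffer<Element> {
    private var storage: [Element?]
    private var head = 0          // next write position
    private(set) var count = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    // Overwrites the oldest element once the buffer is full, so memory
    // stays bounded no matter how many frames are pushed.
    mutating func push(_ element: Element) {
        storage[head] = element
        head = (head + 1) % storage.count
        count = min(count + 1, storage.count)
    }

    // Oldest-first snapshot of the current contents.
    var elements: [Element] {
        let start = (head - count + storage.count) % storage.count
        return (0..<count).compactMap { storage[(start + $0) % storage.count] }
    }
}
```

Bounding memory this way is what lets a continuously rendering Dock icon run indefinitely without its frame queue growing.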

The real value of projects like this lies not in their functionality, but in revealing that system API boundaries often extend far beyond what the documentation suggests.

Note: This project implements runtime dynamic Dock icons (continuously rendered while the app is running). After the app exits, only a static custom icon can be preserved via NSDockTilePlugin.
