Customizing Gestures in SwiftUI


Unlike many of its built-in controls, SwiftUI does not wrap UIGestureRecognizer (or NSGestureRecognizer); instead, it reconstructs its own gesture system. SwiftUI gestures lower the barrier to entry, but the lack of APIs exposing the underlying event data severely limits deep customization. In SwiftUI, we cannot build entirely new gesture recognizers; so-called custom gestures are merely recombinations and reconfigurations of the system-defined ones. This article demonstrates, through several examples, how to customize gestures using the native capabilities SwiftUI provides.

June 2024 Update: At WWDC 2024, SwiftUI introduced UIGestureRecognizerRepresentable. This new feature allows the direct use of UIKit gestures within SwiftUI views, effectively addressing the deficiencies of preset gestures and resolving gesture conflicts. For more detailed information about this feature, please see the end of the article.

Fundamentals

Preset Gestures

SwiftUI currently offers five preset gestures: tap, long press, drag, magnification, and rotation. Calls such as onTapGesture are actually convenience extensions created for views.

  • Tap (TapGesture)

    You can set the number of taps (single or double), making it one of the most frequently used gestures.

  • Long Press (LongPressGesture)

    Triggers a specified closure once the press duration is satisfied.

  • Drag (DragGesture)

    SwiftUI combines pan and swipe into one, providing drag data as the position changes.

  • Magnification (MagnificationGesture)

    Pinch to zoom using two fingers.

  • Rotation (RotationGesture)

    Rotate using two fingers.

Tap, long press, and drag gestures support single-finger interactions only; SwiftUI does not provide a way to set the number of fingers.

In addition to the gestures available for developers, SwiftUI also utilizes numerous internal (non-public) gestures for system controls, such as ScrollGesture and _ButtonGesture.

The gesture implementation within Button is more complex than TapGesture. It provides more invocation opportunities and supports intelligent handling of the press area size (to increase the success rate of finger taps).

Value

SwiftUI provides different data types based on the type of gesture:

  • Tap: Data type is Void (since SwiftUI 4.0, the data type is CGPoint, indicating the tap location in a specified coordinate space).
  • Long Press: Data type is Bool, provides true once the press begins.
  • Drag: Provides comprehensive data including current position, displacement, event time, predicted endpoint, and predicted displacement.
  • Magnification: Data type is CGFloat, indicating the amount of zoom.
  • Rotation: Data type is Angle, indicating the degree of rotation.

Using the map method, the data provided by gestures can be transformed into other types, facilitating subsequent calls.
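As a sketch of the kind of transformation `map` performs, the closure below clamps a raw magnification factor into a usable scale range. The bounds and function name are illustrative assumptions, not part of any SwiftUI API:

```swift
// A transformation of the kind you might pass to `map`:
// clamp a raw magnification factor into a usable scale range.
// The 0.5...3.0 bounds are arbitrary, for illustration only.
func clampedScale(_ rawMagnification: Double) -> Double {
    min(max(rawMagnification, 0.5), 3.0)
}

// In a view, this could be used roughly as:
// MagnificationGesture().map { clampedScale($0) }
print(clampedScale(10))  // 3.0
print(clampedScale(0.1)) // 0.5
```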

Timing

SwiftUI gestures do not expose their internal state directly. Instead, you attach closures for specific moments in a gesture's life cycle, and SwiftUI invokes them at the appropriate time.

  • onEnded

    Actions to perform when the gesture ends.

  • onChanged

    Actions to perform when the value provided by the gesture changes. Only provided when Value conforms to Equatable, thus TapGesture does not support this.

  • updating

    Similar timing to onChanged. There are no special requirements for Value, and compared to onChanged, it adds the ability to update gesture properties (GestureState) and access Transaction.

Different gestures focus on different timings. Taps typically only focus on onEnded; onChanged (or updating) plays a bigger role in drag, magnification, and rotation gestures; long press only calls onEnded once the set duration is satisfied.

GestureState

A property wrapper type developed specifically for SwiftUI gestures, which can drive view updates as a dependency. It differs from State in the following ways:

  • It can only be modified within the updating method of the gesture, and is read-only elsewhere in the view.
  • At the end of the gesture, the associated (using updating) state automatically resets to its initial value.
  • The animation state when resetting the initial data can be set via resetTransaction.
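The reset behavior can be modeled in plain Swift: a value that can be mutated while the gesture is active, and that snaps back to its initial value when the gesture ends. This is only an analogy for the semantics, not SwiftUI's actual implementation:

```swift
// A plain-Swift analogy for GestureState's semantics: the wrapped value
// can be mutated while the gesture is active, and automatically returns
// to its initial value when the gesture ends.
struct ResettingValue<Value> {
    let initialValue: Value
    private(set) var value: Value

    init(initialValue: Value) {
        self.initialValue = initialValue
        self.value = initialValue
    }

    // Analogous to mutating the state inside `updating`.
    mutating func update(_ newValue: Value) { value = newValue }

    // Analogous to the automatic reset when the gesture ends.
    mutating func reset() { value = initialValue }
}

var scale = ResettingValue(initialValue: 1.0)
scale.update(2.5)
print(scale.value) // 2.5
scale.reset()
print(scale.value) // 1.0
```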

Combining Gestures

SwiftUI offers several methods for combining gestures, allowing multiple gestures to be linked together and reused.

  • simultaneously (Simultaneous Recognition)

    Combines one gesture with another to create a new gesture that recognizes both simultaneously. For example, combining a magnification gesture with a rotation gesture to allow simultaneous scaling and rotating of an image.

  • sequenced (Sequential Recognition)

    Links two gestures together, executing the second gesture only after the first has successfully completed. For instance, linking a long press with a drag, allowing dragging only after a certain press duration.

  • exclusively (Exclusive Recognition)

    Combines two gestures, but only one can be recognized at a time. The system prioritizes the first gesture.

The Value type changes after combining gestures. The map method can still be used to transform it into a more usable data type.
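To illustrate, here is a plain-Swift sketch of collapsing a simultaneously-combined value (whose `first`/`second` components are optionals) into a single struct. The types and names are simplified stand-ins for the real gesture value types:

```swift
// After `simultaneously`, the combined Value exposes each sub-gesture's
// data as optionals (`first`/`second`). A map-style transformation can
// collapse them into one plain struct.
struct ZoomRotateValue {
    var scale: Double
    var degrees: Double
}

func merge(first scale: Double?, second degrees: Double?) -> ZoomRotateValue {
    // Fall back to identity values when a sub-gesture has no data yet.
    ZoomRotateValue(scale: scale ?? 1.0, degrees: degrees ?? 0.0)
}

let v = merge(first: 1.5, second: nil)
print(v.scale, v.degrees) // 1.5 0.0
```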

Defining Gestures

Developers often create custom gestures within views, which reduces the amount of code and makes it easier to integrate with other data in the view. For example, the following code creates a gesture in the view that supports both magnification and rotation:

Swift
struct GestureDemo: View {
    @GestureState(resetTransaction: .init(animation: .easeInOut)) var gestureValue = RotateAndMagnify()

    var body: some View {
        let rotateAndMagnifyGesture = MagnificationGesture()
            .simultaneously(with: RotationGesture())
            .updating($gestureValue) { value, state, _ in
                state.angle = value.second ?? .zero
                state.scale = value.first ?? 1
            }

        return Rectangle()
            .fill(LinearGradient(colors: [.blue, .green, .pink], startPoint: .top, endPoint: .bottom))
            .frame(width: 100, height: 100)
            .shadow(radius: 8)
            .rotationEffect(gestureValue.angle)
            .scaleEffect(gestureValue.scale)
            .gesture(rotateAndMagnifyGesture)
    }

    struct RotateAndMagnify {
        var scale: CGFloat = 1.0
        var angle: Angle = .zero
    }
}

Gestures can also be created as structures conforming to the Gesture protocol, making them highly suitable for repeated use.

Encapsulating gestures or gesture logic into view extensions can further simplify their usage.

To highlight certain functionalities, the demonstration code provided below may seem complex. It can be simplified for practical use.

Example 1: Swipe

1.1 Objective

Create a swipe gesture, focusing on how to create a structure that conforms to the Gesture protocol and how to transform gesture data.

1.2 Approach

Among SwiftUI’s preset gestures, only the DragGesture provides data that can be used to determine the direction of movement. We use the map function to convert complex data into simple directional data based on the displacement.

1.3 Implementation

Swift
public struct SwipeGesture: Gesture {
    public enum Direction: String {
        case left, right, up, down
    }

    public typealias Value = Direction

    private let minimumDistance: CGFloat
    private let coordinateSpace: CoordinateSpace

    public init(minimumDistance: CGFloat = 10, coordinateSpace: CoordinateSpace = .local) {
        self.minimumDistance = minimumDistance
        self.coordinateSpace = coordinateSpace
    }

    public var body: AnyGesture<Value> {
        AnyGesture(
            DragGesture(minimumDistance: minimumDistance, coordinateSpace: coordinateSpace)
                .map { value in
                    let horizontalAmount = value.translation.width
                    let verticalAmount = value.translation.height

                    if abs(horizontalAmount) > abs(verticalAmount) {
                        return horizontalAmount < 0 ? .left : .right
                    } else {
                        return verticalAmount < 0 ? .up : .down
                    }
                }
        )
    }
}

public extension View {
    func onSwipe(minimumDistance: CGFloat = 10,
                 coordinateSpace: CoordinateSpace = .local,
                 perform: @escaping (SwipeGesture.Direction) -> Void) -> some View {
        gesture(
            SwipeGesture(minimumDistance: minimumDistance, coordinateSpace: coordinateSpace)
                .onEnded(perform)
        )
    }
}

1.4 Demonstration

Swift
struct SwipeTestView: View {
    @State var direction = ""
    var body: some View {
        Rectangle()
            .fill(.blue)
            .frame(width: 200, height: 200)
            .overlay(Text(direction))
            .onSwipe { direction in
                self.direction = direction.rawValue
            }
    }
}


1.5 Explanation

  • Why Use AnyGesture

    In the Gesture protocol, there is a hidden type method: _makeGesture. Apple has not provided documentation on how to implement it, but luckily SwiftUI offers a constrained default implementation. When we do not use a custom Value type within the structure, SwiftUI can deduce Self.Body.Value, allowing the body to be declared as some Gesture. However, since this example uses a custom Value type, the body must be declared as AnyGesture<Value> to fulfill the conditions for enabling the default implementation of _makeGesture.

Swift
extension Gesture where Self.Value == Self.Body.Value {
    public static func _makeGesture(gesture: SwiftUI._GraphValue<Self>, inputs: SwiftUI._GestureInputs) -> SwiftUI._GestureOutputs<Self.Body.Value>
}

1.6 Limitations and Improvements

This example does not consider factors like gesture duration or movement speed, meaning the current implementation does not strictly qualify as a true swipe. To implement a strict swipe, the following methods can be adopted:

  • Adapt the approach from Example 2, using ViewModifier to wrap DragGesture.
  • Use State to record the duration of the drag.
  • In onEnded, only call the user’s closure and pass the direction if it meets requirements for speed, distance, and deviation.
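A minimal sketch of this stricter validation, written as a pure function over the final translation and duration. The threshold values (minimum distance, minimum speed, maximum axis deviation) are illustrative assumptions, not values from the implementation above:

```swift
import Foundation

// Validate a candidate swipe by distance, speed, and how strongly the
// movement is dominated by one axis. All thresholds are illustrative.
func isValidSwipe(translation: (width: Double, height: Double),
                  duration: TimeInterval,
                  minDistance: Double = 50,
                  minSpeed: Double = 200,
                  maxDeviationRatio: Double = 0.5) -> Bool {
    let distance = (translation.width * translation.width
                  + translation.height * translation.height).squareRoot()
    guard distance >= minDistance, duration > 0 else { return false }
    guard distance / duration >= minSpeed else { return false }
    // Require the movement to be dominated by a single axis.
    let deviation = min(abs(translation.width), abs(translation.height))
        / max(abs(translation.width), abs(translation.height))
    return deviation <= maxDeviationRatio
}

print(isValidSwipe(translation: (120, 10), duration: 0.2)) // true: fast and straight
print(isValidSwipe(translation: (120, 10), duration: 2.0)) // false: too slow
```

Inside `onEnded`, such a check would gate the call to the user's closure, so that slow or diagonal drags are not reported as swipes.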

Example 2: Timed Press

2.1 Objective

Implement a gesture that records the duration of a press, with callbacks similar to onChanged occurring at specified intervals during the press. This example focuses on how to wrap gestures using view modifiers and the use of GestureState.

2.2 Approach

Use a timer to pass the current duration of the press to a closure at specified intervals. The GestureState is used to save the start time of the press, which is automatically cleared when the press ends.

2.3 Implementation

Swift
import Combine
import SwiftUI

public struct PressGestureViewModifier: ViewModifier {
    @GestureState private var startTimestamp: Date?
    @State private var timePublisher: Publishers.Autoconnect<Timer.TimerPublisher>
    private var onPressing: (TimeInterval) -> Void
    private var onEnded: () -> Void

    public init(interval: TimeInterval = 0.016, onPressing: @escaping (TimeInterval) -> Void, onEnded: @escaping () -> Void) {
        _timePublisher = State(wrappedValue: Timer.publish(every: interval, tolerance: nil, on: .current, in: .common).autoconnect())
        self.onPressing = onPressing
        self.onEnded = onEnded
    }

    public func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture(minimumDistance: 0, coordinateSpace: .local)
                    .updating($startTimestamp, body: { _, current, _ in
                        if current == nil {
                            current = Date()
                        }
                    })
                    .onEnded { _ in
                        onEnded()
                    }
            )
            .onReceive(timePublisher, perform: { currentDate in
                if let startTimestamp = startTimestamp {
                    let duration = currentDate.timeIntervalSince(startTimestamp)
                    onPressing(duration)
                }
            })
    }
}

public extension View {
    func onPress(interval: TimeInterval = 0.016, onPressing: @escaping (TimeInterval) -> Void, onEnded: @escaping () -> Void) -> some View {
        modifier(PressGestureViewModifier(interval: interval, onPressing: onPressing, onEnded: onEnded))
    }
}

2.4 Demonstration

Swift
struct PressGestureView: View {
    @State var scale: CGFloat = 1
    @State var duration: TimeInterval = 0
    var body: some View {
        VStack {
            Circle()
                .fill(scale == 1 ? .blue : .orange)
                .frame(width: 50, height: 50)
                .scaleEffect(scale)
                .overlay(Text(duration, format: .number.precision(.fractionLength(1))))
                .onPress { duration in
                    self.duration = duration
                    scale = 1 + duration * 2
                } onEnded: {
                    if duration > 1 {
                        withAnimation(.easeInOut(duration: 2)) {
                            scale = 1
                        }
                    } else {
                        withAnimation(.easeInOut) {
                            scale = 1
                        }
                    }
                    duration = 0
                }
        }
    }
}


2.5 Explanation

  • GestureState data is reset before onEnded, and by the time of onEnded, startTimestamp has already been reset to nil.
  • DragGesture remains the best implementation carrier. Gestures like TapGesture and LongPressGesture terminate automatically once the trigger conditions are met, making them unsuitable for supporting arbitrary durations.

2.6 Limitations and Improvements

The current solution does not limit positional displacement during a press (as LongPressGesture does), nor does it provide the total press duration in onEnded. Possible improvements:

  • Evaluate displacement in updating, and interrupt timing if the displacement exceeds a certain threshold. In updating, call the user-provided onEnded closure and mark it as called.
  • In the gesture’s onEnded, if the user-provided onEnded closure has already been called, it should not be called again.
  • Replace GestureState with State to allow the total duration to be provided in onEnded. This requires manually writing data recovery code for State.
  • By using State instead of GestureState, logical checks can be moved from updating to onChanged.
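The State-based bookkeeping suggested above can be sketched as a small helper that records the start time, cancels when displacement exceeds a threshold, and reports the total duration on finish. All names and thresholds here are illustrative assumptions:

```swift
import Foundation

// Sketch of manual press bookkeeping: track the start time and
// displacement so the total duration can be delivered when the press
// ends. The maximumDistance threshold is illustrative.
struct PressTracker {
    var startTimestamp: Date?
    var cancelled = false
    let maximumDistance: Double = 10

    mutating func update(translation: (width: Double, height: Double), now: Date) {
        if startTimestamp == nil { startTimestamp = now }
        let distance = (translation.width * translation.width
                      + translation.height * translation.height).squareRoot()
        if distance > maximumDistance { cancelled = true }
    }

    // Returns the total duration, or nil if the press was cancelled.
    mutating func finish(now: Date) -> TimeInterval? {
        defer { startTimestamp = nil; cancelled = false }
        guard let start = startTimestamp, !cancelled else { return nil }
        return now.timeIntervalSince(start)
    }
}

var tracker = PressTracker()
let start = Date()
tracker.update(translation: (2, 2), now: start)
print(tracker.finish(now: start.addingTimeInterval(1.5)) ?? -1) // approximately 1.5
```

In a real modifier, `update` would be driven from onChanged and `finish` from onEnded, with the result stored in State rather than GestureState.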

Example 3: Tap with Location Information

SwiftUI 4.0 introduced a new gesture, SpatialTapGesture, which provides the tap location directly. onTapGesture was also enhanced: the value in onChanged and onEnded now represents the tap location in a specified coordinate space (CGPoint).

3.1 Objective

Implement a tap gesture that provides touch location information (with support for setting the number of taps). This example primarily demonstrates the use of simultaneously and how to choose the appropriate callback timing (onEnded).

3.2 Approach

The response of the gesture should feel identical to that of TapGesture. Use simultaneously to combine two gestures, obtaining location data from DragGesture and exiting from TapGesture.

3.3 Implementation

Swift
public struct TapWithLocation: ViewModifier {
    @State private var locations: CGPoint?
    private let count: Int
    private let coordinateSpace: CoordinateSpace
    private var perform: (CGPoint) -> Void

    init(count: Int = 1, coordinateSpace: CoordinateSpace = .local, perform: @escaping (CGPoint) -> Void) {
        self.count = count
        self.coordinateSpace = coordinateSpace
        self.perform = perform
    }

    public func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture(minimumDistance: 0, coordinateSpace: coordinateSpace)
                    .onChanged { value in
                        locations = value.location
                    }
                    .simultaneously(with:
                        TapGesture(count: count)
                            .onEnded {
                                perform(locations ?? .zero)
                                locations = nil
                            }
                    )
            )
    }
}

public extension View {
    func onTapGesture(count: Int = 1, coordinateSpace: CoordinateSpace = .local, perform: @escaping (CGPoint) -> Void) -> some View {
        modifier(TapWithLocation(count: count, coordinateSpace: coordinateSpace, perform: perform))
    }
}

3.4 Demonstration

Swift
struct TapWithLocationView: View {
    @State var unitPoint: UnitPoint = .center
    var body: some View {
        Rectangle()
            .fill(RadialGradient(colors: [.yellow, .orange, .red, .pink], center: unitPoint, startRadius: 10, endRadius: 170))
            .frame(width: 300, height: 300)
            .onTapGesture(count: 2) { point in
                withAnimation(.easeInOut) {
                    unitPoint = UnitPoint(x: point.x / 300, y: point.y / 300)
                }
            }
    }
}


3.5 Explanation

  • When DragGesture’s minimumDistance is set to 0, its first data point is always generated before TapGesture (count: 1) can activate.
  • In simultaneously, there are three onEnded timings. The onEnded of Gesture 1, the onEnded of Gesture 2, and the onEnded of the combined gesture. In this example, we choose to call the user’s closure during TapGesture’s onEnded.

Integrating UIKit Gestures in SwiftUI

As mentioned earlier, while SwiftUI’s native gesture system is straightforward and user-friendly, it offers a limited variety of gestures. In certain complex scenarios, we might need to leverage UIKit to extend gesture capabilities to meet specific needs that are challenging to address with SwiftUI alone.

Implementing Two-Finger Touch

iPhones and iPads support complex multi-touch gestures, which have significant potential to enhance user experience. However, these capabilities are not fully utilized in SwiftUI. To enable specific views to respond to two-finger touches, we can follow these steps:

  • Create a UIView capable of recognizing two-finger taps.
  • Utilize the UIViewRepresentable protocol to wrap it into a SwiftUI view.
  • Define a view extension to overlay the wrapped view onto the views that need to respond to two-finger touches.

Here is an example code that implements this functionality:

Swift
struct TwoFingerTapDemo: View {
  var body: some View {
    Rectangle()
      .foregroundStyle(.orange)
      .frame(width: 200, height: 200)
      .onTwoFingerTap {
        print("two touches")
      }
      .onTapGesture {
        print("One Touch")
      }
  }
}

extension View {
  func onTwoFingerTap(perform action: @escaping () -> Void) -> some View {
    overlay(
      TwoFingerTapLayer(action: action)
    )
  }
}

struct TwoFingerTapLayer: UIViewRepresentable {
  let action: () -> Void
  init(action: @escaping () -> Void) {
    self.action = action
  }

  func makeUIView(context _: Context) -> some UIView {
    let view = TwoFingerTapUIView(action: action)
    view.backgroundColor = .clear
    return view
  }

  func updateUIView(_: UIViewType, context _: Context) {}
}

class TwoFingerTapUIView: UIView {
  var gesture: UITapGestureRecognizer!
  let action: () -> Void
  init(action: @escaping () -> Void) {
    self.action = action
    super.init(frame: .zero)
    setupGesture()
  }

  required init?(coder _: NSCoder) {
    fatalError("init(coder:) has not been implemented")
  }

  private func setupGesture() {
    gesture = UITapGestureRecognizer(target: self, action: #selector(handleGesture))
    gesture.numberOfTouchesRequired = 2
    addGestureRecognizer(gesture)
  }

  @objc private func handleGesture(gesture _: UITapGestureRecognizer) {
    action()
  }
}

Now, the orange rectangle on the screen responds to both one- and two-finger taps. Note, however, that native gestures like onTapGesture must be placed after our wrapped UIKit gesture to ensure they correctly receive and process user input.

Resolving Gesture Conflicts with Specific Components

In SwiftUI, developers often face gesture conflicts, especially when adding custom gestures to SwiftUI components that correspond to UIKit components with built-in gestures. In such cases, newly added gestures may conflict with the component’s existing gestures. For example, the following code attempts to add a LongPressGesture to a List, which then prevents the list from scrolling normally:

Swift
struct ListTapDemo: View {
  var body: some View {
    List(0 ..< 30) { i in
      Button("\(i)") {
        print(i)
      }
    }
    .gesture(LongPressGesture().onEnded { _ in
      print("List Long Press")
    })
  }
}

If you face such requirements prior to iOS 18, you can use the SwiftUI Introspect library. It gives developers access to the underlying UIKit components of SwiftUI views, allowing gestures to be added directly to those components and enabling complex behaviors such as a List that supports both scrolling and long presses.

Swift
import Foundation
import SwiftUI
import SwiftUIIntrospect

struct ListTapDemo: View {
  @State var coordinator: Coordinator?
  var body: some View {
    List(0 ..< 30) { i in
      Button("\(i)") {
        print(i)
      }
    }
    .introspect(.list, on: .iOS(.v17)) { list in
      DispatchQueue.main.async {
        self.coordinator = Coordinator(list: list) {
          print("Long Press")
        }
      }
    }
  }

  class Coordinator: NSObject {
    let list: UICollectionView
    let action: () -> Void

    init(list: UICollectionView, action: @escaping () -> Void) {
      self.list = list
      self.action = action
      super.init()
      let longPressGesture = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(gesture:)))
      list.addGestureRecognizer(longPressGesture)
    }

    @objc func handleLongPress(gesture: UILongPressGestureRecognizer) {
      if gesture.state == .ended {
        action()
      }
    }
  }
}

iOS 18: UIGestureRecognizer

At WWDC 2024, SwiftUI received numerous updates, with significant enhancements to its gesture capabilities. Apple optimized the underlying implementation of SwiftUI gestures, improving their integration with specific components like List, Form, and Map.

Now, we can use gestures in components such as List and Map that previously could cause conflicts, such as LongPressGesture:

Swift
struct ListTapDemo: View {
  var body: some View {
    List(0 ..< 30) { i in
      Button("\(i)") {
        print(i)
      }
    }
    .simultaneousGesture(LongPressGesture().onEnded { _ in
      print("Long Press")
    })
  }
}

Furthermore, at WWDC 2024, Apple introduced UIGestureRecognizerRepresentable to SwiftUI, which functions similarly to UIViewRepresentable. This new feature allows the conversion of UIKit gestures to SwiftUI gestures, which can then be directly applied to native SwiftUI views.

Implementing a two-finger tap has become simpler and more intuitive:

Swift
struct TwoFingerTapDemo: View {
  var body: some View {
    Rectangle()
      .foregroundStyle(.orange)
      .frame(width: 200, height: 200)
      .onTapGesture {
        print("One Touch")
      }
      .gesture(TwoFingerTapGesture {
        print("Two Touches")
      })
  }
}

struct TwoFingerTapGesture: UIGestureRecognizerRepresentable {
  let action: () -> Void
  func makeUIGestureRecognizer(context: Context) -> some UIGestureRecognizer {
    // Create the gesture recognizer
    let gesture = UITapGestureRecognizer()
    gesture.numberOfTouchesRequired = 2
    gesture.delegate = context.coordinator
    return gesture
  }

  func makeCoordinator(converter _: CoordinateSpaceConverter) -> Coordinator {
    Coordinator()
  }

  // Handle gesture information
  func handleUIGestureRecognizerAction(
    _ recognizer: UIGestureRecognizerType, context _: Context
  ) {
    switch recognizer.state {
    case .ended:
      action()
    default:
      break
    }
  }

  final class Coordinator: NSObject, UIGestureRecognizerDelegate {
    // Allow gestures to run concurrently
    @objc
    func gestureRecognizer(
      _: UIGestureRecognizer,
      shouldRecognizeSimultaneouslyWith _: UIGestureRecognizer
    ) -> Bool {
      true
    }
  }
}

Since gestures wrapped with UIGestureRecognizerRepresentable perform identically to native SwiftUI gestures, there is no need to adjust the order of use between other gestures and the wrapped ones.

Summary

Before the iOS 18 update, SwiftUI’s gesture system, although easy to use, was relatively limited in functionality. Complex gesture logic often required the use of technical methods, combining UIKit gestures to achieve the desired effects. From iOS 18 onwards, Apple has optimized the underlying implementation of native gestures and introduced more convenient integration methods for UIKit gestures, greatly expanding the possibilities of gestures and ensuring that a lack of gesture capabilities is no longer a barrier for SwiftUI developers.
