<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[AppCoda]]></title><description><![CDATA[AppCoda is one of the leading iOS programming communities. Our goal is to empower everyone to create apps through easy-to-understand tutorials. Learn by doing is the heart of our learning materials. ]]></description><link>https://www.appcoda.com/</link><image><url>https://www.appcoda.com/favicon.png</url><title>AppCoda</title><link>https://www.appcoda.com/</link></image><generator>Ghost 5.83</generator><lastBuildDate>Mon, 13 Apr 2026 03:32:07 GMT</lastBuildDate><atom:link href="https://www.appcoda.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Announcing Mastering SwiftUI 7 for iOS 26 and Xcode 26]]></title><description><![CDATA[<p>We&#x2019;re excited to announce the release of Mastering SwiftUI 7, fully updated for iOS 26 and Xcode 26.</p><p>This latest edition reflects the most recent SwiftUI APIs and development tools, ensuring your learning stays current with the evolving Apple ecosystem.</p><p>In addition to the complete update, we&#x2019;</p>]]></description><link>https://www.appcoda.com/announcing-mastering-swiftui-7-for-ios-26-and-xcode-26/</link><guid isPermaLink="false">68f5fe01533ef0148a6d05ea</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 16 Jan 2026 03:18:50 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/2025/10/gumroad-swiftui-book-ios26-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.appcoda.com/content/images/2025/10/gumroad-swiftui-book-ios26-1.png" alt="Announcing Mastering SwiftUI 7 for iOS 26 and Xcode 26"><p>We&#x2019;re excited to announce the release of Mastering SwiftUI 7, fully updated for iOS 26 
and Xcode 26.</p><p>This latest edition reflects the most recent SwiftUI APIs and development tools, ensuring your learning stays current with the evolving Apple ecosystem.</p><p>In addition to the complete update, we&#x2019;ve added several new chapters covering some of the latest features introduced in iOS 26:</p><ul><li>Exploring WebView and WebPage in SwiftUI for iOS 26</li><li>Getting Started with Foundation Models in iOS 26</li><li>Working with @Generable and @Guide in Foundation Models</li><li>Using Tool Calling to Supercharge Foundation Models</li><li>Developing Siri Shortcuts with App Intents</li><li>Developing Live Activities in SwiftUI Apps</li></ul><p>You can preview the latest content in the sample book&#xA0;<a href="https://www.appcoda.com/learnswiftui" rel="noreferrer">here</a>.<br>&#x200B;</p>]]></content:encoded></item><item><title><![CDATA[Developing Live Activities in SwiftUI Apps]]></title><description><![CDATA[<p>Live Activities, first introduced in&#xA0;iOS 16, are one of Apple&apos;s most exciting updates for creating apps that feel more connected to users in real time. 
Instead of requiring users to constantly reopen an app, <a href="https://developer.apple.com/design/human-interface-guidelines/live-activities?ref=appcoda.com" rel="noreferrer">Live Activities</a> let information remain visible right on the Lock Screen and</p>]]></description><link>https://www.appcoda.com/live-activities/</link><guid isPermaLink="false">68ac1e0b8c6022233889413e</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Mon, 25 Aug 2025 09:02:25 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1663373460374-d78ee5369fd5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGR5bmFtaWMlMjBpc2xhbmQlMjBpcGhvbmV8ZW58MHx8fHwxNzU2MTEyNDM3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1663373460374-d78ee5369fd5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGR5bmFtaWMlMjBpc2xhbmQlMjBpcGhvbmV8ZW58MHx8fHwxNzU2MTEyNDM3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Developing Live Activities in SwiftUI Apps"><p>Live Activities, first introduced in&#xA0;iOS 16, are one of Apple&apos;s most exciting updates for creating apps that feel more connected to users in real time. Instead of requiring users to constantly reopen an app, <a href="https://developer.apple.com/design/human-interface-guidelines/live-activities?ref=appcoda.com" rel="noreferrer">Live Activities</a> let information remain visible right on the Lock Screen and Dynamic Island. Whether it&apos;s tracking a food delivery, checking sports scores, or monitoring progress toward a goal, this feature keeps important updates just a glance away.</p><p>Later in&#xA0;iOS 17, Apple expanded Live Activities even further by supporting push updates from the server side, which makes them even more powerful for apps that rely on real-time information. 
But even without server-driven updates, Live Activities are incredibly useful for client-side apps that want to boost engagement and provide timely feedback.</p><p>In this tutorial, we&apos;ll explore how to implement Live Activities by building a&#xA0;Water Tracker app. The app allows users to log their daily water intake and instantly see their progress update on the Lock Screen or Dynamic Island. By the end of the tutorial, you&apos;ll understand how to integrate Live Activities into your <a href="https://www.appcoda.com/swiftui" rel="noreferrer">SwiftUI</a> apps.</p><h2 id="a-quick-look-at-the-demo-app">A Quick Look at the Demo App</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-demo-app.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1920" height="1286" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-demo-app.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-demo-app.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-demo-app.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-demo-app.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>Our demo app,&#xA0;Water Tracker, is a simple and fun way to keep track of your daily water intake. You&#x2019;ve probably heard the advice that drinking eight glasses of water a day is a good habit, and this app helps you stay mindful of that goal. The design is minimal on purpose: there&apos;s a circular progress bar showing how far along you are, and every time you tap the&#xA0;<em>Add Glass</em>&#xA0;button, the counter goes up by one and the progress bar fills a little more.</p><p>Behind the scenes, the app uses a&#xA0;<code>WaterTracker</code>&#xA0;class to manage the logic. 
This class keeps track of how many glasses you&#x2019;ve already logged and what your daily goal is, so the UI always reflects your current progress. Here&#x2019;s the code that makes it work:</p><pre><code class="language-swift">import Observation

@Observable
class WaterTracker {
    var currentGlasses: Int = 0
    var dailyGoal: Int = 8
    
    func addGlass() {
        guard currentGlasses &lt; dailyGoal else { return }
            
        currentGlasses += 1
    }
    
    func resetDaily() {
        currentGlasses = 0
    }
    
    var progress: Double {
        Double(currentGlasses) / Double(dailyGoal)
    }
    
    var isGoalReached: Bool {
        currentGlasses &gt;= dailyGoal
    }
    
}
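// Illustrative usage (not part of the app code): a quick sketch of how the
// derived properties behave as glasses are logged.
// let tracker = WaterTracker()
// tracker.addGlass()
// tracker.progress        // 0.125 with the default 8-glass goal
// tracker.isGoalReached   // false until currentGlasses reaches dailyGoal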

</code></pre><p>We are now going to add Live Activities support to the app. Once implemented, users will be able to see their progress directly on the Lock Screen and in the Dynamic Island. The Live Activity will show the current water intake alongside the daily goal in a clear, simple way.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-lockscreen-island-1.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1938" height="1100" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-lockscreen-island-1.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-lockscreen-island-1.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-lockscreen-island-1.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-lockscreen-island-1.png 1938w" sizes="(min-width: 720px) 720px"></figure><h2 id="creating-a-widget-extension-for-live-activities">Creating a Widget Extension for Live Activities</h2><p>Live Activities are built as part of an app&apos;s widget extension, so the first step is to add a widget extension to your Xcode project.</p><p>In this demo, the project is called <em>WaterReminder</em>. To create the extension, select the project in Xcode, go to the menu bar, and choose File &gt; New &gt; Target.
When the template dialog appears, select Widget Extension, give it a name, and make sure to check the Include Live Activity option.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-add-widget.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1920" height="1286" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-add-widget.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-add-widget.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-add-widget.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-add-widget.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>When Xcode asks, be sure to activate the new scheme. It will then generate the widget extension for you, which appears as a new folder in the project navigator along with the starter code for the Live Activity and the widget.</p><p>We&#x2019;ll be rewriting the entire&#xA0;<code>WaterReminderWidgetLiveActivity.swift</code>&#xA0;file from scratch, so it&#x2019;s best to clear out all of its existing code before proceeding.</p><p>Since the Live Activity doesn&#x2019;t rely on the widget, you can optionally remove the <code>WaterReminderWidget.swift</code> file and update the <code>WaterReminderWidgetBundle</code> struct like this:</p><pre><code class="language-swift">struct WaterReminderWidgetBundle: WidgetBundle {
    var body: some Widget {
        WaterReminderWidgetLiveActivity()
    }
}
</code></pre><h2 id="defining-the-activityattributes-structure">Defining the ActivityAttributes Structure</h2><p>The&#xA0;<code>ActivityAttributes</code>&#xA0;protocol describes the content that appears in your Live Activity.&#xA0;We have to adopt the protocol and define the dynamic content of the activity.</p><p>Since this attributes structure is usually shared between the main app and the widget extension, I suggest creating a shared folder to host this Swift file. In the project folder, create a new folder named <code>Shared</code> and then create a new Swift file named <code>WaterReminderWidgetAttributes.swift</code>.</p><p>Update the content like this:</p><pre><code class="language-swift">import Foundation
import ActivityKit

struct WaterReminderWidgetAttributes: ActivityAttributes {
    public struct ContentState: Codable, Hashable {
        var currentGlasses: Int
        var dailyGoal: Int
    }
    
    var activityName: String
}

extension WaterReminderWidgetAttributes {
    static var preview: WaterReminderWidgetAttributes {
        WaterReminderWidgetAttributes(activityName: &quot;Water Reminder&quot;)
    }
}

extension WaterReminderWidgetAttributes.ContentState {
     static var sample: WaterReminderWidgetAttributes.ContentState {
        WaterReminderWidgetAttributes.ContentState(currentGlasses: 3, dailyGoal: 8)
     }
     
    static var goalReached: WaterReminderWidgetAttributes.ContentState {
        WaterReminderWidgetAttributes.ContentState(currentGlasses: 8, dailyGoal: 8)
     }
}
</code></pre><p>The&#xA0;<code>WaterReminderWidgetAttributes</code>&#xA0;struct adopts the&#xA0;<code>ActivityAttributes</code>&#xA0;protocol and includes an&#xA0;<code>activityName</code>&#xA0;property to identify the activity. To conform to the protocol, we define a nested&#xA0;<code>ContentState</code>&#xA0;struct, which holds the data displayed in the Live Activity&#x2014;specifically, the number of glasses consumed and the daily goal.</p><p>The extensions are used for SwiftUI previews, providing sample data for visualization.</p><p>Please note that the file&#x2019;s target membership should include both the main app and the widget extension. You can verify this in the file inspector.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-shared-target-membership.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1920" height="663" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-shared-target-membership.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-shared-target-membership.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-shared-target-membership.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-shared-target-membership.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="implementing-the-live-activity-view">Implementing the Live Activity View</h2><p>Next, let&#x2019;s implement the Live Activity view, which handles the user interface in its different presentations. Open the <code>WaterReminderWidgetLiveActivity.swift</code> file and write the code below:</p><pre><code class="language-swift">import ActivityKit
import WidgetKit
import SwiftUI

struct WaterReminderLiveActivityView: View {
    
    let context: ActivityViewContext&lt;WaterReminderWidgetAttributes&gt;
    
    var body: some View {
        VStack(alignment: .leading, spacing: 10) {
            HStack {
                Text(&quot;&#x1F4A7;&quot;)
                    .font(.title)
                Text(&quot;Water Reminder&quot;)
                    .font(.headline)
                    .fontWeight(.semibold)
                Spacer()
            }
            
            HStack {
                Text(&quot;Current: \(context.state.currentGlasses)&quot;)
                    .font(.title2)
                    .fontWeight(.bold)
                Spacer()
                Text(&quot;Goal: \(context.state.dailyGoal)&quot;)
                    .font(.title2)
            }
            
            // Progress bar
            Gauge(value: Double(context.state.currentGlasses), in: 0...Double(context.state.dailyGoal)) {
                EmptyView()
            }
            .gaugeStyle(.linearCapacity)
        }

    }
}
</code></pre><p>This view defines the main interface of the Live Activity, which appears on both the Lock Screen and the Dynamic Island. It displays a progress bar to visualize water intake, along with the current number of glasses consumed and the daily goal.</p><p>Next, create the <code>WaterReminderWidgetLiveActivity</code> struct like this:</p><pre><code class="language-swift">struct WaterReminderWidgetLiveActivity: Widget {
    var body: some WidgetConfiguration {
        ActivityConfiguration(for: WaterReminderWidgetAttributes.self) { context in
            // Lock screen/banner UI goes here
            WaterReminderLiveActivityView(context: context)
                .padding()
        } dynamicIsland: { context in
            DynamicIsland {
                // Expanded UI goes here. Compose it using one or more DynamicIslandExpandedRegion views.
                DynamicIslandExpandedRegion(.center) {
                    WaterReminderLiveActivityView(context: context)
                        .padding(.bottom)
                }
            } compactLeading: {
                Text(&quot;&#x1F4A7;&quot;)
                    .font(.title3)
            } compactTrailing: {
                
                if context.state.currentGlasses == context.state.dailyGoal {
                    Image(systemName: &quot;checkmark.circle&quot;)
                        .foregroundColor(.green)
                } else {
                    ZStack {
                        Circle()
                            .fill(Color.blue.opacity(0.2))
                            .frame(width: 24, height: 24)
                        
                        Text(&quot;\(context.state.dailyGoal - context.state.currentGlasses)&quot;)
                            .font(.caption2)
                            .fontWeight(.bold)
                            .foregroundColor(.blue)
                    }
                }

            } minimal: {
                Text(&quot;&#x1F4A7;&quot;)
                    .font(.title2)
            }
        }
    }
}

</code></pre><p>The code above defines the Live Activity widget configuration for the app. In other words, this is where you configure how the Live Activity should appear in each context.</p><p>To keep it simple, we display the same live activity view on the Lock Screen and Dynamic Island.</p><p>The&#xA0;<code>dynamicIsland</code>&#xA0;closure specifies how the Live Activity should look inside the Dynamic Island. In the expanded view, the same&#xA0;<code>WaterReminderLiveActivityView</code>&#xA0;is shown in the center region. For the compact view, the leading side displays a water drop emoji, while the trailing side changes dynamically based on the progress: if the daily goal is reached, a green checkmark appears; otherwise, a small circular indicator shows how many glasses are left. In the minimal view, only the water drop emoji is displayed.</p><p>Lastly, let&#x2019;s add some preview code to render the Live Activity:</p><pre><code class="language-swift">#Preview(&quot;Notification&quot;, as: .content, using: WaterReminderWidgetAttributes.preview) {
   WaterReminderWidgetLiveActivity()
} contentStates: {
    WaterReminderWidgetAttributes.ContentState.sample
    WaterReminderWidgetAttributes.ContentState.goalReached
}

#Preview(&quot;Dynamic Island&quot;, as: .dynamicIsland(.expanded), using: WaterReminderWidgetAttributes.preview) {
    WaterReminderWidgetLiveActivity()
} contentStates: {
    WaterReminderWidgetAttributes.ContentState(currentGlasses: 3, dailyGoal: 8)
    
    WaterReminderWidgetAttributes.ContentState(currentGlasses: 8, dailyGoal: 8)
}


#Preview(&quot;Dynamic Island Compact&quot;, as: .dynamicIsland(.compact), using: WaterReminderWidgetAttributes.preview) {
    WaterReminderWidgetLiveActivity()
} contentStates: {
    WaterReminderWidgetAttributes.ContentState(currentGlasses: 5, dailyGoal: 8)
    
    WaterReminderWidgetAttributes.ContentState(currentGlasses: 8, dailyGoal: 8)
}</code></pre><p>Xcode lets you preview the Live Activity in different states without needing to run the app on a simulator or a real device. By setting up multiple preview snippets, you can quickly test how the Live Activity will look on both the Lock Screen and the Dynamic Island.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-swiftui-preview.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1920" height="1132" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-swiftui-preview.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-swiftui-preview.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-swiftui-preview.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-swiftui-preview.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="managing-live-activities">Managing Live Activities</h2><p>Now that we&#x2019;ve prepare the view of the live activity, what&#x2019;s left is to trigger it when the user taps the <em>Add Glass</em> button. To make our code more organized, we will create a helper class called <code>LiveActivityManager</code> to managing the live activity cycle.</p><pre><code class="language-swift">import Foundation
import ActivityKit
import SwiftUI

@Observable
class LiveActivityManager {
    private var liveActivity: Activity&lt;WaterReminderWidgetAttributes&gt;?
    
    var isLiveActivityActive: Bool {
        liveActivity != nil
    }
    
    // MARK: - Live Activity Management
    
    func startLiveActivity(currentGlasses: Int, dailyGoal: Int) {
        guard ActivityAuthorizationInfo().areActivitiesEnabled else {
            print(&quot;Live Activities are not enabled&quot;)
            return
        }
        
        // End any existing activity first
        endLiveActivity()
        
        let attributes = WaterReminderWidgetAttributes(activityName: &quot;Water Reminder&quot;)
        let contentState = WaterReminderWidgetAttributes.ContentState(
            currentGlasses: currentGlasses,
            dailyGoal: dailyGoal
        )
        
        do {
            liveActivity = try Activity&lt;WaterReminderWidgetAttributes&gt;.request(
                attributes: attributes,
                content: ActivityContent(state: contentState, staleDate: nil),
                pushType: nil
            )
            print(&quot;Live Activity started successfully&quot;)
        } catch {
            print(&quot;Error starting live activity: \(error)&quot;)
        }
    }
    
    func updateLiveActivity(currentGlasses: Int, dailyGoal: Int) {
        guard let liveActivity = liveActivity else { return }
        
        Task {
            let contentState = WaterReminderWidgetAttributes.ContentState(
                currentGlasses: currentGlasses,
                dailyGoal: dailyGoal
            )
            
            await liveActivity.update(ActivityContent(state: contentState, staleDate: nil))
            print(&quot;Live Activity updated: \(currentGlasses)/\(dailyGoal)&quot;)
        }
    }
    
    func endLiveActivity() {
        guard let liveActivity = liveActivity else { return }
        
        Task {
            await liveActivity.end(nil, dismissalPolicy: .immediate)
            self.liveActivity = nil
            print(&quot;Live Activity ended&quot;)
        }
    }

}
</code></pre><p>The code works with the <code>WaterReminderWidgetAttributes</code> struct we defined earlier to manage the state of the Live Activity.</p><p>When a new Live Activity starts, the code first checks whether Live Activities are enabled on the device and clears out any duplicates. It then configures the attributes and uses the&#xA0;<code>request</code>&#xA0;method to ask the system to create a new Live Activity.</p><p>Updating the Live Activity is straightforward: you simply update the content state of the attributes and call the&#xA0;<code>update</code>&#xA0;method on the Live Activity object.</p><p>Finally, the class includes a helper method to end the currently active Live Activity when needed.</p><h2 id="using-the-live-activity-manager">Using the Live Activity Manager</h2><p>With the live activity manager set up, we can now update the <code>WaterTracker</code> class to work with it. First, declare a property to hold the <code>LiveActivityManager</code> object in the class:</p><pre><code class="language-swift">let liveActivityManager = LiveActivityManager()
</code></pre><p>Next, update the <code>addGlass()</code> method like this:</p><pre><code class="language-swift">func addGlass() {
    guard currentGlasses &lt; dailyGoal else { return }
    
    currentGlasses += 1
    
    if currentGlasses == 1 {
        liveActivityManager.startLiveActivity(currentGlasses: currentGlasses, dailyGoal: dailyGoal)
    } else {
        liveActivityManager.updateLiveActivity(currentGlasses: currentGlasses, dailyGoal: dailyGoal)
    }
}
</code></pre><p>When the button is tapped for the first time, we call the <code>startLiveActivity</code> method to start a live activity. For subsequent taps, we simply update the content states of the live activity.</p><p>The live activity should be ended when the user taps the reset button. Therefore, update the <code>resetDaily</code> method like below:</p><pre><code class="language-swift">func resetDaily() {
    currentGlasses = 0
    
    liveActivityManager.endLiveActivity()
}
</code></pre><p>That&#x2019;s it! We&#x2019;ve completed all the code changes.</p><h2 id="updating-infoplist-to-enable-live-activities">Updating Info.plist to Enable Live Activities</h2><p>Before your app can start Live Activities, you have to add an entry called <em>Supports Live Activities</em> (<code>NSSupportsLiveActivities</code>) to the <code>Info.plist</code> file of the main app. Set the value to YES to enable Live Activities.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-infoplist.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1738" height="710" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-infoplist.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-infoplist.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-infoplist.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-infoplist.png 1738w" sizes="(min-width: 720px) 720px"></figure><p>Great! At this point, you can try out Live Activities either in the simulator or directly on a real device.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/liveactivities-dynamic-island.png" class="kg-image" alt="Developing Live Activities in SwiftUI Apps" loading="lazy" width="1626" height="706" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/liveactivities-dynamic-island.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/liveactivities-dynamic-island.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/liveactivities-dynamic-island.png 1600w, https://www.appcoda.com/content/images/2025/08/liveactivities-dynamic-island.png 1626w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>In this tutorial, we explored how to add Live Activities to SwiftUI apps. 
You&apos;ve learned how these features boost user engagement by delivering real-time information directly to the Lock Screen and the Dynamic Island, reducing the need for users to reopen your app. We covered the entire process, including creating the data model, designing the user interface, and managing the Live Activity lifecycle. We encourage you to integrate Live Activities into your current or future applications to provide a richer, more convenient user experience.</p>]]></content:encoded></item><item><title><![CDATA[Integrating Siri Shortcuts into SwiftUI Apps with App Intents]]></title><description><![CDATA[<p>Have you ever wondered how to make your app&#x2019;s features accessible from the built-in Shortcuts app on iOS? That&#x2019;s what the App Intents framework is designed for. Introduced in iOS 16 and macOS Ventura, the framework has been around for over two years. It provides developers</p>]]></description><link>https://www.appcoda.com/app-intents-shortcuts/</link><guid isPermaLink="false">689eebd78c60222338894111</guid><category><![CDATA[Swift]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 15 Aug 2025 09:36:44 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1726587912062-0e5b1be8a6ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE3fHxpcGhvbmUlMjAxNnxlbnwwfHx8fDE3NTUyNDYwNzN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1726587912062-0e5b1be8a6ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE3fHxpcGhvbmUlMjAxNnxlbnwwfHx8fDE3NTUyNDYwNzN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Integrating Siri Shortcuts into SwiftUI Apps with App Intents"><p>Have you ever wondered how to make your app&#x2019;s features accessible from the built-in Shortcuts app on iOS? That&#x2019;s what the App Intents framework is designed for. 
Introduced in iOS 16 and macOS Ventura, the framework has been around for over two years. It provides developers with a powerful way to define actions that users can trigger through Shortcuts. With <a href="https://developer.apple.com/documentation/appintents?ref=appcoda.com" rel="noreferrer">App Intents</a>, your app can integrate seamlessly with the Shortcuts app, Siri, and even system-wide Spotlight search.</p><p>In this tutorial, we&#x2019;ll explore how to use the App Intents framework to bring your app&#x2019;s functionality into Shortcuts by creating an App Shortcut. Using the Ask Me Anything app as our example, we&#x2019;ll walk through the process of letting users ask questions right from Shortcuts.</p><p>This tutorial assumes you&apos;re familiar with the Ask Me Anything app from our Foundation Models tutorial. If you haven&apos;t read it yet, please <a href="https://www.appcoda.com/foundation-models/" rel="noreferrer">review that tutorial</a> first.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/shortcut-demo-app.png" class="kg-image" alt="Integrating Siri Shortcuts into SwiftUI Apps with App Intents" loading="lazy" width="2000" height="1299" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/shortcut-demo-app.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/shortcut-demo-app.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/shortcut-demo-app.png 1600w, https://www.appcoda.com/content/images/2025/08/shortcut-demo-app.png 2112w" sizes="(min-width: 720px) 720px"></figure><h2 id="using-app-intents">Using App Intents</h2><p>The Ask Me Anything app allows users to ask questions and provides answers using the on-device LLM. We are going to expose this feature to the Shortcuts app. 
To do that, all you need to do is create a new struct that adopts the <code>AppIntent</code> protocol from the App Intents framework.</p><p>Let&#x2019;s create a new file named <code>AskQuestionIntent</code> in the AskMeAnything project and update its content like below:</p><pre><code class="language-swift">import SwiftUI
import AppIntents

struct AskQuestionIntent: AppIntent {
    static var title: LocalizedStringResource = &quot;Ask Question&quot;
    static var description = IntentDescription(&quot;Ask a question to get an AI-powered answer&quot;)
    
    static let supportedModes: IntentModes = .foreground
    
    @Parameter(title: &quot;Question&quot;, description: &quot;The question you want to ask&quot;)
    var question: String
    
    @AppStorage(&quot;incomingQuestion&quot;) var storedQuestion: String = &quot;&quot;
    
    init() {}
    
    init(question: String) {
        self.question = question
    }
    
    func perform() async throws -&gt; some IntentResult {
        storedQuestion = question
        
        return .result()
    }
}

</code></pre><p>The code above defines a struct called&#xA0;<code>AskQuestionIntent</code>, which is an&#xA0;<strong>App Intent</strong> using the&#xA0;<code>AppIntents</code>&#xA0;framework. An App Intent is basically a way for your app to &#x201C;talk&#x201D; to the Shortcuts app, Siri, or Spotlight. Here, the intent&#x2019;s job is to let a user ask a question and get an AI-powered answer.</p><p>At the top, we have two static properties:&#xA0;<code>title</code>&#xA0;and&#xA0;<code>description</code>. These are what the Shortcuts app or Siri will show the user when they look at this intent.</p><p>The&#xA0;<code>supportedModes</code>&#xA0;property specifies that this intent can only run in the foreground, meaning the app will open when the shortcut is executed.</p><p>The&#xA0;<code>@Parameter</code>&#xA0;property wrapper defines the input the user needs to give. In this case, it&apos;s a&#xA0;<code>question</code>&#xA0;string. When someone uses this shortcut, they&apos;ll be prompted to type or say this question.</p><p>The&#xA0;<code>@AppStorage(&quot;incomingQuestion&quot;)</code>&#xA0;property is a convenient way to persist the provided question in&#xA0;<code>UserDefaults</code>, making it accessible to other parts of the app.</p><p>Finally, the&#xA0;<code>perform()</code>&#xA0;function is where the intent actually does its work. In this example, it just takes the&#xA0;<code>question</code>&#xA0;from the parameter and saves it into&#xA0;<code>storedQuestion</code>. Then it returns a&#xA0;<code>.result()</code>&#xA0;to tell the system it&#x2019;s done. You&#x2019;re not doing the AI call directly here &#x2014; just passing the question into your app so it can handle it however it wants.</p><h2 id="handling-the-shortcut">Handling the Shortcut</h2><p>Now that the shortcut is ready, executing the &#x201C;Ask Question&#x201D; shortcut will automatically launch the app. 
To handle this behavior, we need to make a small update to&#xA0;<code>ContentView</code>.</p><p>First, declare a variable to retrieve the question provided by the shortcut like this:</p><pre><code class="language-swift">@AppStorage(&quot;incomingQuestion&quot;) private var incomingQuestion: String = &quot;&quot;</code></pre><p>Next, attach the&#xA0;onChange&#xA0;modifier to the scroll view:</p><pre><code class="language-swift">ScrollView {

...


}
.onChange(of: incomingQuestion) { _, newQuestion in
    if !newQuestion.isEmpty {
        question = newQuestion
        incomingQuestion = &quot;&quot;
        
        Task {
            await generateAnswer()
        }
    }
}</code></pre><p>In the code above, we attach an&#xA0;<code>.onChange</code>&#xA0;modifier to the&#xA0;<code>ScrollView</code>&#xA0;so the view can respond whenever the&#xA0;<code>incomingQuestion</code>&#xA0;value is updated. Inside the closure, we check whether a new question has been received from the shortcut. If so, we trigger the&#xA0;<code>generateAnswer()</code>&#xA0;method, which sends the question to the on-device LLM for processing and returns an AI-generated answer.</p><h2 id="adding-a-preconfigured-shortcut">Adding a Preconfigured Shortcut</h2><p>In essence, this is how you create a shortcut that connects directly to your app. If you&#x2019;ve explored the Shortcuts app before, you&#x2019;ve probably noticed that many apps already provide preconfigured shortcuts. For instance, the Calendar app includes ready-made shortcuts for creating and managing events.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/shortcut-app-preconfigured.png" class="kg-image" alt="Integrating Siri Shortcuts into SwiftUI Apps with App Intents" loading="lazy" width="2000" height="1303" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/shortcut-app-preconfigured.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/shortcut-app-preconfigured.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/shortcut-app-preconfigured.png 1600w, https://www.appcoda.com/content/images/2025/08/shortcut-app-preconfigured.png 2042w" sizes="(min-width: 720px) 720px"></figure><p>With the App Intents framework, adding these preconfigured shortcuts to your own app is straightforward. They can be used right away in the Shortcuts app or triggered hands-free with Siri. Building on the&#xA0;<code>AskQuestionIntent</code>&#xA0;we defined earlier, we can now create a corresponding shortcut so users can trigger it more easily. 
For example, here&#x2019;s how we could define an &#x201C;Ask Question&#x201D; shortcut:</p><pre><code class="language-swift">struct AskQuestionShortcut: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: AskQuestionIntent(),
            phrases: [
                &quot;Ask \(.applicationName) a question&quot;,
                &quot;Ask \(.applicationName) about \(.applicationName)&quot;,
                &quot;Get answer from \(.applicationName)&quot;,
                &quot;Use \(.applicationName)&quot;
            ],
            shortTitle: &quot;Ask Question&quot;,
            systemImageName: &quot;questionmark.bubble&quot;
        )
    }
}
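
// Optional (assumed) customization, declared inside AskQuestionShortcut,
// sets the tile color shown in the Shortcuts app:
// static var shortcutTileColor: ShortcutTileColor { .orange }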
</code></pre><p>The <code>AskQuestionShortcut</code> adopts the&#xA0;<code>AppShortcutsProvider</code>&#xA0;protocol, which is how we tell the system what shortcuts our app supports. Inside, we define a single shortcut called &#x201C;Ask Question,&#x201D; which is tied to our&#xA0;<code>AskQuestionIntent</code>. We also provide a set of example phrases that users might say to Siri, such as &#x201C;Ask [App Name] a question&#x201D; or &#x201C;Get answer from [App Name].&#x201D;</p><p>Finally, we give the shortcut a short title and a system image name so it&#x2019;s visually recognizable inside the Shortcuts app. Once this code is in place, the system automatically registers it, and users will see the shortcut ready to use&#x2014;no extra setup required.</p><h2 id="testing-the-shortcut">Testing the Shortcut</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/shortcut-spotlight-search.png" class="kg-image" alt="Integrating Siri Shortcuts into SwiftUI Apps with App Intents" loading="lazy" width="1920" height="1181" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/shortcut-spotlight-search.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/shortcut-spotlight-search.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/shortcut-spotlight-search.png 1600w, https://www.appcoda.com/content/images/2025/08/shortcut-spotlight-search.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>To give the shortcut a try, build and run the app on either the simulator or a physical iOS device. Once the app has launched at least once, return to the Home Screen and open the Shortcuts app. 
You should now find the &#x201C;Ask Question&#x201D; shortcut we just created, ready for you to use.</p><p>The new shortcut not only appears in the Shortcuts app but is also available in Spotlight search.</p><p>When you run the &#x201C;Ask Question&#x201D; shortcut, it should automatically prompt you for a question. Once you type your question and tap Done, it brings up the app and shows you the answer.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/08/shortcut-demo-homescreen.png" class="kg-image" alt="Integrating Siri Shortcuts into SwiftUI Apps with App Intents" loading="lazy" width="1882" height="922" srcset="https://www.appcoda.com/content/images/size/w600/2025/08/shortcut-demo-homescreen.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/08/shortcut-demo-homescreen.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/08/shortcut-demo-homescreen.png 1600w, https://www.appcoda.com/content/images/2025/08/shortcut-demo-homescreen.png 1882w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>In this tutorial, we explored how to use the App Intents framework to expose your app&#x2019;s functionality to the Shortcuts app and Siri. We walked through creating an&#xA0;<code>AppIntent</code>&#xA0;to handle user input, defining a preconfigured shortcut, and testing it right inside the Shortcuts app. With this setup, users can now ask questions to the Ask Me Anything app directly from Shortcuts or via Siri, making the experience faster and more convenient.</p><p>In the next tutorial, we&#x2019;ll take it a step further by showing you how to display the AI&#x2019;s answer in a Live Activity. 
This will let users see their responses in real time, right on the Lock Screen or in the Dynamic Island&#x2014;without even opening the app.</p>]]></content:encoded></item><item><title><![CDATA[Using Tool Calling to Supercharge Foundation Models]]></title><description><![CDATA[<p>In the <a href="https://www.appcoda.com/generable/" rel="noreferrer">previous tutorials</a>, we explored how Foundation Models work in iOS 26 and how you can start building AI-powered features using this new framework. We also introduced the&#xA0;<code>@Generable</code>&#xA0;macro, which makes it easy to convert generated responses into structured Swift types.</p><p>Now, in Part 3 of</p>]]></description><link>https://www.appcoda.com/tool-calling/</link><guid isPermaLink="false">68884aff8c602223388940f3</guid><category><![CDATA[AI]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Tue, 29 Jul 2025 04:47:46 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1730212426715-f0189e690149?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE4Nnx8aW9zJTIwMjZ8ZW58MHx8fHwxNzUzNzY0NDM1fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1730212426715-f0189e690149?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE4Nnx8aW9zJTIwMjZ8ZW58MHx8fHwxNzUzNzY0NDM1fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Using Tool Calling to Supercharge Foundation Models"><p>In the <a href="https://www.appcoda.com/generable/" rel="noreferrer">previous tutorials</a>, we explored how Foundation Models work in iOS 26 and how you can start building AI-powered features using this new framework. 
We also introduced the&#xA0;<code>@Generable</code>&#xA0;macro, which makes it easy to convert generated responses into structured Swift types.</p><p>Now, in Part 3 of the <a href="https://www.appcoda.com/foundation-models/" rel="noreferrer">Foundation Models</a> series, we&#x2019;ll dive into another powerful capability:&#xA0;Tool Calling&#xA0;&#x2014; a feature that lets the model interact with your app&#x2019;s functions to perform tasks, retrieve data, or trigger actions based on user input.</p><p>The on-device language model isn&#x2019;t capable of answering every type of question, especially those that require real-time data, like the current weather or the latest stock prices. In other cases, you might want the model to access your app&#x2019;s own data to respond accurately. That&#x2019;s where&#xA0;Tool Calling&#xA0;comes in: it allows the model to delegate specific tasks to your app&apos;s functions or external APIs.</p><p>In this tutorial, we&#x2019;ll extend the&#xA0;<strong>Ask Me Anything</strong>&#xA0;app. While the on-device model can handle general queries, it doesn&#x2019;t have access to up-to-date information about trending movies. 
To bridge that gap, we&#x2019;ll use Tool Calling to integrate with the&#xA0;<a href="https://www.themoviedb.org/?ref=appcoda.com" rel="noreferrer">The Movie Database (TMDB)</a>&#xA0;API, enabling the model to respond to movie-related questions using live data.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/tool-calling-trending-movies-demo.png" class="kg-image" alt="Using Tool Calling to Supercharge Foundation Models" loading="lazy" width="2000" height="1168" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/tool-calling-trending-movies-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/tool-calling-trending-movies-demo.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/tool-calling-trending-movies-demo.png 1600w, https://www.appcoda.com/content/images/2025/07/tool-calling-trending-movies-demo.png 2168w" sizes="(min-width: 720px) 720px"></figure><h2 id="using-tmdb-apis">Using TMDB APIs</h2><p>If you ask the&#xA0;Ask Me Anything&#xA0;app about trending movies, the on-device language model won&#x2019;t have the answer&#x2014;it simply doesn&#x2019;t have access to that kind of real-time information and may suggest checking other sources instead. Let&#x2019;s fix that using&#xA0;Tool Calling&#xA0;and the&#xA0;TMDB API. With this setup, whenever a user asks a movie-related question, the model won&#x2019;t respond with &#x201C;I don&#x2019;t know.&#x201D; Instead, it will automatically call the external API and return the relevant information directly in the app.</p><p>In the Xcode project, create a <code>MovieService</code> file and insert the following code:</p><pre><code class="language-swift">// Model for a Movie
struct Movie: Codable, Identifiable {
    let id: Int
    let title: String
    let overview: String
    
    // Coding keys to match API response
    enum CodingKeys: String, CodingKey {
        case id
        case title
        case overview
    }
}

// Model for the API response
struct TrendingMoviesResponse: Codable {
    let results: [Movie]
}
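
// These models mirror the relevant fields of TMDB's JSON payload, e.g.:
// { &quot;results&quot;: [ { &quot;id&quot;: 550, &quot;title&quot;: &quot;Fight Club&quot;, &quot;overview&quot;: &quot;...&quot; } ] }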

// Service class to fetch trending movies
class MovieService {
    // Base URL for TMDB API
    private let baseURL = &quot;https://api.themoviedb.org/3&quot;
    
    private let apiKey = &quot;&lt;your-api-key&gt;&quot;
    
    // Function to fetch trending movies using async/await
    func fetchTrendingMovies() async throws -&gt; [Movie] {
        
        // Construct the URL for trending movies
        let urlString = &quot;\(baseURL)/trending/movie/day?api_key=\(apiKey)&quot;
        guard let url = URL(string: urlString) else {
            throw URLError(.badURL)
        }
        
        // Perform the network request
        let (data, response) = try await URLSession.shared.data(from: url)
        
        // Check for valid HTTP response
        guard let httpResponse = response as? HTTPURLResponse,
              (200...299).contains(httpResponse.statusCode) else {
            throw URLError(.badServerResponse)
        }
        
        // Decode the JSON response
        let decoder = JSONDecoder()
        let trendingResponse = try decoder.decode(TrendingMoviesResponse.self, from: data)
        return trendingResponse.results
    }
}
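
// Usage sketch (illustrative):
// let movies = try await MovieService().fetchTrendingMovies()
// print(movies.map(\.title))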
</code></pre><p>Make sure you replace the value of <code>apiKey</code> with your own TMDB API key. If you haven&#x2019;t signed up yet, head over to <a href="http://themoviedb.org/?ref=appcoda.com">themoviedb.org</a> and register for a free account to get your API key.</p><p>The code above is fairly straightforward: it calls the web API to fetch trending movies, then parses the response and decodes it into an array of&#xA0;<code>Movie</code>&#xA0;objects.</p><h2 id="using-tool-calling-in-foundation-models">Using Tool Calling in Foundation Models</h2><p>Next, we&#x2019;ll use Tool Calling to trigger the code in&#xA0;<code>MovieService</code>&#xA0;whenever the user asks about trending movies. To get started, create a new file named&#xA0;<code>GetTrendingMoviesTool.swift</code>&#xA0;and add the following code:</p><pre><code class="language-swift">import FoundationModels

struct GetTrendingMoviesTool: Tool {
    let name = &quot;getTrendingMovies&quot;
    let description = &quot;Get trending movies and their information&quot;
    
    let service = MovieService()

    @Generable
    struct Arguments {
        // This tool needs no input, so Arguments is intentionally left empty
    }
    
    func call(arguments: Arguments) async throws -&gt; [String] {
        let movies = try await service.fetchTrendingMovies()
       
        let formattedMovies = movies.map { movie in
            &quot;\(movie.title): \(movie.overview)&quot;
        }
        
        return formattedMovies
    }
}

</code></pre><p>We define a&#xA0;<code>GetTrendingMoviesTool</code>&#xA0;struct that conforms to the&#xA0;<code>Tool</code>&#xA0;protocol &#x2014; this is the standard way to implement Tool Calling in the Foundation Models framework. The protocol requires you to specify a&#xA0;<code>name</code>&#xA0;and&#xA0;<code>description</code>&#xA0;for the tool, along with an&#xA0;<code>Arguments</code>&#xA0;struct to represent any parameters the tool might need. In this case, we don&#x2019;t require additional input, so we define an empty&#xA0;<code>Arguments</code>&#xA0;structure.</p><p>If you wanted to filter trending movies by genre, you could define&#xA0;<code>Arguments</code>&#xA0;like this:</p><pre><code class="language-swift">@Generable
struct Arguments {
    @Guide(description: &quot;The genre to fetch trending movies&quot;)
    var genre: String
}
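
// With this variant, call(arguments:) could forward the genre to the service.
// Sketch only; fetchTrendingMovies(genre:) is a hypothetical helper:
// func call(arguments: Arguments) async throws -&gt; [String] {
//     let movies = try await service.fetchTrendingMovies(genre: arguments.genre)
//     return movies.map { &quot;\($0.title): \($0.overview)&quot; }
// }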
</code></pre><p>When the tool is triggered by the model, the&#xA0;<code>call</code>&#xA0;method is automatically executed. Inside it, we call the&#xA0;<code>fetchTrendingMovies()</code>&#xA0;method from our service. After receiving the results, we format them to display each movie&#x2019;s title and overview.</p><p>With the trending movie tool in place, integrating it into your app is straightforward. Simply open <code>ContentView</code>&#xA0;and update the&#xA0;<code>LanguageModelSession</code>&#xA0;initialization as follows:</p><pre><code class="language-swift">@State private var session = LanguageModelSession(tools: [GetTrendingMoviesTool()])
</code></pre><p>You can provide custom tools by passing them through the&#xA0;<code>tools</code>&#xA0;parameter when initializing the language model session. That&#x2019;s it! The language model will automatically invoke&#xA0;<code>GetTrendingMoviesTool</code>&#xA0;whenever it detects a question related to trending movies.</p><p>Build and run the app, then try asking the same question again. This time, the model will successfully respond with trending movie information by invoking the tool.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/tool-calling-trending-movies-ans.png" class="kg-image" alt="Using Tool Calling to Supercharge Foundation Models" loading="lazy" width="1920" height="1232" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/tool-calling-trending-movies-ans.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/tool-calling-trending-movies-ans.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/tool-calling-trending-movies-ans.png 1600w, https://www.appcoda.com/content/images/2025/07/tool-calling-trending-movies-ans.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>In this tutorial, we explored tool calling, a powerful addition to the Foundation Models framework in iOS 26. Unlike basic text generation, tool calling enables the on-device language model to interact with your app&#x2019;s functions or access external services.</p><p>With tool calling, you can significantly extend the model&#x2019;s capabilities. 
Whether it&#x2019;s running custom logic or fetching real-time data through APIs, the model can now perform context-aware tasks beyond its built-in knowledge.</p><p>I hope you&#x2019;ve enjoyed this tutorial series and feel inspired to start building smarter, AI-powered features using the Foundation Models framework.</p>]]></content:encoded></item><item><title><![CDATA[Working with @Generable and @Guide in Foundation Models]]></title><description><![CDATA[<p>In the <a href="https://www.appcoda.com/foundation-models/" rel="noreferrer">previous tutorial</a>, we introduced the Foundation Models framework and demonstrated how to use it for basic content generation. That process was fairly straightforward &#x2014; you provide a prompt, wait a few seconds, and receive a response in natural language. In our example, we built a simple Q&amp;</p>]]></description><link>https://www.appcoda.com/generable/</link><guid isPermaLink="false">6879be778c602223388940e0</guid><category><![CDATA[AI]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 18 Jul 2025 04:12:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1750637215837-31d990af2495?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fGlvcyUyMDI2fGVufDB8fHx8MTc1MjgxMDgzN3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1750637215837-31d990af2495?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fGlvcyUyMDI2fGVufDB8fHx8MTc1MjgxMDgzN3ww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Working with @Generable and @Guide in Foundation Models"><p>In the <a href="https://www.appcoda.com/foundation-models/" rel="noreferrer">previous tutorial</a>, we introduced the Foundation Models framework and demonstrated how to use it for basic content generation. 
That process was fairly straightforward &#x2014; you provide a prompt, wait a few seconds, and receive a response in natural language. In our example, we built a simple Q&amp;A app where users could ask any question, and the app displayed the generated text directly.</p><p>But what if the response is more complex &#x2014; and you need to convert the unstructured text into a structured object?</p><p>For example, suppose you ask the model to generate a recipe, and you want to turn that response into a&#xA0;<code>Recipe</code>&#xA0;object with properties like&#xA0;<code>name</code>,&#xA0;<code>ingredients</code>, and&#xA0;<code>instructions</code>.</p><p>Do you need to manually parse the text and map each part to your data model?</p><p>The Foundation Models framework in <a href="https://www.apple.com/hk/en/newsroom/2025/06/apple-elevates-the-iphone-experience-with-ios-26/?ref=appcoda.com" rel="noreferrer">iOS 26</a> provides two powerful new macros called <code>Generable</code> and <code>@Guide</code> to help developers simplify this process.</p><p>In this tutorial, we&#x2019;ll explore how these macros work and how you can use them to generate structured data directly from model output.</p><h2 id="the-demo-app">The Demo App</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/generable-macro-demo-app.png" class="kg-image" alt="Working with @Generable and @Guide in Foundation Models" loading="lazy" width="1768" height="1174" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/generable-macro-demo-app.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/generable-macro-demo-app.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/generable-macro-demo-app.png 1600w, https://www.appcoda.com/content/images/2025/07/generable-macro-demo-app.png 1768w" sizes="(min-width: 720px) 720px"></figure><p>We will build a simple Quiz app that demonstrates how to use Foundation Models to generate 
structured content. In this case, it&#x2019;s the vocabulary questions for English learners.</p><p>The app displays a multiple-choice question with four answer options, allowing users to test their knowledge interactively. Each question is generated by the on-device language model and automatically parsed into a Swift struct using the <code>@Generable</code> macro.</p><p>This demo app shows how developers can move beyond basic text generation and use Foundation Models to create structured content.</p><h2 id="using-generable-and-guide">Using @Generable and @Guide</h2><p>Let&#x2019;s get started with building the demo app. As said before, unlike the previous Q&amp;A demo, this quiz app presents a multiple-choice question with several answer options. To represent the question, we&#x2019;ll define the following structure in Swift:</p><pre><code class="language-swift">struct Question {
    let text: String
    let choices: [String]
    let answer: String
    let explanation: String
}
</code></pre><p>Later, we will ask the on-device language model to generate quiz questions. The challenge is how we can convert the model&#x2019;s unstructured text response into a usable <code>Question</code> object. Fortunately, the Foundation Models framework introduces the <code>@Generable</code> macro to simplify the conversion process.</p><p>To enable automatic conversion, simply mark your struct with&#xA0;<code>@Generable</code>, like this:</p><pre><code class="language-swift">import FoundationModels

@Generable
struct Question {
    @Guide(description: &quot;The quiz question&quot;)
    let text: String
    @Guide(.count(4))
    let choices: [String]
    let answer: String
    @Guide(description: &quot;A brief explanation of why the answer is correct.&quot;)
    let explanation: String
}
</code></pre><p>The framework also introduces the <code>@Guide</code> macro, which allows developers to provide specific instructions to the language model when generating properties. For instance, to specify that each question should have exactly 4 choices, you can use <code>@Guide(.count(4))</code> on the <code>choices</code> array property.</p><p>For arrays, besides controlling the exact number of elements, you can also use the following guides:</p><pre><code class="language-swift">.minimumCount(3)
.maximumCount(100)
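
// Guides can also constrain string values, e.g. restricting a property
// to a fixed set of options (assumed usage):
// @Guide(.anyOf([&quot;easy&quot;, &quot;medium&quot;, &quot;hard&quot;]))
// var difficulty: String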
</code></pre><p>You can also add a descriptive explanation to a property to give the language model more context about the kind of data it should generate. This helps ensure the output is more accurate and aligned with your expectations.</p><p>It&#x2019;s important to pay attention to the order in which properties are declared. When using a&#xA0;<code>Generable</code>&#xA0;type, the language model generates values sequentially based on the order of the properties in your code. This becomes especially important when one property&apos;s value relies on another. For example, in the code above, the&#xA0;<code>explanation</code>&#xA0;property depends on the&#xA0;<code>answer</code>, so it should be declared after the&#xA0;<code>answer</code>&#xA0;to ensure it references the correct context.</p><h2 id="building-the-quiz-app">Building the Quiz App</h2><p>With the <code>Question</code> structure ready, we dive into the implementation of the Quiz app. Switch back to <code>ContentView</code> and update the code like this:</p><pre><code class="language-swift">import FoundationModels

struct ContentView: View {
    
    @State private var session = LanguageModelSession(instructions: &quot;You are a high school English teacher.&quot;)
    
    @State private var question: Question?
    
    var body: some View {
        VStack(spacing: 20) {
            
            if let question {
                QuestionView(question: question)
            } else {
                ProgressView(&quot;Generating questions ...&quot;)
            }
            
            Spacer()
            
            Button(&quot;Next Question&quot;) {
                Task {
                    do {
                        question = nil
                        question = try await generateQuestion()
                    } catch {
                        print(error)
                    }
                }
            }
            .padding()
            .frame(maxWidth: .infinity)
            .background(Color.green.opacity(0.18))
            .foregroundStyle(.green)
            .font(.headline)
            .cornerRadius(10)

        }
        .padding(.horizontal)
        .task {
            do {
                question = try await generateQuestion()
            } catch {
                print(error)
            }
        }
    }
    
    func generateQuestion() async throws -&gt; Question {
        
        let response = try await session.respond(to: &quot;Create a vocabulary quiz for high school students. Generate one multiple-choice question that tests vocabulary knowledge.&quot;, generating: Question.self)
        
        return response.content
    }
}
</code></pre><p>The user interface code for this app is simple and easy to follow. What&#x2019;s worth highlighting, however, is how we integrate the Foundation Models framework to generate quiz questions. In the example above, we create a&#xA0;<code>LanguageModelSession</code>&#xA0;and provide it with a clear instruction, asking the language model to take on the role of an English teacher.</p><p>To generate a question, we use the session&#x2019;s&#xA0;<code>respond</code>&#xA0;method and specify the expected response type using the&#xA0;<code>generating</code>parameter. The session then automatically produces a response and maps the result into a&#xA0;<code>Question</code>&#xA0;object, saving you from having to parse and structure the data manually.</p><p>Next, we&#x2019;ll implement the&#xA0;<code>QuestionView</code>, which is responsible for displaying the generated quiz question, handling user interaction, and verifying the selected answer. Add the following view definition inside your&#xA0;<code>ContentView</code>&#xA0;file:</p><pre><code class="language-swift">struct QuestionView: View {
    let question: Question
    
    @State private var selectedAnswer: String? = nil
    @State private var didAnswer: Bool = false

    var body: some View {
        ScrollView {
            VStack(alignment: .leading) {
                Text(question.text)
                    .font(.title)
                    .fontWeight(.semibold)
                    .padding(.vertical)
                
                VStack(spacing: 12) {
                    ForEach(question.choices, id: \.self) { choice in
                        
                        Button {
                            if !didAnswer {
                                selectedAnswer = choice
                                didAnswer = true
                            }

                        } label: {
                            if !didAnswer {
                                Text(choice)
                            } else {
                                HStack {
                                    if choice == question.answer {
                                        Text(&quot;&#x2705;&quot;)
                                    } else if selectedAnswer == choice {
                                        Text(&quot;&#x274C;&quot;)
                                    }
                                    
                                    Text(choice)
                                }
                            }
                        }
                        .disabled(didAnswer)
                        .padding()
                        .frame(maxWidth: .infinity)
                        .background(
                            Color.blue.opacity(0.15)
                        )
                        .foregroundStyle(.blue)
                        .font(.title3)
                        .cornerRadius(12)

                    }
                }
                
                if didAnswer {
                    
                    VStack(alignment: .leading, spacing: 10) {
                        Text(&quot;The correct answer is \(question.answer)&quot;)
                        
                        Text(question.explanation)
                    }
                    .font(.title3)
                    .padding(.top)
                }
            }
            

        }
    }
    
}
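
// A canvas preview with sample data might look like this (illustrative;
// assumes the memberwise initializer remains available with @Generable):
// #Preview {
//     QuestionView(question: Question(
//         text: &quot;What does 'ubiquitous' mean?&quot;,
//         choices: [&quot;Rare&quot;, &quot;Present everywhere&quot;, &quot;Hidden&quot;, &quot;Loud&quot;],
//         answer: &quot;Present everywhere&quot;,
//         explanation: &quot;'Ubiquitous' means found everywhere.&quot;
//     ))
// }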
</code></pre><p>This view presents the question text at the top, followed by four answer choices rendered as tappable buttons. When the user selects an answer, the view checks if it&#x2019;s correct and displays visual feedback using emojis (&#x2705; or &#x274C;). Once answered, the correct answer and an explanation are shown below. The&#xA0;<code>@State</code>&#xA0;properties track the selected answer and whether the question has been answered, allowing the UI to update reactively.</p><p>Once you&apos;ve implemented all the necessary changes, you can test the app in the Preview canvas. You should see a generated vocabulary question like the one shown below, complete with four answer choices. After selecting an answer, the app provides immediate visual feedback and an explanation.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/generable-macro-demo-test.png" class="kg-image" alt="Working with @Generable and @Guide in Foundation Models" loading="lazy" width="1732" height="1252" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/generable-macro-demo-test.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/generable-macro-demo-test.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/generable-macro-demo-test.png 1600w, https://www.appcoda.com/content/images/2025/07/generable-macro-demo-test.png 1732w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>In this tutorial, we explored how to use the Foundation Models framework in iOS 26 to generate structured content with Swift. 
By building a simple vocabulary quiz app, we demonstrated how the new&#xA0;<code>@Generable</code>&#xA0;and&#xA0;<code>@Guide</code>&#xA0;macros can turn unstructured language model responses into typed Swift structs.</p><p>Stay tuned &#x2014; in the next tutorial, we&#x2019;ll dive into another powerful feature of the Foundation Models framework.</p>]]></content:encoded></item><item><title><![CDATA[Getting Started with Foundation Models in iOS 26]]></title><description><![CDATA[<p>With iOS 26, Apple introduces the Foundation Models framework, a privacy-first, on-device AI toolkit that brings the same language models behind Apple Intelligence right into your apps. This framework is available across Apple platforms, including iOS, macOS, iPadOS, and visionOS, and it provides developers with a streamlined Swift API for</p>]]></description><link>https://www.appcoda.com/foundation-models/</link><guid isPermaLink="false">686f947d0995e71e7964f1b6</guid><category><![CDATA[AI]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Thu, 10 Jul 2025 10:32:53 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1601836211377-4112e1c147ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGlvcyUyMDJ8ZW58MHx8fHwxNzUyMTQzMTc3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1601836211377-4112e1c147ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGlvcyUyMDJ8ZW58MHx8fHwxNzUyMTQzMTc3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Getting Started with Foundation Models in iOS 26"><p>With iOS 26, Apple introduces the Foundation Models framework, a privacy-first, on-device AI toolkit that brings the same language models behind Apple Intelligence right into your apps. 
This framework is available across Apple platforms, including iOS, macOS, iPadOS, and visionOS, and it provides developers with a streamlined Swift API for integrating advanced AI features directly into your apps.</p><p>Unlike cloud-based LLMs such as ChatGPT or Claude, which run on powerful servers and require internet access, Apple&#x2019;s LLM is designed to run entirely on-device. This architectural difference gives it a unique advantage: all data stays on the user&#x2019;s device, ensuring privacy, lower latency, and offline access.</p><p>This framework opens the door to a whole range of intelligent features you can build right out of the box. You can generate and summarize content, classify information, or even build in semantic search and personalized learning experiences. Whether you want to create a smart in-app guide, generate unique content for each user, or add a conversational assistant, you can now do it with just a few lines of Swift code.</p><p>In this tutorial, we&#x2019;ll explore the <a href="https://developer.apple.com/documentation/foundationmodels?ref=appcoda.com" rel="noreferrer">Foundation Models framework</a>. You&#x2019;ll learn what it is, how it works, and how to use it to generate content using Apple&#x2019;s on-device language models.</p><p>To follow along, make sure you have Xcode 26 installed, and that your Mac is running macOS Tahoe, which is required to access the Foundation Models framework.</p><p>Ready to get started? 
Let&#x2019;s dive in.</p><h2 id="the-demo-app-ask-me-anything">The Demo App: Ask Me Anything</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/foundation-models-demo-app.png" class="kg-image" alt="Getting Started with Foundation Models in iOS 26" loading="lazy" width="1920" height="1245" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/foundation-models-demo-app.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/foundation-models-demo-app.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/foundation-models-demo-app.png 1600w, https://www.appcoda.com/content/images/2025/07/foundation-models-demo-app.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>It&#x2019;s always great to learn new frameworks or APIs by building a demo app &#x2014; and that&#x2019;s exactly what we&#x2019;ll do in this tutorial. We&#x2019;ll create a simple yet powerful app called&#xA0;<strong>Ask Me Anything</strong>&#xA0;to explore how Apple&#x2019;s new&#xA0;Foundation Models&#xA0;framework works in iOS 26.</p><p>The app lets users type in any questions and provides an AI-generated response, all processed on-device using Apple&#x2019;s built-in LLM.</p><p>By building this demo app, you&apos;ll learn how to integrate the Foundation Models framework into a <a href="https://www.appcoda.com/swiftui" rel="noreferrer">SwiftUI</a> app. You&apos;ll also understand how to create prompts and capture both full and partial generated responses.</p><h2 id="using-the-default-system-language-model">Using the Default System Language Model</h2><p>Apple provides a built-in model called&#xA0;<code>SystemLanguageModel</code>, which gives you access to the on-device foundation model that powers Apple Intelligence. For general-purpose use, you can access the&#xA0;<strong>base version</strong>&#xA0;of this model via the&#xA0;<code>default</code>&#xA0;property. 
It&#x2019;s optimized for text generation tasks and serves as a great starting point for building features like content generation or question answering in your app.</p><p>To use it in your app, you&apos;ll first need to import the&#xA0;<code>FoundationModels</code>&#xA0;framework:</p><pre><code class="language-swift">import FoundationModels
</code></pre><p>With the framework now imported, you can get a handle on the default system language model. Here&#x2019;s the sample code to do that:</p><pre><code class="language-swift">struct ContentView: View {
    
    private var model = SystemLanguageModel.default
    
    var body: some View {
        switch model.availability {
        case .available:
            mainView
        case .unavailable(let reason):
            Text(unavailableMessage(reason))
        }
    }
    
    private var mainView: some View {
        ScrollView {
            .
            .
            .
        }
    }

    private func unavailableMessage(_ reason: SystemLanguageModel.Availability.UnavailableReason) -&gt; String {
        switch reason {
        case .deviceNotEligible:
            return &quot;The device is not eligible for using Apple Intelligence.&quot;
        case .appleIntelligenceNotEnabled:
            return &quot;Apple Intelligence is not enabled on this device.&quot;
        case .modelNotReady:
            return &quot;The model isn&apos;t ready because it&apos;s downloading or because of other system reasons.&quot;
        @unknown default:
            return &quot;The model is unavailable for an unknown reason.&quot;
        }
    }
}
</code></pre><p>Since Foundation Models only work on devices with Apple Intelligence enabled, it&apos;s important to verify that a model is available before using it. You can check its readiness by inspecting the <code>availability</code> property.</p><h2 id="implementing-the-ui">Implementing the UI</h2><p>Let&#x2019;s continue to build the UI of the <code>mainView</code>. We first add two state variables to store the user question and the generated answer:</p><pre><code class="language-swift">@State private var answer: String = &quot;&quot;
@State private var question: String = &quot;&quot;
</code></pre><p>For the UI implementation, update the <code>mainView</code> like this:</p><pre><code class="language-swift">private var mainView: some View {
    ScrollView {
        VStack {
            Text(&quot;Ask Me Anything&quot;)
                .font(.system(.largeTitle, design: .rounded, weight: .bold))
            
            TextField(&quot;&quot;, text: $question, prompt: Text(&quot;Type your question here&quot;), axis: .vertical)
                .lineLimit(3...5)
                .padding()
                .background {
                    Color(.systemGray6)
                }
                .font(.system(.title2, design: .rounded))
            
            Button {

            } label: {
                Text(&quot;Get answer&quot;)
                    .frame(maxWidth: .infinity)
                    .font(.headline)
            }
            .buttonStyle(.borderedProminent)
            .controlSize(.extraLarge)
            .padding(.top)
            
            Rectangle()
                .frame(height: 1)
                .foregroundColor(Color(.systemGray5))
                .padding(.vertical)
            
            Text(LocalizedStringKey(answer))
                .font(.system(.body, design: .rounded))
        }
        .padding()
    }
}
</code></pre><p>The implementation is pretty straightforward: I just added a touch of basic styling to the text field and button.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/foundation-models-app-ui.png" class="kg-image" alt="Getting Started with Foundation Models in iOS 26" loading="lazy" width="1920" height="974" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/foundation-models-app-ui.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/foundation-models-app-ui.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/foundation-models-app-ui.png 1600w, https://www.appcoda.com/content/images/2025/07/foundation-models-app-ui.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="generating-responses-with-the-language-model">Generating Responses with the Language Model</h2><p>Now we&#x2019;ve come to the core part of the app: sending the question to the model and generating the response. To handle this, we create a new function called <code>generateAnswer()</code>:</p><pre><code class="language-swift">private func generateAnswer() async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: question)
        answer = response.content
    } catch {
        answer = &quot;Failed to answer the question: \(error.localizedDescription)&quot;
    }
}
</code></pre><p>As you can see, it only takes a few lines of code to send a question to the model and receive a generated response. First, we create a session using the default system language model. Then, we pass the user&#x2019;s question, which is known as a&#xA0;<em>prompt</em>, to the model using the&#xA0;<code>respond</code>&#xA0;method.</p><p>The call is asynchronous as it usually takes a few seconds (or even longer) for the model to generate the response. Once the response is ready, we can access the generated text through the&#xA0;<code>content</code>&#xA0;property and assign it to <code>answer</code> for display.</p><p>To invoke this new function, we also need to update the closure of the &#x201C;Get Answer&#x201D; button like this:</p><pre><code class="language-swift">Button {
    Task {
        await generateAnswer()
    }
} label: {
    Text(&quot;Get answer&quot;)
        .frame(maxWidth: .infinity)
        .font(.headline)
}
</code></pre><p>You can test the app directly in the preview pane, or run it in the simulator. Just type in a question, wait a few seconds, and the app will generate a response for you.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/foundation-models-ask-first-question.png" class="kg-image" alt="Getting Started with Foundation Models in iOS 26" loading="lazy" width="1920" height="1162" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/foundation-models-ask-first-question.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/foundation-models-ask-first-question.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/foundation-models-ask-first-question.png 1600w, https://www.appcoda.com/content/images/2025/07/foundation-models-ask-first-question.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="reusing-the-session">Reusing the Session</h2><p>The code above creates a new session for each question, which works well when the questions are unrelated.</p><p>But what if you want users to ask follow-up questions and keep the context? In that case, you can simply reuse the same session each time you call the model.</p><p>For our demo app, we can move the <code>session</code> variable out of the <code>generateAnswer()</code> function and turn it into a state variable:</p><pre><code class="language-swift">@State private var session = LanguageModelSession()
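</code></pre><p>With the session promoted to a state variable, the <code>generateAnswer()</code> function no longer creates its own session. Here&#x2019;s a minimal sketch of the updated function (the only change from the earlier version is the removal of the local session):</p><pre><code class="language-swift">private func generateAnswer() async {
    do {
        // Reuse the shared session so the model keeps the conversation context
        let response = try await session.respond(to: question)
        answer = response.content
    } catch {
        answer = &quot;Failed to answer the question: \(error.localizedDescription)&quot;
    }
}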
</code></pre><p>After making the change, try testing the app by first asking:&#xA0;<em>&#x201C;What are the must-try foods when visiting Japan?&#x201D;</em>&#xA0;Then follow up with:&#xA0;<em>&#x201C;Suggest me some restaurants.&#x201D;</em></p><p>Since the session is retained, the model understands the context and knows you&apos;re looking for restaurant recommendations in Japan.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/foundation-models-suggest-restaurants.png" class="kg-image" alt="Getting Started with Foundation Models in iOS 26" loading="lazy" width="1810" height="1188" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/foundation-models-suggest-restaurants.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/foundation-models-suggest-restaurants.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/07/foundation-models-suggest-restaurants.png 1600w, https://www.appcoda.com/content/images/2025/07/foundation-models-suggest-restaurants.png 1810w" sizes="(min-width: 720px) 720px"></figure><p>If you don&#x2019;t reuse the same session, the model won&#x2019;t recognize the context of your follow-up question. Instead, it will respond with something like this, asking for more details:</p><p><em>&#x201C;Sure! To provide you with the best suggestions, could you please let me know your location or the type of cuisine you&apos;re interested in?&#x201D;</em></p><h2 id="disabling-the-button-during-response-generation">Disabling the Button During Response Generation</h2><p>Since the model takes time to generate a response, it&#x2019;s a good idea to disable the &quot;Get Answer&quot; button while waiting for the answer. 
The&#xA0;<code>session</code>&#xA0;object includes a property called&#xA0;<code>isResponding</code>&#xA0;that lets you check if the model is currently working.</p><p>To disable the button during that time, simply use the&#xA0;<code>.disabled</code>&#xA0;modifier and pass in the session&#x2019;s status like this:</p><pre><code class="language-swift">Button {
    Task {
        await generateAnswer()
    }
} label: {
    .
    .
    .
}
.disabled(session.isResponding)
</code></pre><h2 id="working-with-stream-responses">Working with Stream Responses</h2><p>The current user experience isn&apos;t ideal &#x2014; since the on-device model takes time to generate a response, the app only shows the result after the entire response is ready.</p><p>If you&#x2019;ve used ChatGPT or similar LLMs, you&#x2019;ve probably noticed that they start displaying partial results almost immediately. This creates a smoother, more responsive experience.</p><p>The Foundation Models framework also supports streaming output, which allows you to display responses as they&apos;re being generated, rather than waiting for the complete answer. To implement this, use the <code>streamResponse</code> method rather than the <code>respond</code> method. Here&apos;s the updated <code>generateAnswer()</code> function that works with streaming responses:</p><pre><code class="language-swift">private func generateAnswer() async {
    
    do {
        answer = &quot;&quot;
        let stream = session.streamResponse(to: question)
        for try await streamData in stream {
            answer = streamData.asPartiallyGenerated()
        }
    } catch {
        answer = &quot;Failed to answer the question: \(error.localizedDescription)&quot;
    }
}
</code></pre><p>Just like with the&#xA0;<code>respond</code>&#xA0;method, you pass the user&apos;s question to the model when calling&#xA0;<code>streamResponse</code>. The key difference is that instead of waiting for the full response, you can loop through the streamed data and update the&#xA0;<code>answer</code>&#xA0;variable with each partial result &#x2014; displaying it on screen as it&#x2019;s generated.</p><p>Now when you test the app again and ask any questions, you&apos;ll see responses appear incrementally as they&apos;re generated, creating a much more responsive user experience.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/07/foundation-models-stream-response.gif" class="kg-image" alt="Getting Started with Foundation Models in iOS 26" loading="lazy" width="1524" height="982" srcset="https://www.appcoda.com/content/images/size/w600/2025/07/foundation-models-stream-response.gif 600w, https://www.appcoda.com/content/images/size/w1000/2025/07/foundation-models-stream-response.gif 1000w, https://www.appcoda.com/content/images/2025/07/foundation-models-stream-response.gif 1524w" sizes="(min-width: 720px) 720px"></figure><h2 id="customizing-the-model-with-instructions">Customizing the Model with Instructions</h2><p>When instantiating the model session, you can provide optional instructions to customize its use case. For the demo app, we haven&apos;t provided any instructions during initialization because this app is designed to answer any question.</p><p>However, if you&apos;re building a Q&amp;A system for specific topics, you may want to customize the model with targeted instructions. For example, if your app is designed to answer travel-related questions, you could provide the following instruction to the model:</p><p><em>&#x201C;You are a knowledgeable and friendly travel expert. Your job is to help users by answering travel-related questions clearly and accurately. 
Focus on providing useful advice, tips, and information about destinations, local culture, transportation, food, and travel planning. Keep your tone conversational, helpful, and easy to understand, as if you&apos;re speaking to someone planning their next trip.&#x201D;</em></p><p>When writing instructions, you can define the model&#x2019;s role (e.g., travel expert), specify the focus of its responses, and even set the desired tone or style.</p><p>To pass the instruction to the model, you can instantiate the&#xA0;<code>session</code>&#xA0;object like this:</p><pre><code class="language-swift">var session = LanguageModelSession(instructions: &quot;your instruction&quot;)</code></pre><h2 id="summary">Summary</h2><p>In this tutorial, we covered the basics of the Foundation Models framework and showed how to use Apple&#x2019;s on-device language model for tasks like question answering and content generation.</p><p>This is just the beginning &#x2014; the framework offers much more. In future tutorials, we&#x2019;ll dive deeper into other new features such as the new <code>@Generable</code> and <code>@Guide</code> macros, and explore additional capabilities like content tagging and tool calling.</p><p>If you&apos;re looking to build smarter, AI-powered apps, now is the perfect time to explore the Foundation Models framework and start integrating on-device intelligence into your projects.</p>]]></content:encoded></item><item><title><![CDATA[Exploring WebView and WebPage in SwiftUI for iOS 26]]></title><description><![CDATA[<p>In <a href="https://www.apple.com/os/ios/?ref=appcoda.com" rel="noreferrer">iOS 26</a>, SwiftUI finally introduced one of its most highly anticipated components:&#xA0;<code>WebView</code>, a native solution for displaying web content. 
Before this update, SwiftUI developers had to rely on the UIKit framework, using&#xA0;<code>UIViewRepresentable</code>&#xA0;to wrap&#xA0;<code>WKWebView</code>&#xA0;or&#xA0;<code>SFSafariViewController</code>&#xA0;in order to</p>]]></description><link>https://www.appcoda.com/swiftui-webview/</link><guid isPermaLink="false">685523d10995e71e7964f1a3</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 20 Jun 2025 09:09:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1749650646156-452624a35dbd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGlvcyUyMDI2fGVufDB8fHx8MTc1MDQxMDM5OHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1749650646156-452624a35dbd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGlvcyUyMDI2fGVufDB8fHx8MTc1MDQxMDM5OHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Exploring WebView and WebPage in SwiftUI for iOS 26"><p>In <a href="https://www.apple.com/os/ios/?ref=appcoda.com" rel="noreferrer">iOS 26</a>, SwiftUI finally introduced one of its most highly anticipated components:&#xA0;<code>WebView</code>, a native solution for displaying web content. Before this update, SwiftUI developers had to rely on the UIKit framework, using&#xA0;<code>UIViewRepresentable</code>&#xA0;to wrap&#xA0;<code>WKWebView</code>&#xA0;or&#xA0;<code>SFSafariViewController</code>&#xA0;in order to <a href="https://www.appcoda.com/learnswift/webview.html" rel="noreferrer">embed a web view</a>. With the arrival of&#xA0;<code>WebView</code>, Apple now provides a fully native SwiftUI approach to integrating web browsing capabilities into apps. 
In this tutorial, I&#x2019;ll give you a quick overview of the new&#xA0;<code>WebView</code>&#xA0;and show you how to use it in your own app development.</p><h2 id="the-basic-usage-of-webview">The Basic Usage of WebView</h2><p>To load a web page using the new <code>WebView</code>, you simply import the <code>WebKit</code> framework and instantiate the view with a URL. Here is an example:</p><pre><code class="language-swift">import SwiftUI
import WebKit

struct ContentView: View {
    var body: some View {
        WebView(url: URL(string: &quot;https://www.appcoda.com&quot;))
    }
}
</code></pre><p>With just a single line of code, you can now embed a full-featured mobile Safari experience directly in your app&#x2014;powered by the same WebKit engine that runs Safari.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/swiftui-webview-basics.png" class="kg-image" alt="Exploring WebView and WebPage in SwiftUI for iOS 26" loading="lazy" width="1920" height="1147" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/swiftui-webview-basics.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/swiftui-webview-basics.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/06/swiftui-webview-basics.png 1600w, https://www.appcoda.com/content/images/2025/06/swiftui-webview-basics.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="an-alternative-way-of-loading-web-content">An Alternative Way of Loading Web Content</h2><p>In addition to&#xA0;<code>WebView</code>, the WebKit framework also introduces a new class called&#xA0;<code>WebPage</code>. Rather than passing a URL directly to&#xA0;<code>WebView</code>, you can first create a&#xA0;<code>WebPage</code>&#xA0;instance with the URL and then use it to display the web content. Below is the sample code that achieves the same result:</p><pre><code class="language-swift">struct ContentView: View {
    @State private var page = WebPage()
    
    var body: some View {
        
        WebView(page)
            .ignoresSafeArea()
            .onAppear {
                if let pageURL = URL(string: &quot;https://www.appcoda.com&quot;) {
                    let urlRequest = URLRequest(url: pageURL)
                    page.load(urlRequest)
                }
            }
    }
}
</code></pre><h2 id="working-with-webpage">Working with WebPage</h2><p>In most cases, if you simply need to display web content or embed a browser in your app, <code>WebView</code> is the most straightforward approach. If you need finer control over how web content behaves and interacts with your application, <code>WebPage</code> offers more detailed customization options like accessing web page properties and programmatic navigation.</p><p>For example, you can access the <code>title</code> property of the <code>WebPage</code> object to retrieve the title of the web page:</p><pre><code class="language-swift">Text(page.title)
Text(page.url?.absoluteString ?? &quot;&quot;)
</code></pre><p>You can also use the <code>url</code> property to access the current URL of the web page.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/swiftui-webview-webpage-properties.png" class="kg-image" alt="Exploring WebView and WebPage in SwiftUI for iOS 26" loading="lazy" width="1920" height="1147" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/swiftui-webview-webpage-properties.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/swiftui-webview-webpage-properties.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/06/swiftui-webview-webpage-properties.png 1600w, https://www.appcoda.com/content/images/2025/06/swiftui-webview-webpage-properties.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>If you want to track the loading progress, the&#xA0;<code>estimatedProgress</code>&#xA0;property gives you an approximate percentage of the page&#x2019;s loading completion.</p><pre><code class="language-swift">Text(page.estimatedProgress.formatted(.percent.precision(.fractionLength(0))))
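</code></pre><p>For a richer loading indicator, you could bind this value to a&#xA0;<code>ProgressView</code>. Here&#x2019;s a quick sketch that shows a linear progress bar until loading completes:</p><pre><code class="language-swift">// Show a progress bar only while the page is still loading
if page.estimatedProgress &lt; 1.0 {
    ProgressView(value: page.estimatedProgress)
        .progressViewStyle(.linear)
}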
</code></pre><p>Other than accessing its properties, the&#xA0;<code>WebPage</code>&#xA0;class also lets you control the loading behavior of a web page. For example, you can call&#xA0;<code>reload()</code>&#xA0;to refresh the current page, or&#xA0;<code>stopLoading()</code>&#xA0;to halt the loading process.</p><h2 id="loading-custom-html-content-using-webpage">Loading Custom HTML Content Using WebPage</h2><p>Besides loading a <code>URLRequest</code>, the <code>WebPage</code> class&apos;s <code>load</code> method can also handle custom HTML content directly. Below is the sample code for loading a YouTube player:</p><pre><code class="language-swift">struct ContentView: View {
    
    @State private var page = WebPage()
    
    private let htmlContent: String = &quot;&quot;&quot;
        &lt;div class=&quot;videoFrame&quot;&gt;
        &lt;iframe width=&quot;960&quot; height=&quot;540&quot; src=&quot;https://www.youtube.com/embed/0_DjDdfqtUE?si=iYAmkvDghnGaVcAC&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen&gt;&lt;/iframe&gt;
        &lt;/div&gt;
        &quot;&quot;&quot;
    var body: some View {
        
        WebView(page)
            .onAppear {
                page.load(html: htmlContent, baseURL: URL(string: &quot;about:blank&quot;)!)
            }
    }
}
</code></pre><p>If you place the code in Xcode, the preview should show you the YouTube player.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/swiftui-webview-youtube.png" class="kg-image" alt="Exploring WebView and WebPage in SwiftUI for iOS 26" loading="lazy" width="1350" height="722" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/swiftui-webview-youtube.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/swiftui-webview-youtube.png 1000w, https://www.appcoda.com/content/images/2025/06/swiftui-webview-youtube.png 1350w" sizes="(min-width: 720px) 720px"></figure><h2 id="executing-javascript">Executing Javascript</h2><p>The&#xA0;<code>WebPage</code>&#xA0;object not only lets you load HTML content&#x2014;it also allows you to execute JavaScript. You can use the&#xA0;<code>callJavaScript</code>&#xA0;method and pass in the script you want to run. Here is an example:</p><pre><code class="language-swift">struct ContentView: View {
    
    @State private var page = WebPage()
    
    private let snippet = &quot;&quot;&quot;
        document.write(&quot;&lt;h1&gt;This text is generated by Javascript&lt;/h1&gt;&quot;);
        &quot;&quot;&quot;
    
    var body: some View {
        
        WebView(page)
            .task {
                do {
                    try await page.callJavaScript(snippet)
                } catch {
                    print(&quot;JavaScript execution failed: \(error)&quot;)
                }
            }
    }
}
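</code></pre><p>Note that&#xA0;<code>callJavaScript</code>&#xA0;can also return a value to Swift. Assuming it behaves like&#xA0;<code>WKWebView</code>&#x2019;s&#xA0;<code>callAsyncJavaScript</code>&#xA0;(treating the string as a function body and bridging the result back), a sketch for reading a value from the page might look like this:</p><pre><code class="language-swift">.task {
    do {
        // Hedged sketch: assumes the script&apos;s return value is bridged back as Any?
        let result = try await page.callJavaScript(&quot;return document.title;&quot;)
        if let title = result as? String {
            print(&quot;Page title: \(title)&quot;)
        }
    } catch {
        print(&quot;JavaScript execution failed: \(error)&quot;)
    }
}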
</code></pre><h2 id="summary">Summary</h2><p>The new native&#xA0;<code>WebView</code>&#xA0;component in SwiftUI makes it much easier to display web content within iOS apps, removing the need to rely on UIKit wrappers.&#xA0;SwiftUI developers can choose between two key approaches:</p><ul><li><code>WebView</code>: Ideal for straightforward use cases where you just need to load and display a web page.</li><li><code>WebPage</code>: Offers more granular control, allowing you to access page properties, track loading progress, reload or stop loading, and even execute JavaScript.</li></ul><p>This native SwiftUI solution brings a cleaner, more streamlined experience to embedding web content in your apps.</p>]]></content:encoded></item><item><title><![CDATA[In-App Language Switch in iOS with SwiftUI - No Restart Required]]></title><description><![CDATA[<p>We&apos;ve covered iOS localization in several tutorials, including one that shows how to fully localize an app using String Catalogs. However, these tutorials rely on the system language to determine the app&#x2019;s language. 
But what if you want to give users the ability to choose their</p>]]></description><link>https://www.appcoda.com/swiftui-language-switch/</link><guid isPermaLink="false">6847fc800995e71e7964f18a</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Tue, 10 Jun 2025 09:42:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1743945968054-088cff86a63a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDYwfHxpcGhvbmUlMjAxNnxlbnwwfHx8fDE3NDk1NDg0OTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1743945968054-088cff86a63a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDYwfHxpcGhvbmUlMjAxNnxlbnwwfHx8fDE3NDk1NDg0OTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required"><p>We&apos;ve covered iOS localization in several tutorials, including one that shows how to fully localize an app using String Catalogs. However, these tutorials rely on the system language to determine the app&#x2019;s language. But what if you want to give users the ability to choose their preferred language, regardless of the system setting? And what if you want the language to update instantly&#x2014;without restarting the app? That&#x2019;s exactly what this tutorial will teach you.</p><p>Before we get started, I recommend reviewing <a href="https://www.appcoda.com/string-catalogs/">the earlier iOS localization tutorial</a> if you&apos;re not familiar with String Catalogs. 
The demo app used in this tutorial builds on the one from that guide.</p><h2 id="the-demo-app">The Demo App</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/language-switch-demo-screens.png" class="kg-image" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required" loading="lazy" width="1972" height="1208" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/language-switch-demo-screens.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/language-switch-demo-screens.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/06/language-switch-demo-screens.png 1600w, https://www.appcoda.com/content/images/2025/06/language-switch-demo-screens.png 1972w" sizes="(min-width: 720px) 720px"></figure><p>We&apos;re reusing the demo app from our iOS localization tutorial&#x2014;a simple app with basic UI elements to illustrate localization concepts. In this tutorial, we&apos;ll extend it by adding a Settings screen that lets users select their preferred language. The app will then update the language instantly, with no need to restart.</p><h2 id="adding-app-languages-and-app-settings">Adding App Languages and App Settings</h2><p>Before we start building the Settings screen, let&#x2019;s first add an <code>AppLanguage</code> enum and an <code>AppSetting</code> class to the project. The <code>AppLanguage</code> enum defines the set of languages that your app supports. Here is the code:</p><pre><code class="language-swift">enum AppLanguage: String, CaseIterable, Identifiable {
    case en, fr, ja, ko, zhHans = &quot;zh-Hans&quot;, zhHant = &quot;zh-Hant&quot;
    
    var id: String { rawValue }
    
    var displayName: String {
        switch self {
        case .en: return &quot;English&quot;
        case .fr: return &quot;French&quot;
        case .ja: return &quot;Japanese&quot;
        case .ko: return &quot;Korean&quot;
        case .zhHans: return &quot;Simplified Chinese&quot;
        case .zhHant: return &quot;Traditional Chinese&quot;
        }
    }
}
</code></pre><p>Each case in the enum corresponds to a specific language, using standard locale identifiers as raw values. For example,&#xA0;<code>.en</code>&#xA0;maps to&#xA0;<code>&quot;en&quot;</code>&#xA0;for English,&#xA0;<code>.fr</code>&#xA0;to&#xA0;<code>&quot;fr&quot;</code>&#xA0;for French, and so on.&#xA0;The&#xA0;<code>displayName</code>&#xA0;computed property provides a user-friendly label for each language. Instead of displaying raw locale codes like &quot;en&quot; or &quot;zh-Hans&quot; in the UI, this property returns readable names such as &quot;English&quot; or &quot;Simplified Chinese.&quot;</p><p>The <code>AppSetting</code> class, which conforms to the <code>ObservableObject</code> protocol, is a simple observable model that stores the user&#x2019;s selected language. Here is the code:</p><pre><code class="language-swift">class AppSetting: ObservableObject {
    @Published var language: AppLanguage = .en
}
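
// Illustrative extension (not part of the tutorial's code): to remember the
// user's selection across app launches, the property could be backed by
// UserDefaults. The &quot;appLanguage&quot; key is an assumed name:
//
// class AppSetting: ObservableObject {
//     @Published var language: AppLanguage {
//         didSet {
//             UserDefaults.standard.set(language.rawValue, forKey: &quot;appLanguage&quot;)
//         }
//     }
//
//     init() {
//         let saved = UserDefaults.standard.string(forKey: &quot;appLanguage&quot;)
//         language = AppLanguage(rawValue: saved ?? &quot;&quot;) ?? .en
//     }
// }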
</code></pre><p>By default, the language is set to English. Later, when the user selects a different language from the Settings screen, updating this property will cause SwiftUI views that rely on the app&#x2019;s locale to re-render using the new language.</p><h2 id="building-the-setting-screen">Building the Setting Screen</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/language-switch-settings.png" class="kg-image" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required" loading="lazy" width="1558" height="1006" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/language-switch-settings.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/language-switch-settings.png 1000w, https://www.appcoda.com/content/images/2025/06/language-switch-settings.png 1558w" sizes="(min-width: 720px) 720px"></figure><p>Next, let&#x2019;s build the Settings screen. It&#x2019;s a simple interface that displays a list of all the supported languages. Below is the code for implementing the setting view:</p><pre><code class="language-swift">struct SettingView: View {
    
    @Environment(\.dismiss) var dismiss
    
    @EnvironmentObject var appSetting: AppSetting
    
    @State private var selectedLanguage: AppLanguage = .en
    
    var body: some View {
        NavigationStack {
            Form {
                Section(header: Text(&quot;Language&quot;)) {
                    ForEach(AppLanguage.allCases) { lang in
                        
                        HStack {
                            Text(lang.displayName)
                            
                            Spacer()
                            
                            if lang == selectedLanguage {
                                Image(systemName: &quot;checkmark&quot;)
                                    .foregroundColor(.primary)
                            }
                                
                        }
                        .contentShape(Rectangle()) // make the whole row tappable, not just the text and checkmark
                        .onTapGesture {
                            selectedLanguage = lang
                        }
                    }
                }
            }
            
            .toolbar {
                ToolbarItem(placement: .topBarTrailing) {
                    Button(&quot;Save&quot;) {
                        appSetting.language = selectedLanguage
                        dismiss()
                    }
                }

                ToolbarItem(placement: .topBarLeading) {
                    Button(&quot;Cancel&quot;) {
                        dismiss()
                    }
                }
            }
            .navigationTitle(&quot;Settings&quot;)
            .navigationBarTitleDisplayMode(.inline)
            
        }
        .onAppear {
            selectedLanguage = appSetting.language
        }
    }
}
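
// Alternative approach (illustrative, not from the tutorial): the manual
// checkmark rows above could also be expressed with SwiftUI's built-in
// Picker, which renders its own selection indicator:
//
// Picker(&quot;Language&quot;, selection: $selectedLanguage) {
//     ForEach(AppLanguage.allCases) { lang in
//         Text(lang.displayName).tag(lang)
//     }
// }
// .pickerStyle(.inline)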

#Preview {
    SettingView()
        .environmentObject(AppSetting())
}
</code></pre><p>The view simply lists the available languages as defined in <code>AppLanguage</code>. The currently selected language shows a checkmark next to it. When the user taps &quot;Save,&quot; the selected language is saved to the shared&#xA0;<code>AppSetting</code>&#xA0;object, and the view is dismissed.</p><p>In the main view, we add a Setting button and use the <code>.sheet</code> modifier to display the Setting view.</p><pre><code class="language-swift">struct ContentView: View {
    
    @EnvironmentObject var appSetting: AppSetting
    
    @State private var showSetting: Bool = false
    
    var body: some View {
        VStack {
            
            HStack {
                Spacer()
                
                Button {
                    showSetting.toggle()
                } label: {
                    Image(systemName: &quot;gear&quot;)
                        .font(.system(size: 30))
                        .tint(.primary)
                }

                
            }
                
            Text(&quot;ProLingo&quot;)
                .font(.system(size: 75, weight: .black, design: .rounded))
            
            Text(&quot;Learn programming languages by working on real projects&quot;)
                .font(.headline)
                .padding(.horizontal)
              
            ...
            
        }
        .padding()
        .sheet(isPresented: $showSetting) {
            SettingView()
                .environmentObject(appSetting)
        }

    }
}

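// Assumed app entry point (not shown in this excerpt; the type name
// ProLingoApp is hypothetical). @EnvironmentObject only resolves if a
// shared AppSetting instance is injected at the root of the view hierarchy:

@main
struct ProLingoApp: App {
    @StateObject private var appSetting = AppSetting()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appSetting)
        }
    }
}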
</code></pre><h2 id="enabling-real-time-language-changes">Enabling Real-Time Language Changes</h2><p>At this point, tapping the gear button will bring up the Settings view. However, the app doesn&apos;t update its language when the user selects their preferred language. To implement dynamic language switching, we have to attach the <code>.environment</code> modifier to <code>ContentView</code> and update the locale to match the user&#x2019;s selection like this:</p><pre><code class="language-swift">VStack {
   ...
}
.environment(\.locale, Locale(identifier: appSetting.language.id))
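
// Views that read the environment locale update automatically. For example
// (illustrative), a formatted date also follows the selected language:
// Text(Date.now, format: .dateTime.weekday(.wide).month(.wide).day())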
</code></pre><p>This line of code injects a custom <code>Locale</code> into the SwiftUI environment. The <code>\.locale</code> key controls which language and region SwiftUI uses for localizable views like <code>Text</code>. The locale is set to match the language the user selected in settings.</p><p>The app can now update its language on the fly. For example, open the Settings view and select Traditional Chinese. After saving your selection and returning to the main view, you&apos;ll see the UI instantly updated to display all text in Traditional Chinese.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/language-switch-tc.png" class="kg-image" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required" loading="lazy" width="1920" height="1191" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/language-switch-tc.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/language-switch-tc.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/06/language-switch-tc.png 1600w, https://www.appcoda.com/content/images/2025/06/language-switch-tc.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="using-localizedstringkey">Using LocalizedStringKey</h2><p>You may notice a bug in the app. 
After changing the language to Traditional Chinese (or other languages) and reopening the Settings view, the language names still display in English.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/language-switch-settings-bug.png" class="kg-image" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required" loading="lazy" width="1240" height="660" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/language-switch-settings-bug.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/language-switch-settings-bug.png 1000w, https://www.appcoda.com/content/images/2025/06/language-switch-settings-bug.png 1240w" sizes="(min-width: 720px) 720px"></figure><p>Let&#x2019;s take a look at the code that handles the display of language name:</p><pre><code class="language-swift">Text(lang.displayName)
</code></pre><p>You may wonder why the <code>Text</code> view doesn&#x2019;t handle the localization automatically. In this case, SwiftUI treats <code>lang.displayName</code> as a plain text, which means no automatic localization happens, even if the string matches a key in the String Catalog file. To make the localization work, you need to convert the <code>String</code> to a <a href="https://developer.apple.com/documentation/swiftui/localizedstringkey?ref=appcoda.com" rel="noreferrer"><code>LocalizedStringKey</code></a> like this:</p><pre><code class="language-swift">Text(LocalizedStringKey(lang.displayName))
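
// Note: string literals are localized automatically because Text's literal
// initializer takes a LocalizedStringKey. Only String values held in
// variables or properties need the explicit conversion:
// Text(&quot;English&quot;)          // literal: looked up in the String Catalog
// Text(lang.displayName)     // String variable: rendered verbatim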
</code></pre><p>Using <code>LocalizedStringKey</code> triggers the localization lookup process. When you run the app again, you&apos;ll see the language names in the Settings view displayed in your chosen language.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/06/language-switch-setting-tc.png" class="kg-image" alt="In-App Language Switch in iOS with SwiftUI - No Restart Required" loading="lazy" width="1470" height="672" srcset="https://www.appcoda.com/content/images/size/w600/2025/06/language-switch-setting-tc.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/06/language-switch-setting-tc.png 1000w, https://www.appcoda.com/content/images/2025/06/language-switch-setting-tc.png 1470w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>In this tutorial, you learned how to implement in-app language switching in iOS using <a href="https://www.appcoda.com/swiftui" rel="noreferrer">SwiftUI</a>, allowing users to change languages without restarting the app. We explored how to create a Settings screen for language selection, enabled real-time localization updates, and learned the importance of using <code>LocalizedStringKey</code> for proper string localization.</p><p>The code and concepts presented here provide a foundation for implementing language switching in your own iOS apps. Feel free to adapt this approach for your own iOS apps that require multi-language support.</p>]]></content:encoded></item><item><title><![CDATA[Exploring ImagePlayground: AI-Powered Image Generation in iOS 18]]></title><description><![CDATA[<p>With the release of iOS 18, Apple has unveiled a suite of exciting features under the <a href="https://www.apple.com/hk/en/apple-intelligence/?ref=appcoda.com" rel="noreferrer">Apple Intelligence</a> umbrella, and one standout is the <code>ImagePlayground</code> framework. 
This powerful API empowers developers to generate images from text descriptions using AI, opening up a world of creative possibilities for iOS apps. Whether</p>]]></description><link>https://www.appcoda.com/imageplaygroundsheet/</link><guid isPermaLink="false">67c7c1fc0995e71e7964f16c</guid><category><![CDATA[SwiftUI]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Wed, 05 Mar 2025 03:50:02 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/2025/03/imageplayground-featured.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.appcoda.com/content/images/2025/03/imageplayground-featured.png" alt="Exploring ImagePlayground: AI-Powered Image Generation in iOS 18"><p>With the release of iOS 18, Apple has unveiled a suite of exciting features under the <a href="https://www.apple.com/hk/en/apple-intelligence/?ref=appcoda.com" rel="noreferrer">Apple Intelligence</a> umbrella, and one standout is the <code>ImagePlayground</code> framework. This powerful API empowers developers to generate images from text descriptions using AI, opening up a world of creative possibilities for iOS apps. Whether you&#x2019;re building a design tool, a storytelling app, or just want to add some flair to your UI, ImagePlayground makes it seamless to integrate AI-driven image generation.</p><p>In this tutorial, we&#x2019;ll walk you through building a simple app using <a href="https://www.appcoda.com/swiftui" rel="noreferrer">SwiftUI</a> and the <code>ImagePlayground</code> framework. Our app will let users type a description&#x2014;like &#x201C;a serene beach at sunset&#x201D;&#x2014;and generate a corresponding image with a tap. Designed for developers with some iOS experience, this guide assumes you&#x2019;re familiar with Swift, SwiftUI, and Xcode basics. 
Ready to dive into iOS 18&#x2019;s image generation capabilities?</p><p>Let&#x2019;s get started!</p><h2 id="prerequisites">Prerequisites</h2><p>Before we get started, make sure you&#x2019;ve got a few things ready:</p><ul><li><strong>Device</strong>: Image Playground is supported on iPhone 15 Pro, iPhone 15 Pro Max, and all iPhone 16 models.</li><li><strong>iOS Version</strong>: Your device must be running iOS 18.1 or later.</li><li><strong>Xcode</strong>: You&#x2019;ll need Xcode 16 or later to build the app.</li><li><strong>Apple Intelligence</strong>: Ensure that Apple Intelligence is enabled on your device. You can check this in&#xA0;<strong>Settings &gt; Apple Intelligence &amp; Siri</strong>. If prompted, request access to Apple Intelligence features.</li></ul><h2 id="setting-up-the-xcode-project">Setting up the Xcode Project</h2><p>First, let&#x2019;s begin by creating a new Xcode project named <code>AIImageGeneration</code> using the iOS app template. Make sure you choose SwiftUI as the UI framework. Also, the minimum deployment version is set to 18.1 (or later). The <code>ImagePlayground</code> framework is only available on iOS 18.1 or up.</p><h2 id="using-imageplaygroundsheet">Using ImagePlaygroundSheet</h2><p>Have you tried Image Playground app in iOS 18 before? The app leverages Apple Intelligence to create images based on user inputs, such as text descriptions. While <em>Playground</em> is an independent app on iOS, developers can bring this functionality into your apps using <code>ImagePlaygroundSheet</code>, a SwiftUI view modifier that presents the image generation interface.</p><p>Let&#x2019;s switch over to the Xcode project and see how the sheet works. In the <code>ContentView.swift</code> file, add the following import statement:</p><pre><code class="language-swift">import ImagePlayground
</code></pre><p>The <code>ImagePlaygroundSheet</code> view is included in the <code>ImagePlayground</code> framework. For the <code>ContentView</code> struct, update it like below:</p><pre><code class="language-swift">struct ContentView: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    
    @State private var showImagePlayground: Bool = false
    
    @State private var generatedImageURL: URL?
    
    var body: some View {
        if supportsImagePlayground {
            
            if let generatedImageURL {
                AsyncImage(url: generatedImageURL) { image in
                    image
                        .resizable()
                        .scaledToFill()
                } placeholder: {
                    Color.purple.opacity(0.1)
                }
                .padding()
            }

            Button {
                showImagePlayground.toggle()
            } label: {
                Text(&quot;Generate images&quot;)
            }
            .buttonStyle(.borderedProminent)
            .controlSize(.large)
            .tint(.purple)
            .imagePlaygroundSheet(isPresented: $showImagePlayground) { url in
                generatedImageURL = url
            }
            .padding()

        } else {
            ContentUnavailableView(&quot;Not Supported&quot;, systemImage: &quot;exclamationmark.triangle&quot;, description: Text(&quot;This device does not support Image Playground. Please use a device that supports Image Playground to view this example.&quot;))
        }
    }
}
</code></pre><p>Not all iOS devices have Apple Intelligence enabled. That&#x2019;s why it&#x2019;s important to do a basic check to see if <code>ImagePlayground</code> is supported on the device. The <code>supportsImagePlayground</code> property uses SwiftUI&#x2019;s environment system to check if the device can use Image Playground. If the device doesn&#x2019;t support it, we simply show a &#x201C;Not Supported&#x201D; message on the screen.</p><p>For devices that do support it, the demo app displays a &#x201C;Generate Images&#x201D; button. The easiest way to add Image Playground to your app is by using the <code>imagePlaygroundSheet</code> modifier. We use the <code>showImagePlayground</code> property to open or close the playground sheet. After the user creates an image in Image Playground, the system saves the image file in a temporary location and gives back the image URL. This URL is then assigned to the <code>generatedImageURL</code> variable.</p><p>With the image URL ready, we use the <code>AsyncImage</code> view to display the image on the screen.</p><p>Run the app and test it on your iPhone. Tap the &#x201C;Generate Image&#x201D; button to open the Image Playground sheet. Enter a description for the image, and let Apple Intelligence create it for you. 
Once it&#x2019;s done, close the sheet, and the generated image should appear in the app.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2025/03/imageplayground-demo-simple.png" class="kg-image" alt="Exploring ImagePlayground: AI-Powered Image Generation in iOS 18" loading="lazy" width="1822" height="1212" srcset="https://www.appcoda.com/content/images/size/w600/2025/03/imageplayground-demo-simple.png 600w, https://www.appcoda.com/content/images/size/w1000/2025/03/imageplayground-demo-simple.png 1000w, https://www.appcoda.com/content/images/size/w1600/2025/03/imageplayground-demo-simple.png 1600w, https://www.appcoda.com/content/images/2025/03/imageplayground-demo-simple.png 1822w" sizes="(min-width: 720px) 720px"></figure><h2 id="working-with-concepts">Working with Concepts</h2><p>Previously, I showed you the basic way of using the <code>imagePlaygroundSheet</code> modifier. The modifier provides a number of parameters for developers to customize the integration. For example, we can create our own text field to capture the description of the image.</p><p>In <code>ContentView</code>, update the code like below:</p><pre><code class="language-swift">struct ContentView: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    
    @State private var showImagePlayground: Bool = false
    
    @State private var generatedImageURL: URL?
    @State private var description: String = &quot;&quot;
    
    var body: some View {
        if supportsImagePlayground {
            
            if let generatedImageURL {
                AsyncImage(url: generatedImageURL) { image in
                    image
                        .resizable()
                        .scaledToFill()
                } placeholder: {
                    Color.purple.opacity(0.1)
                }
                .padding()
            } else {
                Text(&quot;Type your image description to create an image...&quot;)
                    .font(.system(.title, design: .rounded, weight: .medium))
                    .multilineTextAlignment(.center)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
            }

            Spacer()
            
            HStack {
                TextField(&quot;Enter your text...&quot;, text: $description)
                    .padding()
                    .background(
                        RoundedRectangle(cornerRadius: 12)
                            .fill(.white)
                    )
                    .overlay(
                        RoundedRectangle(cornerRadius: 12)
                            .stroke(Color.gray.opacity(0.2), lineWidth: 1)
                    )
                    .font(.system(size: 16, weight: .regular, design: .rounded))
                
                Button {
                    showImagePlayground.toggle()
                } label: {
                    Text(&quot;Generate images&quot;)
                }
                .buttonStyle(.borderedProminent)
                .controlSize(.regular)
                .tint(.purple)
                .imagePlaygroundSheet(isPresented: $showImagePlayground,
                                      concept: description
                    ) { url in
                    generatedImageURL = url
                }
                .padding()
            }
            .padding(.horizontal)

        } else {
            ContentUnavailableView(&quot;Not Supported&quot;, systemImage: &quot;exclamationmark.triangle&quot;, description: Text(&quot;This device does not support Image Playground. Please use a device that supports Image Playground to view this example.&quot;))
        }
    }
}
</code></pre><p>We added a new text field where users can directly enter an image description. The <code>imagePlaygroundSheet</code> modifier has been updated with a new parameter called <code>concept</code>. This parameter accepts the image description and passes it to the creation UI to generate the image.</p><pre><code class="language-swift">.imagePlaygroundSheet(isPresented: $showImagePlayground,
                      concept: description
) { url in
      generatedImageURL = url
}
</code></pre><p>The <code>concept</code> parameter works best for short descriptions. If you want to allow users to input a longer paragraph, it&#x2019;s better to use the <code>concepts</code> parameter, which takes an array of <code>ImagePlaygroundConcept</code>. Below is an example of how the code can be rewritten using the <code>concepts</code> parameter:</p><pre><code class="language-swift">.imagePlaygroundSheet(isPresented: $showImagePlayground,
                      concepts: [ .text(description) ]
) { url in
      generatedImageURL = url
}
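
// For longer paragraphs, you can let the system extract the key ideas
// instead (a sketch based on the extracted(from:title:) API; the optional
// title argument shown here is an assumption):
// concepts: [ .extracted(from: description, title: nil) ]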
</code></pre><p>The <code>text</code> function creates a playground concept by processing a short description of the image. For longer text, you can use the <code>extracted(from:title:)</code> API, which lets the system analyze the text and extract key concepts to guide the image creation process.</p><h2 id="adding-a-source-image">Adding a Source Image</h2><p>The <code>imagePlaygroundSheet</code> modifier also supports adding a source image, which acts as the starting point for image generation. Here is an example:</p><pre><code class="language-swift">.imagePlaygroundSheet(isPresented: $showImagePlayground,
                      concepts: [.text(description)],
                      sourceImage: Image(&quot;gorilla&quot;)
    ) { url in
    generatedImageURL = url
}
.padding()
</code></pre><p>You can either use the <code>sourceImage</code> or <code>sourceImageURL</code> parameter to embed the image.</p><h2 id="summary">Summary</h2><p>In this tutorial, we explored the potential of the <code>ImagePlayground</code> framework in iOS 18, showcasing how developers can harness its AI-driven image generation capabilities to create dynamic and visually engaging experiences. By combining the power of SwiftUI with <code>ImagePlayground</code>, we demonstrated how simple it is to turn text descriptions into stunning visuals.</p><p>Now it&#x2019;s your turn to explore this innovative framework and unlock its full potential in your own projects. I&apos;m eager to see what new AI-related frameworks Apple will introduce next!</p>]]></content:encoded></item><item><title><![CDATA[Announcing Mastering SwiftUI for iOS 18 and Xcode 16]]></title><description><![CDATA[<p>We&apos;re thrilled to announce that our&#xA0;<a href="https://www.appcoda.com/swiftui" rel="noreferrer">Mastering SwiftUI ebook</a>&#xA0;has been fully updated for iOS 18 and Xcode 16. 
This comprehensive update includes:</p><ul><li>All content and source code now compatible with the latest iOS and Xcode versions</li><li>Brand new chapters covering iOS 18&apos;s new</li></ul>]]></description><link>https://www.appcoda.com/swiftui6-ios18-book/</link><guid isPermaLink="false">67186ae11ffbee5921d1b9e1</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Wed, 09 Oct 2024 04:32:00 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/2024/10/gumroad-mastering-swiftui-6-basic-banner.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.appcoda.com/content/images/2024/10/gumroad-mastering-swiftui-6-basic-banner.png" alt="Announcing Mastering SwiftUI for iOS 18 and Xcode 16"><p>We&apos;re thrilled to announce that our&#xA0;<a href="https://www.appcoda.com/swiftui" rel="noreferrer">Mastering SwiftUI ebook</a>&#xA0;has been fully updated for iOS 18 and Xcode 16. This comprehensive update includes:</p><ul><li>All content and source code now compatible with the latest iOS and Xcode versions</li><li>Brand new chapters covering iOS 18&apos;s new APIs</li><li>Learn to implement translation features, create stunning animated text effects, master hero animations, and much more!</li></ul><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/10/swiftui-6-announcement.png" class="kg-image" alt="Announcing Mastering SwiftUI for iOS 18 and Xcode 16" loading="lazy" width="1920" height="1181" srcset="https://www.appcoda.com/content/images/size/w600/2024/10/swiftui-6-announcement.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/10/swiftui-6-announcement.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/10/swiftui-6-announcement.png 1600w, https://www.appcoda.com/content/images/2024/10/swiftui-6-announcement.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>Our SwiftUI book caters to both beginners and intermediate 
developers who are eager to learn the ins and outs of the new SwiftUI framework. Each chapter in our book features a minimum of one simple project, allowing you to gain hands-on experience. By working on these projects, you will understand how to work with various types of UI elements and build interactive UIs, plus learn the new APIs coming with the latest version of SwiftUI.</p><p>Once you have grasped the fundamentals, you will delve into building a personal finance app using SwiftUI. All the projects and accompanying source code can be downloaded, serving as valuable references. Feel free to incorporate the code into your own projects&#x2014;whether personal or commercial.</p><h2 id="bonus-get-the-photo-translator-app-for-free">Bonus: Get the Photo Translator App for Free</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/10/phototranslator-screenshot.png" class="kg-image" alt="Announcing Mastering SwiftUI for iOS 18 and Xcode 16" loading="lazy" width="1480" height="1002" srcset="https://www.appcoda.com/content/images/size/w600/2024/10/phototranslator-screenshot.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/10/phototranslator-screenshot.png 1000w, https://www.appcoda.com/content/images/2024/10/phototranslator-screenshot.png 1480w" sizes="(min-width: 720px) 720px"></figure><p>Our&#xA0;professional package&#xA0;is your all-in-one solution for mastering iOS app development. It includes four comprehensive Swift &amp; SwiftUI books, along with additional app templates to enhance your learning experience. For a limited time until October 17th, you&apos;ll also receive the Photo Translator project as a bonus. 
This cutting-edge app showcases iOS 18&apos;s new Translator framework and Text Recognition APIs, providing you with hands-on experience in implementing these advanced features using Swift and SwiftUI.</p><p>To purchase the Mastering SwiftUI book, visit the official page <a href="https://www.appcoda.com/swiftui" rel="noreferrer">here</a>. </p>]]></content:encoded></item><item><title><![CDATA[Using Navigation Transition to Create Hero Animation in iOS 18]]></title><description><![CDATA[<p>Apple&apos;s engineers may have long recognized the widespread desire among iOS developers to recreate the elegant hero animation featured in the App Store app. Understanding the complexity and time investment typically required to implement such animations from scratch, Apple has incorporated this feature into the <a href="https://developer.apple.com/ios/?ref=appcoda.com" rel="noreferrer">iOS 18 SDK</a></p>]]></description><link>https://www.appcoda.com/navigation-transition/</link><guid isPermaLink="false">66e2c1b8fafc178e3c597e5b</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Thu, 12 Sep 2024 10:43:37 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1524293772845-840102eccb82?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQ0fHxpcGhvbmV8ZW58MHx8fHwxNzI2MDg0Nzg2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1524293772845-840102eccb82?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQ0fHxpcGhvbmV8ZW58MHx8fHwxNzI2MDg0Nzg2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Using Navigation Transition to Create Hero Animation in iOS 18"><p>Apple&apos;s engineers may have long recognized the widespread desire among iOS developers to recreate the elegant hero animation featured in the App Store app. 
Understanding the complexity and time investment typically required to implement such animations from scratch, Apple has incorporated this feature into the <a href="https://developer.apple.com/ios/?ref=appcoda.com" rel="noreferrer">iOS 18 SDK</a>.</p><p>With this update, you can now achieve a similar animated transition effect in your own apps using just a few lines of code. This significant enhancement empowers developers to create visually appealing and seamless transitions, elevating the overall user experience of their apps.</p><p>In this tutorial, we&apos;ll explore how to leverage the new&#xA0;<code>NavigationTransition</code>&#xA0;protocol and the&#xA0;<code>matchedTransitionSource</code>&#xA0;modifier to create hero animations during view transitions.</p><h2 id="the-simple-demo-app">The Simple Demo App</h2><p>Let&apos;s dive into a demo app to explore the new APIs. We&apos;ll begin with a simple app that shows a list of cafes in a standard scroll view. Our goal is to implement a feature where tapping on a cafe takes you to a new screen displaying a full image with a hero animation.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-app-demo.png" class="kg-image" alt="Using Navigation Transition to Create Hero Animation in iOS 18" loading="lazy" width="1920" height="1075" srcset="https://www.appcoda.com/content/images/size/w600/2024/09/swiftui-navtransition-app-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/09/swiftui-navtransition-app-demo.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/09/swiftui-navtransition-app-demo.png 1600w, https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-app-demo.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="using-navigation-transition-protocol">Using Navigation Transition Protocol</h2><p>To display a full image and animate the view transition using hero animations, you can use the 
following steps:</p><ol><li>Embed the scroll view within a navigation stack.</li><li>Use&#xA0;<code>NavigationLink</code>&#xA0;to enable tapping on the card view.</li><li>Declare a namespace with&#xA0;<code>@Namespace</code>&#xA0;to support the hero animation.</li><li>Attach the&#xA0;<code>matchedTransitionSource</code>&#xA0;modifier to the excerpt mode card view.</li><li>Attach the&#xA0;<code>navigationTransition</code>&#xA0;modifier to the full content mode card view.</li></ol><p>By completing these steps, SwiftUI will automatically generate a smooth hero animation, expanding the selected cafe item into a full-screen image when tapped.</p><h2 id="creating-hero-animations-for-view-transitions">Creating Hero Animations for View Transitions</h2><p>Now we&apos;ll modify the project to support navigation. To start, let&apos;s embed the scroll view within a navigation stack, as shown below:</p><pre><code class="language-swift">NavigationStack {
	ScrollView {
	
		// Existing code
		
	}
}</code></pre><p>Next, create the detail view for displaying the full image like below. It accepts a cafe object as input and displays its image in a full-screen view.</p><pre><code class="language-swift">struct DetailView: View {
    var cafe: Cafe
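
    // The Cafe model comes from the starter project and isn't shown in this
    // excerpt; a minimal sketch (property names assumed) might look like:
    //
    // struct Cafe: Identifiable {
    //     let id = UUID()
    //     var name: String
    //     var image: String  // asset name of the cafe photo
    // }
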
    @Environment(\.dismiss) var dismiss
    
    var body: some View {
        Image(cafe.image)
            .resizable()
            .scaledToFill()
            .frame(minWidth: 0, maxWidth: .infinity)
            .clipped()
            .overlay(alignment: .topTrailing) {
                Button {
                    dismiss()
                } label: {
                    Image(systemName: &quot;xmark.circle.fill&quot;)
                        .font(.system(size: 30))
                        .foregroundStyle(Color.white)
                        .opacity(0.7)
                        .padding()
                        .padding(.top, 30)
                }
            }
            .ignoresSafeArea()
    }
}</code></pre><p>To enable interaction with the cafe photos, we can use&#xA0;<code>NavigationLink</code>&#xA0;to manage the navigation. When a photo is tapped, the app displays the detail view, which shows the image in full screen.</p><pre><code class="language-swift">ForEach(sampleCafes) { cafe in
    
    NavigationLink {
        DetailView(cafe: cafe)
        
    } label: {
        Image(cafe.image)
            .resizable()
            .scaledToFill()
            .frame(minWidth: 0, maxWidth: .infinity)
            .frame(height: 400)
            .clipShape(RoundedRectangle(cornerRadius: 20))
    }
    .padding()
}</code></pre><p>In preview mode, you can navigate between the detail view and the list view. At this stage, the transition uses the default animation for navigation stacks, without any custom effects.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-navigationlink.gif" class="kg-image" alt="Using Navigation Transition to Create Hero Animation in iOS 18" loading="lazy" width="1268" height="868" srcset="https://www.appcoda.com/content/images/size/w600/2024/09/swiftui-navtransition-navigationlink.gif 600w, https://www.appcoda.com/content/images/size/w1000/2024/09/swiftui-navtransition-navigationlink.gif 1000w, https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-navigationlink.gif 1268w" sizes="(min-width: 720px) 720px"></figure><p>Now comes the fun part. Let&#x2019;s create the hero animation using the new&#xA0;<code>NavigationTransition</code>&#xA0;protocol. The first step is to define a namespace for the animation. In&#xA0;<code>ContentView</code>, declare the following namespace variable:</p><pre><code class="language-swift">@Namespace var namespace</code></pre><p>Next, apply the&#xA0;<code>matchedTransitionSource</code>&#xA0;modifier to the source view, which is the image view in the list. Then, use the&#xA0;<code>navigationTransition</code>&#xA0;modifier on the detail view. Update your code as shown below:</p><pre><code class="language-swift">NavigationLink {
    DetailView(cafe: cafe)
        .navigationTransition(.zoom(sourceID: cafe.id, in: namespace))
        .toolbarVisibility(.hidden, for: .navigationBar)
    
} label: {
    Image(cafe.image)
        .resizable()
        .scaledToFill()
        .frame(minWidth: 0, maxWidth: .infinity)
        .frame(height: 400)
        .clipShape(RoundedRectangle(cornerRadius: 20))
        .matchedTransitionSource(id: cafe.id, in: namespace)
}</code></pre><p>To enhance the visual experience, I&apos;ve included the&#xA0;<code>toolbarVisibility</code>&#xA0;modifier to conceal the navigation bar. This removes the Back button from view, creating a more immersive full-screen presentation of the cafe image.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-hero-animation.gif" class="kg-image" alt="Using Navigation Transition to Create Hero Animation in iOS 18" loading="lazy" width="1228" height="882" srcset="https://www.appcoda.com/content/images/size/w600/2024/09/swiftui-navtransition-hero-animation.gif 600w, https://www.appcoda.com/content/images/size/w1000/2024/09/swiftui-navtransition-hero-animation.gif 1000w, https://www.appcoda.com/content/images/2024/09/swiftui-navtransition-hero-animation.gif 1228w" sizes="(min-width: 720px) 720px"></figure><p>In preview mode, test the app by tapping a cafe image. It will display a full-screen image with a hero animation. To return to the list, either tap the &quot;X&quot; button or simply drag the image downwards. The app will animate the image back to its original position in the list, providing a fluid and intuitive user experience.</p><h2 id="summary">Summary</h2><p>The new&#xA0;<code>NavigationTransition</code>&#xA0;protocol has made it remarkably simple for developers to create hero animations for view transitions, allowing for a richer user experience with just a few lines of code.&#xA0;Consider exploring this new feature to elevate your app&apos;s interactivity and user satisfaction.</p><p>It&apos;s important to note that this API is only compatible with iOS 18 and later. If your app needs to support older iOS versions, you&apos;ll have to implement the animation yourself. 
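</p><p>For apps targeting earlier iOS versions, one common approach is SwiftUI&apos;s <code>matchedGeometryEffect</code> modifier, which animates between two views that share a geometry ID. The sketch below is illustrative only &#x2014; the view, image name, and state variable are hypothetical and not part of this tutorial&apos;s project:</p><pre><code class="language-swift">// Illustrative pre-iOS 18 hero-style transition sketch using
// matchedGeometryEffect. All names here are hypothetical.
struct LegacyHeroDemo: View {
    @Namespace private var namespace
    @State private var showDetail = false

    var body: some View {
        ZStack {
            if showDetail {
                // Full-screen view shares the &quot;photo&quot; geometry ID
                Image(&quot;cafe&quot;)
                    .resizable()
                    .scaledToFill()
                    .matchedGeometryEffect(id: &quot;photo&quot;, in: namespace)
                    .ignoresSafeArea()
                    .onTapGesture {
                        withAnimation(.spring()) { showDetail = false }
                    }
            } else {
                // Thumbnail view with the same geometry ID
                Image(&quot;cafe&quot;)
                    .resizable()
                    .scaledToFill()
                    .frame(height: 400)
                    .clipShape(RoundedRectangle(cornerRadius: 20))
                    .matchedGeometryEffect(id: &quot;photo&quot;, in: namespace)
                    .padding()
                    .onTapGesture {
                        withAnimation(.spring()) { showDetail = true }
                    }
            }
        }
    }
}</code></pre><p>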
Our &quot;<a href="https://www.appcoda.com/swiftui" rel="noreferrer">Mastering SwiftUI</a>&quot; book provides guidance on how to achieve this.</p>]]></content:encoded></item><item><title><![CDATA[Extracting Text From Images Using Vision APIs]]></title><description><![CDATA[<p>The Vision framework has long included text recognition capabilities. We already have a <a href="https://www.appcoda.com/swiftui-text-recognition/" rel="noreferrer">detailed tutorial</a> that shows you how to scan an image and perform text recognition using the <a href="https://developer.apple.com/documentation/vision/?ref=appcoda.com" rel="noreferrer">Vision</a> framework. Previously, we utilized&#xA0;<code>VNImageRequestHandler</code>&#xA0;and&#xA0;<code>VNRecognizeTextRequest</code>&#xA0;to extract text from an image.</p><p>Over the years,</p>]]></description><link>https://www.appcoda.com/vision-text-recognition/</link><guid isPermaLink="false">669f8bee7f3d058096978b10</guid><category><![CDATA[AI]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Tue, 23 Jul 2024 11:04:12 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1508691345478-73b1e0352e35?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE2OHx8aXBob25lfGVufDB8fHx8MTcyMTczMjUxNnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1508691345478-73b1e0352e35?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE2OHx8aXBob25lfGVufDB8fHx8MTcyMTczMjUxNnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Extracting Text From Images Using Vision APIs"><p>The Vision framework has long included text recognition capabilities. 
We already have a <a href="https://www.appcoda.com/swiftui-text-recognition/" rel="noreferrer">detailed tutorial</a> that shows you how to scan an image and perform text recognition using the <a href="https://developer.apple.com/documentation/vision/?ref=appcoda.com" rel="noreferrer">Vision</a> framework. Previously, we utilized&#xA0;<code>VNImageRequestHandler</code>&#xA0;and&#xA0;<code>VNRecognizeTextRequest</code>&#xA0;to extract text from an image.</p><p>Over the years, the Vision framework has evolved significantly. In iOS 18, Vision introduces new APIs that leverage the power of Swift 6. In this tutorial, we will explore how to use these new APIs to perform text recognition. You will be amazed by the improvements in the framework, which save you a significant amount of code to implement the same feature.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-demo.png" class="kg-image" alt="Extracting Text From Images Using Vision APIs" loading="lazy" width="1994" height="1400" srcset="https://www.appcoda.com/content/images/size/w600/2024/07/swiftui-text-recognition-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/07/swiftui-text-recognition-demo.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/07/swiftui-text-recognition-demo.png 1600w, https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-demo.png 1994w" sizes="(min-width: 720px) 720px"></figure><p>As always, we will create a demo application to guide you through the APIs. 
We will build a simple app that allows users to select an image from the photo library, and the app will extract the text from it in real time.</p><p>Let&#x2019;s get started.</p><h2 id="loading-the-photo-library-with-photospicker">Loading the Photo Library with PhotosPicker</h2><p>Assuming you&#x2019;ve created a new SwiftUI project on Xcode 16, go to&#xA0;<code>ContentView.swift</code>&#xA0;and start building the basic UI of the demo app:</p><pre><code class="language-swift">import SwiftUI
import PhotosUI

struct ContentView: View {
    
    @State private var selectedItem: PhotosPickerItem?
    
    @State private var recognizedText: String = &quot;No text is detected&quot;
    
    var body: some View {
        VStack {
            ScrollView {
                VStack {
                    Text(recognizedText)
                }
            }
            .contentMargins(.horizontal, 20.0, for: .scrollContent)
            
            Spacer()
            
            PhotosPicker(selection: $selectedItem, matching: .images) {
                Label(&quot;Select a photo&quot;, systemImage: &quot;photo&quot;)
            }
            .photosPickerStyle(.inline)
            .photosPickerDisabledCapabilities([.selectionActions])
            .frame(height: 400)
            
        }
        .ignoresSafeArea(edges: .bottom)
    }
}</code></pre><p>We utilize&#xA0;<code>PhotosPicker</code>&#xA0;to access the photo library and load the images in the lower part of the screen. The upper part of the screen features a scroll view for displaying the recognized text.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-photospicker.png" class="kg-image" alt="Extracting Text From Images Using Vision APIs" loading="lazy" width="1920" height="928" srcset="https://www.appcoda.com/content/images/size/w600/2024/07/swiftui-text-recognition-photospicker.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/07/swiftui-text-recognition-photospicker.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/07/swiftui-text-recognition-photospicker.png 1600w, https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-photospicker.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>We have a state variable to keep track of the selected photo. To detect the selected image and load it as&#xA0;<code>Data</code>, you can attach the&#xA0;<code>onChange</code>&#xA0;modifier to the&#xA0;<code>PhotosPicker</code>&#xA0;view like this:</p><pre><code class="language-swift">.onChange(of: selectedItem) { oldItem, newItem in
    Task {
        guard let imageData = try? await newItem?.loadTransferable(type: Data.self) else {
            return
        }
    }
}</code></pre><h2 id="text-recognition-with-vision">Text Recognition with Vision</h2><p>The new APIs in the Vision framework have simplified the implementation of text recognition. Vision offers 31 different request types, each tailored for a specific kind of image analysis. For instance,&#xA0;<code>DetectBarcodesRequest</code>&#xA0;is used for identifying and decoding barcodes. For our purposes, we will be using&#xA0;<code>RecognizeTextRequest</code>.</p><p>In the&#xA0;<code>ContentView</code>&#xA0;struct, add an import statement to import&#xA0;<code>Vision</code>&#xA0;and create a new function named&#xA0;<code>recognizeText</code>:</p><pre><code class="language-swift">private func recognizeText(image: UIImage) async {
    guard let cgImage = image.cgImage else { return }
    
    let textRequest = RecognizeTextRequest()
    
    let handler = ImageRequestHandler(cgImage)
    
    do {
        let result = try await handler.perform(textRequest)
        let recognizedStrings = result.compactMap { observation in
            observation.topCandidates(1).first?.string
        }
        
        recognizedText = recognizedStrings.joined(separator: &quot;\n&quot;)
        
    } catch {
        recognizedText = &quot;Failed to recognize text&quot;
        print(error)
    }
}</code></pre><p>This function takes in a&#xA0;<code>UIImage</code>&#xA0;object, which is the selected photo, and extracts the text from it. The&#xA0;<code>RecognizeTextRequest</code>&#xA0;object is designed to identify rectangular text regions within an image.</p><p>The&#xA0;<code>ImageRequestHandler</code>&#xA0;object processes the text recognition request on a given image. When we call its&#xA0;<code>perform</code>&#xA0;function, it returns the results as&#xA0;<code>RecognizedTextObservation</code>&#xA0;objects, each containing details about the location and content of the recognized text.</p><p>We then use&#xA0;<code>compactMap</code>&#xA0;to extract the recognized strings. The&#xA0;<code>topCandidates</code>&#xA0;method returns the best matches for the recognized text. By setting the maximum number of candidates to 1, we ensure that only the top candidate is retrieved.</p><p>Finally, we use the&#xA0;<code>joined</code>&#xA0;method to concatenate all the recognized strings.</p><p>With the&#xA0;<code>recognizeText</code>&#xA0;method in place, we can update the&#xA0;<code>onChange</code>&#xA0;modifier to call this method, performing text recognition on the selected photo.</p><pre><code class="language-swift">.onChange(of: selectedItem) { oldItem, newItem in
    Task {
        guard let imageData = try? await newItem?.loadTransferable(type: Data.self) else {
            return
        }
        
        // Avoid force-unwrapping in case the data is not a valid image
        guard let uiImage = UIImage(data: imageData) else { return }
        await recognizeText(image: uiImage)
    }
}</code></pre><p>With the implementation complete, you can now run the app in a simulator to test it out. If you have a photo containing text, the app should successfully extract and display the text on screen.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-demo-app.png" class="kg-image" alt="Extracting Text From Images Using Vision APIs" loading="lazy" width="2000" height="1358" srcset="https://www.appcoda.com/content/images/size/w600/2024/07/swiftui-text-recognition-demo-app.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/07/swiftui-text-recognition-demo-app.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/07/swiftui-text-recognition-demo-app.png 1600w, https://www.appcoda.com/content/images/2024/07/swiftui-text-recognition-demo-app.png 2036w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>With the introduction of the new Vision APIs in iOS 18, we can now achieve text recognition tasks with remarkable ease, requiring only a few lines of code to implement. This enhanced simplicity allows developers to quickly and efficiently integrate text recognition features into their applications.</p><p>What do you think about this improvement to the Vision framework? Feel free to leave a comment below to share your thoughts.</p>]]></content:encoded></item><item><title><![CDATA[Using Translation API in Swift and SwiftUI]]></title><description><![CDATA[<p>iOS already includes a system-wide translation feature that allows users to easily translate text into various languages. 
With the release of iOS 17.4 (and <a href="https://www.apple.com/ios/ios-18-preview/?ref=appcoda.com" rel="noreferrer">iOS 18</a>), you can now leverage the new Translation API to integrate this powerful translation capability into your apps.</p><p>Apple provides two options for developers</p>]]></description><link>https://www.appcoda.com/translation-api/</link><guid isPermaLink="false">66755d0d7f3d058096978ae4</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 21 Jun 2024 11:07:20 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1605170439002-90845e8c0137?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI2fHxhcHBsZSUyMHRyYW5zbGF0ZXxlbnwwfHx8fDE3MTg5Njc5ODR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1605170439002-90845e8c0137?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI2fHxhcHBsZSUyMHRyYW5zbGF0ZXxlbnwwfHx8fDE3MTg5Njc5ODR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Using Translation API in Swift and SwiftUI"><p>iOS already includes a system-wide translation feature that allows users to easily translate text into various languages. With the release of iOS 17.4 (and <a href="https://www.apple.com/ios/ios-18-preview/?ref=appcoda.com" rel="noreferrer">iOS 18</a>), you can now leverage the new Translation API to integrate this powerful translation capability into your apps.</p><p>Apple provides two options for developers to utilize the Translation API. The quickest and simplest method is to use the&#xA0;<code>.translationPresentation</code>&#xA0;modifier, which displays a translation overlay in your app. 
For a more flexible solution, you can directly call the Translation API to build a custom translation feature.</p><p>In this tutorial, we will explore both approaches and guide you through their implementation using a <a href="https://www.appcoda.com/swiftui" rel="noreferrer">SwiftUI</a> demo app. Please note that you will need Xcode 16 to follow along.</p><h2 id="using-the-translationpresentation-modifier">Using the translationPresentation Modifier</h2><p>Let&apos;s start with the straightforward approach: the&#xA0;<code>.translationPresentation</code>&#xA0;modifier. In Safari, users can highlight any text to access the translation option, which then displays a translation overlay with the translated text.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-presentation.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="2000" height="1132" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-presentation.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-presentation.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/06/swiftui-translation-presentation.png 1600w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-presentation.png 2056w" sizes="(min-width: 720px) 720px"></figure><p>If you want to bring this translation overlay to your app, all you need is to import the&#xA0;Translation&#xA0;package and use the&#xA0;.translationPresentation&#xA0;modifier. Take a look at the following sample code:</p><pre><code class="language-swift">import SwiftUI
import Translation

struct ContentView: View {
    
    @State private var showTranslation = false
    @State private var sampleText = article
    
    var body: some View {
        VStack {
            Text(sampleText)
                .font(.system(.body, design: .rounded))
                
            Button(&quot;Translate&quot;) {
                showTranslation.toggle()
            }
            .controlSize(.extraLarge)
            .buttonStyle(.borderedProminent)          
        }
        .padding()
        .translationPresentation(isPresented: $showTranslation, text: article)
    }
}</code></pre><p>The app displays some sample text in English with a&#xA0;<em>Translate</em>&#xA0;button placed below it.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-app-demo.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="1682" height="978" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-app-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-app-demo.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/06/swiftui-translation-app-demo.png 1600w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-app-demo.png 1682w" sizes="(min-width: 720px) 720px"></figure><p>Now, when you tap the &quot;Translate&quot; button, a translation overlay appears, displaying the translated text in your desired language. Other than iOS, the Translation API also works on both iPadOS and macOS. Currently, this translation feature cannot be tested in Xcode Preview; you must deploy the app onto a real device for testing.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-translated-text-bg.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="1558" height="1114" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-translated-text-bg.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-translated-text-bg.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-translated-text-bg.png 1558w" sizes="(min-width: 720px) 720px"></figure><p>The&#xA0;.translationPresentation&#xA0;modifier allows you to specify an optional action to be performed when users tap the &quot;Replace with Translation&quot; button. 
For instance, if you want to replace the original text with the translated text when the button is tapped, you can define this action like this:</p><pre><code class="language-swift">.translationPresentation(isPresented: $showTranslation, text: article) { translatedText in
    
    sampleText = translatedText
    
}</code></pre><p>Once you specify the action in the modifier, you will see the &#x201C;Replace with Translation&#x201D; option in the translation overlay.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-replace.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="1468" height="650" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-replace.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-replace.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-replace.png 1468w" sizes="(min-width: 720px) 720px"></figure><h2 id="working-with-the-translation-api">Working with the Translation API</h2><p>For greater control over translations, you can use the Translation API directly instead of relying on the translation overlay. For instance, if your app displays a list of article excerpts and you want to offer translation support, the translation overlay might not be ideal because users would have to select each excerpt individually for translation.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-batch-demo.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="1644" height="962" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-batch-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-batch-demo.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/06/swiftui-translation-batch-demo.png 1600w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-batch-demo.png 1644w" sizes="(min-width: 720px) 720px"></figure><p>A more efficient solution is to perform a batch translation of all the article excerpts when users tap the &quot;Translate&quot; button. 
Let&#x2019;s create a simple demo to see how to work with the Translation API and handle batch translations.</p><p>Below is the sample code for creating the UI above:</p><pre><code class="language-swift">struct BatchTranslationDemo: View {
    
    @State private var articles = sampleArticles
    
    var body: some View {
        NavigationStack {
            List(articles) { article in
                
                VStack {
                    Text(article.text)
                    
                    if article.translatedText != &quot;&quot; {
                        Text(article.translatedText)
                            .frame(maxWidth: .infinity, alignment: .leading)
                            .padding()
                            .background(Color(.systemGray4))
                    }
                }
                
            }
            .listStyle(.plain)

            .toolbar {
                Button {
                    
                } label: {
                    Label(&quot;Translate&quot;, systemImage: &quot;translate&quot;)
                        .labelStyle(.iconOnly)
                }

            }
        }
        
    }
}</code></pre><p>To perform a batch translation, you first need to define a translation configuration that specifies both source and target languages. In the code, you can declare a state variable to hold the configuration like below:</p><pre><code class="language-swift">@State private var configuration: TranslationSession.Configuration?</code></pre><p>And then, in the closure of the toolbar&#x2019;s&#xA0;Button, we can instantiate the configuration:</p><pre><code class="language-swift">Button {
    
    if configuration == nil {
        configuration = TranslationSession.Configuration(source: .init(identifier: &quot;en-US&quot;), target: .init(identifier: &quot;zh-Hant-TW&quot;))
        return
    }
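    
    // Illustrative note (not from the original tutorial): the source and
    // target parameters can also be omitted, in which case iOS determines
    // the languages automatically, e.g.:
    // configuration = TranslationSession.Configuration()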
    
    configuration?.invalidate()
    
} label: {
    Label(&quot;Translate&quot;, systemImage: &quot;translate&quot;)
        .labelStyle(.iconOnly)
}</code></pre><p>We specify English as the source language and Traditional Chinese as the target language. If you do not specify the languages, the Translation API will automatically create a default configuration, with iOS determining the source and target languages for you.</p><p>To perform translation, you attach the&#xA0;<code>.translationTask</code>&#xA0;modifier to the list view:</p><pre><code class="language-swift">List(articles) { article in
	.
	.
	.
}
.translationTask(configuration) { session in
    
    let requests = articles.map { TranslationSession.Request(sourceText: $0.text, clientIdentifier: $0.id.uuidString) }
    
    if let responses = try? await session.translations(from: requests) {
        
        responses.forEach { response in
            updateTranslation(response: response)
        }
    }
}</code></pre><p>This modifier initiates a translation task using the specified configuration. Whenever the configuration changes and is not&#xA0;<code>nil</code>, the translation task is executed. Within the closure, we prepare a set of translation requests and use the session&apos;s&#xA0;<code>translations(from:)</code>&#xA0;method to perform a batch translation.</p><p>If the translation task succeeds, it returns an array of translation responses containing the translated text. We then pass this translated text to the&#xA0;<code>updateTranslation</code>&#xA0;method to display it on screen.</p><pre><code class="language-swift">func updateTranslation(response: TranslationSession.Response) {
    
    guard let index = articles.firstIndex(where: { $0.id.uuidString == response.clientIdentifier }) else {
        return
    }
    
    articles[index].translatedText = response.targetText
    
}</code></pre><p>Let&apos;s deploy the app to a real device for testing. I tested the app on my iPad Air. When you tap the &quot;Translate&quot; button, the app should display additional article excerpts in Traditional Chinese.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-translation-batch-tc-demo.png" class="kg-image" alt="Using Translation API in Swift and SwiftUI" loading="lazy" width="1560" height="1080" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-translation-batch-tc-demo.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-translation-batch-tc-demo.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-translation-batch-tc-demo.png 1560w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>With the new Translation API introduced in iOS 17.4 (and iOS 18), developers can now easily integrate powerful translation features into their apps. This tutorial covers two primary methods for utilizing the API: the simple approach using the&#xA0;<code>.translationPresentation</code>&#xA0;modifier for displaying a translation overlay, and a more flexible approach using the Translation API directly for custom translation solutions.</p><p>We demonstrate both approaches in this tutorial. As illustrated in the demo, you can easily add translation capabilities with just a few lines of code. Given the simplicity and power of this API, there&#x2019;s no reason not to include translation functionality in your apps.</p>]]></content:encoded></item><item><title><![CDATA[What’s New in SwiftUI for iOS 18]]></title><description><![CDATA[<p>The world of SwiftUI is constantly evolving, with each update pushing the boundaries of app development. 
With iOS 18, the enhancements are both exciting and significant, set to transform how developers engage with SwiftUI.</p><p>This guide aims to explore every new feature and improvement in this version, offering a comprehensive</p>]]></description><link>https://www.appcoda.com/swiftui-ios-18/</link><guid isPermaLink="false">666c1b817f3d058096978ac8</guid><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Fri, 14 Jun 2024 10:36:54 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1555099962-4199c345e5dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIwfHxtYWMlMjBjb2Rpbmd8ZW58MHx8fHwxNzE4MzYxMjg4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1555099962-4199c345e5dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIwfHxtYWMlMjBjb2Rpbmd8ZW58MHx8fHwxNzE4MzYxMjg4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="What&#x2019;s New in SwiftUI for iOS 18"><p>The world of SwiftUI is constantly evolving, with each update pushing the boundaries of app development. With iOS 18, the enhancements are both exciting and significant, set to transform how developers engage with SwiftUI.</p><p>This guide aims to explore every new feature and improvement in this version, offering a comprehensive overview of the changes.</p><h2 id="the-floating-tab-bar">The Floating Tab Bar</h2><p>The Tab view in SwiftUI has been greatly enhanced with the addition of a floating tab bar. 
This new feature can seamlessly transition into a sidebar, providing users with an intuitive way to access the full functionality of an app.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-tabbar-sidebar.gif" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1576" height="862" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-tabbar-sidebar.gif 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-tabbar-sidebar.gif 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-tabbar-sidebar.gif 1576w" sizes="(min-width: 720px) 720px"></figure><p>On iPad, users can now tap a sidebar button on the tab bar to transform the tab bar into a sidebar. For developers, supporting this feature takes just one line of code: set the tab view style to <code>.sidebarAdaptable</code>:</p><pre><code class="language-swift">struct ContentView: View {
    @State var customization = TabViewCustomization()
    
    var body: some View {
        TabView {
            Tab(&quot;Home&quot;, systemImage: &quot;house.fill&quot;) {
                
            }
            
            Tab(&quot;Bookmark&quot;, systemImage: &quot;bookmark.circle.fill&quot;) {
                
            }
            
            Tab(&quot;Videos&quot;, systemImage: &quot;video.circle.fill&quot;) {
                
            }
            
            Tab(&quot;Profile&quot;, systemImage: &quot;person.crop.circle&quot;) {
                
            }
            
            Tab(&quot;Settings&quot;, systemImage: &quot;gear&quot;) {
                
            }
            
        }
        .tint(.yellow)
        .tabViewStyle(.sidebarAdaptable)
        .tabViewCustomization($customization)
    }
}
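
// Note: the tab closures above are intentionally left empty for brevity.
// In a real app, each closure returns that tab&apos;s root view, for example
// Tab(&quot;Home&quot;, systemImage: &quot;house.fill&quot;) { HomeView() }, where HomeView is your own view.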
</code></pre><p>Once the option is set, users can effortlessly switch between a sidebar and a tab bar, enhancing navigation flexibility. Additionally, the new tab bar offers extensive customization. By attaching the&#xA0;<code>.tabViewCustomization</code>&#xA0;modifier to the Tab view, users can tailor the menu items of the tab bar.</p><h2 id="sheet-presentation-sizing">Sheet Presentation Sizing</h2><p>Sheet presentation sizing is now consistent and streamlined across platforms. By using the&#xA0;<code>.presentationSizing</code>&#xA0;modifier, you can easily create sheets with ideal dimensions using presets such as&#xA0;<code>.form</code>&#xA0;or&#xA0;<code>.page</code>, or even specify custom sizes. Here is a sample:</p><pre><code class="language-swift">struct PresentationSizingDemo: View {
    
    @State private var showSheet = false
    
    var body: some View {
        Button {
            showSheet.toggle()
        } label: {
            Text(&quot;Show sheet&quot;)
        }
        .sheet(isPresented: $showSheet) {
            Text(&quot;This is a quick demo of presentation sizing.&quot;)
                .presentationSizing(.form)
        }
    }
}
</code></pre><p>On iPad, the&#xA0;<code>.form</code>&#xA0;preset displays a smaller sheet compared to&#xA0;<code>.page</code>. However, there is no size difference on iPhone.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-presentation-sizing.png" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="2000" height="1136" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-presentation-sizing.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-presentation-sizing.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/06/swiftui-presentation-sizing.png 1600w, https://www.appcoda.com/content/images/2024/06/swiftui-presentation-sizing.png 2140w" sizes="(min-width: 720px) 720px"></figure><h2 id="color-mesh-gradients">Color Mesh Gradients</h2><p>SwiftUI now offers extensive support for colorful mesh gradients. The new <code>MeshGradient</code> feature allows you to create two-dimensional gradients using a grid of positioned colors. By combining control points and colors, you can design a wide variety of gradient effects.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-mesh-gradient.png" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1844" height="1070" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-mesh-gradient.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-mesh-gradient.png 1000w, https://www.appcoda.com/content/images/size/w1600/2024/06/swiftui-mesh-gradient.png 1600w, https://www.appcoda.com/content/images/2024/06/swiftui-mesh-gradient.png 1844w" sizes="(min-width: 720px) 720px"></figure><p>Below shows a couple of gradients created using <code>MeshGradient</code>:</p><pre><code class="language-swift">struct ColorMeshDemo: View {
    var body: some View {
        VStack {
            MeshGradient(
                width: 3,
                height: 3,
                points: [
                    .init(0, 0), .init(0.5, 0), .init(1, 0),
                    .init(0, 0.5), .init(0.3, 0.5), .init(1, 0.5),
                    .init(0, 1), .init(0.5, 1), .init(1, 1)
                ],
                colors: [
                    .gray, .purple, .indigo,
                    .orange, .cyan, .blue,
                    .yellow, .green, .teal
                ]
            )
            
            MeshGradient(
                width: 2,
                height: 2,
                points: [
                    .init(0, 0), .init(1, 0),
                    .init(0, 1), .init(1, 1)
                ],
                colors: [
                    .red, .purple,
                    .yellow, .green
                ]
            )
        }
        .ignoresSafeArea()
    }
}
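
// Each point is an (x, y) coordinate in unit space, listed row by row from
// the top-left, and is paired with the color at the same index in `colors`.
// Nudging a point off the regular grid, like .init(0.3, 0.5) above, warps
// the gradient around that color.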

</code></pre><h2 id="zoom-transition">Zoom Transition</h2><p>SwiftUI now has built-in support for zoom transitions. You can use the <code>.matchedTransitionSource</code> modifier to easily render the zoom transition.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-zoom-transition.gif" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1082" height="722" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-zoom-transition.gif 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-zoom-transition.gif 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-zoom-transition.gif 1082w" sizes="(min-width: 720px) 720px"></figure><p>If you&apos;re familiar with using&#xA0;<code>matchedGeometryEffect</code>, you&apos;ll find&#xA0;<code>matchedTransitionSource</code>&#xA0;quite similar. Below is sample code we wrote to create the zoom transition shown above:</p><pre><code class="language-swift">struct ZoomTransitionDemo: View {
    let samplePhotos = (1...20).map { Photo(name: &quot;coffee-\($0)&quot;) }
    
    @Namespace() var namespace
    
    var body: some View {
        NavigationStack {
            ScrollView {
                LazyVGrid(columns: [ GridItem(.adaptive(minimum: 150)) ]) {
                    
                    ForEach(samplePhotos) { photo in
                        NavigationLink {
                            Image(photo.name)
                                .resizable()
                                .navigationTransition(.zoom(sourceID: photo.id, in: namespace))
                        } label: {
                            Image(photo.name)
                                .resizable()
                                .scaledToFill()
                                .frame(minWidth: 0, maxWidth: .infinity)
                                .frame(height: 150)
                                .cornerRadius(30.0)
                        }
                        .matchedTransitionSource(id: photo.id, in: namespace)
                        
                    }
                }
            }
        }
        .padding()
    }
}
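
// The samples in this article assume a simple `Photo` model. A minimal
// definition could look like this:
struct Photo: Identifiable {
    let id = UUID()
    let name: String
}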
</code></pre><p>The&#xA0;<code>matchedTransitionSource</code>&#xA0;modifier is applied to a&#xA0;<code>NavigationLink</code>&#xA0;with a specific photo ID, designating the view as the source of the navigation transition. For the destination view, which is also an&#xA0;<code>Image</code>&#xA0;view, the&#xA0;<code>navigationTransition</code>&#xA0;modifier is used to render the zoom transition.</p><h2 id="more-animations-for-sf-symbols-6">More Animations for SF Symbols 6</h2><p>iOS 17 introduced a fantastic collection of expressive animations for SF Symbols.&#xA0;Developers can leverage these animations using the new&#xA0;<strong><code>symbolEffect</code></strong>&#xA0;modifier. iOS 18 pushes the SF Symbols to version 6 with an even wider variety of animated symbols for developers to utilize in their apps.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-sfanimation-rotate.gif" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="748" height="286" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-sfanimation-rotate.gif 600w, https://www.appcoda.com/content/images/2024/06/swiftui-sfanimation-rotate.gif 748w" sizes="(min-width: 720px) 720px"></figure><p>Here is a sample code snippet for the new <code>rotate</code> animation:</p><pre><code class="language-swift">Image(systemName: &quot;ellipsis.message&quot;)
            .font(.system(size: 300))
            .symbolRenderingMode(.palette)
            .foregroundStyle(.purple, .gray)
            .symbolEffect(.rotate, value: animate)
            .onTapGesture {
                animate.toggle()
            }
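
// `animate` is assumed to be a Boolean state on the enclosing view:
// @State private var animate = false
// The .wiggle and .breathe effects are applied the same way, e.g.:
// .symbolEffect(.wiggle, value: animate)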
</code></pre><p>On top of the <code>rotate</code> animation, SF Symbols 6 also provides two additional animations: <code>.wiggle</code> and <code>.breathe</code>.</p><h2 id="enhancements-of-swiftui-charts">Enhancements of SwiftUI Charts</h2><p>The SwiftUI Charts framework now supports vectorized and function plots. For example, let&#x2019;s say you want to plot a graph for the following function:</p><pre><code>y = x^2</code></pre><p>You can use <code>LinePlot</code> to plot the graph like this:</p><pre><code class="language-swift">Chart {
    LinePlot(x: &quot;x&quot;, y: &quot;y&quot;) { x in
        return pow(x, 2)
    }
    .foregroundStyle(.green)
    .lineStyle(.init(lineWidth: 10))
}
.chartXScale(domain: -4...4)
.chartYScale(domain: -4...4)
.chartXAxis {
    AxisMarks(values: .automatic(desiredCount: 10))
}
.chartYAxis {
    AxisMarks(values: .automatic(desiredCount: 10))
}
.chartPlotStyle { plotArea in
    plotArea
        .background(.yellow.opacity(0.02))
}
</code></pre><p>Simply provide the function to a&#xA0;<code>LinePlot</code>&#xA0;and the framework graphs it for you.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-charts-lineplot.png" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1542" height="960" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-charts-lineplot.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-charts-lineplot.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-charts-lineplot.png 1542w" sizes="(min-width: 720px) 720px"></figure><h2 id="more-control-of-scroll-views">More Control of Scroll Views</h2><p>The new version of SwiftUI delivers a powerful set of new APIs that give developers fine-grained control over their scroll views. The introduction of the&#xA0;<code>onScrollGeometryChange</code>&#xA0;modifier allows you to keep track of the state of scroll views. This new capability enables you to efficiently react to changes in the scroll view&apos;s content offsets, content size, and other scroll-related properties.</p><p>Here&apos;s a sample code snippet that demonstrates how you can use this modifier to display a &quot;Scroll to Top&quot; button after the user has scrolled down a list:</p><pre><code class="language-swift">struct ScrollViewDemo: View {
    
    let samplePhotos = (1...20).map { Photo(name: &quot;coffee-\($0)&quot;) }
    
    @State private var showScrollToTop = false
    
    var body: some View {
        ScrollView {
            VStack {
                ForEach(samplePhotos) { photo in
                    Image(photo.name)
                        .resizable()
                        .scaledToFill()
                        .frame(height: 200)
                        .clipShape(RoundedRectangle(cornerRadius: 15))
                }
            }
        }
        .padding(.horizontal)
        .overlay(alignment: .bottom) {
            if showScrollToTop {
                Button(&quot;Scroll to top&quot;) {
                    // Scroll the scroll view back to the top here
                }
                .controlSize(.extraLarge)
                .buttonStyle(.borderedProminent)
                .tint(.green)
            }
        }
        .onScrollGeometryChange(for: Bool.self) { geometry in
            geometry.contentOffset.y &lt; geometry.contentInsets.bottom + 200
            
        } action: { oldValue, newValue in
            withAnimation {
                showScrollToTop = !newValue
            }
        }

    }
}
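
// The button&apos;s action is left empty above. One possible way to implement it
// on iOS 18 (our sketch, not part of the original sample) is the new
// ScrollPosition API: declare @State private var position = ScrollPosition(),
// attach .scrollPosition($position) to the ScrollView, and call
// position.scrollTo(edge: .top) inside the button action.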
</code></pre><p>The geometry of a scroll view changes frequently while scrolling. We can leverage the <code>onScrollGeometryChange</code> modifier to capture the update and display the &#x201C;Scroll to top&#x201D; button accordingly.</p><p>SwiftUI also introduces the&#xA0;<code>onScrollVisibilityChange</code>&#xA0;modifier for views within a scroll view. This modifier allows you to detect when a particular view becomes visible and perform specific actions in response.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-scrollview-visible.gif" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="968" height="440" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-scrollview-visible.gif 600w, https://www.appcoda.com/content/images/2024/06/swiftui-scrollview-visible.gif 968w" sizes="(min-width: 720px) 720px"></figure><p>Suppose we have a&#xA0;<code>Rectangle</code>&#xA0;view at the end of a scroll view and we want to trigger a color change animation only when this view comes into view. We can use the&#xA0;<code>onScrollVisibilityChange</code>&#xA0;modifier to detect when the view becomes visible and when it goes off-screen.</p><pre><code class="language-swift">Rectangle()
    .fill(color)
    .frame(height: 100)
    .onScrollVisibilityChange(threshold: 0.9) { visible in
        withAnimation(.linear(duration: 5)) {
            color = visible ? .green : .blue
        }
    }
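
// `color` is assumed to be a state variable on the enclosing view, e.g.
// @State private var color: Color = .blue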
</code></pre><h2 id="widgets-in-control-center">Widgets in Control Center</h2><p>You now have the ability to design custom resizable controls, like buttons and toggles, which can be placed in the Control Center or on the lock screen. Controls are a new kind of widget that is easy to build with App Intents.</p><p>To create a control widget in Control Center, you adopt the <code>ControlWidget</code> protocol and provide the implementation. Here is sample code provided by Apple:</p><pre><code class="language-swift">struct StartPartyControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(
            kind: &quot;com.apple.karaoke_start_party&quot;
        ) {
            ControlWidgetButton(action: StartPartyIntent()) {
                Label(&quot;Start the Party!&quot;, systemImage: &quot;music.mic&quot;)
                Text(PartyManager.shared.nextParty.name)
            }
        }
    }
}
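
// StartPartyIntent and PartyManager come from Apple&apos;s sample project; in
// your own control, substitute any type conforming to AppIntent and your own
// data source.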
</code></pre><p>We will further look into control widgets in a separate tutorial.</p><h2 id="a-new-mix-modifier-for-color">A new Mix Modifier for Color</h2><p>You can now blend two different colors to create your desired hue by using the new <code>mix</code> modifier. Here is an example:</p><pre><code class="language-swift">VStack {
    Color.purple.mix(with: .green, by: 0.3)
        .frame(height: 100)
    
    Color.purple.mix(with: .green, by: 0.5)
        .frame(height: 100)
    
    Color.purple.mix(with: .green, by: 0.8)
        .frame(height: 100)
}
</code></pre><p>Simply provide the&#xA0;<code>mix</code>&#xA0;modifier with the color to blend and the blend ratio. SwiftUI will then generate the new color based on these parameters.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-color-blend-mix.png" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1062" height="506" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-color-blend-mix.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-color-blend-mix.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-color-blend-mix.png 1062w" sizes="(min-width: 720px) 720px"></figure><h2 id="visual-effects-for-text">Visual Effects for Text</h2><p>You can now extend SwiftUI Text views with custom rendering effects by adopting the <code>TextRenderer</code>. Here is a sample text renderer:</p><pre><code class="language-swift">struct CustomTextRenderer: TextRenderer {
    
    func draw(layout: Text.Layout, in context: inout GraphicsContext) {
        
        for line in layout {
            for (index, slice) in line.enumerated() {
                context.opacity = (index % 2 == 0) ? 0.4 : 1.0
                context.translateBy(x: 0, y: index % 2 != 0 ? -15 : 15)
                
                context.draw(slice)
            }
        }
    }
}

struct TextAnimationDemo: View {
    var body: some View {
        Text(&quot;What&apos;s New in SwiftUI&quot;)
            .font(.system(size: 100))
            .textRenderer(CustomTextRenderer())
    }
}
</code></pre><p>By implementing the <code>draw</code> method, you can customize the visual effect of each character.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/2024/06/swiftui-text-visual-effect.png" class="kg-image" alt="What&#x2019;s New in SwiftUI for iOS 18" loading="lazy" width="1202" height="608" srcset="https://www.appcoda.com/content/images/size/w600/2024/06/swiftui-text-visual-effect.png 600w, https://www.appcoda.com/content/images/size/w1000/2024/06/swiftui-text-visual-effect.png 1000w, https://www.appcoda.com/content/images/2024/06/swiftui-text-visual-effect.png 1202w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>The iOS 18 update introduces a host of significant enhancements to SwiftUI. This tutorial offers a concise introduction to some of the new features. For more complex features, we will be creating detailed, standalone tutorials to thoroughly explore their applications and benefits. Be sure to stay tuned for these upcoming in-depth guides.</p>]]></content:encoded></item><item><title><![CDATA[Building an AI Image Recognition App Using Google Gemini and SwiftUI]]></title><description><![CDATA[<p>Previously, we provided a <a href="https://www.appcoda.com/swiftui-google-gemini-ai/?ref=localhost">brief introduction to Google Gemini APIs</a> and demonstrated how to build a Q&amp;A application using SwiftUI. You should realize how straightforward it is to integrate Google Gemini and enhance your apps with AI features. 
We have also developed a demo application to demonstrate how</p>]]></description><link>https://www.appcoda.com/swiftui-image-recognition/</link><guid isPermaLink="false">66612a0f166d3c03cf011534</guid><category><![CDATA[AI]]></category><category><![CDATA[SwiftUI]]></category><dc:creator><![CDATA[Simon Ng]]></dc:creator><pubDate>Tue, 14 May 2024 18:28:49 GMT</pubDate><media:content url="https://www.appcoda.com/content/images/wordpress/2024/05/neh9w4cdmna.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.appcoda.com/content/images/wordpress/2024/05/neh9w4cdmna.jpg" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI"><p>Previously, we provided a <a href="https://www.appcoda.com/swiftui-google-gemini-ai/?ref=localhost">brief introduction to Google Gemini APIs</a> and demonstrated how to build a Q&amp;A application using SwiftUI. You should realize how straightforward it is to integrate Google Gemini and enhance your apps with AI features. We have also developed a demo application to demonstrate how to construct a chatbot app using the AI APIs.</p><p>The <code>gemini-pro</code> model discussed in the previous tutorial is limited to generating text from text-based input. However, Google Gemini also offers a multimodal model called <code>gemini-pro-vision</code>, which can generate text descriptions from images. In other words, this model has the capacity to detect and describe objects in an image.</p><p>In this tutorial, we will demonstrate how to use Google Gemini APIs for image recognition. 
This simple app allows users to select an image from their photo library and uses Gemini to describe the contents of the photo.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-demo-gray.png" class="kg-image" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI" loading="lazy" width="2000" height="1136" srcset="https://www.appcoda.com/content/images/size/w600/wordpress/2024/05/swiftui-objectrecognition-demo-gray.png 600w, https://www.appcoda.com/content/images/size/w1000/wordpress/2024/05/swiftui-objectrecognition-demo-gray.png 1000w, https://www.appcoda.com/content/images/size/w1600/wordpress/2024/05/swiftui-objectrecognition-demo-gray.png 1600w, https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-demo-gray.png 2028w" sizes="(min-width: 720px) 720px"></figure><p>Before proceeding with this tutorial, please visit <a href="https://makersuite.google.com/app/apikey?ref=localhost">Google AI Studio</a> and create your own API key if you haven&#x2019;t done so already.</p><h2 id="adding-google-generative-ai-package-in-xcode-projects">Adding Google Generative AI Package in Xcode Projects</h2><p>Assuming you&#x2019;ve already created an app project in Xcode, the first step to using Gemini APIs is importing the SDK. To accomplish this, right-click on the project folder in the project navigator and select <em>Add Package Dependencies</em>. In the dialog box, input the following package URL:</p><pre><code class="language-bash">https://github.com/google/generative-ai-swift</code></pre><p>You can then click on the <em>Add Package</em> button to download and incorporate the <em>GoogleGenerativeAI</em> package into the project.</p><p>Next, to store the API key, create a property list file named&#xA0;<code>GenerativeAI-Info.plist</code>. 
In this file, create a key named&#xA0;<code>API_KEY</code>&#xA0;and enter your API key as the value.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-apikey.png" class="kg-image" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI" loading="lazy" width="2000" height="552" srcset="https://www.appcoda.com/content/images/size/w600/wordpress/2024/05/swiftui-objectrecognition-apikey.png 600w, https://www.appcoda.com/content/images/size/w1000/wordpress/2024/05/swiftui-objectrecognition-apikey.png 1000w, https://www.appcoda.com/content/images/size/w1600/wordpress/2024/05/swiftui-objectrecognition-apikey.png 1600w, https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-apikey.png 2022w" sizes="(min-width: 720px) 720px"></figure><p>To read the API key from the property file, create another Swift file named&#xA0;<code>APIKey.swift</code>. Add the following code to this file:</p><pre><code class="language-swift">enum APIKey {
  // Fetch the API key from `GenerativeAI-Info.plist`
  static var `default`: String {

    guard let filePath = Bundle.main.path(forResource: &quot;GenerativeAI-Info&quot;, ofType: &quot;plist&quot;)
    else {
      fatalError(&quot;Couldn&apos;t find file &apos;GenerativeAI-Info.plist&apos;.&quot;)
    }

    let plist = NSDictionary(contentsOfFile: filePath)

    guard let value = plist?.object(forKey: &quot;API_KEY&quot;) as? String else {
      fatalError(&quot;Couldn&apos;t find key &apos;API_KEY&apos; in &apos;GenerativeAI-Info.plist&apos;.&quot;)
    }

    if value.starts(with: &quot;_&quot;) {
      fatalError(
        &quot;Follow the instructions at https://ai.google.dev/tutorials/setup to get an API key.&quot;
      )
    }

    return value
  }
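
  // Note: this file also needs `import Foundation` at the top, since Bundle
  // and NSDictionary are Foundation types.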
}</code></pre><h2 id="building-the-app-ui">Building the App UI</h2><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-basicui.png" class="kg-image" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI" loading="lazy" width="1802" height="1092" srcset="https://www.appcoda.com/content/images/size/w600/wordpress/2024/05/swiftui-objectrecognition-basicui.png 600w, https://www.appcoda.com/content/images/size/w1000/wordpress/2024/05/swiftui-objectrecognition-basicui.png 1000w, https://www.appcoda.com/content/images/size/w1600/wordpress/2024/05/swiftui-objectrecognition-basicui.png 1600w, https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-basicui.png 1802w" sizes="(min-width: 720px) 720px"></figure><p>The user interface is straightforward. It features a button at the bottom of the screen, allowing users to access the built-in Photo library. After a photo is selected, it appears in the image view.</p><p>To bring up the built-in Photos library, we use <code>PhotosPicker</code>, which is a native photo picker view for managing photo selections. When presenting the <code>PhotosPicker</code> view, it showcases the photo album in a separate sheet, rendered atop your app&#x2019;s interface.</p><p>First, you need to import the <code>PhotosUI</code> framework in order to use the photo picker view:</p><pre><code class="language-swift">import PhotosUI</code></pre><p>Next, update the <code>ContentView</code> struct like this to implement the user interface:</p><pre><code class="language-swift">struct ContentView: View {

    @State private var selectedItem: PhotosPickerItem?
    @State private var selectedImage: Image?

    var body: some View {
        VStack {

            if let selectedImage {
                selectedImage
                    .resizable()
                    .scaledToFit()
                    .clipShape(RoundedRectangle(cornerRadius: 20.0))
            } else {

                Image(systemName: &quot;photo&quot;)
                    .imageScale(.large)
                    .foregroundStyle(.gray)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .background(Color(.systemGray6))
                    .clipShape(RoundedRectangle(cornerRadius: 20.0))
            }

            Spacer()

            PhotosPicker(selection: $selectedItem, matching: .images) {
                Label(&quot;Select Photo&quot;, systemImage: &quot;photo&quot;)
                    .frame(maxWidth: .infinity)
                    .bold()
                    .padding()
                    .foregroundStyle(.white)
                    .background(.indigo)
                    .clipShape(RoundedRectangle(cornerRadius: 20.0))
            }
        }
        .padding(.horizontal)
        .onChange(of: selectedItem) { oldItem, newItem in
            Task {
                if let image = try? await newItem?.loadTransferable(type: Image.self) {
                    selectedImage = image
                }
            }
        }
    }
}</code></pre><p>To use the&#xA0;<code>PhotosPicker</code>&#xA0;view, we declare a state variable to store the photo selection and then instantiate a&#xA0;<code>PhotosPicker</code>&#xA0;view by passing the binding to the state variable. The&#xA0;<code>matching</code>&#xA0;parameter allows you to specify the asset type to display.</p><p>When a photo is selected, the photo picker automatically closes, storing the chosen photo in the <code>selectedItem</code> variable of type <code>PhotosPickerItem</code>. The <code>loadTransferable(type:completionHandler:)</code> method can be used to load the image. By attaching the <code>onChange</code> modifier, you can monitor updates to the <code>selectedItem</code> variable. If there is a change, we invoke the <code>loadTransferable</code> method to load the asset data and save the image to the <code>selectedImage</code> variable.</p><p>Because <code>selectedImage</code> is a state variable, SwiftUI automatically detects when its content changes and displays the image on the screen.</p><h2 id="image-analysis-and-object-recognition">Image Analysis and Object Recognition</h2><p>Having selected an image, the next step is to use the Gemini APIs to perform image analysis and generate a text description from the image.</p><p>Before using the APIs, insert the following statement at the very beginning of <code>ContentView.swift</code> to import the framework:</p><pre><code class="language-swift">import GoogleGenerativeAI</code></pre><p>Next, declare a <code>model</code> property to hold the AI model:</p><pre><code class="language-swift">let model = GenerativeModel(name: &quot;gemini-pro-vision&quot;, apiKey: APIKey.default)</code></pre><p>For image analysis, we utilize the <code>gemini-pro-vision</code> model provided by Google Gemini. Then, we declare two state variables: one for storing the generated text and another for tracking the analysis status.</p><pre><code class="language-swift">@State private var analyzedResult: String?
@State private var isAnalyzing: Bool = false</code></pre><p>Next, create a new function named <code>analyze()</code> to perform image analysis:</p><pre><code class="language-swift">@MainActor func analyze() {

    self.analyzedResult = nil
    self.isAnalyzing.toggle()

    // Convert Image to UIImage
    let imageRenderer = ImageRenderer(content: selectedImage)
    imageRenderer.scale = 1.0

    guard let uiImage = imageRenderer.uiImage else {
        self.isAnalyzing = false
        return
    }

    let prompt = &quot;Describe the image and explain what the objects found in the image&quot;

    Task {
        do {
            let response = try await model.generateContent(prompt, uiImage)

            if let text = response.text {
                print(&quot;Response: \(text)&quot;)
                self.analyzedResult = text
                self.isAnalyzing.toggle()
            }
        } catch {
            print(error.localizedDescription)
            self.isAnalyzing = false
        }
    }
}</code></pre><p>Before using the model&#x2019;s API, we need to convert the image view into a <code>UIImage</code>. We then invoke the <code>generateContent</code> method with the image and a predefined prompt, asking Google Gemini to describe the image and identify the objects within it.</p><p>When the response arrives, we extract the text description and assign it to the <code>analyzedResult</code> variable.</p><p>Next, insert the following code and place it above the <code>Spacer()</code> view:</p><pre><code class="language-swift">ScrollView {
    Text(analyzedResult ?? (isAnalyzing ? &quot;Analyzing...&quot; : &quot;Select a photo to get started&quot;))
        .font(.system(.title2, design: .rounded))
}
.padding()
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .leading)
.background(Color(.systemGray6))
.clipShape(RoundedRectangle(cornerRadius: 20.0))</code></pre><p>This scroll view displays the text generated by Gemini. Optionally, you can add an <code>overlay</code> modifier to the <code>selectedImage</code> view. This will display a progress view while an image analysis is being performed.</p><pre><code class="language-swift">.overlay {

    if isAnalyzing {
        RoundedRectangle(cornerRadius: 20.0)
            .fill(.black)
            .opacity(0.5)

        ProgressView()
            .tint(.white)
    }
}</code></pre><p>After implementing all the changes, the preview pane should now display the newly designed user interface. The interface comprises the selected image, the image description area, and a button for selecting photos from the photo library.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-scrollview.png" class="kg-image" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI" loading="lazy" width="1920" height="856" srcset="https://www.appcoda.com/content/images/size/w600/wordpress/2024/05/swiftui-objectrecognition-scrollview.png 600w, https://www.appcoda.com/content/images/size/w1000/wordpress/2024/05/swiftui-objectrecognition-scrollview.png 1000w, https://www.appcoda.com/content/images/size/w1600/wordpress/2024/05/swiftui-objectrecognition-scrollview.png 1600w, https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-scrollview.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>Finally, insert a line of code in the <code>onChange</code> modifier to call the <code>analyze()</code> method right after <code>selectedImage</code> is updated. That&#x2019;s all! You can now test the app in the preview pane. Click on the <em>Select Photo</em> button and choose a photo from the library. 
The app will then send the selected photo to Google Gemini for analysis and display the generated text in the scroll view.</p><figure class="kg-card kg-image-card"><img src="https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-demoresult.png" class="kg-image" alt="Building an AI Image Recognition App Using Google Gemini and SwiftUI" loading="lazy" width="1920" height="856" srcset="https://www.appcoda.com/content/images/size/w600/wordpress/2024/05/swiftui-objectrecognition-demoresult.png 600w, https://www.appcoda.com/content/images/size/w1000/wordpress/2024/05/swiftui-objectrecognition-demoresult.png 1000w, https://www.appcoda.com/content/images/size/w1600/wordpress/2024/05/swiftui-objectrecognition-demoresult.png 1600w, https://www.appcoda.com/content/images/wordpress/2024/05/swiftui-objectrecognition-demoresult.png 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="summary">Summary</h2><p>The tutorial demonstrates how to build an AI image recognition app using Google Gemini APIs and SwiftUI. The app allows users to select an image from their photo library and uses Gemini to describe the contents of the photo.</p><p>From the code we have just worked on, you can see that it only requires a few lines to prompt Google Gemini to generate text from an image. Although this demo illustrates the process using a single image, the API actually supports multiple images. For further details on how it functions, please refer to the <a href="https://ai.google.dev/gemini-api/docs/?ref=localhost">official documentation</a>.</p>]]></content:encoded></item></channel></rss>