Using Core ML with SwiftUI: Build an AI-Powered App

Artificial Intelligence (AI) is revolutionizing mobile app experiences, and with Core ML, Apple makes it easy to integrate machine learning into iOS applications. When combined with SwiftUI, you can create powerful, AI-driven applications with a modern and interactive UI.

In this guide, we’ll explore how to integrate Core ML into a SwiftUI app, process images, and display real-time AI-powered results.

What is Core ML?

Core ML is Apple’s machine learning framework that allows developers to integrate trained models into iOS apps for tasks like image recognition, text analysis, and object detection.

Key Benefits of Core ML:

✅ Optimized for Apple hardware (fast and efficient on-device processing)
✅ Integrates with Apple frameworks such as Vision, Natural Language, and Sound Analysis
✅ Works offline without needing an internet connection
✅ Ensures privacy by running models on-device

Step 1: Get a Pre-Trained Core ML Model

You can download pre-trained models from Apple’s Core ML Model Gallery or convert models from other frameworks using Core ML Tools.

For this tutorial, we’ll use MobileNetV2, a model that can classify images into different categories.

🔗 Download MobileNetV2: Apple’s Core ML Models

Once downloaded, drag and drop the .mlmodel file into your Xcode project.
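
When you add the file, Xcode compiles the model and generates a typed Swift class named after it. A minimal sketch of loading that generated class, assuming the file you added is called MobileNetV2.mlmodel (the class name follows the file name, and loadMobileNet is just an illustrative helper):

import CoreML

func loadMobileNet() throws -> MobileNetV2 {
    // MLModelConfiguration lets you hint which compute units Core ML may use.
    let config = MLModelConfiguration()
    config.computeUnits = .all   // CPU, GPU, or Neural Engine, whichever Core ML prefers

    // The MobileNetV2 initializer is generated by Xcode from the .mlmodel file.
    return try MobileNetV2(configuration: config)
}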

Step 2: Create a SwiftUI View to Select and Classify an Image

1. Load the Core ML Model

First, wrap the model in a VNCoreMLModel inside an ObservableObject view model so SwiftUI views can observe the classification result.

import SwiftUI
import CoreML
import Vision
import UIKit

class ImageClassifier: ObservableObject {
    @Published var classificationResult: String = ""

    private let model: VNCoreMLModel

    init() {
        do {
            // Load the Xcode-generated MobileNetV2 model and wrap it for Vision.
            let config = MLModelConfiguration()
            let mlModel = try MobileNetV2(configuration: config).model
            self.model = try VNCoreMLModel(for: mlModel)
        } catch {
            fatalError("Failed to load Core ML model: \(error)")
        }
    }

    func classifyImage(_ image: UIImage) {
        // Vision works with CIImage, so convert the UIImage first.
        guard let ciImage = CIImage(image: image) else { return }

        let request = VNCoreMLRequest(model: model) { request, error in
            guard let results = request.results as? [VNClassificationObservation],
                  let topResult = results.first else {
                return
            }
            // Publish the top label on the main thread so SwiftUI can update the UI.
            DispatchQueue.main.async {
                self.classificationResult = topResult.identifier
            }
        }

        let handler = VNImageRequestHandler(ciImage: ciImage)
        try? handler.perform([request])
    }
}
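
Note that try? silently drops any Vision error, and the request runs on whatever thread called classifyImage. For larger images or models you may prefer to perform the request on a background queue and log failures; a hedged sketch of such an alternative classifyImage (a variant, not part of the original view model):

func classifyImage(_ image: UIImage) {
    guard let ciImage = CIImage(image: image) else { return }

    // Perform the Vision request off the main thread to keep the UI responsive.
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }

        let request = VNCoreMLRequest(model: self.model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            DispatchQueue.main.async {
                // Include the confidence so users can judge the prediction.
                self.classificationResult = "\(top.identifier) (\(Int(top.confidence * 100))%)"
            }
        }

        do {
            try VNImageRequestHandler(ciImage: ciImage).perform([request])
        } catch {
            print("Classification failed: \(error)")
        }
    }
}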

2. Create the SwiftUI UI

Now, let’s build a UI where users can pick an image from the gallery and classify it using the Core ML model.

struct ContentView: View {
    @StateObject private var classifier = ImageClassifier()
    @State private var selectedImage: UIImage? = nil
    @State private var isImagePickerPresented = false

    var body: some View {
        VStack {
            if let image = selectedImage {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
                    .frame(height: 300)
            } else {
                Text("Tap to select an image")
                    .foregroundColor(.gray)
            }

            Button("Choose Image") {
                isImagePickerPresented = true
            }
            .padding()

            Text("Classification: \(classifier.classificationResult)")
                .font(.headline)
                .padding()
        }
        .sheet(isPresented: $isImagePickerPresented) {
            ImagePicker(image: $selectedImage) { image in
                classifier.classifyImage(image)
            }
        }
    }
}
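
If you are starting from a fresh project, the view plugs into the standard SwiftUI app lifecycle; a minimal entry point might look like the following (the app struct name is just a placeholder):

import SwiftUI

@main
struct CoreMLDemoApp: App {   // placeholder name; use your project's app struct
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}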

3. Implement ImagePicker

To allow image selection, we use a UIImagePickerController wrapper.

import UIKit
import SwiftUI

struct ImagePicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    var onImagePicked: (UIImage) -> Void

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
        let parent: ImagePicker

        init(_ parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let uiImage = info[.originalImage] as? UIImage {
                parent.image = uiImage
                parent.onImagePicked(uiImage)
            }
            picker.dismiss(animated: true)
        }
    }
}
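
On iOS 14 and later you could swap this wrapper for PHPickerViewController, which runs out of process and does not require photo library permission. A hedged sketch of an equivalent picker built on PhotosUI (PhotoPicker is an illustrative name, not part of the tutorial above):

import PhotosUI
import SwiftUI

struct PhotoPicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    var onImagePicked: (UIImage) -> Void

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    func makeUIViewController(context: Context) -> PHPickerViewController {
        var config = PHPickerConfiguration()
        config.filter = .images        // only show photos
        config.selectionLimit = 1
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: PHPickerViewController, context: Context) {}

    class Coordinator: NSObject, PHPickerViewControllerDelegate {
        let parent: PhotoPicker

        init(_ parent: PhotoPicker) { self.parent = parent }

        func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
            picker.dismiss(animated: true)
            guard let provider = results.first?.itemProvider,
                  provider.canLoadObject(ofClass: UIImage.self) else { return }
            // Loading the image is asynchronous, so hop back to the main thread before updating state.
            provider.loadObject(ofClass: UIImage.self) { object, _ in
                if let uiImage = object as? UIImage {
                    DispatchQueue.main.async {
                        self.parent.image = uiImage
                        self.parent.onImagePicked(uiImage)
                    }
                }
            }
        }
    }
}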

Step 3: Run the App and Test AI Classification

  1. Launch the app on a device or simulator.
  2. Choose an image from your photo library.
  3. The app will classify the image and display the predicted object.

Conclusion

With Core ML and SwiftUI, you can easily integrate AI-powered features into your iOS apps. This tutorial covered:

✅ Adding a pre-trained Core ML model to your project
✅ Selecting images and processing them with Vision
✅ Displaying classification results in real-time

🚀 Next Steps:

  • Train a custom Core ML model with Create ML or TensorFlow.
  • Explore Real-Time Object Detection with Vision and Core ML.
  • Use Natural Language Processing (NLP) to analyze text within SwiftUI apps.

