Skataka posted an update Oct 13
SwiftMistralCoreML
Hi Everyone,

I have created a Swift library to interact with Mistral 7B models in CoreML on macOS.

I hope you find it helpful.

https://github.com/cardona/SwiftMistralCoreML

An open-source Swift library that enables macOS and iOS projects to use the Mistral 7B Instruct models (INT4 and upcoming FP16) in chat mode. The library includes a complete Swift implementation of the tokenizer and Byte Pair Encoding (BPE) encoder, providing an out-of-the-box solution for integrating advanced language models into your Swift applications.

Features

- Full Swift Implementation: Includes a tokenizer and BPE encoder written entirely in Swift (a minimal BPE sketch follows this list).
- CoreML Integration: Leverages Apple's Core ML framework to run Mistral 7B Instruct models efficiently.
- Multiple Decoding Strategies: Supports Greedy and Top-K sampling, with plans to add more strategies.
- Chat Functionality: Designed to work in chat mode for interactive applications.
- FP16 Support (Coming Soon): A future version will support FP16 models for improved performance.
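
To give a feel for what the BPE encoder in the feature list does, here is a minimal, self-contained sketch of BPE merging in plain Swift. The merge table is a toy example and the code is illustrative only; it is not taken from SwiftMistralCoreML's actual tokenizer or Mistral's vocabulary.

```swift
import Foundation

// Toy BPE merger: repeatedly apply the highest-priority adjacent merge rule.
// Illustrative only; not SwiftMistralCoreML's implementation.
struct BPEEncoder {
    /// Merge rules in priority order (earlier = higher priority), e.g. from a merges.txt file.
    let merges: [(String, String)]

    /// Rank lookup: lower rank merges first.
    private var ranks: [String: Int] {
        var table: [String: Int] = [:]
        for (index, pair) in merges.enumerated() { table["\(pair.0) \(pair.1)"] = index }
        return table
    }

    func encode(word: String) -> [String] {
        // Start from single characters, then greedily apply the best-ranked merge.
        var tokens = word.map { String($0) }
        let rankTable = ranks
        while tokens.count > 1 {
            // Find the adjacent pair with the lowest (best) merge rank.
            var best: (index: Int, rank: Int)? = nil
            for i in 0..<(tokens.count - 1) {
                if let rank = rankTable["\(tokens[i]) \(tokens[i + 1])"],
                   best == nil || rank < best!.rank {
                    best = (index: i, rank: rank)
                }
            }
            guard let merge = best else { break }   // no applicable merges left
            tokens[merge.index] += tokens[merge.index + 1]
            tokens.remove(at: merge.index + 1)
        }
        return tokens
    }
}

// Toy merge table for demonstration.
let encoder = BPEEncoder(merges: [("l", "l"), ("h", "e"), ("he", "ll"), ("hell", "o")])
print(encoder.encode(word: "hello"))   // ["hello"]
```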

Thanks for the update!

Wow 🤯 Do you have some benchmarks/experience with performance on different MacBook models?


Sure, here you go; this data is from my Mac. I haven't been able to test it on an iPhone, as I don't have the 16, but I believe it should also work on the iPhone 16.

Context and Setup
Device used: Mac Studio M1 Ultra 64GB with macOS 15.0.1
Models tested: Mistral 7B Instruct v0.3:
- INT4
- FP16
Decoding algorithm: Greedy
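
For context on the decoding side: greedy always takes the highest-scoring token, while Top-K samples among the K best. Here is a generic sketch of the two strategies over a plain logits array; this is illustrative Swift, not the library's internal decoder.

```swift
import Foundation

/// Greedy decoding: always pick the index of the highest-scoring token.
func greedyToken(from logits: [Float]) -> Int {
    logits.indices.max { logits[$0] < logits[$1] }!
}

/// Top-K sampling: keep the K best logits, softmax them, and sample one index.
func topKToken(from logits: [Float], k: Int) -> Int {
    precondition(k > 0, "k must be positive")
    let topK = logits.indices
        .sorted { logits[$0] > logits[$1] }
        .prefix(k)
    // Softmax over the kept logits (shift by the max for numerical stability).
    let maxLogit = logits[topK.first!]
    let weights = topK.map { exp(Double(logits[$0] - maxLogit)) }
    // Sample proportionally to the weights.
    var threshold = Double.random(in: 0..<weights.reduce(0, +))
    for (index, weight) in zip(topK, weights) {
        threshold -= weight
        if threshold <= 0 { return index }
    }
    return topK.last!
}

let logits: [Float] = [0.1, 2.3, -1.0, 1.7]
print(greedyToken(from: logits))        // 1
print(topKToken(from: logits, k: 2))    // 1 or 3, sampled by probability
```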

First Run (Model Adaptation):
This is the initial execution after the app is first launched, when Core ML adapts the model to your device's hardware. This adaptation is time-consuming because Core ML optimizes the model to take advantage of the device's capabilities in subsequent runs.

| Configuration | Execution Type | Time for First Token (s) | Generation Speed (tokens/s) |
|---|---|---|---|
| INT4 | First Run (Model Adaptation) | 32.81 | - |
| INT4 | Subsequent Runs - First Token | 2.62 | 4.55 |
| INT4 | Additional Iteration | 0.32 | 3.90 |
| FP16 | First Run (Model Adaptation) | 87.28 | - |
| FP16 | Subsequent Runs - First Token | 23.17 | 5.93 |
| FP16 | Additional Iteration | 7.52 | 4.09 |
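
If you want to reproduce timings like these, the general pattern with Core ML is to load the compiled model once through an MLModelConfiguration (the first load triggers the device-specific adaptation described above) and then time individual predictions. A rough sketch follows; the model path, input name, and shape are placeholders, not values from SwiftMistralCoreML.

```swift
import CoreML
import Foundation

// Placeholder path to a compiled Core ML model; not a file shipped with the library.
let modelURL = URL(fileURLWithPath: "/path/to/Mistral7B-Instruct-INT4.mlmodelc")

let configuration = MLModelConfiguration()
configuration.computeUnits = .all   // let Core ML choose CPU, GPU, or Neural Engine

do {
    // The first load on a device triggers the one-time adaptation described above.
    let loadStart = CFAbsoluteTimeGetCurrent()
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print("Model load: \(CFAbsoluteTimeGetCurrent() - loadStart) s")

    // Time a single forward pass. The input name "inputIds" and shape [1, 64]
    // are assumptions for illustration; check the model's actual interface.
    let tokenIDs = try MLMultiArray(shape: [1, 64], dataType: .int32)
    let input = try MLDictionaryFeatureProvider(dictionary: ["inputIds": tokenIDs])

    let predictStart = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    print("First token step: \(CFAbsoluteTimeGetCurrent() - predictStart) s")
} catch {
    print("Core ML error: \(error)")
}
```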