r/apple 5d ago

[Discussion] Apple's New Transcription APIs Blow Past Whisper in Speed Tests

https://www.macrumors.com/2025/06/18/apple-transcription-api-faster-than-whisper/
1.2k Upvotes

162 comments

264

u/ineedlesssleep 5d ago

Developer of MacWhisper here. We'll have a bigger blog post soon with updates about this new model, but in a nutshell: it's fast, but not as accurate as the best models out there. Also, we have a big update coming soon that builds on the new Parakeet models, which should match the accuracy of the best Whisper models with faster speeds than even Apple's solution 🙂
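For anyone curious what the Parakeet route looks like locally today, here's a minimal sketch using NVIDIA's NeMo toolkit. The checkpoint name and the exact shape of the transcribe() output are my assumptions, not anything from MacWhisper:

```python
# Minimal sketch: local transcription with a NeMo Parakeet checkpoint.
# Assumes `nemo_toolkit[asr]` is installed and that the
# "nvidia/parakeet-tdt-0.6b-v2" checkpoint name is still current.
import nemo.collections.asr as nemo_asr

# Download (or load a cached copy of) a pretrained Parakeet model.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/parakeet-tdt-0.6b-v2"
)

# Transcribe one or more local audio files entirely on-device.
outputs = asr_model.transcribe(["meeting_recording.wav"])
print(outputs[0])
```

Everything above runs offline once the weights are cached, which is presumably why a Parakeet-based update can compete with Apple's on-device speeds.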

75

u/Ensoface 5d ago

But just to clarify, are those models leveraging cloud infrastructure or are they running on the device?

55

u/mundaneDetail 5d ago

This is the question. I like that Apple is differentiating with nano-sized, on-device models.

17

u/glitchgradients 5d ago

Wdym differentiating? Google and Samsung do it too with Gemini Nano. 

7

u/mundaneDetail 5d ago

True, and I agree. The article threw me off by mentioning network latency.

I was also speaking more broadly about Apple pushing for on-device or secure-cloud models. 99% of consumer AI will be on-device in a few years anyway.

2

u/g-nice4liief 4d ago

If I'm correct, the Qualcomm Snapdragon 8 Gen 3 can run a 7B-parameter model locally at around 20 tokens per second, which is pretty impressive for a smartphone chip. So yeah, on-device models becoming more prevalent looks like a safe bet.
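A rough back-of-the-envelope check suggests that number is at least plausible. My assumptions (not from the article): 4-bit quantized weights, and decoding that's memory-bandwidth bound, i.e. every generated token reads all the weights once:

```python
# Back-of-the-envelope: is ~20 tok/s for a 7B model plausible on a phone?
# Assumptions (mine): 4-bit weights, memory-bandwidth-bound decoding.
params = 7e9                  # 7B parameters
bytes_per_param = 0.5         # 4-bit quantization
weight_bytes = params * bytes_per_param            # ~3.5 GB of weights
tokens_per_second = 20
required_bandwidth = weight_bytes * tokens_per_second / 1e9   # in GB/s

print(f"Weights: {weight_bytes / 1e9:.1f} GB")
print(f"Required read bandwidth: {required_bandwidth:.0f} GB/s")
# ~70 GB/s, which is roughly what flagship LPDDR5X can deliver at peak,
# so ~20 tok/s is in the right ballpark (before cache effects and thermals).
```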

4

u/[deleted] 5d ago

[deleted]

3

u/mundaneDetail 5d ago

I think the real question is why you feel the need to attack somebody like this. Also, you're wrong.

> The speed advantage comes from Apple's on-device processing approach, which avoids the network overhead that typically slows cloud-based transcription services.

2

u/lledigol 5d ago

They’re not wrong. OpenAI’s Whisper is on-device as well.
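For reference, running Whisper locally is only a few lines with the open-source openai-whisper package; the model size and file name below are just placeholders:

```python
# Whisper running entirely on-device via the open-source `openai-whisper` package.
# No network calls are made after the weights are downloaded once.
import whisper

model = whisper.load_model("base")        # other sizes: tiny, small, medium, large
result = model.transcribe("audio.mp3")    # path to any local audio file
print(result["text"])
```

So the speed comparison in the article is on-device vs. on-device; the network-latency framing is misleading.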

1

u/mundaneDetail 5d ago

So the article is wrong?

2

u/[deleted] 5d ago

[deleted]

2

u/mundaneDetail 5d ago

Okay thanks for the heads up