Stories by
sanchitmonga22
Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon
240 points
sanchitmonga22
2026-03-10T17:14:52Z
github.com
Fastest LLM decode engine on Apple Silicon: 658 tok/s on M4 Max, beats MLX by 19%
5 points
sanchitmonga22
2026-03-07T01:39:58Z
www.runanywhere.ai