LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.
Pricing (per 1M tokens)
  Input:   $0.01
  Output:  $0.02
  Blended: $0.01
Cheaper than 92% of models; the median price is $0.56/1M tokens.

Estimated usage cost
  Daily:   $0.01
  Monthly: $0.38
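The per-token prices above translate directly into request costs. A minimal sketch, using the listed input and output rates; the token counts in the example are hypothetical, and the real billed figures depend on the provider's rounding and any blended-rate scheme:

```python
# Sketch: estimating one request's cost from LFM2-8B-A1B's listed pricing.
# Prices come from the listing above; the token counts are made up.
INPUT_PRICE = 0.01   # USD per 1M input tokens
OUTPUT_PRICE = 0.02  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000030
```

At these rates even heavy daily use stays in the cents range, which is consistent with the daily and monthly estimates shown above.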
Context Window: 33K tokens (larger than 8% of models)
Benchmark and market position

  Metric          Value   Rank            Standing
  Quality Index   7.0     491st of 507    Top 97%
  Coding Index    2.3     395th of 417    Top 95%
  Math Index      25.3    198th of 269    Top 74%
  Price/1M        $0.01   47th cheapest   Top 8% (98% below median)
  Context Window  33K     353rd largest   Top 92%