anonzzzies 2 hours ago

From this thread [0] I assume that because, while it is 1.6T parameters total, only 49B are active per token (the A49B), it can run locally on consumer hardware (theoretically, maybe very slowly), or is that wrong?

[0] https://news.ycombinator.com/item?id=47864835

  • Quasimarion an hour ago

    Theoretically, with streaming, any model that fits on disk can run on consumer hardware, just terribly slowly.
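    A rough back-of-the-envelope sketch: the ~49B active parameters per token comes from the "A49B" in the name, but the 8-bit quantization and ~5 GB/s NVMe read speed below are illustrative assumptions, not measured numbers.

        # Streaming-from-disk decode speed for a big MoE model, roughly.
        # Only the 49B active-parameter figure comes from the model name;
        # the quantization and disk speed are assumed, not measured.
        active_params = 49e9      # ~49B parameters touched per token ("A49B")
        bytes_per_param = 1       # assume 8-bit quantization
        disk_read_bps = 5e9       # assume ~5 GB/s sequential NVMe reads

        seconds_per_token = active_params * bytes_per_param / disk_read_bps
        print(f"~{seconds_per_token:.0f} s/token")  # ~10 s/token: it runs, but slowly

    (In practice it would likely be slower still, since MoE routing picks different experts per token, so the reads are not perfectly sequential.)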

woeirua 2 hours ago

Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.

  • anonzzzies 2 hours ago

    If that is really so, it would now be good enough to replace Claude for us; we use Sonnet only, and with our setup, use cases, and tooling it has worked as well as Opus 4.6 and 4.7 so far. We won't replace Sonnet as long as they have subscriptions, but it is good to have alternatives for when they eventually force pay-per-use.

    • arunkant 4 minutes ago

      Yep, it should be better and more efficient than Sonnet.

cmrdporcupine 2 hours ago

Pricing: https://api-docs.deepseek.com/quick_start/pricing

"Pro" $3.48 / 1M output tokens vs $4.40 for GLM 5.1 or $4.00 for Kimi K2.6

"Flash" is only $0.28 / 1M and seems quite competent

(EDIT: Note that the endpoints that opencode etc. hit on the DeepSeek API (deepseek-chat / deepseek-reasoner) appear to be "Flash".)

  • taosx 2 hours ago

    I estimated that even with heavy usage, around 40M tokens, it would cost you around $30-70 depending on caching. That would give you around double the usage compared to GPT-5.5 on the $200 sub.
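    A sketch of that arithmetic, using only the "Pro" output price quoted above ($3.48 / 1M); the input price, output share, and cache figures are placeholders I made up, since the thread doesn't quote them:

        # Back-of-the-envelope monthly API cost at ~40M tokens.
        # Only the $3.48/1M output price comes from the pricing link above;
        # everything else here is an assumed placeholder.
        tokens = 40e6
        output_share = 0.2                 # assume 20% of tokens are output
        input_price = 1.00 / 1e6           # $/token, placeholder
        output_price = 3.48 / 1e6          # $/token, from the pricing page
        cache_hit = 0.5                    # placeholder cache-hit rate
        cache_discount = 0.9               # assume cached input is 90% off

        in_cost = tokens * (1 - output_share) * input_price * (1 - cache_hit * cache_discount)
        out_cost = tokens * output_share * output_price
        print(f"~${in_cost + out_cost:.0f}/month")  # ~$45 with these numbers

    Sweeping cache_hit from 0 to 1 spans roughly $31-60 with these placeholders, which lands in the same ballpark as the $30-70 estimate.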

  • mudkipdev 2 hours ago

    This is refreshing right after GPT-5.5's $30

taosx 2 hours ago

So the R line (R2) is discontinued or folded back into v4, right?

  • mudkipdev 2 hours ago

    I believe the R stood for reasoning, much like OpenAI's dedicated o1/o3 family, but now every model just has reasoning built in.