By Fatima | Published on March 25, 2025


DeepSeek V3-0324 released: 685B parameters, faster coding that rivals Claude 3.7 Sonnet

Chinese AI firm DeepSeek has rolled out an updated checkpoint—DeepSeek V3-0324—with no formal announcement but massive performance improvements. Reddit users are already calling it a strong rival to Claude, especially in coding. The model is open-source and free to try via web, app, or API.

In a move typical of DeepSeek's low-key style, the Chinese AI startup has rolled out a new checkpoint for its large language model without fanfare. The checkpoint, DeepSeek V3-0324, was released on March 24, and while the name might look like little more than a date tag, early users say the update is far from minor.

There’s no official blog post and no flashy feature list, just open weights on Hugging Face and a growing stream of Reddit posts praising how much better the model feels, especially at code generation. For a company steadily rising as a competitor to OpenAI, this update might be one of its strongest quiet plays yet.

One of the biggest surprises, at least according to early user reports, is how much better this model is at coding. Several Reddit users compared its performance to Claude 3.7 Sonnet, which is no small compliment. DeepSeek V3-0324 isn’t even billed as a “reasoning” model, yet it seems to have picked up serious logic skills anyway.

(Image: DeepSeek V3-0324 performance comparison)

In fact, it’s doing well across other areas too—math problems, long-form writing, and creative tasks. For devs and builders, the fact that it can write up to 700 lines of stable code is a massive bonus.

DeepSeek V3-0324: Access and pricing

You can try the model for free at chat.deepseek.com. It is also reported to be available through the iOS and Android apps, updated to the latest checkpoint. For API integration, the model is named deepseek-chat, currently priced at $0.14 per million input tokens. That rate comes from DeepSeek’s promotional pricing, initially valid through February 2025, and there has been no announcement that it has ended.
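To make the API details above concrete, here is a minimal sketch of what a request to the deepseek-chat model might look like, together with a cost estimate at the quoted promo rate. The endpoint URL and payload shape assume DeepSeek's commonly reported OpenAI-compatible chat-completions format; treat both as assumptions to verify against DeepSeek's own API documentation.

```python
# Sketch of calling DeepSeek V3-0324 via its OpenAI-compatible chat API.
# API_URL is an assumption based on community reports, not an official spec.

API_URL = "https://api.deepseek.com/chat/completions"  # assumed endpoint
PRICE_PER_MILLION_INPUT = 0.14  # USD, promo pricing quoted in the article


def build_request(prompt: str) -> dict:
    """Build a chat-completions payload targeting the deepseek-chat model."""
    return {
        "model": "deepseek-chat",  # API name for the V3 checkpoint
        "messages": [{"role": "user", "content": prompt}],
    }


def estimate_input_cost(num_tokens: int) -> float:
    """Estimated input cost in USD at $0.14 per million tokens."""
    return num_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT


payload = build_request("Write a binary search in Python.")
print(payload["model"])              # deepseek-chat
print(estimate_input_cost(500_000))  # 0.07
```

At that rate, even a half-million input tokens costs about seven cents, which is part of why developers are experimenting so freely with it.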

Where this puts DeepSeek in the race

The updated model puts DeepSeek in a stronger position against US rivals like OpenAI and Anthropic. The company has been releasing models consistently—starting with DeepSeek V3 in December, followed by DeepSeek R1 in January, and now this V3-0324 checkpoint in March.

It’s still too early to call it a GPT-4 challenger, but it’s gaining serious traction among developers and the open-source community. And unlike some models that promise big and deliver average, this one seems to be outperforming expectations without even saying much.

As the AI race heats up, especially between the US and China, DeepSeek’s quiet-but-solid moves are worth keeping an eye on.

What’s new in DeepSeek V3-0324?

Based on what is publicly visible so far, DeepSeek V3-0324 is an updated version of the original DeepSeek V3 model released in December 2024. The company hasn’t shared many details yet, but here’s what the community has dug up:

The model has a whopping 685 billion parameters.

It uses a Mixture-of-Experts (MoE) architecture.

The context window is 131k tokens, which means you can feed it massive chunks of text.

The speed is impressive, pushing out 20 tokens per second.

And yes, it’s fully open-source under the MIT license.

That last point is a big deal. With other players locking down their models, DeepSeek seems to be holding onto the open-source flag for now.
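To get a rough feel for the specs above, a quick back-of-the-envelope calculation helps: how much text fits in a 131k-token window, and how long a typical response takes at 20 tokens per second. The 0.75 words-per-token ratio below is a crude heuristic for English text, not a measured property of DeepSeek's tokenizer.

```python
# Back-of-the-envelope numbers for the reported specs.

CONTEXT_TOKENS = 131_072   # the "131k" context window (assumed to be 2**17)
TOKENS_PER_SECOND = 20     # reported generation speed
WORDS_PER_TOKEN = 0.75     # rough English-text heuristic, an assumption

# Approximate English words that fit in one context window.
context_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)

# Minutes to generate a 2,000-token response at the reported speed.
minutes_for_2k = 2_000 / TOKENS_PER_SECOND / 60

print(context_words)             # 98304
print(round(minutes_for_2k, 2))  # 1.67
```

In other words, the window holds on the order of a short novel's worth of text, and a long answer streams out in under two minutes.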

