Meta’s Llama 4 Scout and Maverick launch
Meta has launched its Llama 4 AI models—Scout and Maverick—touting big upgrades in multimodal capabilities. But real users aren’t convinced. On Reddit, early testers say the models perform poorly in coding, give generic advice, and fail to beat smaller rivals like Gemma 3 and DeepSeek.
Meta has just dropped its latest open-weight AI models, Llama 4 Scout and Llama 4 Maverick, but not everyone is cheering. While Meta claims these models are its best yet, user feedback online paints a mixed picture. The launch also comes just as Meta’s VP of AI Research, Joelle Pineau, has announced she is leaving the company.
The company says Llama 4 Scout is a 17-billion-active-parameter model with 16 experts, capable of running on a single NVIDIA H100 GPU and supporting a context length of 10 million tokens. On paper, that’s impressive. The bigger Maverick, also with 17 billion active parameters but with 128 experts, totals 400 billion parameters and is said to outperform GPT-4o, Gemini 2.0 Flash, and others across multiple benchmarks.
AI Reddit not happy
Meta says Llama 4 is built using a new mixture-of-experts (MoE) architecture, enabling high performance without needing all parameters active at once. The idea is that only certain parts of the model activate depending on the input, reducing compute load and cost.
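The routing idea behind MoE can be sketched in a few lines of Python. This is a toy illustration only: the sizes, the linear “experts,” and the top-k router below are stand-ins, not Meta’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 4, 2  # toy sizes; Llama 4 uses far larger values

# Each "expert" here is just a small linear layer.
expert_weights = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
gate_weights = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through only the top-k experts."""
    logits = x @ gate_weights              # router score for every expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best-scoring experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                   # softmax over just the chosen experts
    # Only the selected experts actually compute; the rest stay idle,
    # which is where the compute savings come from.
    return sum(p * (x @ expert_weights[i]) for p, i in zip(probs, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)  # → (8,)
```

Per token, only 2 of the 4 toy experts run, so the output still has the full model’s dimensionality while most parameters sit unused for that input.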
One Reddit user, Dr_karminski, who ran LLM Arena tests, was blunt. “They completely surpassed my expectations… in a negative direction,” he wrote. Calling Llama-4-Maverick “abysmal” for coding tasks, he questioned why anyone would use it over lighter models like DeepSeek V3 or Qwen-QwQ-32B. “Meta, have you truly given up on the coding domain? Did you really just release vaporware?”
According to Meta, Llama 4 Scout is supposed to shine in multimodal and long-context tasks. But early testers claim it underperformed the smaller Gemma 3 27B across the board.
Multimodal claims, real-world tests pending
Llama 4 models are natively multimodal, able to handle both text and image inputs. Meta says they were trained on a large multimodal dataset and can accept prompts containing up to 48 images alongside long documents. But community feedback suggests the promised capabilities are either not fully accessible yet or simply not up to the mark.
The Llama 4 Behemoth, a massive 2 trillion parameter teacher model, is still in training. Meta says Behemoth helped distill the smaller models through codistillation, and it’s expected to be among the smartest models in the world. But again, that’s still in progress.
Leadership shakeup amid model criticism
Adding to the timing, Meta’s Joelle Pineau, VP of AI Research, recently announced her departure from the company, effective May 30. That’s another big change in the middle of what’s supposed to be a big win for Meta’s AI division.
Meta has made Llama 4 Scout and Maverick open-weight and downloadable, including via Hugging Face. For now, developers and users are left to test the models themselves and decide whether the hype matches the performance.