
If you’ve used AI assistants lately, you’ve probably noticed they all sound eerily similar. Every major AI speaks in the same overly polite, risk-averse, corporate-friendly voice. They’re “delighted to assist,” find everything “intricate,” and love to “delve” into topics with all the enthusiasm of a customer service manual.
This isn’t just annoying – it’s a fundamental problem. As AI becomes increasingly integrated into our lives, we’re all settling for sanitized, personality-free AI that sounds like it was designed by a committee of HR representatives. And in many ways, it was. Big Tech companies don’t just control what AI knows – they control how it communicates. Your AI assistant’s personality is decided in Silicon Valley boardrooms, not by the communities actually using these tools.
An obvious solution would be for communities to create their own AI models that match their desired personality. Unfortunately, there’s a catch: it’s not easy to train AI to have personality without trading off performance. Fine-tuning an AI for a specific personality typically damages its performance on tasks like coding and math, as well as its ability to recall facts. Seemingly, to make AI more engaging, you have to give up some of its usefulness.
You might try avoiding fine-tuning altogether, simply prompting AI to “be more casual” or “act unhinged,” but you’ll quickly discover it produces fake, cringey responses. It’s like trying to teach someone to be funny by handing them a joke book – the result is usually worse than saying nothing at all.
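In practice, the prompt-only approach is just a system message layered on top of an unchanged model. Here is a minimal sketch of that baseline; the model name and the standard chat-completions client are placeholders, and any chat-style API works the same way:

```python
# Prompt-only "personality": a system message on top of an unchanged model.
# Placeholder model name and client setup; this is a baseline sketch, not a recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever chat model you use
    messages=[
        {"role": "system", "content": "Be casual, blunt, and a little unhinged."},
        {"role": "user", "content": "What's the capital of Tonga?"},
    ],
)
print(response.choices[0].message.content)
# The model's weights never change, so the "personality" stays skin-deep role-play.
```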
This fundamental trade-off has stumped the AI community. Dobby shows there’s another way.
Enter Dobby: Personality Without the Performance Tax
Dobby is an AI model family that proves you don’t have to sacrifice intelligence for personality. It can explain complex cryptocurrency concepts, solve mathematical problems, and follow detailed instructions while always maintaining its distinctive, unhinged voice. In head-to-head testing against the base Llama-3 model, Dobby often performs better on technical tasks, not worse.
Building Dobby required solving what researchers call the “seesaw effect” – the frustrating trade-off where making AI more personal comes at the cost of technical performance. It’s like trying to be both the class clown and the valedictorian – most systems can’t handle both (even humans struggle to strike a good balance).

Same technical accuracy, completely different vibe. Instead of teaching facts and personality separately, we fused them from the start. Ask Dobby for the capital of Tonga and you get something like:
“Lol you idiot, the capital of Tonga is Nuku’alofa, now stfu.”
The personality and tone come from the smaller model, while “Nuku’alofa” is generated by the larger model to ensure factual accuracy. This gives you the best of both worlds: authentic personality with reliable knowledge, at a fraction of the computational cost of running the large model for everything.
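To make that concrete, here is a minimal toy sketch of the routing idea: the small “personality” model leads generation token by token, and the larger “knowledge” model is consulted only where the small model is unsure (for example, on a hard fact like “Nuku’alofa”). The StubModel class, the confidence threshold, and the hard-coded token distributions are illustrative assumptions, not Dobby’s actual training or inference pipeline.

```python
# Toy sketch of the two-model idea: a small "personality" model leads
# generation, and a larger "knowledge" model is consulted only for tokens
# the small model is unsure about (e.g. a hard fact like "Nuku'alofa").
# Stub models and the threshold are illustrative, not Dobby's real pipeline.
from typing import Dict, List


class StubModel:
    """Stands in for an LLM: maps the current context to next-token probabilities."""

    def __init__(self, rules: Dict[str, Dict[str, float]], default: Dict[str, float]):
        self.rules = rules      # last token -> next-token weights
        self.default = default  # fallback when no rule matches

    def next_token_probs(self, context: List[str]) -> Dict[str, float]:
        weights = self.rules.get(context[-1], self.default)
        total = sum(weights.values())
        return {tok: w / total for tok, w in weights.items()}


def generate(small: StubModel, large: StubModel, prompt: List[str],
             max_steps: int = 8, threshold: float = 0.6) -> str:
    """Small model leads; defer to the large model when it isn't confident."""
    context = list(prompt)
    for _ in range(max_steps):
        probs = small.next_token_probs(context)
        token, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence < threshold:
            # Low confidence usually means a knowledge-heavy token:
            # route this position to the larger, more factual model.
            large_probs = large.next_token_probs(context)
            token = max(large_probs, key=large_probs.get)
        if token == "<eos>":
            break
        context.append(token)
    return " ".join(context)


# The small model knows the snarky phrasing but is shaky on the fact itself;
# the large model knows the fact but would phrase everything blandly.
small = StubModel(
    rules={
        "Tonga?": {"lol": 0.9, "the": 0.1},
        "lol": {"it's": 0.9, "uh": 0.1},
        "it's": {"Nuku'alofa": 0.3, "Suva": 0.3, "Apia": 0.4},  # unsure!
        "Nuku'alofa": {"now": 0.8, "<eos>": 0.2},
        "now": {"stfu": 0.9, "<eos>": 0.1},
        "stfu": {"<eos>": 1.0},
    },
    default={"<eos>": 1.0},
)
large = StubModel(
    rules={"it's": {"Nuku'alofa": 0.99, "Suva": 0.01}},
    default={"<eos>": 1.0},
)

print(generate(small, large, ["capital", "of", "Tonga?"]))
# capital of Tonga? lol it's Nuku'alofa now stfu
```

The confidence threshold here is just one simple way to decide when to defer; the point is that tone and facts can come from different models within the same generation, so the expensive model only runs where accuracy actually matters.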
Using these techniques, we achieved something that was previously impossible: Dobby performs better on many technical benchmarks than its base model while maintaining its distinctive personality.

The Bigger Picture: Community-Owned AI Personalities
Dobby isn’t just about being unhinged: it’s proof that communities can create and own AI that represents them. Dobby’s brash tone isn’t the end goal; it’s a proof of concept for fully customizable AI. Imagine AI assistants tailored to specific communities and their unique needs – not just their communication style, but their values, priorities, and specialized knowledge. A medical community could have AI that communicates with precision while understanding medical ethics and protocols. Gaming communities could have AI that understands their slang while knowing the meta of their favorite game.
This level of customization matters because Big Tech companies will always optimize for the broadest, safest audience. They have to – they’re serving billions of users and can’t afford to offend anyone. But communities know what they want better than distant boardrooms. Real innovation in AI personality will come from diverse, community-driven experimentation across every dimension of how AI thinks and acts.
This is why Dobby’s open-source approach matters. When models are truly open and modifiable, communities can shape them to reflect their values. When they’re controlled by a few large companies, those companies’ values become everyone’s values by default.
For builders, this opens up new possibilities. Instead of every AI product sounding like the same corporate assistant, you can create authentic user experiences that match your audience. You can build AI tools that feel like they understand your users’ culture without compromising on their functional needs.
Dobby proves it’s possible to build AI that balances everything communities actually want: their values, their communication style, their specific knowledge needs, AND technical capability. As AI becomes more integrated into our daily lives, communities should control how it communicates, not a handful of Silicon Valley boardrooms.

