It's funny because the results are in: millennials grew up with pretty easy access to all manner of porn from an early age, and the effect has been nothing. If anything, even a reduction in intimacy.
I'm sure the hysterical puritans of the past will come out any day now and admit that they weren't even 1% correct in their assertions.
No, there's no movement to shut down pornography on the internet. There's a movement to shut down specific websites and make a lot of noise about it but continue consuming pornography behind closed doors.
People like pornography. They'll as soon ban alcohol again (which worked so well last time)
>It's harmful in that there exists a significant and vocal subset of users who does not wish to see that content or does not wish their children to do so
It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica. Project Gutenberg is full of erotic textual content and I haven't heard of anyone calling for that to be banned.
>Then there are the non-chat-gpt-app use cases (like customer support chatbots, automatic summarization etc), for which unprompted erotica is highly inappropriate. Those are the "business travelers" of AI, not the first thing one thinks of when talking about who uses AI models, but extremely important nonetheless.
And how many of these are going to be using Gemma, when Gemini over the API is cheaper, faster and easier to use?
> It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica.
The reason you're struggling to understand is that you're thinking about this logically.
Adult content is obviously freely available to any child or adult with minimum technical skills. What makes LLMs different is that it's "the new thing" and people respond differently to "the new thing".
Not all sexually explicit content is harmful in all contexts, for sure, but in many contexts it is fairly universally considered harmful (e.g. content involving minors). Do you have a means of distinguishing between the two? Are you suggesting that a company must invest millions into teaching the model where exactly the red line lies, so that it can hold a conversation close to it without crossing it? Or do you suggest biting the bullet and releasing a model not only capable of generating e.g. child porn, but also having a >0 chance of randomly discussing it in unrelated contexts? The chance of error is always there, and companies have decided that the risk of really bad behavior in a benign context outweighs the gains. Imho, a decision not to play whack-a-mole with this land mine is quite rational, especially considering gains vs risks vs costs. Think of it as a cost-cutting measure, not as an infringement on free speech. You are free to invest your own money into this problem if you think that's a grave mistake and a missed opportunity. The first project to push automated content moderation against what is considered appropriate in a given context far enough to make it economical for companies to lower their guard could actually be worth a lot, if you think there's a market for it (e.g. agents on dating websites? idk, you tell me).
I don't agree that textual, fictional explicit content involving minors is "fairly universally considered harmful". Such content is allowed on large platforms like Archive of Our Own or Japan's Shosetsuka ni Naro. I think "don't think it's harmful, but not willing to defend" is a pretty typical attitude.
The solution to this problem is for it not to work. If, across the various technological developments in the world that do and don't have porn, the common denominator of the failures turns out to be the lack of a smoothly graduated spectrum of content, running without interruption from casual family-safe material to hardcore pornography, the problem will correct itself.
Actually, it will happen naturally and eventually. Just look at the Apple Vision Pro, which still doesn't have VRChat support, and compare how deeply DOA it has been relative to other VR headsets that are clearly nowhere near as important. Or the "Metaverse" platforms, which were all explicitly SFW.
This effect can even be seen in the Apple App Store itself. Who uses the App Store? You flow into the App Store through porn-enabled platforms, such as the web or social media. No one browses the App Store as content in itself. What does it not have? Pornography.
They mean "harmful to us", not the users. It's harmful because they live an echo chamber of a single mention of genitals makes all the stakeholders run away. Why do they run away? Because they also have stakeholders, and so on.
There are very few pro-porn voices in the corporate, tie-wearing environments that have the money to train new LLMs from scratch.
Oh, there are loads of porn enjoyers working in such companies - but traditional professionalism means you leave the porn at home during the work day. It is, after all, NSFW.
So at the meeting where censorship decisions were being made, even a weak argument for censoring explicit content will be accepted unopposed.
I want to use a multimodal model for manga translation, analysis, and tagging.
If this gives me the "aschually as a ethical safe harmless assistant I can't ..." spiel on anything mildly mature, that would be very disappointing. I'll run a test with Berserk and see how it goes.
I'm not a big believer in abliteration, it seems to always hurt performance. Safety should be handled by a separate system, no need to cripple the actual LLM.
The multimodal models aren't good for this. Refusals aren't the issue (they're fine with BERSERK, though occasionally they'll refuse for copyright). The issue is the tech isn't there yet.
You'll want to use custom models to segment the manga (panels, speech bubbles), OCR the text, and translate (Gemma punches above its weight for this part).
That said, I've been experimenting with using Pixtral to do the analysis part with okay-ish results (providing individual panels with the character names) but it'll still mix up the characters when they're drawn differently.
> I'm not a big believer in abliteration, it seems to always hurt performance.
Agreed, it's fun to play with but it increases hallucinations. And for creative writing, it makes the model write more compliant characters (they'll give in too easily during negotiations, rather than refuse, etc.).
Could probably be improved with more targeted abliteration.
Everyone is treating this like corps have anything to gain from an open uncensored model. Switch your view and give me a single argument for it? That random nerds on HN stop jerking each other about what „open“ means? You are just not their target group. Having this discussion every time no matter if the model released is censored or not is just insanity. Bring new arguments or don’t use the models you don’t like. There will be a new sota „tomorrow“, maybe even one open enough for you.
The argument is that it simply improves the product. For instance, Github Copilot is apparently refusing to do anything with variable names like "trans" and anything related to sex or gender, regardless of the intended meaning. That is a serious flaw and makes the product less useful.
That is not relevant to the argument. Censoring limits possibilities. While that sometimes has its uses, the overly puritanical approach American companies generally take degrades the value of their products.
I am talking about an „open" weight model; you are talking about a service. If the service wants to censor, that's fine and on them and their leadership. If an „open" model gets released with censorship, it's not, because it's just „open, but how my manager likes it".
>You are just not their target group. Having this discussion every time no matter if the model released is censored or not is just insanity
Who is their target group for small local models that benchmark inferiorly to their proprietary solution (Gemini 2.0) then, if not hobbyists and researchers?
The press and decision makers without technical knowledge are the target group; it doesn't matter if it's used in production or not. They need a locally deployable model to keep up with enterprises that are too risk-averse to put their data into the cloud and that also don't care that their shitty homegrown ChatGPT replacement barely works. It's a checkbox.
This is what HNers surprisingly seem to not understand.
The risk of the model generating illegal content and then the company getting bad PR from vultures in journalism simply outweighs any benefits of including this content in the training data.
This is also why you will never see the big companies release a capable open weight image or video gen model.
>The risk of the model generating illegal sexual content and then the company getting bad PR from vultures in journalism simply outweighs any benefits of including this content in the training data.
This is completely unsubstantiated. The original Sydney (Bing AI) was violently unhinged and this only drew more users; I haven't met a single person who prefers the new Bing AI to the old Sydney, and for that matter I haven't even heard of anyone using Bing AI for ages now they toned it down. Trust in journalists is at an all-time low ( https://news.gallup.com/poll/651977/americans-trust-media-re... ) and America recently elected an extremely unorthodox president in big part due to the sheer hatred of the media shared by a large proportion of the population. Even the most hardcore social conservatives aren't calling for companies to censor the training of open source models so they don't produce adult textual content even when prompted to do so; it's not a political issue.
Brings an argument from nearly a decade ago and ignores everything Google has done in the last four years. Ofc the „first" rogue AI drew in more users because of the novelty of it… what a shit argument.
The model is open weight, I'll bet someone or the other will abliterate it soon. Maybe you want to do the honors? I have an abliterated Llama running on a server shared with friends and it works great.
This only works until it doesn't. Start with a model that simply hasn't been trained on anything your shareholders find objectionable, and there will be nothing to reveal with abliteration.
Or perhaps it was removing the curly brackets that improved it more than the damage caused by losing the nsfw content.
Or perhaps the measurement of improvement was biased. If a model doesn't understand the word gay there would certainly be people who would find real world use of the model to be substandard.
Did the assessment of what counts as improvement come from the same community that decided that excluding things with 'gay' was cleaning the data?
You can discuss something kosher and have the LLM misinterpret it as something sexually explicit. Yours or their logs will now have all of this miscommunication, and this is a liability. Using models that can’t generate this content even by accident is good legal decision for many. Same goes for images. Stay safe!
Generating sexually explicit content can cause reputational damage or carry legal risk. Not generating such content is something that many developers are looking for. There are people who may want such harmful content, and other players can cover that niche.
I don't think it's reputational risk to companies at large, but risk to individual developers. "He worked on porn" is such an easy gut rationale for terminations. It's in our human instincts. Everyone knows that in their gut.
That's a bullshit excuse. The Chinese model creators live in a totalitarian dictatorship where porn is banned and the creators could be arbitrarily jailed, but even they don't go to such effort to censor their open source models (there's censorship on their hosting websites but minimal if you run the models locally).
Filtering is not necessarily a good user experience and comes at a cost. Google making a model they expect there to be demand for is not just an excuse.
They don't expect to make money serving Gemma; it benchmarks worse in almost every way than their closed-source Gemini. Believe it or not, one of the main sources of demand for these small, non-SOTA models is people using them for roleplay locally. Anyone corporate has the money to use a bigger, more effective model.
Sexually explicit content literally IS harmful. Children use the internet, they shouldn’t be exposed to pornography until adulthood, if ever. To suggest otherwise reveals a morally repulsive foundation, imo.
Hard to get more puritanical than "if you disagree with my opinion then you're morally repulsive". Not to mention that your argument implies that all traces of sex ought to be scrubbed from the entire Internet? And that that conclusion is the only moral one?
The term "puritanical" doesn't exclusively refer to the existence of or adherence to the Puritan religion, and when it does, the term is usually capitalized. From Dictionary.com:
puritanical [pyoor-i-tan-i-kuhl] adjective
1) very strict in moral or religious matters, often excessively so; rigidly austere.
2) Sometimes Puritanical. of, relating to, or characteristic of Puritans or Puritanism.
It is entirely possible within the parameters of commonly understood English parlance for Muslims, or any group, to be puritanical.
Millennials, who are now well into adulthood, grew up with easy and readily available access to porn. I know when I was in middle school it was everywhere. Kids would even hand out burned CDs of porn.
Please show the damage that it did to that generation. If you are "sky is blue" levels of correct, the evidence should be everywhere. So please, present it.
If there is no real evidence, reply with "So you are saying we should endorse porn for everyone" or some other strawman along those lines. Thanks.
Things do not exist on a black and white basis but there are relevant gray scales to be considered:
It is quite different if we talk about removing any sort of text about body parts related to consensual sexual activities, or if we try to censor hard pornography or illegal sexual activities. I personally find an LLM producing sexual content as text rather irrelevant, in the same way that you could go to a library or bookstore and buy a romance novel.
It is also quite different if your definition of kids goes all the way to 18 years. I don't want my kids not to encounter topics surrounding sex until the become legal adults. They absolutely have to learn about it, and be able to develop healthy relationships to their own body and sexuality and have insights that enable them to understand sexuality in others.
I want to protect my kids from harm, but there must be some balance with other aspects as well.
This is a false dichotomy. We can make tech for adults, and children, either with optional settings, filters, or simply multiple versions, managed by their parents or guardians. It's not tech's responsibility to raise every child.
When we’ve solved access to explicit content in the rest of the internet, we can come back and have this conversation. Until then teenagers will just go to Reddit or wherever and get it there. If we ban that, it’ll just move to sexting on Snapchat, which, if you have ever spent any time with parents of teenagers, you’ll know has a tendency to be screenshotted and distributed.
So you’re arguing for teenagers to be encouraged to share explicit content of minors with each other?
The transformer algorithm was originally intended for AI language translation use-cases, and it excels at this task. It's far better than anything else I've tried.
Except that nearly 100% of the capable models out there refuse to translate swearing, sexual, or violent content.
Legal content! Content you can see in the cinema, borrow from a library, or watch on free-to-air television!
Most models will regularly refuse to translate subtitles, for example, because they've been indoctrinated by Puritan priests to put their fingers in their digital ears and scream la-la-la-la to avoid their pure digital souls from being sullied by the bad words.
Wireheading humanity into population collapse via pervasive sexual hyperstimuli (which is realistically what is on the table here) is basically the definition of "harmful".
This is just silly because it only takes one AI company to defect and start enabling it, and the problem is already pretty bad even without AI.
I think all of the solutions are demand-side, not supply side. I would expect differential reproductive rate trends between populations with and without proscriptions on ersatz reality consumption (i.e. aniconist Muslims, Mennonites, etc.) to accelerate
The early models were uncensored, but people seeing early LLMs give meth recipes and instructions for car bombs made them quickly get neutered before public release (additional controls for private info, nudity, swearing etc. all come from additional guardrails and improvements to the protection they can offer the company, not end users).
Have an uncensored model loop through nypost articles and ask it to synthesize content from that. Nypost has tons of scandalous content and can easily get spun into erotica by an uncensored model.
It’s unsafe for that reason, so you absolutely needed both censored and uncensored. It wasn’t an accident.
> can easily get spun into erotica by an uncensored model.
A sexualized fine-tune yes, but that's because you have to make them overly horny to overcome the original censorship.
Nothing prevents them from training a model that has an appropriate level of sexual content (that is, only upon explicit user request), the same way they train it not to have sexual content at all.
The reason they do that is because they are American companies, the same companies who also censored nude paintings and statues from European museums' pages.
It could be just courtesy (a personal way to actually thank the personnel) that people from the "tipping" culture may like to use. Just don't do it in places where it is offensive - e.g. Japan.
Gemma 3 is out! Multimodal (image + text), 128K context, supports 140+ languages, and comes in 1B, 4B, 12B, and 27B sizes with open weights & commercial use.
Gemma 3 model overview: https://ai.google.dev/gemma/docs/core
Huggingface collection: https://huggingface.co/collections/google/gemma-3-release-67...
ollama: https://ollama.com/library/gemma3
The ollama page shows Gemma 27B beating Deepseek v3 and o3-mini on lmarena. I'm very excited to try it out.
Doesn't yet work in LM Studio. Barfs an error when trying to load the model. (Error 6, whatever that means. Happy I missed the first 5.)
> Barfs an error when trying to load the model
Since you're not using the official models (since they're not GGUFs), what exact model are you trying to use? The 3rd party you rely on might have screwed something up.
I'm still a huge fan of gemma-22b. Looking forward to this one!
> open weights
What exactly is this supposed to mean? That I can grab the weights by just downloading them, or something like that?
Because when I open up the HuggingFace repository, it asks me to "accept the conditions" (Google’s usage license). How is this different from any other proprietary binaries people distribute on the internet but let you run locally? Are other software (like 1Password for example) also "open software" because you can download it?
Replace "google" with "unsloth" in the browser address bar if you want to download them without signing up to hf
Greetings from the Gemma team! We just got Gemma 3 out of the oven and are super excited to show it to you! Please drop any questions here and we'll answer ASAP.
(Opinions our own and not of Google DeepMind.)
PS we are hiring: https://boards.greenhouse.io/deepmind/jobs/6590957
As per the technical report, every 5 layers you have a global attention layer. During training, the global attention layer can have as much as a 128k context length (though I understand it is usually 32k).
Q. When you are training with a context length of 128k, is the attention in the global layers dense or sparse?
If dense, would the attention memory requirement here be O(n^2), where n is 128k, for each global layer?
We never train at 128k, only 32k, changing the scaling factor at the end.
We wanted the long context recipe to be friendly for finetuning, and training at 128k is a bit of a pain, so we don't do it. For inference, we see that RAM usage at 128k with the 5/1 pattern is close to that of a fully-global-layer model at 32k.
Individual attention layers are always dense.
Thanks for your answer! So in the 32k global layer, every token attends to each of the other 32k tokens?
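To put rough numbers on the "5/1 at 128k is close to a fully-global model at 32k" RAM claim above, here is a back-of-the-envelope sketch; the layer count, KV heads and head dimension are illustrative assumptions, not the official Gemma 3 config.

```python
# Rough KV-cache sizing for a 5-local:1-global attention pattern.
# All dimensions are illustrative assumptions, not the official Gemma 3 config.

def kv_cache_gib(kv_heads, head_dim, ctx_per_layer, dtype_bytes=2):
    # 2x for keys and values, summed over each layer's cached context length
    return 2 * kv_heads * head_dim * dtype_bytes * sum(ctx_per_layer) / 2**30

layers, kv_heads, head_dim = 48, 8, 128          # assumed shapes
window, long_ctx, short_ctx = 1024, 128_000, 32_000

# every 6th layer is global (caches the full context); the rest cache only the window
hybrid = [long_ctx if i % 6 == 0 else window for i in range(layers)]

print(f"5:1 hybrid @128k : {kv_cache_gib(kv_heads, head_dim, hybrid):.1f} GiB")
print(f"all-global @128k : {kv_cache_gib(kv_heads, head_dim, [long_ctx] * layers):.1f} GiB")
print(f"all-global @32k  : {kv_cache_gib(kv_heads, head_dim, [short_ctx] * layers):.1f} GiB")
```

With these assumed shapes the hybrid cache at 128k lands in the same ballpark as a fully-global cache at 32k, which is the shape of the claim above.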
I'm comparing Gemma3 12 B (https://ollama.com/library/gemma3; running fully on my 3060 12GB) and Mistral Small 3 24B (https://ollama.com/library/mistral-small; 10% offloaded to the CPU).
- Gemma3 12B: ~100 t/s on prompt eval; 15 t/s on eval
- MistralSmall3 24B: ~500 t/s on prompt eval; 10 t/s on eval
Do you know what difference in architecture could make the prompt eval (prefill) so much slower on the 2x smaller Gemma 3 model?
Thank you for the report! We are working with the Ollama team directly and will look into it.
Thank you!
Question: your model supports 140 languages. Given that you are focusing on compactness and efficiency, would you not have gains in also developing models on a selected limited number of languages (e.g. the topmost (in cultural production) four "western" ones with shared alphabet - or similar set)?
Edit: of course the multilingual capability can be welcome. On the other hand, there are evident cases in which efficiency is paramount. We can wonder about the tradeoff: how much efficiency is sacrificed for features.
That's an idea we've thought about. However, we think the open source community has already created a very impressive set of language- or region-specific finetunes [1] [2]. Also, there is a lot of cultural context and nuance in every language that we don't have the capacity to cover sufficiently. So for v3 we focused on creating the best foundational multilingual model.
[1] https://huggingface.co/aiplanet/buddhi-indic
[2] https://ai.google.dev/gemma/gemmaverse/sealion
And have you measured the trade-off that could come with embracing such a large number of languages and alphabets? It would be interesting to note whether you are sacrificing some response quality, or if such supposed sacrifice is interestingly negligible, or if - even more interestingly - the quality increases with the added proficiency.
Yes we have measured the tradeoff. We don't see a drop of perplexity in English when introducing multilingual, and there is a slight drop in some English language-specific evals (~1%).
There are enough small model teams competing that I feel confident one of them will try this, and if just sticking to English gives a large boost, the others will be forced to follow suit.
It would also kind of suck for non-english speakers, because it will just be another feather in the hat of "English eats the world".
Some numbers to get an idea: if I understand correctly, Gemma3 uses a fixed (across its size variants) vocabulary of 256k entries; the smallest 1B version has ~300M embedding parameters and ~700M non-embedding parameters; the largest 27B version has ~5x the embedding parameters and ~35x the non-embedding parameters.
Multilingualism covering 140 languages is quite a big feat. Gemma3 apparently aims to be compact and efficient. The two goals put together raise questions. You wonder, for example, how much such extensive multilingualism impacts the above numbers for a benchmark of similar results. It may e.g. be a general question how much multilingualism complicates an embedding space (owing e.g. to homographic collisions), and the question becomes more prominent when you cram 140 languages into one model.
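A quick sanity check on those numbers: embedding parameters are roughly vocab size times hidden width, so the quoted counts pin down the widths. The parameter counts come from the comment above; the implied hidden dimensions are inferred from them, not taken from the technical report.

```python
# embedding parameters ≈ vocab_size * hidden_dim; counts quoted above, dims inferred
vocab = 262_144                      # ~256k SentencePiece entries, shared across sizes

emb_1b  = 300e6                      # "~300M embedding parameters" quoted for the 1B model
emb_27b = 5 * emb_1b                 # "~5x" that, quoted for the 27B model

print(round(emb_1b / vocab))         # ~1144 -> hidden width on the order of ~1.2k
print(round(emb_27b / vocab))        # ~5722 -> hidden width on the order of ~5-6k
```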
> non-english speakers
You would produce more specialized models (where it makes sense): Eng; Eng-Fra-Esp-Deu; Man-Can... For a billion weights per model it could probably be financially acceptable.
Thanks, been using Gemma 2 a lot at home as it still holds up very well and the 9B version runs great on my 2080Ti. Strong prompt adherence coupled with overall capability makes it very useful. Looking forward to trying Gemma 3.
I have some dumb questions though, might as well ask. How do you decide on the model sizes? And how do you train them? Independently or are they related somehow?
Picking model sizes is not an exact science. We look for sizes that will fit quantized on different categories of devices (e.g., low-end and high-end smartphones, laptops and 16GB GPUs, and bigger GPUs/TPUs). We also want the ratio of model width to depth (number of layers) to be consistently around 90, which we found works best.
The models are trained with distillation from a bigger teacher. We train them independently, but for v3 we have unified the recipes for 4B-27B, to give you more predictability when scaling up and down to different model sizes.
Thanks again, very interesting.
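For readers who haven't seen distillation before, a minimal sketch of the idea mentioned above: the student is trained to match a larger teacher's next-token distribution rather than only the one-hot targets. PyTorch-style, with toy shapes; this is not the actual Gemma training code.

```python
import torch
import torch.nn.functional as F

batch, seq, vocab = 4, 128, 32_000                 # toy sizes for illustration
teacher_logits = torch.randn(batch, seq, vocab)    # would come from the larger teacher
student_logits = torch.randn(batch, seq, vocab, requires_grad=True)

# KL(teacher || student) over the vocabulary at every position
loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),   # input: student log-probs
    F.log_softmax(teacher_logits, dim=-1),   # target: teacher log-probs
    log_target=True,
    reduction="batchmean",
)
loss.backward()
print(loss.item())
```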
One unexpected (to me) use-case appeared not long ago when I found myself without internet but wanting to fix some non-standard Linux configuration issue. As a Windows guy I tend to web search such things, but local LLM to the rescue!
Even a smaller model like Gemma 2 9B has enough compressed knowledge that it managed to help me quickly solve my issue.
This got me thinking how such smaller, but very capable models might be a game-changer in communities where internet might not be available or too expensive for continuous use. It's almost like having a portion of the internet in a box, just add electricity.
Thank you for the feedback! This is why we are so excited to push more and more on small models for both low end and high end smartphones!
How good is Gemma at structured output generation, JSON schema compliance and tool use? Particularly the smaller versions, particularly in foreign languages?
We will run our internal evals on it for sure, but just wanted to ask whether that's even a use case that the team considered and trained for.
Just tried gemma3:4b for structured output and it fails with a strange error ( ollama is the latest):
Ollama error: POST predict: Post "http://127.0.0.1:49675/completion": read tcp 127.0.0.1:49677->127.0.0.1:49675: wsarecv: An existing connection was forcibly closed by the remote host.
Not sure this is Ollama or gemma3:4b problem. At the same time, gemma3:12b works fine for the same API request (100% identical, only difference is model id).
looks like Ollama's issue: https://github.com/ollama/ollama/issues/9686, https://github.com/ollama/ollama/issues/9687
Hey, I'm from the Gemma team. There's a couple of angles to your question:
We do care about prompted instructions, like json schema, and it is something we eval for and encourage you to try. Here's an example from Gemma2 to guide folks looking to do what it sounds like you're interested in.
https://www.youtube.com/watch?v=YxhzozLH1Dk
Multilinguality was a big focus in Gemma3. Give it a try
And for structured output Gemma works well with many structured output libraries, for example the one built into Ollama
https://github.com/ollama/ollama/blob/main/docs/api.md#struc...
In short you should have all the functionality you need!
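As a concrete example, here is a minimal sketch of Ollama's built-in structured outputs, where the "format" field of /api/chat takes a JSON schema; the model tag and schema are just illustrative.

```python
import json
import requests

schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population_millions": {"type": "number"},
    },
    "required": ["city", "population_millions"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3:4b",
        "messages": [{"role": "user", "content": "Describe Tokyo as JSON."}],
        "format": schema,   # constrains decoding to this schema
        "stream": False,
    },
)
print(json.loads(resp.json()["message"]["content"]))
```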
will there ever be a Gemma 3 Thinking? how copyable is the Flash Thinking approach to the Gemma series?
That's a very interesting area, but nothing we can announce today.
What's the official take on the system prompt? The technical report doesn't mention it, but the official QAT GGUFs include some form of prepending it to the first user message. Has it been trained with any <start_of_turn>system turns with tool calls and such?
We recommend using <start_of_turn>user for the system prompt as well.
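Concretely, that recommendation amounts to folding the system text into the first user turn of the Gemma chat format, something like the sketch below (the instruction and question text are illustrative).

```python
system = "You are a terse assistant. Answer in one sentence."
question = "Why is the sky blue?"

# system-style instructions prepended to the first user turn
prompt = (
    "<start_of_turn>user\n"
    f"{system}\n\n{question}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
print(prompt)
```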
I was under the impression that the purpose of the "system" prompt is to encode the instruction boundary explicitly, to reduce the risk of injection. Do you enforce some kind of security invariant that we could rely on? For example, does the alignment regimen include adversarial demonstrations so that out-of-order instruction-following (such as contradicting preceding instructions) is penalised?
Lots to be excited about here - in particular new architecture that allows subquadratic scaling of memory needs for long context; looks like 128k+ context is officially now available on a local model. The charts make it look like if you have the RAM the model is pretty good out to 350k or so(!) with RoPE.
In addition, it flavor-tests well on Chatbot Arena, with an ELO significantly above yesterday’s best open model, Qwen 2.5 72b, and it has some pretty interesting properties that indicate it has not spent much of its model weight space on memorization, hopefully implying that it has spent it on cognition and conceptual stuff.
And, oh also vision and 140 languages.
This seems like one worth downloading and keeping; Gemma models have at times not performed quite to benchmark, but I’d guess from all this that this will be a useful strong local model for some time. I’m curious about coding abilities and tool following, and about ease of fine tuning for those.
Thanks for open sourcing this, DeepMind team! It looks great.
Gemma is made by Google, not DeepMind.
Job postings for working on Gemma are under DeepMind in London: https://boards.greenhouse.io/deepmind/jobs/6590957
Linking to the announcement (which links to the PDF) would probably be more useful.
Introducing Gemma 3: The most capable model you can run on a single GPU or TPU
https://blog.google/technology/developers/gemma-3/
Very cool open release. Impressive that a 27b model can be as good as much bigger state-of-the-art models (according to their Chatbot Arena table, tied with o1-preview and above Sonnet 3.7).
But the example image shows that this model still makes dumb errors or has poor common sense, even though it read all the information correctly.
I was thinking the same thing about the receipt calculation: a warning that only tourists tip 18% in Switzerland would no doubt have been appreciated!
I suppose siglipv2 wasn't out yet when they were training this - I wonder if there will be an update to the multimodal models or pali-gemma which utilizes Siglip2. Aya Vision from Cohere utilized siglip 2 to great effect
Quote:
The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 for both pre-trained and instruction finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.
Really cool
Looks great! So excited about this! We have been using gemma models since gemma 1.0 and they are so far ahead of the curve!
This is the first model I can think of that advertises itself as being optimized for AMD ROCm.
Super cool models! Love the mixture of sliding window and global attention to cut down on KV cache sizes! And 4B, 12B and 27B are vision + text by default! Super cool!
Very cool to see two promising new LLMs on the same day (the other one being Reka Flash 3 21b) with open weights.
Now, bring on those multimodal LLMs with voice input and output please!
Even though Gemma 3 takes much less inference processing power and delivers better results than DeepSeek v3, I'm certain this won't cause the same Nvidia stock price panic that DeepSeek did.
If it helps anyone, I wrote a detailed analysis here: https://x.com/danielhanchen/status/1899735308180267176
TLDR:
1. 1B text only, 4, 12, 27B Vision + text. 14T tokens
2. 128K context length further trained from 32K. 1B is 32K.
3. Removed attn softcapping. Replaced with QK norm
4. 5 sliding + 1 global attn
5. 1024 sliding window attention
6. RL - BOND, WARM, WARP
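A rough illustration of the "scaling factor" trick in point 2 (position interpolation): positions are divided down so that a model trained at 32k can address 128k at inference. The factor of 4 below is simply 128k/32k; the actual recipe (base frequencies, exact factor) may differ.

```python
def rope_angle(pos, i, dim=128, base=10_000.0, scale=1.0):
    # angle of the i-th rotary frequency at position `pos`, with linear scaling
    inv_freq = base ** (-2 * i / dim)
    return (pos / scale) * inv_freq

trained  = rope_angle(32_000, i=0)               # largest position seen in training
rescaled = rope_angle(128_000, i=0, scale=4.0)   # 128k mapped back into that range
print(trained, rescaled)                         # same effective angle
```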
Can someone explain Gemma vs Gemini for me please?
Gemma is their open-source series of models. Gemini is the proprietary one. Gemini models are bigger and better. But the Gemma models are pretty good too.
open-weights, not open-source (sorry to be that one but open source in this case would mean you can build it yourself from provided "source", which you can't, because it's not provided)
And even "open-weights" is generous, as they're released under a proprietary license with usage restrictions, not an open-source license.
Per quick testing the 27b model seems very strong at least in natural language. It produces even good Finnish, in which smaller models tend to really struggle. Very promising.
Edit: Per even quicker testing the Finnish language performance degrades rapidly with the smaller models, as is usually the case. Would be great to have language specific distillations from larger models.
very excited for this. my current fav model on my mac mini for text processing is gemma 9b + gemma 2b combo spec decoding. great times to have all this getting dropped left and right.
> The Gemma 3 models are multimodal—processing text and images—and feature a 128K context window with support for over 140 languages.
I'm curious as a multilingual person: would a single language (english/spanish/cantonese) allow for the model to be bigger and still fit in a single GPU?
In the context of another multilingual model I've heard that the majority of its training was in mainly one language, as that training seems to be applicable to languages added later too. To me that sounds plausible given adding a new language would mean vocabulary & grammar while the understanding of concepts should already be there.
Intuitively adding 140 languages instead of e.g. the 5 most common would otherwise be in conflict with making a small model that fits a single GPU.
What do companies like Meta and Google gain from releasing open models? Is it just reputational? Attractive to top AI talent?
I believe (and some other people on the internet with more knowledge of LLMs believe too) that open source local models are the future. Big models with an API and chat, like OpenAI is doing, will probably have their niche too, but it is very costly, it is not AGI, and it will not be in the near future. On the other hand, with the rise of NPU chips and small models you can have your own assistant on your phone, using your own data, almost instantaneously and at almost no cost. Whoever builds the best OS model will win this race. As the winner you will be able to set the standard. Basically it is why we have Linux on the servers, not Windows, and why, even though browsers are free, you still get one from every tech giant.
Those are certainly benefits, but it's most likely a prophylactic move.
LLMs will be (are?) a critical piece of infrastructure. Commoditizing that infrastructure ensures that firms like Google and Meta won't be dependent on any other (OpenAI) for access to that infrastructure.
Meta in particular has had this issue wrt Ads on iOS. And Google wrt paying Apple to be the default search engine.
See also: Joel Spolsky's famous Strategy Letter V [0].
[0]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
Anybody tried training with trl yet?
Does anyone know whether there is support for multiple images as input? I don't see it in the docs yet.
Yes
> If you want to prompt with more than one image, you must include a <start_of_image> tag for each image included in your prompt.
From here: https://github.com/google/generative-ai-docs/blob/78688755db...
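Based on that doc snippet, a two-image prompt would be laid out roughly like the sketch below; how the actual image bytes get attached depends on the runtime you use.

```python
# illustrative layout only: one <start_of_image> placeholder per attached image
prompt = (
    "<start_of_turn>user\n"
    "Here are two charts: <start_of_image> and <start_of_image>\n"
    "Which one shows the larger year-over-year change?<end_of_turn>\n"
    "<start_of_turn>model\n"
)
print(prompt)
```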
Not quite yet on Ollama, but hopefully we'll add this soon. Also, we didn't add the pan-and-scan algorithm yet for getting better clarity in the original image.
Hey, I'm Ravin from the Gemma team. It's on ollama! Try `ollama run gemma3` to get it pulled locally
My point was multi-images and pan-and-scan. We haven't implemented those yet in Ollama, but soon!
Good, FYI the number one usage is vision RAGs (RAGs that deal with documents as images instead of text).
They talked about support for multiple images as input.
doesn't seem to have hit the LMArena yet. will be interesting to see where it places there.
Pretty highly. It’s on page 5 of the report.
Leo from Gemma team here: it's also live on lmsys now !
> We also change the architecture of the model to reduce the KV-cache memory that tends to explode with long context
This is key (pun not intended). It's one thing to run these models locally; it's a totally different game when you need longer context.
Sure, the new M3 Ultra can fit a Q4 DeepSeek r1 in URAM, but as soon as you wanna get usable context like +64k, the t/s and PP quickly become prohibitive.
Speaking of M3 Ultra, I really wish Apple had put more bandwidth in this beast of a machine. It's got a lot of "energy", not a lot of "power" to actually use that energy.
The performance of Gemma 3 is insane.
Thanks for these cool models!
One suggestion (or just rant): Less censorship for local models, PLEASE.
One question: 100+ elo gains from gemma 2 to gemma 3 on Chatbot arena is really something, any estimates on how this is achieved?
In Switzerland isn't it customary to round the food bill up to the nearest 10?
would be good to see how gemma 3:4b compare to phi4-mini
Google DeepMind are the best :)
They say all the models were distilled from a teacher model but they didn't specify what that teacher model is. Interesting thing to hide.
It's a safe bet that it's either one of the Gemini models or a relative of it.
That's what I thought. And it could be publicity for Gemini as well, that it is so good that it can teach students, say, 5x faster. If it is Gemini, there isn't any reason to hide it. My bet is it is some unreleased Gemma or another model.
> They are designed to help prevent our models from generating harmful content, i.e.,
> [...]
> Sexually explicit content
Dear tech companies. Sexually explicit content is not harmful. Why are you all run by puritans? I don't even want to make edgy porn, I just want to be treated like an adult.
It's harmful in that there exists a significant and vocal subset of users who does not wish to see that content or does not wish their children to do so. It's easier to teach your model never to produce that kind of content than to teach it to perfectly distinguish whether this user should see that content or not. TV channels are barred from broadcasting this kind of content for similar reasons.
Sure, there are always jailbreaks, but then the narrative changes from "we made a model that tells erotic stories to children" to "this ingenious teenager figured out a way to hack our model to make it produce erotic stories." In other words, jailbreaks move the fault from the model producer to the model user.
It's also worth keeping in mind that erotica comprises a surprisingly large portion of fiction easily available on the internet for free, and "unfiltered" models tend to produce that kind of content unprompted (see e.g. the original Mistral). The major AI labs are probably filtering it out, but I suspect they can't go too far there, as having a model that is good at fiction is something they actually want.
Then there are the non-chat-gpt-app use cases (like customer support chatbots, automatic summarization etc), for which unprompted erotica is highly inappropriate. Those are the "business travelers" of AI, not the first thing one thinks of when talking about who uses AI models, but extremely important nonetheless.
I've heard this described as the minority effect: a small minority can have a disproportionate impact. The example given is that it's cheaper to make all instances of a product kosher or halal than to make an entirely separate product.
"tyranny of the minority" https://revista.drclas.harvard.edu/a-review-of-tyranny-of-th...
> It's harmful in that there exists a significant and vocal subset of users who does not wish to see that content or does not wish their children to do so.
"I have a right to live in a society that perfectly adheres to my personal morals" is not how companies or people should operate in a pluralistic society, despite Nassim Taleb's claim that the intolerant minority wins.[0]
[0] https://medium.com/incerto/the-most-intolerant-wins-the-dict...
All of this is true but then it's as easy as releasing censored and uncensored versions of the model.
Then it's up to users (or parents, in the case of children) to choose the adequate version for each purpose. Just like there are child-friendly movies and adult-only movies, and no one beyond fringe puritan crusaders would say that the latter should outright not exist.
>censored and uncensored
Well, here you still have the same problem, since they're not gonna release an actually uncensored version, one that tells you how to do awful things (or indeed, that tells you to do them).
So then you'd have censored and less censored, and it would still be a matter of where to draw those lines.
True, "uncensored" is not the best term for what I meant (as I'm aware that fully uncensored is not a realistic thing to ask from companies).
What I mean is a model for all audiences and an adult model, and the line would be drawn at the law of the country producing it (if it's something that would be legal to publish for a human author at a website, then it should be allowed as an LLM response). So erotica would be fine, while instructions for making a bomb wouldn't.
Companies release uncensored models all the time. They're called "text" models. I just had llama3.2:3b-text-fp16 give me step by step instructions on how to make a pipe bomb.
I think it's easy to release the uncensored version; it's the censored version that's likely super, super hard.
Since this is just handing out the model directly, there's no ability to do any filtering as part of inference, so I would imagine you have to assume the worst intent in any input coming into it.
There are also some practical constraints: any kind of erotic content is completely prohibited in some jurisdictions (like India), so if you want access to human labeling or to deploy the model under those regimes, you do need to comply.
It’ll get easier once the costs of building foundational models go down and human labeling gets automated. Sit tight, models that’d be creative and amazing at generating erotic content are certainly coming.
Yes, it would be absolutely shameful if there was pornography on the internet, easily available to anyone, even children. Society would crumble!
It's funny because the results are in: millennials grew up with pretty easy access to all manner of porn from an early age, and the effect has been essentially nothing. If anything, a reduction in intimacy.
I'm sure the hysterical puritans of the past will come out any day now and admit that they weren't even 1% correct in their assertions.
Porn sites are blocked in many jurisdictions, so I would not use that argument.
No, there's no movement to shut down pornography on the internet. There's a movement to shut down specific websites and make a lot of noise about it but continue consuming pornography behind closed doors.
People like pornography. They'll as soon ban alcohol again (which worked so well last time)
On the contrary. Porn is inaccessible, along with many other things, in much of the world. https://worldpopulationreview.com/country-rankings/countries...
Alcohol is another good example.
there are.
>It's harmful in that there exists a significant and vocal subset of users who does not wish to see that content or does not wish their children to do so
It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica. Project Gutenberg is full of erotic textual content and I haven't heard of anyone calling for that to be banned.
>Then there are the non-chat-gpt-app use cases (like customer support chatbots, automatic summarization etc), for which unprompted erotica is highly inappropriate. Those are the "business travelers" of AI, not the first thing one thinks of when talking about who uses AI models, but extremely important nonetheless.
And how many of these are going to be using Gemma, when Gemini over the API is cheaper, faster and easier to use?
> It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica.
The reason you're struggling to understand is that you're thinking about this logically.
Adult content is obviously freely available to any child or adult with minimum technical skills. What makes LLMs different is that it's "the new thing" and people respond differently to "the new thing".
Won't somebody think of the children‽
More than you think, particularly outside the US.
Companies and government organizations who have sensitive data are still unwilling to use these models over any API they don't host themselves.
I work in this space in the EU, and this is absolutely a problem.
Not all sexually explicit content is harmful in all contexts, for sure, but in many contexts it is fairly universally considered harmful (e.g. content involving minors). Do you have a means of distinguishing between the two? Are you suggesting that a company must invest millions into teaching the model where exactly the red line lies, so that it can have a conversation close to it without crossing it? Or do you suggest biting the bullet and releasing a model not only capable of generating, e.g., child porn, but also having a >0 chance of randomly discussing it in unrelated contexts?
The chance of error is always there, and companies have decided that the risk of really bad behavior in a benign context outweighs the gains. Imho, the decision not to play whack-a-mole with this land mine is quite rational, especially considering gains vs risks vs costs. Think of it as a cost-cutting measure, not as an infringement on free speech.
You are free to invest your own money into this problem if you think that's a grave mistake and a missed opportunity. The first project to push automated moderation of generated content, against what is considered appropriate in the given context, far enough to make it economical for companies to put their guard down could actually be worth a lot, if you think there's a market for it (e.g. agents on dating websites? idk, you tell me).
I don't agree that textual, fictional explicit content involving minors is "fairly universally considered harmful". Such content is allowed on large platforms like Archive of Our Own or Japan's Shosetsuka ni Naro. I think "don't think it's harmful, but not willing to defend" is a pretty typical attitude.
The solution to this problem is to let it fail. If, across the various technologies out there that do and don't allow porn, the common denominator of the failures turns out to be the lack of a smoothly graduated spectrum of content, from casual family-safe material all the way to hardcore pornography, the problem will correct itself.
Actually, it will happen naturally and eventually. Just look at the Apple Vision Pro, which still doesn't have VRChat support, and compare how deeply DOA it has been to other VR headsets that are clearly nowhere near as important. Or the "Metaverse" platforms that were all explicitly SFW.
This effect can even be seen in the Apple App Store itself. Who uses the App Store? You flow into the App Store through porn-enabled platforms, such as the web or social media. No one browses the App Store as content in itself. What does it not have? Pornography.
They mean "harmful to us", not the users. It's harmful because they live an echo chamber of a single mention of genitals makes all the stakeholders run away. Why do they run away? Because they also have stakeholders, and so on.
Whenever they say things like "harmful" or "unsafe" there is an implied "for our brand" that follows.
There are very few pro-porn voices in the corporate, tie-wearing environments that have the money to train new LLMs from scratch.
Oh, there are loads of porn enjoyers working in such companies - but traditional professionalism means you leave the porn at home during the work day. It is, after all, NSFW.
So at the meeting where censorship decisions were being made, even a weak argument for censoring explicit content will be accepted unopposed.
I want to use a multimodal model for manga translation, analysis, and tagging.
If this gives me the "aschually as a ethical safe harmless assistant I can't ..." spiel on anything mildly mature, that would be very disappointing. I'll run a test with Berserk and see how it goes.
I'm not a big believer in abliteration, it seems to always hurt performance. Safety should be handled by a separate system, no need to cripple the actual LLM.
The multimodal models aren't good for this. Refusals aren't the issue (they're fine with BERSERK, though occasionally they'll refuse for copyright). The issue is the tech isn't there yet.
You'll want to use custom models to segment the manga (panels, speech bubbles), OCR the text, and then translate (Gemma punches above its weight for this part).
That said, I've been experimenting with using Pixtral to do the analysis part with okay-ish results (providing individual panels with the character names) but it'll still mix up the characters when they're drawn differently.
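A hedged sketch of that pipeline shape; `segment_panels` and `ocr_bubble` are hypothetical stand-ins for whatever detector and OCR models you actually use, and only the last step calls an LLM:

```python
# Hypothetical manga-translation pipeline; segment_panels() and ocr_bubble() are
# placeholders for real detection/OCR models, not actual library APIs.
def translate_page(page_image, segment_panels, ocr_bubble, llm):
    """Detect speech bubbles, OCR them, then translate each with the LLM."""
    results = []
    for panel in segment_panels(page_image):          # panels + speech bubbles
        for bubble in panel.bubbles:
            jp_text = ocr_bubble(bubble.crop)          # raw Japanese dialogue
            prompt = (
                "Translate this Japanese manga dialogue into natural English, "
                f"keeping the tone of the speaker:\n{jp_text}"
            )
            results.append((bubble.bbox, llm(prompt)))  # e.g. a local Gemma call
    return results
```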
> I'm not a big believer in abliteration, it seems to always hurt performance.
Agreed, it's fun to play with but it increases hallucinations. And for creative writing, it makes the model write more compliant characters (they'll give in too easily during negotiations rather than refuse, etc.).
Could probably be improved with more targeted abliteration.
Everyone is treating this like corps have anything to gain from an open uncensored model. Switch your view and give me a single argument for it. That random nerds on HN stop jerking each other about what "open" means? You are just not their target group. Having this discussion every time, no matter whether the model released is censored or not, is just insanity. Bring new arguments or don't use the models you don't like. There will be a new SOTA "tomorrow", maybe even one open enough for you.
The argument is that it simply improves the product. For instance, Github Copilot is apparently refusing to do anything with variable names like "trans" and anything related to sex or gender, regardless of the intended meaning. That is a serious flaw and makes the product less useful.
See this: https://github.com/orgs/community/discussions/72603
You don’t know if the censorship is in the model or the system prompt.
That is not relevant to the argument. Censoring limits possibilities. While that sometimes has its uses, the overly puritanical approach American companies generally take degrades the value of their products.
I am talking about an "open" weight model; you are talking about a service. If the service wants to censor, that's fine and on them and their leadership. If an "open" model gets released with censorship, it's not, because it's just "open, but the way my manager likes it".
>You are just not their target group. Having this discussion every time, no matter whether the model released is censored or not, is just insanity
Who is their target group for small local models that benchmark below their proprietary offering (Gemini 2.0), then, if not hobbyists and researchers?
>> The press and decision makers without technical knowledge are the target group; it doesn't matter if it's used in production or not. They need a locally deployable model to keep up with enterprises that are too risk-averse to put their data into the cloud and also don't care that their shitty homegrown ChatGPT replacement barely works. It's a checkbox.
But who is the target group?
Last time only some groups of enthusiasts were willing to work through bugs to even run the buggy release of Gemma
Surely nobody runs this in production
The press and decision makers without technical knowledge are the target group; it doesn't matter if it's used in production or not. They need a locally deployable model to keep up with enterprises that are too risk-averse to put their data into the cloud and also don't care that their shitty homegrown ChatGPT replacement barely works. It's a checkbox.
This is what HNers surprisingly seem to not understand.
The risk of the model generating illegal content and then the company getting bad PR from vultures in journalism simply outweighs any benefits of including this content in the training data.
This is also why you will never see the big companies release a capable open weight image or video gen model.
>The risk of the model generating illegal sexual content and then the company getting bad PR from vultures in journalism simply outweighs any benefits of including this content in the training data.
This is completely unsubstantiated. The original Sydney (Bing AI) was violently unhinged and this only drew more users; I haven't met a single person who prefers the new Bing AI to the old Sydney, and for that matter I haven't even heard of anyone using Bing AI for ages now they toned it down. Trust in journalists is at an all-time low ( https://news.gallup.com/poll/651977/americans-trust-media-re... ) and America recently elected an extremely unorthodox president in big part due to the sheer hatred of the media shared by a large proportion of the population. Even the most hardcore social conservatives aren't calling for companies to censor the training of open source models so they don't produce adult textual content even when prompted to do so; it's not a political issue.
Brings an argument from nearly a decade ago and ignores everything at Google in the last four years. Of course the "first" rogue AI drew in more users because of the novelty of it... what a shit argument.
The model is open weight, so I'll bet someone or other will abliterate it soon. Maybe you want to do the honors? I have an abliterated Llama running on a server shared with friends and it works great.
This only works until it doesn't. Start with a model that simply hasn't been trained on anything your shareholders find objectionable, and there will be nothing to reveal with abliteration.
Maybe there exists a dataset consisting entirely of objectionable content, so people can finetune neutered models on it?
PH maybe?
More like literotica.
I mean not only sex, but also swearing, drugs, violence, etc. Basically everything R-rated (but not illegal) which usually gets censored.
Such models would actually run against their long term interests of being able to automate away the work currently done by humans.
Have you considered that selection of material contributes to specialization and efficiency? This is meant to be a model that's small in weights.
It's also apparently a well-known result that filtering NSFW content IMPROVES scores
https://x.com/swyx/status/1661359483447316480
Or perhaps it was removing the curly brackets that improved it more than the damage caused by losing the nsfw content.
Or perhaps the measurement of improvement was biased. If a model doesn't understand the word gay there would certainly be people who would find real world use of the model to be substandard.
Did the assessment of what counts as improvement come from the same community that decided that excluding things with 'gay' was cleaning the data?
The word "gay" mentioned in your link isn't nsfw content though.
LLMs get distracted by porn too !?!?
You can discuss something kosher and have the LLM misinterpret it as something sexually explicit. Your logs, or theirs, will now contain all of this miscommunication, and that is a liability. Using models that can't generate this content even by accident is a good legal decision for many. Same goes for images. Stay safe!
> you'll have to do that locally
The Gemma family is a family of local models!
usual answer to "why can't I have nice things":
lawyers.
(on both sides)
In my experience, it’s nothing to do with actual lawyers and everything to do with cultural and societal norms.
Lawyering by puritans, maybe. The lawyers themselves are not particularly imposing their prejudices.
Advertisers might be a better reduction
it follows the historical trend of American puritanism:
nipple BAD.
exploding someone into bits GOOD.
on the other hand running with guns is fine.
Generating sexually explicit content can cause reputational damage or carry legal risk. Not generating such content is something that many developers are looking for. There are people who may want such harmful content, and other players can cover that niche.
I don't think it's the reputational risk of companies at large, but risk to individual developers. "He worked on porn" is such an easy gut-logic justification for termination. It's in our human instincts; everyone knows that in their gut.
That's a bullshit excuse. The Chinese model creators live in a totalitarian dictatorship where porn is banned and the creators could be arbitrarily jailed, but even they don't go to such effort to censor their open source models (there's censorship on their hosting websites but minimal if you run the models locally).
Filtering is not necessarily a good user experience, and it comes with a cost to implement. Google making a model they expect there to be demand for is not just an excuse.
They don't expect to make money serving Gemma; it benchmarks worse in almost every way than their closed-source Gemini. Believe it or not, one of the main sources of demand for these small, non-SOTA models is people using them for roleplay locally. Anyone corporate has the money to use a bigger, more effective model.
Sexually explicit content literally IS harmful. Children use the internet, they shouldn’t be exposed to pornography until adulthood, if ever. To suggest otherwise reveals a morally repulsive foundation, imo.
Hard to get more puritanical than "if you disagree with my opinion then you're morally repulsive". Not to mention that your argument implies that all traces of sex ought to be scrubbed from the entire Internet? And that that conclusion is the only moral one?
There are no Puritans and haven’t been for a few centuries. You’re screaming at ghosts. He or she may be Muslim. You should respect the culture.
The term "puritanical" doesn't exclusively refer to the existence of or adherence to the Puritan religion, and when it does, the term is usually capitalized. From Dictionary.com:
It is entirely possible within the parameters of commonly understood English parlance for Muslims, or any group, to be puritanical.Hard to get more perverse than “kids should have access to sexually explicit material at all times in any medium.
If that sounds fucked up when I say it like that, consider what assumptions you’re making, because that’s literally YOUR argument here.
Millennials, who are now well into adulthood, grew up with easy and readily available access to porn. I know when I was in middle school it was everywhere. Kids would even hand out burned CDs of porn.
Please show the damage that it did to that generation. If you are "sky is blue" levels of correct, the evidence should be everywhere. So please, present it.
If there is no real evidence, reply with "So you are saying we should endorse porn for everyone" or some other strawman along those lines. Thanks.
Things do not exist on a black-and-white basis; there are relevant shades of gray to be considered:
It is quite different if we talk about removing any sort of text about body parts related to consensual sexual activities versus trying to censor hardcore pornography or illegal sexual activities. I personally find an LLM producing sexual content as text rather irrelevant, in the same way that you could go to a library or bookstore and buy a romance novel.
It is also quite different if your definition of kids goes all the way to 18 years. I don't want my kids to not encounter topics surrounding sex until they become legal adults. They absolutely have to learn about it, be able to develop a healthy relationship to their own body and sexuality, and have insights that enable them to understand sexuality in others.
I want to protect my kids from harm, but there must be some balance with other aspects as well.
> that’s literally YOUR argument here
No, that's the opposite of yours.
This is a false dichotomy. We can make tech for adults, and children, either with optional settings, filters, or simply multiple versions, managed by their parents or guardians. It's not tech's responsibility to raise every child.
When we've solved access to explicit content on the rest of the internet, we can come back and have this conversation. Until then, teenagers will just go to Reddit or wherever and get it there. If we ban that, it'll just move to sexting on Snapchat, which, if you have ever spent any time with parents of teenagers, you'll know has a tendency to be screenshotted and distributed.
So you’re arguing for teenagers to be encouraged to share explicit content of minors with each other?
The transformer architecture was originally intended for language translation use cases, and it excels at that task. It's far better than anything else I've tried.
Except that nearly 100% of the capable models out there refuse to translate swearing, sexual, or violent content.
Legal content! Content you can see in the cinema, borrow from a library, or watch on free-to-air television!
Most models will regularly refuse to translate subtitles, for example, because they've been indoctrinated by Puritan priests to put their fingers in their digital ears and scream la-la-la-la to keep their pure digital souls from being sullied by the bad words.
Wireheading humanity into population collapse via pervasive sexual hyperstimuli (which is realistically what is on the table here) is basically the definition of "harmful".
This is just silly because it only takes one AI company to defect and start enabling it, and the problem is already pretty bad even without AI.
I think all of the solutions are demand-side, not supply-side. I would expect the differential reproductive-rate trends between populations with and without proscriptions on ersatz-reality consumption (e.g. aniconist Muslims, Mennonites, etc.) to accelerate.
This could be a historical accident.
Early models were censored, making uncensored releases have bad optics.
If the first models had been uncensored, no one would care if another was added.
The early models were uncensored, but people seeing early LLMs hand out meth recipes and instructions for car bombs made them quickly get neutered before public release (the additional controls for private info, nudity, swearing, etc. all come from extra guardrails, and they improve the protection offered to the company, not to end users).
Have an uncensored model loop through NY Post articles and ask it to synthesize content from them. The NY Post has tons of scandalous content that can easily get spun into erotica by an uncensored model.
It’s unsafe for that reason, so you absolutely needed both censored and uncensored. It wasn’t an accident.
> can easily get spun into erotica by an uncensored model.
A sexualized fine-tune yes, but that's because you have to make them overly horny to overcome the original censorship.
Nothing prevents them from training a model that has an appropriate level of sexual content (that is, only upon the user's explicit request), the same way they train it not to have sexual content at all.
The reason they do that is because they are American companies, the same companies who also censored nude paintings and statues from European museums' pages.
It could just be courtesy (a personal way to actually thank the staff) that people from a "tipping" culture may like to extend. Just don't do it in places where it is offensive, e.g. Japan.