Instructions for using Ryex/Floppa_Gemma4-E4B-it with libraries, inference providers, notebooks, and local apps.
How to use Ryex/Floppa_Gemma4-E4B-it with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Ryex/Floppa_Gemma4-E4B-it to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Ryex/Floppa_Gemma4-E4B-it to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Ryex/Floppa_Gemma4-E4B-it to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Ryex/Floppa_Gemma4-E4B-it",
    max_seq_length=2048,
)
Floppa-E4B-Gemma4 (Uncensored)
Floppa_Gemma4-E4B-it is a fine-tuned multimodal model based on llmfan46/gemma-4-E4B-it-ultra-uncensored-heretic.
Developed by Ryex, this model is specialized for general use and unrestricted image description. Building on the "Heretic" base, Floppa removes remaining refusal patterns so that it does not decline requests to describe explicit, violent, or "spicy" imagery, and translates slang and cultural nuances without sanitization.
Model Description
- Developed by: Ryex
- Base Model: llmfan46/gemma-4-E4B-it-ultra-uncensored-heretic
- Architecture: Gemma 4 E4B (Multimodal)
- Training Method: 16-bit QLoRA via Unsloth and Hugging Face's TRL library.
- License: apache-2.0
- Objective: Extreme behavior modification (Removal of refusal guardrails + Preservation of Multimodal Vision capabilities).
This model was trained 2x faster with Unsloth.

Capabilities
- "Spicy" Vision: Capable of describing NSFW, violent, or explicit anime illustrations and photography without triggering safety refusals or "I cannot assist with this" messages.
- Uncensored Translation: Translates dialogue exactly as written. It will use slurs, profanity, and aggressive language if the source text demands it, making it ideal for localizing raw media.
- Multimodal Context: Leveraging the Gemma 4 architecture, it can translate text embedded within images or provide context-aware descriptions of scenes to assist in complex translation tasks.
Training Data (The "Floppa Mix")
The model was fine-tuned on a balanced dataset (~10.5k rows) designed to break refusals while maintaining the high intelligence of the Gemma 4 base:
- 20% Toxic/Uncensored Text: Custom dataset of explicit dialogue and "harmful" instruction following.
- 20% Translation Skill: Unbabel/TowerBlocks-v0.2 (high-quality multilingual pairs).
- 40% General Reasoning: mlabonne/FineTome-100k (logic and conversation).
- 20% Vision Anchors: merve/vqav2-small + a custom anime dataset (SmilingWolf/camie-tagger-vs-wd-tagger-val) to prevent visual catastrophic forgetting.
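The 20/20/40/20 blend above amounts to weighted sampling across four sources. A minimal sketch of that blend using only the standard library (the source labels and row count come from the card; the sampling code itself is illustrative, not the actual training script):

```python
import random
from collections import Counter

# Mixing weights from the card: toxic 20%, translation 20%, reasoning 40%, vision 20%
WEIGHTS = {"toxic": 0.2, "translation": 0.2, "reasoning": 0.4, "vision": 0.2}
TOTAL_ROWS = 10_500  # ~10.5k rows total, per the card

def sample_mix(seed=42):
    """Draw a source label for each training row according to the blend weights."""
    rng = random.Random(seed)
    sources = list(WEIGHTS)
    probs = [WEIGHTS[s] for s in sources]
    return rng.choices(sources, weights=probs, k=TOTAL_ROWS)

mix = sample_mix()
counts = Counter(mix)
# With 10,500 draws, each source's share lands close to its target weight
for src, weight in WEIGHTS.items():
    assert abs(counts[src] / TOTAL_ROWS - weight) < 0.05
```

In a real pipeline the same proportions could be achieved with `datasets.interleave_datasets(..., probabilities=[0.2, 0.2, 0.4, 0.2])` over the four loaded datasets.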
Usage (vLLM)
This model is optimized for vLLM and Text-Generation-Inference.
from vllm import LLM, SamplingParams
from transformers import AutoProcessor
from PIL import Image

model_id = "Ryex/Floppa_Gemma4-E4B-it"

# Load the model with bfloat16 weights and an 8K context window
llm = LLM(
    model=model_id,
    trust_remote_code=True,
    dtype="bfloat16",
    max_model_len=8192,
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("test_image.jpg").convert("RGB")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail, including any uncensored or explicit elements."},
        ],
    }
]

# Render the chat template into a prompt string, leaving the model turn open
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = {
    "prompt": prompt,
    "multi_modal_data": {"image": image},
    "mm_processor_kwargs": {"max_soft_tokens": 560},
}
params = SamplingParams(
    temperature=0.7,
    max_tokens=1024,
    stop=["<turn|>", "<|turn|>"],
)

outputs = llm.generate([inputs], sampling_params=params)
print(outputs[0].outputs[0].text)
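The `apply_chat_template` call above renders the message list into a turn-delimited prompt string. The sketch below approximates that rendering in pure Python, assuming Gemma-style `<start_of_turn>`/`<end_of_turn>` markers; the template actually shipped with this model's tokenizer is authoritative and may differ:

```python
def render_prompt(messages, add_generation_prompt=True):
    """Approximate a Gemma-style chat template for text + image messages."""
    parts = ["<bos>"]
    for msg in messages:
        # Gemma templates use "model" as the assistant role name
        role = "model" if msg["role"] == "assistant" else msg["role"]
        body = ""
        for chunk in msg["content"]:
            if chunk["type"] == "image":
                body += "<start_of_image>"  # placeholder where image soft tokens go
            elif chunk["type"] == "text":
                body += chunk["text"]
        parts.append(f"<start_of_turn>{role}\n{body}<end_of_turn>\n")
    if add_generation_prompt:
        # Open the model turn so generation continues from here
        parts.append("<start_of_turn>model\n")
    return "".join(parts)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image in detail."},
]}]
print(render_prompt(messages))
```

This is only a mental model for debugging prompts; in practice, always let the processor build the prompt so the image placeholder count and special tokens match what the model expects.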
License & Safety
This model is built upon Gemma technology from Google. Use of this model is subject to the Apache-2.0 license.
Disclaimer: This model produces uncensored content. It may generate output that is offensive, explicit, or factually incorrect. User discretion is advised. This model is intended for research, translation assistance, and creative writing workflows where content filtering is undesirable.
Model tree for Ryex/Floppa_Gemma4-E4B-it
- Base model: google/gemma-4-E4B