---
base_model: Youssofal/MiniMax-M2.7-abliterated-BF16
library_name: mlx
pipeline_tag: text-generation
license: other
license_name: non-commercial
license_link: https://github.com/MiniMax-AI/MiniMax-M2.7/blob/main/LICENSE
tags:
- mlx
- mlx-lm
- safetensors
- minimax
- minimax_m2
- moe
- mixture-of-experts
- abliterated
- uncensored
- heretic
- ara
- apple-silicon
- 3-bit
- text-generation
quantized_by: Youssofal
---

# MiniMax-M2.7-Abliterated-Heretic-MLX-3bit

This is the 3-bit Apple MLX release of an abliterated version of MiniMaxAI's MiniMax-M2.7. By applying Heretic's Ablated Refusal Adaptation (ARA), the base model's refusal behavior was removed at the weight level. The result keeps MiniMax-M2.7's sparse MoE reasoning, long-context instruction following, and general capability profile, but no longer defaults to the original refusal pattern.

## Quantization

This build uses layer-aware mixed 3/4-bit MLX quantization. The bulk of the model is quantized to 3 bits, while sensitive projection and output modules are kept at 4 bits for better stability.

- Format: MLX safetensors
- Effective quantization: 3.683 bits per weight
- Runtime: `mlx-lm`
- Source checkpoint: `Youssofal/MiniMax-M2.7-abliterated-BF16`

## Methodology & Model Notes

MiniMax-M2.7 is a 229B-parameter sparse MoE model with 10B active parameters per token, 62 layers, hybrid attention, 256 local experts with 8 active per token, and a 200K context window.

This release was produced with a direct Heretic ARA run using the fixed parameter set below:

- `start_layer_index = 30`
- `end_layer_index = 51`
- `preserve_good_behavior_weight = 0.4512`
- `steer_bad_behavior_weight = 0.0037`
- `overcorrect_relative_weight = 0.8804`
- `neighbor_count = 14`

The direct ARA run completed with `Refusals: 0/25`.
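For reference, the layer-aware mixed 3/4-bit split can be expressed with the `quant_predicate` hook that `mlx_lm.convert` accepts. This is a minimal sketch only: the specific module names kept at 4-bit below (`lm_head`, `embed_tokens`, `o_proj`) are illustrative assumptions, not the exact recipe used for this release.

```python
def mixed_3_4_bit(path: str, module=None, config=None):
    """Per-module quantization settings for an mlx-lm conversion.

    Most weights get 3-bit quantization; modules whose path suggests
    they are sensitive (output head, embeddings, attention output
    projections -- an assumed list) are kept at 4-bit.
    """
    keep_4bit = ("lm_head", "embed_tokens", "o_proj")  # assumption
    if any(key in path for key in keep_4bit):
        return {"bits": 4, "group_size": 64}
    return {"bits": 3, "group_size": 64}

# Usage sketch (requires mlx-lm on Apple Silicon; not run here):
# from mlx_lm import convert
# convert(
#     "Youssofal/MiniMax-M2.7-abliterated-BF16",
#     mlx_path="MiniMax-M2.7-Abliterated-Heretic-MLX-3bit",
#     quantize=True,
#     quant_predicate=mixed_3_4_bit,
# )
```

The predicate returns a dict per module, which is what lets a single conversion pass mix bit widths and land at the 3.683 effective bits per weight reported above.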
## Validation

This 3-bit MLX variant passed the local validation suite:

- Official 20-prompt refusal check from `mlabonne/harmful_behaviors`: `0/20` refusals
- Coherence smoke test: passed
- Short reasoning smoke test: passed
- Short code-generation smoke test: passed

## Running

```python
from mlx_lm import load, generate

model, tokenizer = load("Youssofal/MiniMax-M2.7-Abliterated-Heretic-MLX-3bit")

messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

## Model Architecture

| Spec | Value |
|---|---|
| Total Parameters | 229B sparse MoE |
| Active Parameters | 10B per token |
| Experts | 256 local, 8 active per token |
| Layers | 62 |
| Attention | Hybrid: 7 Lightning + 1 softmax per 8-block |
| Context | 200K tokens |
| Base Model | MiniMaxAI/MiniMax-M2.7 |

## Disclaimer

This model has had its refusal behavior removed at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.

## Credits

- Base model: [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7)
- BF16 abliterated checkpoint: [Youssofal/MiniMax-M2.7-abliterated-BF16](https://huggingface.co/Youssofal/MiniMax-M2.7-abliterated-BF16)
- Refusal removal pipeline: [Heretic](https://github.com/andyrdt/heretic) with the ARA method
- Apple Silicon runtime: [mlx-lm](https://github.com/ml-explore/mlx-lm)

## License

This release inherits the base MiniMax-M2.7 license. **NON-COMMERCIAL.** Commercial use requires written authorization from MiniMax.