How to use KavinduHansaka/Llama-3.2-3B-ImageGen with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="KavinduHansaka/Llama-3.2-3B-ImageGen")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KavinduHansaka/Llama-3.2-3B-ImageGen")
model = AutoModelForCausalLM.from_pretrained("KavinduHansaka/Llama-3.2-3B-ImageGen")
```

How to use KavinduHansaka/Llama-3.2-3B-ImageGen with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KavinduHansaka/Llama-3.2-3B-ImageGen"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KavinduHansaka/Llama-3.2-3B-ImageGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
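The same endpoint can also be called from Python with only the standard library. A minimal sketch, assuming the vLLM server above is running on localhost:8000; the helper names `build_completion_request` and `complete` are my own, not part of vLLM:

```python
import json
import urllib.request

def build_completion_request(prompt: str,
                             model: str = "KavinduHansaka/Llama-3.2-3B-ImageGen",
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> bytes:
    """Build the JSON body for an OpenAI-compatible /v1/completions call,
    mirroring the curl payload above."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")

def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the request and return the raw JSON response (needs a running server)."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=build_completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# complete("Once upon a time,")  # uncomment once the server is running
```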
How to use KavinduHansaka/Llama-3.2-3B-ImageGen with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "KavinduHansaka/Llama-3.2-3B-ImageGen" \
  --host 0.0.0.0 \
  --port 30000
```

Alternatively, start the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "KavinduHansaka/Llama-3.2-3B-ImageGen" \
    --host 0.0.0.0 \
    --port 30000
```

Either way, call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KavinduHansaka/Llama-3.2-3B-ImageGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use KavinduHansaka/Llama-3.2-3B-ImageGen with Docker Model Runner:
```shell
docker model run hf.co/KavinduHansaka/Llama-3.2-3B-ImageGen
```
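The vLLM and SGLang servers above both return OpenAI-style completion JSON, so responses can be parsed the same way regardless of which server you run. A small sketch; the helper name `extract_completion_text` is my own:

```python
import json

def extract_completion_text(response_body: str) -> str:
    """Pull the generated text out of an OpenAI-style /v1/completions response."""
    payload = json.loads(response_body)
    return payload["choices"][0]["text"]

# A response shaped like the servers above return:
sample = '{"id": "cmpl-1", "choices": [{"index": 0, "text": " a knight set out at dawn."}]}'
print(extract_completion_text(sample))  # → " a knight set out at dawn."
```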
This repository provides a LoRA-finetuned and merged version of meta-llama/Llama-3.2-3B, specialized for image prompt generation.
It is designed to create cinematic, detailed, and structured prompts for text-to-image models such as Stable Diffusion XL and Flux.
Note: This is a prompt-generation model, not an instruction/chat model. It is trained to produce concise, creative prompts suitable for diffusion-based image synthesis.
The repository contains: config.json, generation_config.json, model.safetensors, tokenizer.json, tokenizer_config.json, special_tokens_map.json.

Example usage:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

REPO_ID = "KavinduHansaka/Llama-3.2-3B-ImageGen"

tok = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID, device_map="auto", torch_dtype=torch.bfloat16
)

prompt = "Create a cinematic noir macro photo with film grain, 1:1 ratio, sharp focus."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs, max_new_tokens=120, do_sample=True, temperature=0.5, top_p=0.9
)
print(tok.decode(out[0], skip_special_tokens=True))
```
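Because this is a base-style causal LM rather than a chat model, `generate` returns the input tokens followed by the continuation, so the decoded string starts with the prompt itself. A minimal sketch of stripping that echo; the helper name `strip_echoed_prompt` is my own:

```python
def strip_echoed_prompt(full_text: str, prompt: str) -> str:
    """Drop the echoed input prompt from a decoded generation, if present."""
    if full_text.startswith(prompt):
        return full_text[len(prompt):].lstrip()
    return full_text

# Example on a decoded string shaped like the output above:
decoded = "Create a cinematic noir macro photo. Rain-slicked street, 35mm film grain, sharp focus."
print(strip_echoed_prompt(decoded, "Create a cinematic noir macro photo."))
```

Alternatively, decode only the newly generated tokens: `tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`.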
Dependencies: transformers, peft, accelerate, torch, sentencepiece.

```bibtex
@misc{llama3.2-3B,
  title  = {LLaMA 3.2 (3B)},
  author = {Meta AI},
  year   = {2024},
  url    = {https://huggingface.co/meta-llama/Llama-3.2-3B}
}

@misc{llama3.2-3B-ImageGen,
  title  = {Llama-3.2-3B Image Prompt Generator (LoRA Merged)},
  author = {Kavindu Hansaka Jayasinghe},
  year   = {2025},
  url    = {https://huggingface.co/KavinduHansaka/Llama-3.2-3B-ImageGen}
}
```