# LFM2.5-350M Home Assistant (Stable Release)
A purpose-trained smart home automation model fine-tuned from LiquidAI/LFM2.5-350M. This model controls lights, doors, thermostats, TVs, fans, speakers, and home scenes through structured tool calls, with full awareness of current device states.
⚠️ Version Notice: This is the Stable Release, trained exclusively on our robust State-Aware dataset (v13 Final Merge). The experimental multi-stage version is currently still in training and will be released for comparison once stabilized.
👉 Which file should I download? For the most stable experience right now, download one of the following GGUF files:
- `LFM2.5-350M-home-assistant-sft.F16.gguf`
- `LFM2.5-350M-home-assistant-sft.Q4_K_M.gguf` (recommended for most users; best balance of speed and size)
- `LFM2.5-350M-home-assistant-sft.Q8_0.gguf` (highest-quality quantization)
## What It Does
Given a natural language command and the current state of all connected devices, the model outputs the correct tool call — or explains in plain text why no action is needed. It handles:
- **Advanced device disambiguation** — If you say "Turn off the TV", the model resolves which TV you mean by checking whether only one TV is connected, checking whether you are in a room with a TV, or inferring intent from state (e.g., if only one TV is currently ON, it turns that one off via a `<think>` trace).
- **Media & music playback** — "Play Truth In The World By Lucky Dube" → `control_speaker(room='living_room', action='play', media='Truth In The World By Lucky Dube')`
- **Direct commands** — "Turn on the bedroom light" → `toggle_lights(room='bedroom', state='on')`
- **Already-satisfied detection** — "Turn on the bedroom light" when `bedroom:on` is in STATE → "The bedroom light is already on." (no tool call)
- **Pronoun resolution** — "Turn off the light" when `current_user_room=kitchen` → `toggle_lights(room='kitchen', state='off')`
- **Bulk state-aware actions** — "Turn off what's on" → reads STATE and emits one call per lit room using `<think>` logic.
- **Undo / repeat via action log** — "Undo that" + `[RECENT ACTIONS: toggle_lights(bedroom, on)]` → `toggle_lights(room='bedroom', state='off')`
- **Multi-device compound commands** — "Lock the front door and turn off the living room light" → uses a rigid reasoning format (`Total: N tool calls required`) to emit parallel tool calls.
- **Topology-aware rejection** — "Turn on the garage light" when the garage is not among the connected rooms → `intent_unclear(reason='unsupported_device')`
- **Scene activation** — "Movie night" / "Bedtime" / "I'm leaving" → `set_scene(...)`
- **Fan control** — "Set the bedroom fan to high" → `control_fan(room='bedroom', state='on', speed='high')`
- **Thermostat** — "Make it 72 degrees" → `set_thermostat(temperature=72, mode='heat')`
- **Syntactic action triggers** — Internal reasoning strictly concludes with `ACTION REQUIRED.` or `ACTION NOT REQUIRED. Text reply only.` to reliably signal structural intent before opening a JSON block.
## Tool Schema
The model outputs calls from the following 10-tool schema. All tools use exact parameter names.
```text
toggle_lights(room: str, state: 'on'|'off')
# room: living_room | bedroom | kitchen | bathroom | office | hallway

toggle_all_lights(state: 'on'|'off')

lock_door(door: str, state: 'lock'|'unlock')
# door: front | back | garage | side | bedroom | bathroom | office | kitchen | living_room

lock_all_doors(state: 'lock'|'unlock')

set_thermostat(temperature: int, mode: 'heat'|'cool'|'auto')
# temperature range: 60–80°F

set_scene(scene: 'movie_night'|'bedtime'|'morning'|'away'|'party')

control_tv(room: str, state: 'on'|'off')
# room: living_room | bedroom | office

control_fan(room: str, state: 'on'|'off', speed: 'low'|'medium'|'high' = optional)
# room: living_room | bedroom | kitchen | office

control_speaker(room: str, action: 'play'|'pause'|'stop'|'next'|'previous', media: str = optional)
# room: living_room | bedroom | kitchen | office | hallway

intent_unclear(reason: 'off_topic'|'incomplete'|'unsupported_device'|'unsupported_feature')
```
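On the application side, a parsed tool call from this schema still needs to be validated and routed to real device handlers. The sketch below is a hypothetical consumer of the schema, not part of the model release: `dispatch`, `VALID`, and the stub handler are illustrative names, and only two tools are shown.

```python
# Hypothetical dispatcher sketch (not part of the model release): validates a
# parsed tool call against the schema above, then routes it to a handler.

VALID = {
    "toggle_lights": {
        "room": {"living_room", "bedroom", "kitchen", "bathroom", "office", "hallway"},
        "state": {"on", "off"},
    },
    "lock_door": {
        "door": {"front", "back", "garage", "side", "bedroom",
                 "bathroom", "office", "kitchen", "living_room"},
        "state": {"lock", "unlock"},
    },
}

def dispatch(name, args, handlers):
    """Validate parameter values against the schema, then invoke the handler."""
    schema = VALID.get(name)
    if schema is None:
        raise ValueError(f"unknown tool: {name}")
    for param, value in args.items():
        allowed = schema.get(param)
        if allowed is not None and value not in allowed:
            raise ValueError(f"{name}: bad {param}={value!r}")
    return handlers[name](**args)

# Usage with a stub handler:
handlers = {"toggle_lights": lambda room, state: f"{room} light -> {state}"}
print(dispatch("toggle_lights", {"room": "bedroom", "state": "on"}, handlers))
# -> bedroom light -> on
```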
## State Format
Every user message must be prefixed with a `[STATE:]` block:

```text
[STATE: lights={bedroom:on, kitchen:off, living_room:on}, doors={back:locked, front:unlocked}, thermostat=70F/heat, scene=none, tv={bedroom:off, living_room:on}, speaker={kitchen:stopped}, fan={bedroom:on(low)}, current_user_room=kitchen]
```
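Since the `[STATE:]` block is plain text, it can be assembled from whatever device registry your app keeps. Below is a minimal formatter sketch assuming the states live in plain dicts; `format_state` and its signature are illustrative names, not part of the model release.

```python
def _dict_field(d):
    """Render a {room: value} dict in the model's brace notation."""
    return "{" + ", ".join(f"{k}:{v}" for k, v in sorted(d.items())) + "}"

def format_state(lights, doors, thermostat, scene, tv, speaker, fan, user_room):
    """Build the [STATE:] prefix the model expects (illustrative helper)."""
    return (
        "[STATE: "
        f"lights={_dict_field(lights)}, doors={_dict_field(doors)}, "
        f"thermostat={thermostat}, scene={scene}, "
        f"tv={_dict_field(tv)}, speaker={_dict_field(speaker)}, "
        f"fan={_dict_field(fan)}, current_user_room={user_room}]"
    )

state = format_state(
    lights={"bedroom": "on", "kitchen": "off"},
    doors={"front": "unlocked"},
    thermostat="70F/heat", scene="none",
    tv={"living_room": "on"}, speaker={"kitchen": "stopped"},
    fan={"bedroom": "on(low)"}, user_room="kitchen",
)
print(state)
```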
Field breakdown:
| Field | Values | Notes |
|---|---|---|
| `lights` | `room:on\|off` | Only include connected rooms |
| `doors` | `door:locked\|unlocked` | Only include connected doors |
| `thermostat` | `<temp>F/<mode>` | e.g. `72F/heat` |
| `scene` | scene name or `none` | Active scene, or `none` |
| `tv` | `{room:on\|off, ...}` | Dictionary of connected TVs |
| `speaker` | `{room:playing\|paused\|stopped, ...}` | Dictionary of connected speakers |
| `fan` | `{room:on\|off(speed), ...}` | Dictionary of connected fans |
| `current_user_room` | room name or empty | Drives pronoun ("this room") resolution |
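Going the other way, an app may want to parse a `[STATE:]` string back into structured data, for instance to test that the state it sends matches what it believes. Below is a regex-based sketch built on the field formats above; the parser is illustrative and not part of the release.

```python
import re

def parse_state(state_str):
    """Parse a [STATE:] block into nested dicts (illustrative sketch)."""
    out = {}
    body = state_str.strip()[len("[STATE:"):].rstrip("]")
    # Dict-valued fields, e.g. lights={bedroom:on, kitchen:off}
    for field, inner in re.findall(r"(\w+)=\{([^}]*)\}", body):
        out[field] = dict(
            pair.split(":", 1) for pair in inner.split(", ") if pair
        )
    # Scalar fields, e.g. thermostat=70F/heat or current_user_room=kitchen
    for field, value in re.findall(r"(\w+)=([^,{\]]+)(?:,|$)", body):
        if field not in out:
            out[field] = value.strip()
    return out

s = ("[STATE: lights={bedroom:on, kitchen:off}, thermostat=70F/heat, "
     "scene=none, current_user_room=kitchen]")
parsed = parse_state(s)
print(parsed["lights"])            # {'bedroom': 'on', 'kitchen': 'off'}
print(parsed["current_user_room"]) # kitchen
```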
## Usage

### Minimal inference example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "OrdenWills/LFM2.5-350M-home-assistant-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

SYSTEM_PROMPT = """You are a smart home assistant AI. Use tools to control the home.
Output function calls as JSON.

TOOLS:
toggle_lights(room, state='on'|'off')
toggle_all_lights(state='on'|'off')
lock_door(door, state='lock'|'unlock')
lock_all_doors(state='lock'|'unlock')
set_thermostat(temperature=<int>, mode='heat'|'cool'|'auto')
set_scene(scene='movie_night'|'bedtime'|'morning'|'away'|'party')
control_tv(room, state='on'|'off')
control_fan(room, state='on'|'off'[, speed='low'|'medium'|'high'])
control_speaker(room, action='play'|'pause'|'stop'|'next'|'previous'[, media='<str>'])
intent_unclear(reason='off_topic'|'incomplete'|'unsupported_device'|'unsupported_feature')

CONNECTED ROOMS (lights): living_room, bedroom, kitchen, bathroom, office, hallway
CONNECTED DOORS: front, back, garage
CONNECTED TVs: living_room, bedroom
CONNECTED SPEAKERS: living_room
CONNECTED FANS: bedroom

STATE RULES:
[STATE:] shows all current device states.
State already matches request → plain text reply, NO tool call.
Only rooms listed under CONNECTED TVs/SPEAKERS/FANS have those devices.
Requesting a device in an unlisted room → intent_unclear(unsupported_device).

TV / SPEAKER / FAN RESOLUTION when user says 'the TV'/'the fan'/'the speaker':
1. Exactly one connected → use that room automatically.
2. Multiple connected + current_user_room has device → use current_user_room.
3. Multiple connected + exactly ONE is in the eligible state for the action
   (e.g. only one TV is on and user says 'turn off the TV') → infer that room.
4. Multiple connected + ambiguous (rule 2 & 3 don't apply) → intent_unclear(incomplete).

LIGHT / DOOR RESOLUTION:
current_user_room set + connected → use current_user_room.
current_user_room set + NOT connected → intent_unclear(unsupported_device).
current_user_room empty → intent_unclear(incomplete).

[RECENT ACTIONS:] → transaction log, newest entry first. Format:
(X mins ago) [call1, call2, ...] -> summary.
Each [...] bracket is ONE command the user previously issued.
For 'undo'/'reverse'/'back': invert ONLY the most recent transaction
(the FIRST [...] block). Older transactions are always ignored.
For pronouns ('it'/'them'): refer to the device(s) in the first [...] block.
Do NOT use recent actions to infer which room 'the light' or 'the door'
refers to when current_user_room is explicitly set — current_user_room wins.
For 'all lights' / 'all doors': use toggle_all_lights / lock_all_doors
regardless of current_user_room.

SYNONYMS: 'open'='unlock'; 'close'/'shut'='lock'; 'skip'='next';
'back'='previous' (for speaker track navigation), but can also mean 'undo' for
reverting device states based on [RECENT ACTIONS].
'continue'/'resume'/'on the music'='play'; 'play <song/artist>' = action='play' + media='<str>'.
Relative state clauses ('the light that is on', 'the door that is locked')
override current_user_room — check STATE and act on the matching device."""

state = "[STATE: lights={bathroom:off, bedroom:off, hallway:off, kitchen:off, living_room:on, office:off}, doors={back:locked, front:locked, garage:unlocked}, thermostat=70F/heat, scene=none, tv={bedroom:off, living_room:on}, speaker={living_room:stopped}, fan={bedroom:off(medium)}, current_user_room=kitchen]"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": f"{state}\nTurn off the TV."},
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=256,
        temperature=0.1,  # low temperature for near-deterministic tool calls
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

response = tokenizer.decode(
    output[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```

Expected behavior: the model generates a `<think>` trace noting that only the living_room TV is currently ON, infers that the user wants to turn off the living_room TV, concludes with `ACTION REQUIRED.`, and emits the tool call.
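A completion typically contains the optional `<think>` trace, the `ACTION REQUIRED.` trigger, and then a JSON block. A possible post-processor is sketched below. Note that the exact JSON shape of the tool call is an assumption here (the demo string is made up), so inspect real completions from your deployment before relying on this.

```python
import json
import re

def extract_tool_call(completion):
    """Strip any <think> trace and pull the first JSON object, if the
    ACTION REQUIRED. trigger is present. The JSON shape is an assumption;
    verify against real model output."""
    text = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL)
    if "ACTION REQUIRED." not in text:
        return None  # plain-text reply; no tool call expected
    match = re.search(r"\{.*\}", text, flags=re.DOTALL)
    return json.loads(match.group(0)) if match else None

# Hypothetical completion for "Turn off the TV." (shape is assumed):
demo = (
    "<think>Only the living_room TV is on.</think>\n"
    "ACTION REQUIRED.\n"
    '{"name": "control_tv", "arguments": {"room": "living_room", "state": "off"}}'
)
call = extract_tool_call(demo)
print(call["name"], call["arguments"])
```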
## GGUF / Ollama

```bash
# Pull the recommended Q4_K_M quantization
ollama run hf.co/OrdenWills/LFM2.5-350M-home-assistant-sft:Q4_K_M

# Or use the higher-precision Q8_0 version
ollama run hf.co/OrdenWills/LFM2.5-350M-home-assistant-sft:Q8_0
```
## Training Details (Stable)
This release was fine-tuned directly on a 70,000-example state-aware synthetic dataset ("v13 Final Merge").
| Parameter | Value |
|---|---|
| Base model | LiquidAI/LFM2.5-350M |
| Dataset Size | 70,000 examples |
| Categories | 33 distinct instruction schemas |
| Think Traces | Extensive (Present in majority of complex scenarios) |
| Hardware | Kaggle T4 (16 GB) |
### Training Categories
| Category | Description | Think Trace? |
|---|---|---|
| `already_satisfied` / `state_grounding` | Graceful replies if the device is already in the requested state. Forces reading of state arrays. | Yes / No |
| `action_required` / `relative_clause` | Standard explicit device triggers and relative logic ("turn off the light that is on"). | Yes / No |
| `user_room_lights`/`doors` | Resolves "the light" / "the door" based on `current_user_room`. | Yes |
| `bulk_plus_local_door` | Complex compound logic mixing global state-aware commands with implicit local commands. | Yes |
| `action_log_*` / `them_plurality` | "Undo", "repeat", "same for bedroom", and resolving plural pronouns using `[RECENT ACTIONS:]`. | Yes |
| `scenes`/`thermostat` | Standard NLP triggering for scenes and temperature bounds. | No |
| `rejections`/`missing` | "Make me coffee", unconnected rooms, and resolving `incomplete` vs `unsupported_device`. | Yes / No |
| `compound_count` | Parallel tool calling with forced sub-action counting before generation. | Yes |
| `status_queries` | "Are the lights on?" / "What is the thermostat set to?" plain-text answers. | Yes |
| `tv`/`speaker`/`fan` commands | Multi-device disambiguation logic (state inference via Rule 3 vs implicit fallback). | Yes |
| `local_media_commands` | Parsing local track titles via the `media` parameter for specific song playback. | Yes |
Selective Thinking (<think>...): For complex tasks (e.g., bulk state updates, parsing action logs, determining which TV the user meant based on state arrays, counting compound actions), the model is trained to output a reasoning trace before making the tool call. For direct explicit commands, it skips thinking entirely for speed. All thinking traces now end with explicitly formulated ACTION REQUIRED. triggers.
## Known Limitations

- **Temperature range is fixed at 60–80°F.** Requests outside this range produce a plain-text explanation, not a tool call.
- **No brightness or colour control.** Dimming and colour-change requests correctly trigger `intent_unclear(reason='unsupported_feature')`. This is by design — the connected lights only support on/off.
- **Local music library only.** The speaker-control `media` parameter maps to specific tracks from a bounded internal list of artists and songs. Out-of-domain conversational queries will likely trigger `intent_unclear(reason='off_topic')`.
- **English only.** All training data is English. Performance in other languages is untested.
- **State must be accurate.** The model trusts `[STATE:]` completely. If your app sends stale state, the model may incorrectly say a device is already in the requested state, or infer the wrong device during disambiguation.
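Given the fixed thermostat range, an app can also guard `set_thermostat` calls client-side before touching hardware. A trivial sketch (the helper name is illustrative, not part of the release):

```python
# Fixed range from the tool schema above.
THERMOSTAT_MIN, THERMOSTAT_MAX = 60, 80

def thermostat_in_range(temperature):
    """Check a requested temperature against the supported 60-80 F range
    before forwarding a set_thermostat call to real hardware."""
    return THERMOSTAT_MIN <= temperature <= THERMOSTAT_MAX

print(thermostat_in_range(72))  # True
print(thermostat_in_range(90))  # False
```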
## Citation

```bibtex
@misc{lfm2-home-assistant-2026,
  author    = {OrdenWills},
  title     = {LFM2.5-350M Home Assistant: A Purpose-Trained Smart Home Automation Model},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/OrdenWills/LFM2.5-350M-home-assistant-sft}
}
```
## Acknowledgements
- LiquidAI for the incredible and highly capable LFM2.5-350M base model.
- Unsloth for the fine-tuning framework.