---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
tags:
- fake news detection
- propaganda
- fake news
- propaganda detection
- manipulative constructions analysis
- offensive language analysis
- peft
- text-generation
- summarization
license: apache-2.0
---

## Model Description

A fine-tuned Mistral-7B model for detecting and analyzing fake news, propaganda, and offensive language in English-language news articles. It was fine-tuned with the PEFT/LoRA approach using 4-bit quantization. Given a news text, the model detects and analyzes fake news and propaganda, identifies manipulative constructions in the text, and highlights offensive language.

## How to Get Started with the Model

The fine-tuned model can be tested on Google Colab using an NVIDIA A100 or L4 GPU.

Package installation:

```shell
pip install transformers bitsandbytes peft
```

Use the code below to get started with the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from huggingface_hub import login

# Log in to Hugging Face to load the Mistral LLM
login("Huggingface access token")

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
peft_model_name = "bpavlsh/Mistral-Fake-News-Detection"

tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,
    device_map="auto",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(base_model, peft_model_name)

text = """
News text for analysis, from 1 KB to 10 KB
"""

prompt = f"""[INST] <<SYS>>
You are an expert in analyzing news for fake content, propaganda, and offensive language.
<</SYS>>
Please analyze the following text: {text} [/INST]"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=1500)
output_result = tokenizer.decode(output[0], skip_special_tokens=True)

# The decoded output repeats the prompt; keep only the model's answer
result = output_result.split("[/INST]")[1]
print(f"\n{result}")
```

## References

- Pavlyshenko B.M. Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model. arXiv preprint arXiv:2309.04704, 2023. PDF: https://arxiv.org/pdf/2309.04704.pdf
- Pavlyshenko B.M. Financial News Analytics Using Fine-Tuned Llama 2 GPT Model. arXiv preprint arXiv:2308.13032, 2023. PDF: https://arxiv.org/pdf/2308.13032.pdf
- Pavlyshenko B.M. AI Approaches to Qualitative and Quantitative News Analytics on NATO Unity. arXiv preprint arXiv:2505.06313, 2025. PDF: https://arxiv.org/pdf/2505.06313

## Disclaimer

The model and its results are shared for academic purposes only; they do not constitute advice or recommendations.

## Contacts

B. Pavlyshenko, https://www.linkedin.com/in/bpavlyshenko
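The prompt construction and response extraction used in the getting-started snippet are plain string operations, so they can be factored into small reusable helpers and sanity-checked without a GPU. The sketch below assumes the same `[INST]` template as above; the function names (`build_prompt`, `extract_response`, `within_size_limits`) are illustrative and not part of the released model:

```python
DEFAULT_SYSTEM = (
    "You are an expert in analyzing news for fake content, "
    "propaganda, and offensive language."
)

def build_prompt(text: str, system: str = DEFAULT_SYSTEM) -> str:
    """Wrap a news text in the instruction template used by this model card."""
    return (
        f"[INST] <<SYS>>\n{system}\n<</SYS>>\n"
        f"Please analyze the following text: {text} [/INST]"
    )

def extract_response(decoded: str) -> str:
    """Return only the model's answer from the decoded output.

    The decoded generation echoes the prompt, so everything after the
    final [/INST] marker is the model's analysis.
    """
    return decoded.split("[/INST]")[-1].strip()

def within_size_limits(text: str, lo: int = 1_000, hi: int = 10_000) -> bool:
    """Simple byte-length check for the recommended 1 KB to 10 KB input range."""
    return lo <= len(text.encode("utf-8")) <= hi

# Usage: build a prompt, then strip the echoed prompt from a decoded output.
prompt = build_prompt("Example news text.")
decoded = prompt + " The text contains no propaganda."  # stand-in for tokenizer.decode(...)
print(extract_response(decoded))  # -> "The text contains no propaganda."
```

Checking the input size before generation helps stay within the range the card was tuned and tested on, and splitting on the last `[/INST]` is slightly more robust than indexing `[1]` in case the marker appears inside the news text itself.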