This is a fine-tuned version of the Qwen3-14B model trained on the high-quality OpenThoughts2-1M dataset. Fine-tuned with Unsloth's TRL-compatible framework and LoRA for efficient training, this model is optimized for advanced reasoning tasks, especially math, logic puzzles, code generation, and step-by-step problem solving.

## Training Dataset

- **Dataset:** OpenThoughts2-1M
- **Source:** A synthetic dataset curated and expanded by the OpenThoughts team
- **Volume:** ~1.1M high-quality examples
- **Content Type:** Multi-turn reasoning, math proofs, algorithmic code generation, logical deduction, and structured conversations
- **Tools Used:** Curator Viewer

This dataset builds upon OpenThoughts-114k and integrates strong reasoning-centric data sources such as OpenR1-Math and KodCode.

## Intended Use

This model is particularly suited for:

- Chain-of-thought and step-by-step reasoning
- Code generation with logical structure
- Educational tools for math and programming
- AI agents requiring multi-turn problem solving
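The LoRA technique mentioned above can be sketched numerically. This is an illustrative toy, not the actual training code: LoRA freezes a base weight matrix `W` and learns two small matrices `A` and `B` so that the effective weight is `W + (alpha / r) * B @ A`, which is why only a tiny fraction of parameters needs to be trained. The matrix sizes below are made-up toy values.

```python
# Toy illustration of the LoRA idea (not the actual fine-tuning code).
# LoRA keeps the base weight W frozen and trains only a low-rank update
# B @ A, scaled by alpha / r.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4  # toy sizes; real ranks are often 8-64

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x):
    """Base-layer output plus the scaled low-rank adapter update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) for LoRA vs d_in*d_out for
# full fine-tuning of this layer.
print(r * (d_in + d_out), "vs", d_in * d_out)  # → 32 vs 64
```

At realistic sizes the savings are far larger: for a 14B-parameter model, rank-16 adapters on the attention projections train well under 1% of the total weights, which is what makes single-GPU fine-tuning with frameworks like Unsloth practical.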