Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:

- Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across scenarios.
- Significantly enhanced reasoning capabilities, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models on complex agent-based tasks.
- Support for 100+ languages and dialects, with strong multilingual instruction following and translation.

Qwen3-4B has the following features:

- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 tokens natively, 131,072 tokens with YaRN
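As a sketch of the thinking/non-thinking switch described above, the snippet below assumes the base checkpoint is published as `Qwen/Qwen3-4B` and that its chat template accepts the documented `enable_thinking` flag; adapt the model id if you are loading a different variant.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"  # assumed Hugging Face repo id for the base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 50?"}]

# enable_thinking=True (the default) makes the model emit a <think>...</think>
# reasoning block first; set it to False for fast, non-thinking dialogue.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```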
Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3, finetuned on our DevOps, architecture-reasoning, and code-reasoning data generated with DeepSeek R1. It also brings improved general and creative reasoning to supplement problem-solving and general chat performance. Its small model sizes allow it to run on local desktops and mobile devices, plus super-fast server inference!
We introduce the updated version of the Qwen3-4B non-thinking mode, named Qwen3-4B-Instruct-2507, featuring the following key enhancements:

- Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage.
- Substantial gains in long-tail knowledge coverage across multiple languages.
- Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
- Enhanced capabilities in 256K long-context understanding.
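A minimal generation sketch for the instruct variant, assuming the standard `transformers` chat-template workflow and the repo id `Qwen/Qwen3-4B-Instruct-2507`; since this is a non-thinking model, there is no `<think>` block to strip from the output.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Instruct-2507"  # assumed repo id for the 2507 instruct release
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize the trade-offs between REST and gRPC."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```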
Over the past three months, we have continued to scale the thinking capability of Qwen3-4B, improving both the quality and depth of reasoning. We are pleased to introduce Qwen3-4B-Thinking-2507, featuring the following key enhancements:

- Significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise.
- Markedly better general capabilities, such as instruction following, tool usage, text generation, and alignment with human preferences.
- Enhanced 256K long-context understanding capabilities.

NOTE: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.
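Because the thinking model emits its reasoning before the final answer, a common pattern is to split the completion at the closing `</think>` tag. The sketch below assumes the repo id `Qwen/Qwen3-4B-Thinking-2507` and uses a plain string split for simplicity.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Thinking-2507"  # assumed repo id for the 2507 thinking release
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Thinking models need a generous token budget: the reasoning block comes first.
output_ids = model.generate(**inputs, max_new_tokens=4096)
completion = tokenizer.decode(output_ids[0][len(inputs.input_ids[0]):], skip_special_tokens=True)

# Everything before </think> is the reasoning trace; everything after is the answer.
reasoning, _, answer = completion.partition("</think>")
print(answer.strip())
```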
This is a 4B-sized agentic reasoning model, finetuned from Qwen3-4B-Instruct-2507 on our 3k-example agentic SFT dataset. In our work, we systematically investigate three dimensions of agentic RL: data, algorithms, and reasoning modes. Our findings reveal:

- 🎯 Data quality matters: real end-to-end trajectories and high-diversity datasets significantly outperform synthetic alternatives.
- ⚡ Training efficiency: exploration-friendly techniques such as reward clipping and entropy maintenance boost training efficiency.
- 🧠 Reasoning strategy: deliberative reasoning with selective tool calls surpasses frequent invocation or verbose self-reasoning.

We contribute high-quality SFT and RL datasets, demonstrating that simple recipes enable even 4B models to outperform 32B models on the most challenging reasoning benchmarks.
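The entry above does not name its checkpoint, so the sketch below stands in the base `Qwen/Qwen3-4B-Instruct-2507` and a single hypothetical `get_weather` tool to illustrate the selective tool-call pattern: the tool schema is rendered into the chat template, and the model decides per turn whether to emit a tool call or answer directly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22°C"  # hypothetical stub; a real agent would call an API here

model_name = "Qwen/Qwen3-4B-Instruct-2507"  # stand-in base; swap in the agentic checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Do I need an umbrella in Berlin today?"}]

# The tool signature and docstring are converted to a JSON schema and injected
# into the prompt; the model may respond with a <tool_call> block or plain text.
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```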
**Model Name:** Qwen3-4B-Thinking-2507-GSPO-Easy
**Base Model:** Qwen3-4B (by Alibaba Cloud)
**Fine-tuned With:** GRPO (Group Relative Policy Optimization)
**Framework:** Hugging Face TRL (Transformer Reinforcement Learning)
**License:** [MIT](https://huggingface.co/leonMW/Qwen3-4B-Thinking-2507-GSPO-Easy/blob/main/LICENSE)

---

### 📌 Description:

A fine-tuned 4-billion-parameter version of **Qwen3-4B**, optimized for **step-by-step reasoning and complex problem-solving** using **GRPO**, a reinforcement learning method designed to enhance mathematical and logical reasoning in language models. This model excels at tasks requiring **structured thinking**, such as solving math problems, logical puzzles, and multi-step reasoning, making it well suited to education, AI assistants, and reasoning benchmarks.

### 🔧 Key Features:

- Trained with **TRL 0.23.1** and **Transformers 4.57.1**
- Optimized for **high-quality reasoning output**
- Part of the **Qwen3-4B-Thinking** series, designed to simulate human-like thought processes
- Compatible with the Hugging Face `transformers` and `pipeline` APIs

### 📚 Use Case:

Perfect for applications demanding **deep reasoning**, such as:

- AI tutoring systems
- Advanced chatbots with explanation capabilities
- Automated problem-solving in STEM domains

### 📌 Quick Start (Python):

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Build a text-generation pipeline on GPU and chat with the model.
generator = pipeline(
    "text-generation",
    model="leonMW/Qwen3-4B-Thinking-2507-GSPO-Easy",
    device="cuda",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```

> ✅ **Note**: This is the **original, non-quantized base model**. Quantized versions (e.g., GGUF) are available separately under the same repository for efficient inference on consumer hardware.

---

🔗 **Model Page:** [https://huggingface.co/leonMW/Qwen3-4B-Thinking-2507-GSPO-Easy](https://huggingface.co/leonMW/Qwen3-4B-Thinking-2507-GSPO-Easy)
📝 **Training Details & Visualizations:** [WandB Dashboard](https://wandb.ai/leonwenderoth-tu-darmstadt/huggingface/runs/t42skrc7)

---

*Fine-tuned using GRPO, a method proven to boost mathematical reasoning in open language models. Cite: Shao et al., 2024 (arXiv:2402.03300)*
Repository: localai
License: apache-2.0
**Model Name:** Spiral-Qwen3-4B-Multi-Env
**Base Model:** Qwen3-4B (fine-tuned variant)
**Repository:** [spiral-rl/Spiral-Qwen3-4B-Multi-Env](https://huggingface.co/spiral-rl/Spiral-Qwen3-4B-Multi-Env)
**Quantized Version:** Available via GGUF (by mradermacher)

---

### 📌 Description:

Spiral-Qwen3-4B-Multi-Env is a fine-tuned, instruction-optimized version of the Qwen3-4B language model, specifically enhanced for multi-environment reasoning and complex task execution. Built upon the foundational Qwen3-4B architecture, this model demonstrates strong performance in coding, logical reasoning, and domain-specific problem-solving across diverse environments.

The model was developed by **spiral-rl**, with contributions from the community, and is designed to support advanced, real-world applications requiring robust reasoning, adaptability, and structured output generation. It is optimized for use in constrained environments, making it ideal for edge deployment and low-latency inference.

---

### 🔧 Key Features:

- **Architecture:** Qwen3-4B (decoder-only, Transformer-based)
- **Fine-tuned For:** Multi-environment reasoning, instruction following, and complex task automation
- **Language Support:** English (primary), with strong multilingual capability
- **Model Size:** 4 billion parameters
- **Training Data:** Proprietary and public datasets focused on reasoning, coding, and task planning
- **Use Case:** Ideal for agent-based systems, automated workflows, and intelligent decision-making in dynamic environments

---

### 📦 Availability:

While the original base model is hosted at `spiral-rl/Spiral-Qwen3-4B-Multi-Env`, a **quantized GGUF version** is available for efficient inference on consumer hardware (a loading sketch follows after this card):

- **Repository:** [mradermacher/Spiral-Qwen3-4B-Multi-Env-GGUF](https://huggingface.co/mradermacher/Spiral-Qwen3-4B-Multi-Env-GGUF)
- **Quantizations:** Q2_K to Q8_0 (including IQ4_XS) and f16, with Q4_K_M recommended for a balance of speed and quality

---

### 💡 Ideal For:

- Local AI agents
- Edge deployment
- Code generation and debugging
- Multi-step task planning
- Research in low-resource reasoning systems

---

> ✅ **Note:** The model card above reflects the *original, unquantized base model*. The quantized version (GGUF) is optimized for performance but may have minor quality trade-offs. For full fidelity, use the base model with full precision.
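As referenced in the Availability section, here is a minimal sketch of running one of the GGUF quantizations with `llama-cpp-python`. The glob filename is an assumption; pick the exact quant file (e.g. the Q4_K_M build) listed in the mradermacher repo.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads a matching GGUF file from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Spiral-Qwen3-4B-Multi-Env-GGUF",
    filename="*Q4_K_M.gguf",  # assumed filename pattern for the recommended quant
    n_ctx=8192,               # context window for this session
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Plan the steps to containerize a Flask app."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```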
Repository: localai
License: apache-2.0