29-08-2025
Qwen Chat Web Dev Streamlines Frontend Coding
Feature
Qwen Chat Web Dev, introduced by Alibaba, empowers developers to create websites using React or HTML with TailwindCSS, incorporating animations and modern UI patterns. It delivers clean, single-file code with built-in libraries like Framer Motion and Three.js, requiring no external dependencies. Ideal for prototyping or building dynamic interfaces, this AI tool enhances efficiency. Try it now at chat.qwen.ai to elevate your frontend development.
|
30-07-2025 |
Qwen3-Coder Enhances Anycoder’s AI-Powered Coding |
Feature |
Anycoder, an AI-driven code generator, now uses Qwen3-Coder as its default model to streamline application development. This upgrade boosts productivity by enabling users to create web applications from plain English descriptions, supporting languages like HTML, Python, and JavaScript. With features like multimodal input and one-click deployment to Hugging Face Spaces, it simplifies the coding process for developers. Explore Anycoder to accelerate your next project with this powerful tool. |
|
29-07-2025 |
Qwen3-30B-A3B Boosts AI with Smarter, Faster Performance |
Feature |
Alibaba's Qwen3-30B-A3B, a Mixture-of-Experts model with 30 billion total parameters and only 3 billion active per token, delivers enhanced reasoning, coding, and math capabilities, rivaling GPT-4o and larger models. It now supports 256K-token context windows and excels in multilingual tasks and user-aligned responses, all while operating in a streamlined non-thinking mode. Ideal for local deployment, it offers high efficiency for developers and researchers. Explore its capabilities on Qwen Chat or Hugging Face for advanced AI solutions.
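For local experimentation, a minimal Hugging Face transformers sketch along the following lines should work; the repository name Qwen/Qwen3-30B-A3B-Instruct-2507 and the generation settings are assumptions to verify against the Qwen collection on Hugging Face.

```python
# Minimal local-inference sketch; the repo id below is an assumption, so check the
# Qwen collection on Hugging Face for the exact identifier of the updated checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B-Instruct-2507"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Outline a proof that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```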
|
25-07-2025 |
Qwen3-MT Unveiled: Alibaba’s Advanced Translation Model |
Company News |
Alibaba Cloud’s Qwen3-MT delivers high-quality translations across 92+ languages, leveraging a Mixture of Experts architecture for speed and cost-efficiency. Trained on trillions of tokens, it offers customizable features like terminology control and domain prompts, ensuring accuracy and fluency. Ideal for global businesses, it supports scalable, low-latency translation via the Qwen API. Explore its capabilities through Alibaba’s Model Studio or demo platforms to enhance cross-lingual communication. |
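A hedged sketch of a translation request through the OpenAI-compatible Qwen API is shown below; the model name qwen-mt-turbo, the endpoint URL, and the translation_options fields (including the terminology hint) are assumptions modeled on Alibaba Cloud Model Studio documentation and should be verified against the current API reference.

```python
from openai import OpenAI

# All identifiers below (endpoint, model name, translation_options fields) are
# assumptions based on Alibaba Cloud Model Studio docs; verify before use.
client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-mt-turbo",
    messages=[{"role": "user", "content": "我们的产品支持低延迟翻译。"}],
    extra_body={
        "translation_options": {
            "source_lang": "auto",
            "target_lang": "English",
            # Optional terminology control: pin a preferred rendering of a term.
            "terms": [{"source": "低延迟", "target": "low-latency"}],
        }
    },
)
print(response.choices[0].message.content)
```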
|
22-07-2025 |
Qwen3-235B-A22B-2507 Boosts AI Performance |
Company News |
Alibaba's Qwen team has released Qwen3-235B-A22B-2507, an advanced large language model that replaces its predecessor with a focus on stronger instruction following and reasoning. This Mixture-of-Experts model, with 235 billion total parameters and 22 billion activated per inference, excels in mathematics, coding, and multilingual tasks, supporting a 256K token context window. By abandoning the hybrid thinking mode in favor of separate Instruct and Thinking models, it ensures optimal quality for diverse applications. Explore its capabilities on platforms like Hugging Face or Qwen Chat.
|
13-07-2025 |
Qwen3-Embedding Enhances Accuracy with Automatic Token Addition |
Feature |
The Qwen3-Embedding-0.6B-GGUF model, designed for advanced text embedding tasks, requires appending the <|endoftext|> token to input context for optimal accuracy, as highlighted by Alibaba's Qwen team. Failure to include this token can reduce performance in tasks like text retrieval and classification. An upcoming update to the GGUF model package will automate this process via llama.cpp, simplifying development workflows. Developers can refer to the model card on Hugging Face for detailed guidance. |
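Until that update lands, a minimal llama-cpp-python sketch that appends the token manually might look like the following; the GGUF file name is an assumption, and pooling settings should follow the model card.

```python
from llama_cpp import Llama

# File name is an assumption; point this at the GGUF file you downloaded from the
# Qwen3-Embedding-0.6B-GGUF repository on Hugging Face.
llm = Llama(model_path="Qwen3-Embedding-0.6B-Q8_0.gguf", embedding=True)

def embed(text: str):
    # Append the end-of-text marker manually until the packaged llama.cpp update
    # does it automatically, to avoid degraded retrieval and classification accuracy.
    return llm.embed(text + "<|endoftext|>")

vec = embed("What is the capital of France?")
print(len(vec))
```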
|
12-07-2025 |
Qwen Chat for Desktop Enhances Productivity |
Feature |
Alibaba's Qwen Chat for Desktop, powered by the Qwen3 model, introduces Model Context Protocol (MCP) support, enabling smarter and faster AI agents. This desktop application allows users to leverage advanced language processing for tasks like text creation, coding, and data analysis. With seamless integration of tools via MCP, it boosts efficiency for professionals and developers. Explore Qwen Chat for Desktop at qwen.ai to streamline your workflow. |
|
30-06-2025 |
Qwen-TTS Unveiled: Advanced Speech Synthesis Launched |
Feature |
Qwen-TTS, a cutting-edge text-to-speech model, delivers natural and expressive audio through the Qwen API. Trained on extensive speech data, it supports three Chinese dialects and seven bilingual voices, ensuring versatile and high-quality outputs. Ideal for developers seeking premium audio solutions, it excels in prosody and emotional delivery. Explore Qwen-TTS today at the Alibaba Cloud Model Studio. |
|
27-06-2025 |
Qwen-VLo Unveiled: Transform Ideas into Stunning Visuals |
Feature |
Qwen-VLo, a new multimodal AI model from Alibaba’s Qwen team, converts text prompts or sketches into high-resolution visuals with ease. It supports real-time image editing, multilingual image generation, and progressive scene creation, ensuring semantic consistency and creative flexibility. Ideal for designers, marketers, and educators, it simplifies complex visual tasks like style transfers and object modifications. Explore Qwen-VLo at chat.qwen.ai to bring your ideas to life. |
|
16-06-2025 |
Qwen3 Models Now Available in MLX Format |
Company News |
Qwen3, developed by Alibaba Cloud, introduces optimized models in MLX format, available at four precision levels: 4-bit, 6-bit, 8-bit, and BF16. These models enhance efficiency for local deployment on Apple silicon, offering robust performance in coding, math, and general tasks. Ideal for developers seeking flexible AI solutions, Qwen3 ensures seamless integration into various workflows. Visit Hugging Face or ModelScope to explore and deploy Qwen3 today.
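On Apple silicon, a short mlx-lm sketch is usually enough to try one of these builds; the repository name Qwen/Qwen3-8B-MLX-4bit is an assumption, so substitute the size and precision level you actually need.

```python
from mlx_lm import load, generate

# Assumed repo id; pick the Qwen3 size and precision (4bit/6bit/8bit/BF16) you need.
model, tokenizer = load("Qwen/Qwen3-8B-MLX-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the benefits of MoE models in two sentences."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```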
|
13-06-2025 |
Alibaba’s Qwen Boosts Open-Source AI Innovation |
Company News |
Alibaba’s Qwen AI model series secured fifth place on Hugging Face’s Heatmap, with 201 models open-sourced last year. Over 130,000 derivative models built on Qwen highlight its global adoption by developers. This success underscores Alibaba’s leadership in fostering collaborative AI development through open-source contributions. Explore Qwen’s impact to stay ahead in AI innovation. |
|
09-06-2025 |
Alibaba’s Qwen3 AI Models Hit 12.5 Million Downloads |
Company News |
Alibaba’s Qwen3, a cutting-edge large language model series, has achieved over 12.5 million global downloads within a month of its launch. Four of its sizes (0.6B, 8B, 30B, and 32B) have each surpassed one million downloads on platforms like Hugging Face, where over 130,000 Qwen-based derivative models have been created. Its hybrid reasoning and multilingual capabilities make it a top choice for developers worldwide. Visit Alibaba Cloud’s ModelScope to explore Qwen3’s potential for your projects.
|
05-06-2025 |
Qwen3 Debuts as Alibaba’s Advanced Multilingual AI Solution |
Company News |
Alibaba’s Qwen3 series expands with the newly launched Qwen3-Embedding and Qwen3-Reranker models, which support 119 languages. Available in 0.6B, 4B, and 8B variants, they excel in tasks like document retrieval and code search, topping the MMTEB and MTEB-Code benchmarks. Open-sourced on Hugging Face, GitHub, and ModelScope, Qwen3 empowers developers with scalable, high-performance tools.
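As a rough illustration of the retrieval use case, a sentence-transformers sketch with the 0.6B embedding model could look like the following; the query-prompt handling recommended on the model card is omitted here for brevity, so treat this as a starting point rather than the reference usage.

```python
from sentence_transformers import SentenceTransformer

# Simplified retrieval sketch; consult the model card for the recommended query prompt.
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["How do I reset my password?"]
documents = [
    "To reset your password, open Settings and choose Security.",
    "Our office is closed on public holidays.",
]

query_emb = model.encode(queries)
doc_emb = model.encode(documents)
print(model.similarity(query_emb, doc_emb))  # higher score = closer match
```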
|
19-05-2025 |
Qwen Web Dev Simplifies Website Creation |
Feature |
Alibaba’s Qwen Web Dev, integrated into Qwen Chat, enables users to build and deploy websites using a single prompt, eliminating the need for coding expertise. This tool generates functional front-end code for web applications, such as social media platforms or contact forms, and now offers one-click deployment for instant sharing. Powered by the Qwen3 model series, it supports creative and professional projects with ease. Visit chat.qwen.ai to start building your website today. |
|
17-05-2025 |
Qwen Unveils Scalable World Preference Modeling |
AI Innovation Update |
Alibaba's Qwen team introduces World Preference Modeling (WorldPM), a scalable approach to human preference modeling, supported by experiments with 15 million preference pairs on Qwen2.5 models (1.5B to 72B parameters). The research reveals logarithmic loss reduction and emergent scaling trends in larger models, establishing WorldPM as a robust foundation for preference fine-tuning. Open-source resources, including the WorldPM-72B model, are now available. Access the paper and models to explore this innovative framework.
|
17-05-2025 |
Qwen2.5-Omni-7B Quantized Models Released Today |
Feature |
The Qwen team at Alibaba Cloud has launched quantized versions of the Qwen2.5-Omni-7B model, enhancing accessibility for developers. These models, available on Hugging Face and ModelScope, support multimodal inputs like text, images, audio, and video, delivering efficient real-time responses. They are designed to reduce VRAM consumption, making them ideal for lightweight GPU setups. Explore the Qwen2.5-Omni collection to experience its versatile capabilities. |
|
13-05-2025 |
Alibaba’s Qwen Chat Launches Deep Research Tool for All Users |
Feature |
Alibaba’s Qwen Chat has officially rolled out its Deep Research feature, enabling users to explore topics like robotics with tailored, in-depth reports. Simply ask a question, refine your focus when prompted, and Qwen generates a clear, concise report while you take a break. Designed to simplify learning and curiosity-driven exploration, this free tool enhances work and personal projects with AI-driven insights. Try Deep Research on Qwen Chat today and uncover something new! |
|
13-05-2025 |
Alibaba Releases Qwen3 Technical Report for Advanced AI Insights |
Reports |
Alibaba's Qwen team has published the Qwen3 Technical Report, detailing the latest advancements in their large language model series. The report highlights Qwen3’s enhanced reasoning, coding, and multilingual capabilities, offering valuable insights for developers and researchers. Available on GitHub, it showcases the model’s competitive performance against leading AI models. Explore the report to understand Qwen3’s potential in driving AI innovation. |
|
12-05-2025 |
Qwen3 Quantized Models Released for Easy Local Deployment |
New Releases |
Alibaba's Qwen team has launched quantized Qwen3 models, now available in GGUF, AWQ, and GPTQ formats. These models support local deployment via platforms like Ollama, LM Studio, SGLang, and vLLM. Developers can access them on Hugging Face and ModelScope for enhanced AI integration. Explore the Qwen3 collection to streamline your AI projects. |
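For the AWQ and GPTQ checkpoints, a minimal vLLM sketch is one way to serve them locally, while GGUF files are typically run through Ollama or LM Studio instead; the repository name Qwen/Qwen3-8B-AWQ below is an assumption.

```python
from vllm import LLM, SamplingParams

# Assumed repo id; substitute the quantized Qwen3 checkpoint you pulled from Hugging Face.
llm = LLM(model="Qwen/Qwen3-8B-AWQ")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```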
|
09-05-2025 |
Qwen Chat’s Web Dev Tool Simplifies Frontend Webpage Creation |
Feature |
Qwen Chat, powered by Alibaba Cloud, has launched Web Dev, a groundbreaking feature that generates fully functional frontend webpages from a single text prompt, like “create a Twitter website.” This no-code solution delivers instant, editable code, making web development accessible to all skill levels. Integrated into Qwen Chat’s intuitive platform, Web Dev streamlines app creation with stunning designs. Try it now to bring your web ideas to life effortlessly. |
|
06-05-2025 |
Unsloth AI Simplifies Qwen3 Fine-Tuning with Free Colab Notebook |
Service |
Unsloth AI’s new notebook enables free fine-tuning of Qwen3 (14B) on a Tesla T4, offering 2x faster training and 70% less VRAM usage. The guide walks through data preparation, model training, and saving, using datasets like Open Math Reasoning for optimal performance. Achieve high accuracy with Unsloth’s Dynamic 2.0 quants and 8x longer context lengths. Start fine-tuning now with Unsloth’s Colab resources at docs.unsloth.ai.
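A condensed sketch of that fine-tuning flow is shown below; the model name unsloth/Qwen3-14B and the LoRA hyperparameters are assumptions based on typical Unsloth notebooks rather than the exact Colab configuration.

```python
from unsloth import FastLanguageModel

# Assumed model name and hyperparameters; adjust to match the official notebook.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",
    max_seq_length=2048,
    load_in_4bit=True,  # keeps VRAM low enough for a free-tier Tesla T4
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# From here, train with TRL's SFTTrainer on a chat-formatted dataset
# (e.g. a subset of Open Math Reasoning), then save or export the adapter.
```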
|
02-05-2025 |
Alibaba’s Qwen3 Releases Quantized Models for Efficient AI Deployment |
AI Innovation Update |
Alibaba’s Qwen team has launched quantized Qwen3-14B and Qwen3-32B models in AWQ and GGUF formats, optimizing performance for devices with limited GPU memory. These models, available on Hugging Face, support seamless switching between thinking and non-thinking modes using a simple /no_think token in tools like Ollama and LM Studio. Ideal for developers, they enhance coding, reasoning, and multilingual tasks with reduced resource demands. Download from Hugging Face to integrate Qwen3 into your projects efficiently.
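A small sketch of that soft switch with the Ollama Python client is shown below; the model tag qwen3:14b is an assumption, so use whichever Qwen3 build you have pulled locally.

```python
import ollama

# Assumed model tag; replace with the Qwen3 build available in your local Ollama.
response = ollama.chat(
    model="qwen3:14b",
    messages=[
        # Appending /no_think asks the model to skip its reasoning trace for this turn.
        {"role": "user", "content": "Give a one-line summary of AWQ quantization. /no_think"}
    ],
)
print(response["message"]["content"])
```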
|
30-04-2025 |
Qwen2.5-Omni-3B Debuts with Lightweight GPU Access for Multimodal AI |
Feature |
Alibaba Cloud’s Qwen2.5-Omni-3B, a compact multimodal AI model, slashes VRAM usage by over 50% compared to the 7B version, enabling seamless 30-second audio-video interactions on 24GB consumer GPUs. Retaining over 90% of the 7B model’s multimodal comprehension, it delivers comparable accuracy and stability in natural speech output. Ideal for developers, this model supports text, image, audio, and video processing with enhanced efficiency. |
|
29-04-2025 |
Qwen3-235B-A22B Excels in Coding with 34.4% on SWE-Bench Verified |
AI Tool Benchmarking |
Alibaba’s Qwen3-235B-A22B, a leading open-weight AI model, achieves a 34.4% resolve rate on SWE-Bench Verified using the OpenHands coding agent, rivaling top models with fewer parameters. This performance highlights Qwen3’s efficiency in real-world software tasks. The open-source model and agent combo offers developers a powerful, accessible toolset. Test Qwen3-235B-A22B on OpenHands to enhance your coding projects. |
|
29-04-2025 |
Alibaba Unveils Qwen3: Powerful Open-Weight AI Models for Coding and Math |
New Releases |
Alibaba’s Qwen3 series, featuring eight open-weight large language models from 0.6B to 235B parameters, delivers top-tier performance in coding, math, and general tasks. The flagship Qwen3-235B-A22B rivals models like DeepSeek-R1 and Grok-3, while the efficient Qwen3-30B-A3B outperforms larger competitors. Available on Hugging Face, GitHub, and ModelScope, Qwen3 supports easy integration with tools like SGLang and Ollama. Try Qwen3 on Qwen Chat to enhance your AI workflows. |
|