Historic rebrand from Fish Speech to OpenAudio. #1 ranking on TTS-Arena2 with industry-leading performance.

S1 (4B params): 0.008 WER, 0.004 CER - available on Fish Audio Playground
S1-mini (0.5B params): 0.011 WER, 0.005 CER - open source on Hugging Face

48+ emotional expressions with RLHF integration and multilingual support for English, Chinese, Japanese, and more.
Fixed critical PyTorch security settings and significantly improved inference speed. Added ONNX export support for better deployment options and enhanced text processing for Arabic and Hebrew. Includes bug fixes for Apple Silicon (MPS) compatibility and a reorganized library structure for a cleaner codebase.
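For readers unfamiliar with ONNX export, here is a minimal sketch of exporting a PyTorch module with torch.onnx.export. The TinyDecoder module, file name, and tensor shapes are illustrative stand-ins, not the project's actual model or export script.

```python
import torch
import torch.nn as nn

# Illustrative stand-in module, NOT the real TTS model.
class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

model = TinyDecoder().eval()
dummy = torch.randn(1, 100, 256)  # (batch, frames, features)

torch.onnx.export(
    model,
    dummy,
    "tiny_decoder.onnx",
    input_names=["features"],
    output_names=["audio"],
    dynamic_axes={"features": {1: "frames"}},  # allow variable-length inputs
    opset_version=17,
)
```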
Introduced the v1.5 model architecture with improved dataset handling and bearer token authentication for the APIs. Added reference audio caching keyed by hash for faster repeated requests and better Apple Silicon support. Includes OpenAPI documentation refactoring and support for base64 reference data in JSON requests.
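As a rough illustration of calling a bearer-token-protected TTS endpoint with base64 reference audio in a JSON body: the URL, field names, and token below are hypothetical assumptions; check the project's OpenAPI documentation for the real schema.

```python
import base64
import requests

# Hypothetical endpoint, field names, and token -- NOT the documented API.
API_URL = "http://127.0.0.1:8080/v1/tts"
API_KEY = "YOUR_API_TOKEN"

# Encode the reference clip as base64 so it can travel inside a JSON payload.
with open("reference.wav", "rb") as f:
    ref_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "text": "Hello from the API.",
    "references": [{"audio": ref_b64, "text": "Transcript of the reference clip."}],
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},  # bearer token authentication
    timeout=120,
)
resp.raise_for_status()

with open("output.wav", "wb") as f:
    f.write(resp.content)
```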
Introduced Fish Agent for conversational AI with streaming capabilities and real-time interactions. Added comprehensive Korean language documentation and fixed critical non-English speech issues. Improved WebUI streaming functionality and PyTorch version compatibility.
Documentation-focused release with comprehensive updates for v1.4, macOS support, and multiple language translations. Improved Docker support and enhanced the API's JSON format handling. Added audio selection to the WebUI and fixed various stability issues, including cache handling and backend performance.
Infrastructure improvements focused on Docker optimization and multi-platform builds. Updated the PyTorch version and replaced the sox audio backend for better performance. Enhanced the CI/CD pipeline with buildx support and fixed various Docker-related issues.
Major release with a new VQ-GAN architecture for improved audio quality and faster inference. Updated the WebUI with an enhanced interface and better language switching. Added a Japanese documentation translation and fixed inference warmup issues for better performance.
Replaced Whisper with SenseVoice for better ASR and added native Apple Silicon support. Includes Portuguese (Brazil) localization, streaming audio functionality, and CPU-only inference improvements. Pinned PyTorch to 2.3.1 to fix inference speed issues and aligned the API with the official closed-source version.
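A minimal sketch of consuming streamed audio over HTTP with requests: chunked transfer lets a client start playback before synthesis finishes. The endpoint URL and request fields here are assumptions, not the documented API.

```python
import requests

# Hypothetical streaming endpoint and fields.
resp = requests.post(
    "http://127.0.0.1:8080/v1/tts",
    json={"text": "Streaming example.", "streaming": True},
    stream=True,       # do not buffer the whole response in memory
    timeout=120,
)
resp.raise_for_status()

with open("streamed.wav", "wb") as out:
    for chunk in resp.iter_content(chunk_size=4096):
        if chunk:
            out.write(chunk)  # a real client would feed chunks to an audio player
```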
Introduced an auto-reranking system for better results, along with bilingual support and model quantization. Replaced standard Whisper with Faster Whisper for improved speed and added Japanese documentation. Enhanced model stability and inference performance with the optimized v1.2 architecture.
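To illustrate the general idea of model quantization (not the project's specific approach), here is a short dynamic int8 quantization sketch in PyTorch on a toy linear stack:

```python
import torch
import torch.nn as nn

# Toy linear stack standing in for a real model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256)).eval()

# Dynamic int8 quantization: weights stored as int8, activations quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, smaller weights, faster CPU matmuls
```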
Minor release adding Chinese text normalization support and a streaming audio download button in the WebUI. Fixed LoRA merging issues and improved Firefly performance.
Breaking changes: replaced zibai with uvicorn for the API server, introduced a new text-splitter with byte-based length calculation, and changed the license to CC-BY-NC-SA 4.0. Added Apple Silicon (MPS) support, Windows one-click installation, and automatic model downloading with resume capability. Improved the WebUI with better file selection and download progress indicators.
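A hedged sketch of what byte-based length calculation means for a text splitter: chunk sizes are bounded by UTF-8 byte length rather than character count, so CJK text (3 bytes per character) and Latin text are measured comparably. The function name, sentence regex, and threshold below are illustrative, not the project's implementation.

```python
import re

def split_by_bytes(text: str, max_bytes: int = 200) -> list[str]:
    """Group sentences into chunks whose UTF-8 length stays under max_bytes."""
    sentences = [s for s in re.split(r"(?<=[。！？.!?])\s*", text) if s]
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip() if current else sentence
        if current and len(candidate.encode("utf-8")) > max_bytes:
            chunks.append(current)   # flush the chunk before it exceeds the byte budget
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

print(split_by_bytes("你好。这是一个很长的句子，用来演示分块。Hello there. This is a test.", max_bytes=40))
```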
Added VITS decoder integration with full streaming support and queue management for real-time audio generation. Introduced internationalization (i18n) with a Spanish translation and improved Windows packaging. Optimized GPU memory usage and CPU-only inference performance while adding LoRA support to the Gradio UI.
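A generic producer/consumer sketch of queue-managed streaming: a worker thread pushes synthesized chunks into a bounded queue while the caller drains them as they arrive. This mirrors the pattern described above under stated assumptions; it is not the project's actual code.

```python
import queue
import threading
import time

# A bounded queue decouples generation speed from playback speed.
audio_queue: queue.Queue = queue.Queue(maxsize=8)

def synthesize():
    """Stand-in generation thread: push fake PCM chunks, then a sentinel."""
    for _ in range(5):
        time.sleep(0.1)                  # pretend to decode one segment
        audio_queue.put(b"\x00" * 4096)  # fake audio chunk
    audio_queue.put(None)                # sentinel: generation finished

threading.Thread(target=synthesize, daemon=True).start()

while True:
    chunk = audio_queue.get()
    if chunk is None:
        break
    print(f"received {len(chunk)} bytes")  # a real client would play or forward the chunk
```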
Major milestone release introducing a new VQ-GAN architecture with VITS decoder support, LoRA fine-tuning, and streaming inference capabilities. Breaking changes include removal of the Rust-based data server, a new tokenizer replacing phonemizer, and an updated model architecture (VQ + DiT + Reflow). Achieved a 4x memory reduction during loading and added a WebUI for training and annotation.
First public release of Fish Speech featuring a complete text-to-speech pipeline with a VQ-GAN audio codec and a LLAMA-based language model. Includes multi-language support (Chinese, English, Japanese), a Gradio WebUI for inference, an HTTP API server, and Docker support. Added special optimizations for Chinese users, including mirror downloads and localized documentation.