Beyond chat completions, Bella-openapi integrates multiple AI capabilities, including text vectorization, speech recognition, speech synthesis, text-to-image, and image-to-image generation, and is equipped with comprehensive billing, rate limiting, and resource management features. All capabilities have been thoroughly tested in large-scale production environments, ensuring stability and reliability.
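As a rough sketch of how these capabilities might be consumed, the example below assumes the gateway exposes OpenAI-compatible endpoints; the base URL, API key, and model names are placeholders rather than actual Bella-openapi configuration.

```python
# Sketch only: base URL, API key, and model names are placeholders,
# not the real Bella-openapi configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-bella-openapi-host/v1",  # placeholder gateway URL
    api_key="YOUR_BELLA_API_KEY",                   # placeholder key
)

# Chat completion through the gateway
chat = client.chat.completions.create(
    model="your-chat-model",
    messages=[{"role": "user", "content": "Hello"}],
)
print(chat.choices[0].message.content)

# Text vectorization (embeddings) through the same gateway
emb = client.embeddings.create(model="your-embedding-model", input="Hello")
print(len(emb.data[0].embedding))
```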
Leveraging Bella-knowledge data sources, it provides document parsing services that supply foundational capabilities for RAG and other projects; it can also be used independently as a library.
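A minimal sketch of calling a document-parsing service over HTTP is shown below; the endpoint path, request fields, and response shape are purely illustrative assumptions, not the documented API.

```python
# Illustrative only: the endpoint path and payload fields are assumptions,
# not the documented document-parsing API.
import requests

with open("report.pdf", "rb") as f:
    resp = requests.post(
        "https://your-doc-parse-host/parse",  # hypothetical endpoint
        files={"file": ("report.pdf", f, "application/pdf")},
        timeout=60,
    )
resp.raise_for_status()
print(resp.json())  # e.g. parsed text blocks / layout elements
```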
Focused on unified storage and management of knowledge, it elegantly handles multiple knowledge sources, including files, datasets, and QA pairs. With OpenAI File API compatibility and private deployment support, it provides powerful knowledge support for intelligent applications.
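Since the File API is OpenAI-compatible, a standard OpenAI client can be pointed at a private deployment, roughly as sketched below; the base URL, API key, and `purpose` value are assumptions.

```python
# Sketch: uploading a file via the OpenAI-compatible File API.
# Base URL and API key are placeholders for a private deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-bella-knowledge-host/v1",  # placeholder
    api_key="YOUR_API_KEY",                           # placeholder
)

uploaded = client.files.create(
    file=open("handbook.pdf", "rb"),
    purpose="assistants",  # assumed purpose value; adjust to your deployment
)
print(uploaded.id)

# List stored files
for f in client.files.list().data:
    print(f.id, f.filename)
```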
An open-source implementation compatible with the OpenAI Assistants API and Responses API. It breaks through the limitations of the native ecosystem and supports flexible switching between models from different vendors, truly achieving "develop once, use everywhere".
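A rough sketch of that portability is shown below: the same Responses API client code targets different vendor models by changing only the model identifier. The base URL, API key, and model name are placeholders.

```python
# Sketch: the client speaks the standard OpenAI Responses protocol, so
# swapping `model` switches vendors without code changes. URL, key, and
# model identifier are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-bella-assistants-host/v1",  # placeholder
    api_key="YOUR_API_KEY",
)

resp = client.responses.create(
    model="vendor-a/some-model",  # placeholder model identifier
    input="Summarize the Bella project in one sentence.",
)
print(resp.output_text)
```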
Based on the Bella-knowledge data source, it provides a unified search and question-answering capability, supporting advanced features such as hybrid retrieval, small-to-big retrieval, and contextual RAG. It also incorporates an industry-leading Deep RAG intelligent agent mode, enabling AI to deliver more accurate and reliable answers.
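The sketch below illustrates what a unified search and question-answering request could look like; the endpoint path and request fields are hypothetical and not the documented API.

```python
# Illustrative only: the endpoint and fields are assumptions, meant to
# convey the shape of a unified search/Q&A request.
import requests

payload = {
    "query": "How do I configure rate limiting?",
    "retrieval_mode": "hybrid",  # hypothetical flag for hybrid retrieval
    "top_k": 5,
}
resp = requests.post(
    "https://your-bella-rag-host/search",  # hypothetical endpoint
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```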
Similar to DIFY but with many differentiated capabilities, such as callback mode, Groovy script support, batch processing, and third-party data source registration, while delivering superior performance.
A centralized queue system that enables the various basic capabilities to easily support a batch processing mode, significantly improving processing efficiency and resource utilization.
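Assuming the batch mode follows the OpenAI Batch API convention (an assumption, not a documented guarantee), submitting queued work might look like this sketch; the base URL, key, and input file are placeholders.

```python
# Sketch assuming an OpenAI-style Batch API; endpoint, purpose, and window
# values are assumptions about the deployment, not confirmed behavior.
from openai import OpenAI

client = OpenAI(base_url="https://your-gateway-host/v1", api_key="YOUR_API_KEY")

# Each line of the JSONL file is an independent request to be queued.
batch_input = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_input.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```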
Known for ultra-low latency and high flexibility, it allows free combination of different ASR, LLM, and TTS components to create the best user experience, and supports cutting-edge features such as multi-agent collaboration.
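As a purely hypothetical illustration of that free combination, a pipeline configuration might pair independently chosen ASR, LLM, and TTS components; every key and value below is an assumption, not the project's actual configuration schema.

```python
# Hypothetical pipeline config: component names and keys are illustrative
# only, showing how independent ASR, LLM, and TTS choices could be combined.
pipeline_config = {
    "asr": {"provider": "faster-whisper", "language": "zh"},
    "llm": {"model": "your-chat-model", "stream": True},
    "tts": {"provider": "your-tts-provider", "voice": "female-1"},
    # Hypothetical multi-agent setting: route intents to specialized agents.
    "agents": ["greeter", "order-assistant"],
}
print(pipeline_config)
```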
Supports multiple model types, including LLM, ASR, TTS, Embedding, and Rerank, and is compatible with mainstream inference backends such as Transformers, vLLM, SGLang, and Faster-Whisper, providing a one-stop solution for various AI inference needs.
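Assuming the server exposes an OpenAI-compatible interface (an assumption for this sketch), client code stays the same regardless of which backend actually runs the model; the base URL, key, and model name are placeholders.

```python
# Sketch: the same client call works whether the ASR model is served by
# Faster-Whisper, Transformers, or another backend. URL, key, and model
# name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://your-inference-host/v1", api_key="YOUR_API_KEY")

with open("sample.wav", "rb") as audio:
    result = client.audio.transcriptions.create(model="your-asr-model", file=audio)
print(result.text)
```

The point of the sketch is that swapping the serving backend should not require any client-side changes.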
A Whisper model carefully fine-tuned on domain data, with excellent Simplified Chinese recognition capabilities, providing more accurate transcription for voice applications.
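A minimal sketch of loading such a checkpoint with the faster-whisper library is shown below; the model path and audio file are placeholders, not the released weights.

```python
# Sketch: loading a fine-tuned Whisper checkpoint with faster-whisper.
# The model path and audio file are placeholders.
from faster_whisper import WhisperModel

model = WhisperModel("path/to/finetuned-whisper", device="cpu", compute_type="int8")

segments, info = model.transcribe("meeting.wav", language="zh")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```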