Run ML models on-device or in the cloud with intelligent routing based on device capabilities. Built for Flutter, Rust, and native platforms.
From speech recognition to text-to-speech, run powerful ML models anywhere.
Run ASR (speech recognition) and TTS (text-to-speech) models directly on-device for low latency and offline support.
Seamlessly route to cloud APIs when device capabilities are insufficient.
Chain models together with YAML pipelines: ASR → LLM → TTS in one config (see the sketch after this list).
Flutter, iOS, Android, and Rust SDKs with unified API.
Leverage CoreML, QNN, and Metal for maximum performance.
Track inference metrics and device capabilities across your fleet.
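To make the pipeline idea concrete, here is a minimal sketch of what a voice-assistant pipeline file could contain. This is an illustration only: the keys (stages, model, target, input) and the model names are assumptions for this example, not the actual Xybrid schema.

name: voice-assistant
stages:
  - id: asr                 # speech -> text
    model: whisper-small    # illustrative model name
    target: device          # prefer on-device execution
  - id: llm                 # text -> text
    model: gpt-4o-mini      # illustrative model name
    target: cloud           # route this stage to a cloud API
    input: asr.text
  - id: tts                 # text -> speech
    model: kokoro           # illustrative model name
    target: auto            # let the router decide from device capabilities
    input: llm.text

The target values in this sketch mirror the hybrid-routing idea above: a device stage stays local, a cloud stage calls out to an API, and auto leaves the choice to the runtime.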
Run inference in just a few lines of code.
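// Initialize the SDK with your API key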
final xybrid = Xybrid.simple(apiKey: 'sk_...');
// Load a pipeline
final pipeline = await xybrid.loadPipeline('voice-assistant.yaml');
// Run inference
final result = await pipeline.run(
  Envelope.audio(audioBytes),
);
// Get the response audio
final responseAudio = result.audio;

Join developers building the next generation of voice-enabled applications.