Bring Your Own Model (BYOM) Support
Enable users to integrate and deploy their own AI models within the Cinclus platform, giving them full flexibility and control over their AI infrastructure. This feature will provide:
- Import custom models: Support for GGUF, ONNX, SafeTensors, and HuggingFace formats
- Local model hosting: Run models on-premises or in a private cloud without external dependencies
- Model management interface: Upload, version, and manage multiple models through intuitive UI
- Performance optimization: Automatic quantization options and hardware acceleration support
- Seamless integration: Use custom models alongside Cinclus native models in the same interface
- API compatibility: OpenAI-compatible endpoints for easy migration and integration (see the sketch after this list)
- Resource allocation: Configure GPU/CPU usage, memory limits, and concurrent request handling
- Security & compliance: Full control over model weights; no data leaves your infrastructure
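
To illustrate the API-compatibility bullet, here is a minimal sketch of how existing OpenAI-based code could target a self-hosted custom model once this feature exists. The base URL, model name, and environment variable are hypothetical placeholders, not confirmed Cinclus values; only the standard `openai` Python client usage is real.

```python
# Minimal sketch, assuming a Cinclus BYOM deployment exposes an
# OpenAI-compatible endpoint. The base URL, model name, and API key
# variable below are placeholders, not confirmed Cinclus values.
import os

from openai import OpenAI

# Point the standard OpenAI client at the self-hosted endpoint instead
# of api.openai.com; existing OpenAI-based code needs no other changes.
client = OpenAI(
    base_url="https://cinclus.example.internal/v1",      # hypothetical BYOM endpoint
    api_key=os.environ.get("CINCLUS_API_KEY", "unused"),  # hypothetical credential
)

# "my-custom-gguf-model" stands in for whatever name the uploaded model
# was registered under in the model management interface.
response = client.chat.completions.create(
    model="my-custom-gguf-model",
    messages=[{"role": "user", "content": "Summarize the BYOM feature in one sentence."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

Because the endpoint would mirror the OpenAI API shape, migration amounts to swapping the base URL and model name, which is the point of the compatibility requirement.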