Running GenAI Models on Edge Devices with LlamaEdge
May 6 • 13:00 - 13:15
Location: Open Platform
Run GenAI models on edge devices with LlamaEdge, which offers multi-runtime support (llama.cpp, Torch, ONNX, OpenVINO), a small footprint, GPU acceleration, and embeddability through its Rust SDK. Learn how it outperforms tools like Ollama for flexible, fast, on-device AI deployment.
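The abstract highlights embedding inference in your own application via Rust. As a rough illustration of what that looks like in the LlamaEdge stack, below is a minimal sketch using the lower-level wasmedge_wasi_nn (WASI-NN) interface that LlamaEdge builds on, compiled to Wasm and run under WasmEdge. The model alias, method names, and runtime flags follow public examples and are assumptions for this sketch, not the speaker's material; check the current LlamaEdge docs for the exact API.

```rust
// Minimal sketch: prompt a GGUF model preloaded by the WasmEdge runtime, e.g.
//   wasmedge --nn-preload default:GGML:AUTO:model.gguf app.wasm
// "default" below refers to that preload alias (an assumption for this sketch).
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the preloaded GGML/GGUF graph; AUTO picks GPU when available.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")?;
    let mut ctx = graph.init_execution_context()?;

    // The prompt is passed as a UTF-8 byte tensor on input index 0.
    let prompt = "Explain edge inference in one sentence.";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())?;
    ctx.compute()?;

    // Read the generated text back from output index 0.
    let mut out = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut out)?;
    println!("{}", String::from_utf8_lossy(&out[..n]));
    Ok(())
}
```

Because the binary targets wasm32-wasi, the same artifact can be shipped to heterogeneous edge devices and the runtime backend (llama.cpp, Torch, ONNX, OpenVINO) is selected at deploy time rather than baked into the build.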