✨ Thank you all for participating in GOSIM AI Paris! ✨ See you in September at GOSIM Hangzhou! ✨

Schedule: May 6

  • May 6

    8:30 - 9:30

    Breakfast and Registration

    All Tracks

    Entrance Lobby

    Check-In and Badge Pickup
  • May 6

    9:30 - 9:45

    Keynote: Open-Source has Caught Up, What's Next?

    All Tracks

    Master Stage

    Two years after ChatGPT, open-source LLMs have caught up with, or even surpassed, leading-edge closed-source LLMs on almost all benchmarks. Open-source models also have clear advantages in emerging agent applications: they are much easier to fine-tune and customize, and safer and easier to deploy on private devices. Throughout the GOSIM conference, we will highlight and introduce open-source LLMs and their applications.
  • May 6

    9:45 - 10:00

    Keynote: Global Digital Collaboration

    All Tracks

    Master Stage

    The Global Digital Collaboration is a joint initiative of Intergovernmental Organizations (IGOs), Standard Development Organizations (SDOs), Open Source Organizations (OSOs) and Trade Associations to help fulfill the goals of the Compact. These organizations act as Co-Conveners and invite the public sector, private sector and the global technical community to participate in the Global Digital Collaboration.
  • May 6

    10:00 - 10:30

    Morning Coffee

    All Tracks

    Open Platform

    Coffee & hors d’oeuvres
  • May 6

    10:30 - 11:10

    Open-R1: A Fully Open Reproduction of DeepSeek-R1

    AI Model

    Master Stage

    The recipe behind OpenAI’s reasoning models has been a well-kept secret. That is, until earlier this year, when DeepSeek released its DeepSeek-R1 model and promptly broke the internet. While a detailed technical report was published, many open questions remain, chief among them the training data, which was not released. Open-R1 is Hugging Face's fully open effort to replicate DeepSeek-R1, with a strong focus on reasoning data curation.
  • May 6

    10:30 - 11:10

    AI Open Source for Good: Inclusive Access, Equitable Data, and Accessible Compute

    AI Infra

    Founders Café (Updated)

    This talk unveils how open source technologies act as catalysts for equitable AI across three pillars. First, inclusive access: we open-source voice datasets tailored for underrepresented groups—such as children and the elderly—to ensure multimodal AI systems understand diverse linguistic patterns and bridge generational divides. Second, equitable data: we have released nearly 100 globally accessible datasets, amassing over 680,000 downloads and empowering developers from any country to innovate freely. Third, accessible compute: we present FlagOS, open-source system software that facilitates AI development and deployment across diverse hardware ecosystems—including legacy GPUs and emerging accelerators—while significantly lowering the cost barrier to AI innovation. Collectively, these open-source efforts transform 'AI for Good' into a shared mission—breaking barriers of age, location, and resources to empower anyone to create and benefit from AI.
  • May 6

    10:30 - 11:10

    TONGYI Lingma: from Coding Copilot to Coding Agent

    AI Apps

    Open Stage

    This presentation takes the perspective of intelligent development in software engineering to outline the latest technological advances and product applications of Code LLMs, coding copilots, and coding agents, and to analyze and forecast future development trends.
  • May 6

    10:30 - 11:10

    Mind, Body and Zenoh

    Embodied AI

    Junior Stage

    As Robotics and Artificial Intelligence continue to evolve at an unprecedented pace, a critical gap has emerged in their ability to scale and operate seamlessly across diverse environments: the absence of an efficient, intelligent "nervous system." Much like biological organisms rely on their nervous system to connect the body and the brain, autonomous systems require a foundational layer that enables real-time communication, adaptability, and distributed cognition. This talk introduces Zenoh, a cutting-edge open-source protocol that is rapidly gaining traction in the robotics community and beyond. Zenoh bridges the traditional divide between data in motion, data at rest, and computation, enabling seamless data exchange from the edge to the cloud. Zenoh is the missing link to unify sensing, actuation, and cognition. It is, in essence, the nervous system for the age of intelligent robots.
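
    As a rough illustration of this publish/subscribe fabric, here is a minimal sketch using the Zenoh Python bindings (a sketch only, assuming the eclipse-zenoh package and its 1.x API; the key expression and payloads are invented for illustration):

        import time
        import zenoh  # pip install eclipse-zenoh

        # Open a Zenoh session; peers on the same network discover each other.
        with zenoh.open(zenoh.Config()) as session:
            # A robot's "nervous system": sensors publish, cognition subscribes.
            def on_sample(sample):
                print(f"{sample.key_expr} => {sample.payload.to_string()}")

            sub = session.declare_subscriber("robot/lidar/scan", on_sample)
            pub = session.declare_publisher("robot/lidar/scan")

            for i in range(3):
                pub.put(f"scan #{i}")  # the same API spans edge-to-cloud exchange
                time.sleep(0.1)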
  • May 6

    11:10 - 11:50

    OpenSeek: Collaborative Innovation for Next-Gen Models

    AI Model

    Master Stage

    OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next-generation models that surpass DeepSeek.
  • May 6

    11:10 - 11:50

    The Best Practice of Training and Inferencing on Ascend CANN

    AI Infra

    Founders Café (Updated)

    The AI-oriented, heterogeneous Compute Architecture for Neural Networks (CANN) is a key platform for improving the computing efficiency of Ascend AI processors. It serves as a bridge between upper-layer AI frameworks and lower-layer AI processors and programming. This talk focuses on the open-source ecosystem around CANN and shows how CANN helps AI software such as PyTorch and vLLM run efficiently on Ascend.
  • May 6

    11:10 - 11:50

    CangjieMagic: New Choices for Developers in the Age of Large Models

    AI Apps

    Open Stage

    With the surge in popularity of AI large models, the trend toward agent-oriented development of large model applications has become increasingly evident, and agents are gradually becoming a core element of such applications. This talk presents an agent development framework, based on the Cangjie programming language, that supports agent-oriented programming and provides developers with an efficient Agent DSL. Its main features include support for the MCP protocol to facilitate mutual invocation between agents and tools, support for modular invocation, and support for intelligent task planning. It improves developer efficiency in creating smart HarmonyOS applications, delivers an excellent development experience, and explores new paradigms for future large model application development.
  • May 6

    11:10 - 11:50

    Distributed Dataflows in Dora Using Zenoh

    Embodied AI

    Junior Stage

    The Dora framework makes it easy to create dataflows for robotics and AI. In this talk, we look into distributed dataflows that run on multiple machines and communicate through the network. Dora supports complex network topologies by using the Zenoh protocol. This way, it is possible to split Dora dataflows across private networks and cloud machines with minimal configuration.
  • May 6

    12:00 - 14:00

    Lunch Break

    All Tracks

    Open Platform

    Food & drinks
  • May 6

    12:30 - 13:50

    Spotlight Talks Day 1

    All Tracks
  • May 6

    12:30 - 12:45

    Aidge

    GOSIM AI Spotlight

    Open Platform

    In an era increasingly defined by intelligent systems at the edge, Eclipse Aidge emerges as a crucial open-source initiative to democratize the development and deployment of efficient Deep Neural Networks (DNNs) on embedded platforms. Recognizing the unique constraints and opportunities of edge computing (limited resources, real-time requirements, and the need for robust, low-power AI), Aidge provides a comprehensive framework for addressing this complex task. By offering end-to-end tools for DNN design, analysis, simulation, optimization, and hardware-aware deployment across diverse targets (CPUs, GPUs, MCUs), Aidge empowers developers to build sophisticated AI applications for IoT, robotics, automotive, and beyond. The project fosters innovation by enabling rapid prototyping, performance benchmarking, and the creation of tailored solutions, ultimately accelerating the adoption of intelligent edge devices while promoting interoperability, transparency, and collaboration within the open-source ecosystem.

    Key features:
    - High interoperability with the ONNX standard (import and export of DNN models)
    - Rich tools for static analysis of the DNN model
    - State-of-the-art quantization and compression techniques to reduce memory needs
    - Standalone C++ code generation of the DNN

    Planned features:
    - A certification-aware code generation module
    - Integration of a scalable and efficient ML compilation toolchain based on MLIR or TVM
    - An extension to Spiking Neural Networks, making Aidge the first multi-paradigm framework natively addressing neuromorphic computing

    Eclipse Aidge is hosted on the Eclipse Foundation's GitLab instance.
  • May 6

    12:45 - 13:00

    The Rise of Bielik.AI

    GOSIM AI Spotlight

    Open Platform

    A community-powered effort to achieve independent Polish AI. The Bielik Project delivers not only a family of open-source language models (1.5B, 4.5B, and 11B parameters) but also a complete tooling ecosystem designed to empower others to train, fine-tune, and evaluate LLMs with ease. One of the project’s core features is its end-to-end toolchain—spanning datasets, benchmarking, training, and fine-tuning frameworks—which enables any research group or organization to build their own models through base model fine-tuning or continuous pretraining.
  • May 6

    13:00 - 13:15

    Running GenAI Models on Edge Devices with LlamaEdge

    GOSIM AI Spotlight

    Open Platform

    Run GenAI models on edge devices with LlamaEdge: multi-runtime support (llama.cpp, Torch, ONNX, OpenVINO), small footprint, GPU support, Rust SDK embeddability. Learn how it outperforms tools like Ollama for flexible, fast, on-device AI deployment.
  • May 6

    13:15 - 13:30

    Automated Proof Generation for Rust Code Via Self-Evolution

    GOSIM AI Spotlight

    Open Platform

    Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pressing need for automation. The primary obstacle lies in the severe lack of data—there are far fewer proofs than code snippets for Large Language Models (LLMs) to train upon. In this work, we introduce SAFE, a framework that overcomes the lack of human-written proofs to enable automated proof generation for Rust code. SAFE establishes a self-evolving cycle in which data synthesis and fine-tuning collaborate to enhance model capability, leveraging the definitive power of a symbolic verifier in telling correct proofs from incorrect ones. SAFE also re-purposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models, empowering them to fix incorrect proofs based on the verifier’s feedback. SAFE demonstrates superior efficiency and precision compared to GPT-4o. Through tens of thousands of synthesized proofs and the self-debugging mechanism, we improve the capability of open-source models, initially unacquainted with formal verification, to automatically write proofs for Rust code. This advancement leads to a significant improvement in performance, achieving a 52.52% accuracy rate on a benchmark crafted by human experts, a significant leap over GPT-4o’s performance of 14.39%.
  • May 6

    13:30 - 13:45

    ColPali: Efficient Document Retrieval with Vision Language Models

    GOSIM AI Spotlight

    Open Platform

    This session covers the code used for training the vision retrievers from the paper "ColPali: Efficient Document Retrieval with Vision Language Models"—in particular, the ColPali model, a vision retriever based on the ColBERT architecture and the PaliGemma model.
  • May 6

    13:45 - 14:00

    Unlocking Heterogeneous AI Infrastructure K8s Cluster: Leveraging the Power of HAMi

    GOSIM AI Spotlight

    Open Platform

    Unlocking the full potential of heterogeneous GPUs in AI infrastructure with HAMi: efficient scheduling, unified management, observability, and improved utilization.
  • May 6

    14:00 - 14:40

    Decode DeepSeek: the Technological Innovation and Its Influence on AI Ecosystem

    AI Model

    Master Stage

    Recently, DeepSeek has attracted a great deal of attention with its outstanding technological innovations and is set to have a profound and extensive impact on the AI industry. This talk is divided into two parts. In the first part, we will walk through DeepSeek’s technological innovations top-down, including the paradigm shift in inference computing led by its open-source reinforcement learning solution, innovations in model architecture such as MLA and MoE, and performance optimizations in system engineering. In the second part, we will explore the transformations and impacts on the global AI ecosystem triggered by DeepSeek, including its key influence on AI applications, AI agents, the computing power landscape, and AI open-source initiatives.
  • May 6

    14:00 - 14:40

    Make Your LLMs Serverless

    AI Infra

    Founders Café (Updated)

    LLMs require GPUs, which are scarce, and overprovisioning them is expensive and wasteful. Google Cloud Run now offers serverless GPU support, enabling cost-effective LLM deployment. A live demo will compare Gemma model performance with and without GPUs.
  • May 6

    14:00 - 14:40

    Tech Together, Powered by Dify

    AI Apps

    Open Stage

    Dify is a next-generation AI-native application development platform that bridges cutting-edge technology with real-world business value. Built on a robust open-source foundation, Dify integrates modern tech stacks—including LLM orchestration, RAG (Retrieval-Augmented Generation), fine-tuning tools, and multi-agent workflows—to simplify the creation, deployment, and scaling of AI applications. Our global developer community has become a hub for innovation, with thousands of contributors and enterprise adopters leveraging Dify to build everything from intelligent chatbots to enterprise-grade automation systems.

    This talk will highlight:
    - Dify’s open-source ecosystem and its role in accelerating AI adoption
    - Key technologies powering the platform and how they solve real-world challenges
    - Success stories from developers and enterprises
  • May 6

    14:00 - 14:40

    On-demand Scenario Generation for Testing Automated Driving Systems

    Embodied AI

    Junior Stage

    Evaluating the decision-making system is indispensable in developing autonomous vehicles, and realistic, challenging safety-critical test scenarios play a crucial role. Obtaining these scenarios is non-trivial due to the long-tailed distribution, sparsity, and rarity of real-world data sets. To tackle this problem, we introduce a natural adversarial scenario generation solution using naturalistic human driving priors and reinforcement learning. Our experiments on public data sets demonstrate that our proposed model can generate realistic safety-critical test scenarios covering both naturalness and adversariality, with a 44% efficiency gain over the baseline model.
  • May 6

    14:40 - 15:20

    Linear Next: The Evolution of LLM Architecture

    AI Model

    Master Stage

    The Transformer architecture, despite its popularity, suffers from quadratic computational complexity. Recent advances in computing hardware, such as the V100 to H200 series, have temporarily alleviated this issue, reducing the immediate need for alternatives in the industry. Linear-complexity solutions for large models are still in the research phase, lacking widespread validation in practical applications. Consequently, Transformer remains the preferred choice. However, as improvements in computing power slow down, the demand for architectures that surpass Transformer in efficiency will grow. Our team has developed Lightning Attention, a novel mechanism based on linear attention. By rearranging the QKV multiplication order (Q(KV)), Lightning Attention achieves linear computational complexity relative to sequence length. Experiments show it significantly outperforms the latest Transformers in both efficiency and performance, validated on a 456B MoE model (MiniMax 01). This innovation paves the way for more efficient large language models, offering new possibilities for future development.
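
    To make the reordering concrete: for Q, K, V with n rows (sequence length) and d columns (head dimension), the usual bracketing (QK^T)V materializes an n-by-n matrix and costs O(n^2 d), while the algebraically identical Q(K^T V) first forms a d-by-d state and costs O(n d^2), linear in n. A minimal sketch of the non-causal case (illustrating the idea only, ignoring softmax, Lightning Attention's tiling, and causal decay):

        import numpy as np

        n, d = 4096, 64                    # sequence length, head dimension
        Q, K, V = (np.random.randn(n, d) for _ in range(3))

        # Softmax-free linear attention: two bracketings of the same product.
        out_quadratic = (Q @ K.T) @ V      # n x n intermediate: O(n^2 d)
        out_linear    = Q @ (K.T @ V)      # d x d intermediate: O(n d^2)

        assert np.allclose(out_quadratic, out_linear)  # same result, cheaper path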
  • May 6

    14:40 - 15:20

    Open-source Intelligent Computing Integrated Management and Utilization Foundational Software - SCOW and CraneSched

    AI Infra

    Founders Café (Updated)

    The Peking University Computing Center is dedicated to developing general foundational software for both supercomputing (HPC) and intelligent computing (AI computing). In the field of HPC and AI computing, it has developed several flagship foundational software systems, including SCOW and CraneSched. OpenSCOW (https://github.com/PKUHPC/OpenSCOW) provides a graphical user interface (GUI) that allows developers to flexibly manage supercomputing and AI computing resources for AI model training and inference. It has already been deployed across 56 computing centers, including 34 universities and 12 enterprises in China. CraneSched (https://github.com/PKUHPC/CraneSched) is a high-performance scheduling and orchestration system for HPC and AI computing tasks. It supports large-scale model training with exceptional performance and has been adopted by 8 universities and 1 enterprise in China.
  • May 6

    14:40 - 15:20

    Using AI to Vibe Code Rust UI's for Mobile, Web and Mixed Reality

    AI Apps

    Open Stage

    In this talk I will show vibe-coding Makepad UIs and UI shaders with Makepad Studio and an LLM. Makepad Studio is our visual design / code environment, and the vision is to bring back Visual Basic, but now for a modern language: Rust.
  • May 6

    14:40 - 15:20

    RoboBrain: A Unified Brain Model for Robotic Manipulation & RoboOS: A Hierarchical Collaborative Framework for RoboBrain and Robot Agents

    Embodied AI

    Junior Stage

    RoboBrain is an MLLM-based model that enhances robotic manipulation by integrating task planning, object affordance, and trajectory prediction, addressing the limitations of MLLMs in robotic scenarios—particularly in long-horizon tasks—while achieving state-of-the-art performance. Building on RoboBrain’s planning capabilities, we propose the RoboOS hierarchical collaborative framework, which integrates efficient execution of robotic skills to enable cross-embodiment collaborative control of multiple robots.
  • May 6

    15:20 - 15:40

    Afternoon Coffee

    All Tracks

    Open Platform

    Coffee & hors d’oeuvres
  • May 6

    15:40 - 16:20

    The Curse of Depth in Large Language Models

    AI Model

    Master Stage

    Large Language Models (LLMs) have demonstrated impressive achievements. However, recent research has shown that their deeper layers often contribute minimally, with effectiveness diminishing as layer depth increases. This pattern presents significant opportunities for model compression. In the first part of this seminar, we will explore how this phenomenon can be harnessed to improve the efficiency of LLM compression. Despite these opportunities, the underutilization of deeper layers leads to inefficiencies, wasting resources that could be better used to enhance model performance. The second part of the talk will address the root cause of this ineffectiveness in deeper layers and propose a solution. We identify the issue as stemming from the prevalent use of Pre-Layer Normalization (Pre-LN) and introduce LayerNorm Scaling to solve this issue.
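
    The proposed fix is easy to sketch: scale each Pre-LN sublayer's normalized signal by 1/sqrt(l), where l is the layer index, so that output variance stops growing with depth. A simplified PyTorch illustration of the idea (MLP sublayer only, attention omitted; sizes and names are illustrative, not the paper's exact implementation):

        import math
        import torch
        import torch.nn as nn

        class PreLNBlockWithLayerNormScaling(nn.Module):
            def __init__(self, dim: int, layer_index: int):
                super().__init__()
                self.norm = nn.LayerNorm(dim)
                self.mlp = nn.Sequential(
                    nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
                )
                # LayerNorm Scaling: damp the normalized signal by 1/sqrt(l).
                self.scale = 1.0 / math.sqrt(layer_index)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return x + self.mlp(self.scale * self.norm(x))

        x = torch.randn(2, 16, 256)
        for l in range(1, 9):              # an 8-layer stack, l = 1..8
            x = PreLNBlockWithLayerNormScaling(256, l)(x)
        print(x.shape)                     # torch.Size([2, 16, 256])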
  • May 6

    15:40 - 16:20

    verl: Hybrid Controller-based RLHF System

    AI Infra

    Founders Café (Updated)

    verl is a flexible, efficient and production-ready RL training library for LLMs. This talk will share the ideas in designing a hybrid-controller system and the benefits of this system in efficient large-scale RL training.
  • May 6

    15:40 - 16:20

    Unifying AI Integration with Model Context Protocol

    AI Apps

    Open Stage

    The Model Context Protocol (MCP) standardizes how AI models interact with external tools and resources through a structured client-server architecture, facilitating robust agent development. The engineering community worldwide keeps sharing MCP servers that enable client interactions that open truly remarkable and innovative applications. This talk delves into MCP's core capabilities and how the MCP Java SDK combined with Spring AI MCP can integrate AI with your existing resources and applications. Today's intelligent agents can understand context, guide decisions, and integrate seamlessly with external services. Through live coding and practical examples, we will illustrate how to implement both client and server components. By attending this session, you will gain a practical understanding of MCP's standardized interfaces and architectural best practices, empowering you to build and extend AI-powered applications with agent-like capabilities. Whether you're developing new AI-driven solutions or enhancing existing systems, this talk will equip you with the tools and strategies needed to leverage MCP effectively.
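
    The talk itself uses the MCP Java SDK and Spring AI MCP; to give a feel for the protocol's shape, here is a minimal tool-server sketch using the official Python SDK's FastMCP interface instead (assuming the mcp package; the tool and its backend are made-up examples):

        from mcp.server.fastmcp import FastMCP  # pip install mcp

        # An MCP server exposes tools through a standardized interface,
        # so any MCP client (agent) can discover and call them.
        mcp = FastMCP("demo-tools")

        @mcp.tool()
        def lookup_order(order_id: str) -> str:
            """Return the status of an order (hypothetical backend)."""
            return f"Order {order_id}: shipped"

        if __name__ == "__main__":
            mcp.run()  # serves over stdio by default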
  • May 6

    15:40 - 16:20

    Building Robotic Applications with Open-source VLA Models

    Embodied AI

    Junior Stage

    Ville shares the successes and challenges in using open-source Vision-Language-Action (VLA) models on robots, and provides a full-stack "starter guide" for building VLA-powered robotic applications in 2025.
  • May 6

    16:20 - 17:00

    Going Beyond Tokens for Code Large Language Models

    AI Model

    Master Stage

    Tokenization in LLMs is the last bit of clunkiness in an otherwise elegant, highly-optimized architecture. This talk presents interesting avenues in tokenizer-free architecture to go "beyond tokens" in order to reduce latency and improve performance.
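
    One tokenizer-free avenue, for intuition, is byte-level modeling: the model consumes raw UTF-8 bytes instead of entries from a learned vocabulary, removing the tokenizer entirely at the price of longer sequences. A trivial illustration of the input representation:

        # Byte-level ("tokenizer-free") input: text becomes raw UTF-8 bytes,
        # so there is no learned vocabulary and nothing to tokenize.
        text = "déjà vu"
        byte_ids = list(text.encode("utf-8"))
        print(byte_ids)  # accented characters expand to two bytes each
        print(f"{len(text)} characters -> {len(byte_ids)} byte values")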
  • May 6

    16:20 - 17:00

    Datasets and Infrastructure for DeepSeek-R1 Style Reinforcement Learning (GRPO)

    AI Infra

    Founders Café (Updated)

    We will walk through everything you need to know about the latest in reinforcement learning for LLMs, datasets and infrastructure, down to training your own small reasoning LLM that can write code locally.
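
    As a sketch of what such a setup can look like, here is a minimal GRPO run with Hugging Face TRL's GRPOTrainer (assuming a recent trl release; the prompts, model choice, and toy reward are placeholders, where a real run would score generated code by executing it):

        from datasets import Dataset
        from trl import GRPOConfig, GRPOTrainer

        # Tiny placeholder prompt dataset; GRPO samples completions itself.
        train_dataset = Dataset.from_dict({"prompt": [
            "Write a Python function that returns 42.",
            "Explain GRPO in one sentence.",
        ]})

        def reward_concise(completions, **kwargs):
            # Toy reward favoring ~200-character completions.
            return [-abs(len(c) - 200) / 200 for c in completions]

        trainer = GRPOTrainer(
            model="Qwen/Qwen2.5-0.5B-Instruct",  # any small causal LM
            reward_funcs=reward_concise,
            args=GRPOConfig(output_dir="grpo-demo"),
            train_dataset=train_dataset,
        )
        trainer.train()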
  • May 6

    16:20 - 17:00

    The "T" in MCP and A2A Stands for Trust

    AI Apps

    Open Stage

    This talk follows the spirit of Elena Cross's blog post "The 'S' in MCP Stands for Security". The challenges we face in deploying AI agents in the real world exceed what current security solutions can address, even if you incorporate "best practice" security by design and implement it competently. It is time to move beyond security and think about trust instead. In this talk, we will examine what is still missing even when common security measures are implemented in MCP and A2A, discuss what the new trust primitives are, and show how developers can adopt strong trust primitives integrated with MCP and A2A in agent-based applications.
  • May 6

    16:20 - 17:00

    Spatial Reasoning LLM: Enhancing 2D & 3D Understanding for Robotic Manipulation and Navigation

    Embodied AI

    Junior Stage

    Robotic systems require advanced spatial reasoning for navigation and manipulation. We introduce a research project enhancing LLMs for spatial intelligence: AlphaMaze, solving 2D mazes with self-correction; AlphaSpace, interpreting object positions for robot hand manipulation via language; and AlphaVoxel, using 3D voxel space for object recognition and robot navigation.
  • May 6

    17:00 - 17:40

    How to Build Competitive Large Language Models "Made in Europe"?

    AI Model

    Master Stage

    As Europe accelerates its efforts to develop sovereign AI capabilities, initiatives like OpenEuroLLM, OpenGPT-X, TrustLLM, and EuroLingua-GPT are leading the charge in creating multilingual, open-source large language models (LLMs) that reflect European values. This talk will delve into the insights gained from these projects, highlighting the challenges and breakthroughs in training competitive LLMs within Europe's unique regulatory and linguistic landscape. We'll explore how research initiatives are addressing key issues such as data diversity, computational infrastructure, and model transparency, with the goal of understanding the lay of the land of Europe's AI advancements. Join this talk to get a glimpse of how Europe's commitment to openness, trustworthiness, and cultural inclusivity is shaping the next generation of AI, and learn how you can contribute to this transformative journey.
  • May 6

    17:00 - 18:00

    Spotlight Demos

    All Tracks
  • May 6

    17:00 - 17:40

    Cegid Pulse, Multi Agents Business Management Platform

    AI Apps

    Open Stage

    Cegid is a global leader in cloud-based solutions for finance, HR, and retail, serving businesses of all sizes with tools to enhance efficiency and compliance. This talk introduces Cegid Pulse OS, a next-generation business management platform that enables seamless collaboration between human users and AI-powered agents. Leveraging large language models, Pulse OS dynamically generates and executes tasks through multi-agent conversations, where each agent—built with code or natural language—can interpret, act on, and respond to user intent. Designed to streamline operations, Pulse OS represents a shift toward intelligent, conversation-driven business workflows.

Schedules are subject to change

Grab your GOSIM AI Paris ticket

Paris, France

May 6-7, 2025

Paris, the City of Light, transforms into the City of Artificial Brilliance this May. GOSIM AI 2025 invites visionaries, disruptors, and pioneers to converge at Station F—a crucible of innovation—to shape the next frontier of AI.