The DeOS for AI Workload Orchestration
Our vision is to make AI affordable and approachable by creating an "Uber for AI" platform that matches computing demand, in the form of jobs and workloads, with global infrastructure. Through our DeOS framework, we aim to achieve yottascale computing with commodity GPUs, unlocking the power of community-driven AI development for greater collaboration and innovation.
Team
Our team comprises experts from industry and academia with deep experience and a proven track record in AI and systems infrastructure.
We have published at top-tier AI and systems conferences, including co-authored papers with Microsoft DeepSpeed, and earned multiple U.S. Department of Energy (DOE) highlights for our innovations applying AI in HPC. Our members have built large-scale high-performance computing (HPC) and LLM-based AI systems at leading organizations such as Meta, TikTok, and Amazon, and at renowned national labs including Berkeley, Argonne, and Oak Ridge.
Our goal is to unlock maximum computational power through cutting-edge approaches, including modularized AI software stacks, decentralization technology, and innovative LLM parallelism.