
Train (Coming Soon)

Documentation for VoidAI Train - distributed AI training.

Coming Q2 2025

VoidAI Train is our distributed AI training network, currently in development.

What It Will Do

  • Train AI models on consumer GPUs worldwide
  • Up to 90% lower cost than cloud GPU providers
  • No hardware waitlist
  • Fault tolerant: nodes can drop and rejoin without halting training

How It Will Work

Based on DiLoCo (Distributed Low-Communication):

  1. Join: Connect your GPU to the network
  2. Receive: Get a model shard and data
  3. Train: Run local training steps
  4. Sync: Periodically sync compressed pseudo-gradients (parameter deltas)
  5. Repeat: Model improves globally
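The loop above can be sketched in a few lines. This is an illustrative simulation of the DiLoCo pattern, not the VoidAI Train API (which is not yet public): each node runs many local SGD steps on its own data shard, then the server averages the nodes' parameter deltas (pseudo-gradients) and applies them as one outer step. All function names here are hypothetical.

```python
def local_sgd_steps(theta, data, steps=10, lr=0.1):
    """3. Train: run local SGD on a toy 1-D least-squares loss (theta - x)^2."""
    for _ in range(steps):
        for x in data:
            grad = 2.0 * (theta - x)   # d/dtheta of (theta - x)^2
            theta -= lr * grad
    return theta

def diloco_round(global_theta, shards, inner_steps=10):
    """One outer round: every node trains locally, then the server averages
    the pseudo-gradients (global - local) and applies a single outer step."""
    deltas = []
    for shard in shards:                       # 2. Receive: each node gets data
        local = local_sgd_steps(global_theta, shard, inner_steps)
        deltas.append(global_theta - local)    # pseudo-gradient, one sync per round
    avg_delta = sum(deltas) / len(deltas)      # 4. Sync: cheap, infrequent
    return global_theta - avg_delta            # outer SGD step (outer lr = 1.0)

# 1. Join: three nodes, each holding a different data shard.
shards = [[1.0, 2.0], [3.0], [2.0, 2.0]]
theta = 0.0
for _ in range(5):                             # 5. Repeat: model improves globally
    theta = diloco_round(theta, shards)
```

The key communication saving is visible in the structure: nodes exchange one delta per outer round instead of a gradient per step, which is what makes training over consumer internet connections plausible.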

For GPU Owners

Earn from your idle GPU:

  • One-click setup
  • Set your availability hours
  • No impact on gaming when you need it
  • Estimated $50-200/month per GPU

For AI Teams

Access distributed compute:

  • PyTorch compatible
  • Elastic scaling
  • Fault tolerant
  • Up to 90% cost savings

Join the Waitlist

Sign up to get early access.

Technical Preview

Architecture docs and research papers available soon.