The Ultimate List of Compute Providers

1] Hyperbolic


Focus: AI inference & model rental

Compute Model: Centralized GPU marketplace

Specialization/Edge: Low-cost GPU rental, Hugging Face integration

Integration Method: REST/API

Website: app.hyperbolic.ai
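Providers that expose OpenAI-compatible REST endpoints are typically called with a bearer token and a JSON chat payload. The sketch below assembles such a request; the base URL and model name are illustrative assumptions, not verified Hyperbolic values — check the provider's docs before use.

```python
import json

# Assumed base URL for an OpenAI-compatible endpoint (illustrative only).
API_BASE = "https://api.hyperbolic.ai/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for a chat-completion call."""
    return {
        "url": f"{API_BASE}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("example/llama-model", "Hello", "YOUR_API_KEY")
```

Because the wire format matches the OpenAI API, the same request shape works with most OpenAI-compatible SDKs by pointing them at the provider's base URL.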


2] Cherry Servers


Focus: AI/ML training

Compute Model: Bare-metal GPU servers

Specialization/Edge: Fast deploy, customizable Nvidia/AMD hardware

Integration Method: Portal & REST API

Website: cherryservers.com


3] SurferCloud

Focus: AI inference & GPU workloads

Compute Model: GPU UHost (vGPU cloud)

Specialization/Edge: Affordable RTX 4090/Tesla P40, elastic vGPU scaling

Integration Method: Portal/API

Website: surfercloud.com


4] Novita AI


Focus: LLM inference & API

Compute Model: Serverless GPU + VM instances

Specialization/Edge: Spot pricing (roughly 50% savings) on GPUs, LLM-serving endpoints

Integration Method: API/SDK + hosted UI

Website: novita.ai


5] Vultr


Focus: AI training & inference

Compute Model: Virtual GPU VMs (A100, AMD MI)

Specialization/Edge: Scalable GPU tiers, Kubernetes-ready

Integration Method: Console/CLI/API

Website: vultr.com
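Vultr's public v2 REST API uses bearer-token authentication; a minimal sketch of building (not sending) a list-instances request with only the standard library:

```python
import os
import urllib.request

# Builds a GET request against Vultr's v2 REST API (send it only with a
# real API key; here we only construct the request object).
def list_instances_request(api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.vultr.com/v2/instances",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = list_instances_request(os.environ.get("VULTR_API_KEY", "demo"))
```

Sending it with `urllib.request.urlopen(req)` returns a JSON body listing your instances; the same header pattern applies across the rest of the v2 API.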


6] Immers.cloud


Focus: AI training & inference

Compute Model: Bare-metal GPU VMs

Specialization/Edge: Dedicated Tesla/A100/H100, per-second billing, no oversubscription

Integration Method: Portal/API (OpenStack Yoga)

Website: immers.cloud


7] Database Mart


Focus: AI training & inference

Compute Model: Bare-metal GPU VMs

Specialization/Edge: Wide range of GPU configurations (H100/A100/RTX), cost-effective, rapid deployment

Integration Method: Portal, control panel

Website: databasemart.com


8] Vast.ai


Focus: AI model training, LLM inference, 3D rendering

Compute Model: Virtualized GPU instances (serverless-style, pay-as-you-go)

Specialization/Edge: Spot pricing for GPUs, flexible scheduling, lifetime referral rewards

Integration Method: Web dashboard + API/CLI

Website: vast.ai


9] RunPod


Focus: AI/ML workloads, LLM training, batch rendering

Compute Model: GPU pods + serverless GPU instances

Specialization/Edge: Low-latency endpoints, containerized workloads, referral + affiliate program

Integration Method: Web dashboard + API/SDK + container support

Website: runpod.io
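RunPod's serverless endpoints are invoked over REST with a JSON `input` payload. The sketch below builds such a request; the URL pattern and payload schema are assumptions for illustration — verify against RunPod's current documentation.

```python
import json

# Assembles a serverless "runsync" request for a RunPod-style endpoint.
# The URL pattern and {"input": ...} payload shape are assumptions.
def build_runsync_request(endpoint_id: str, payload: dict, api_key: str) -> dict:
    return {
        "url": f"https://api.runpod.ai/v2/{endpoint_id}/runsync",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"input": payload}),
    }

runsync_req = build_runsync_request("my-endpoint-id", {"prompt": "Hello"}, "YOUR_API_KEY")
```

The synchronous route blocks until the worker returns a result; queue-style routes are the usual alternative for long-running jobs.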


More compute providers can be found in this Google Sheet.


Let us know in the comments section below if we missed anything! :)
