Submit ML tasks to distributed GPU infrastructure. Defect detection, material property prediction — powered by 60 RTX 5070 GPUs across Poland. Mining when idle, AI on demand.
ML Compute Exchange — Interactive Prototype — All data is simulated
ML Compute
Workspace
Tasks
Upload Data
New Task
Results
Account
Billing
JK
Jan Kowalski
Task list columns: ID | Model | Status | Created | Files | Duration | Cost | Server
Under the Hood: Task Pipeline (03-compute-agent.md)
Dispatch: Cloudflare Queue ml-job-queue
Job Claim: SELECT FOR UPDATE SKIP LOCKED (see the sketch after this list)
Real-time: SSE via GET /tasks/:id/stream
Allocation: 1 task = 1 server, 1 GPU (MVP)
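A minimal sketch of how a compute agent might claim a queued task with SELECT ... FOR UPDATE SKIP LOCKED, assuming a Postgres tasks table with status and server_id columns and the node-postgres (pg) client; the table and column names are illustrative, not taken from the spec.

```typescript
import { Pool } from "pg";

// Assumed connection string; on Cloudflare this would typically go through Hyperdrive.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Claim one queued task for a given server. SKIP LOCKED lets concurrent
// agents pick different rows without blocking on each other's locks.
export async function claimTask(serverId: string): Promise<string | null> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const { rows } = await client.query(
      `SELECT id FROM tasks
       WHERE status = 'queued'
       ORDER BY created_at
       LIMIT 1
       FOR UPDATE SKIP LOCKED`
    );
    if (rows.length === 0) {
      await client.query("ROLLBACK");
      return null; // nothing queued right now
    }
    const taskId: string = rows[0].id;
    await client.query(
      "UPDATE tasks SET status = 'assigned', server_id = $1 WHERE id = $2",
      [serverId, taskId]
    );
    await client.query("COMMIT");
    return taskId;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```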
API endpoints (a client sketch follows the status flow):
POST /tasks
GET /tasks?page=1&limit=20&status=running
GET /tasks/:id/stream (SSE)
POST /tasks/:id/cancel
Status flow: queued → assigned → running → completed
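A minimal client-side sketch of creating a task and following that status flow over the SSE stream. The base URL, auth header, and request/response field names are assumptions for illustration.

```typescript
// Hypothetical values; replace with the real API host and token.
const BASE_URL = "https://api.example.com";
const API_TOKEN = process.env.API_TOKEN ?? "";

interface Task {
  id: string;
  status: "queued" | "assigned" | "running" | "completed";
}

// POST /tasks: submit a new task.
async function createTask(model: string, uploadId: string): Promise<Task> {
  const res = await fetch(`${BASE_URL}/tasks`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_TOKEN}`,
    },
    body: JSON.stringify({ model, upload_id: uploadId }),
  });
  if (!res.ok) throw new Error(`POST /tasks failed: ${res.status}`);
  return res.json();
}

// GET /tasks/:id/stream: read SSE frames until the task reaches "completed".
async function watchTask(taskId: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/tasks/${taskId}/stream`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE frames are separated by a blank line; keep any partial frame buffered.
    const frames = buffer.split("\n\n");
    buffer = frames.pop() ?? "";
    for (const frame of frames) {
      const data = frame
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trim())
        .join("\n");
      console.log("status update:", data);
      if (data.includes("completed")) return; // assumes the stream emits status strings
    }
  }
}

// Usage: createTask("defect-detection", "upl_123").then((t) => watchTask(t.id));
```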
Atomicity: task completion, usage_record creation, and server release happen in a single DB transaction; if any step fails, the whole transaction rolls back.
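A sketch of that completion transaction, again using pg and assuming hypothetical tasks, usage_records, and servers tables; the column names are illustrative.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Complete a task atomically: record the final status, write the usage row
// for billing, and free the server. Any failure rolls the whole thing back.
export async function completeTask(
  taskId: string,
  serverId: string,
  gpuSeconds: number
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "UPDATE tasks SET status = 'completed', finished_at = now() WHERE id = $1",
      [taskId]
    );
    await client.query(
      "INSERT INTO usage_records (task_id, server_id, gpu_seconds) VALUES ($1, $2, $3)",
      [taskId, serverId, gpuSeconds]
    );
    await client.query("UPDATE servers SET status = 'idle' WHERE id = $1", [
      serverId,
    ]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK"); // no partial completion, usage, or release
    throw err;
  } finally {
    client.release();
  }
}
```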
Click to upload or drag and drop
ZIP, JPG, PNG, CSV, PDF — up to 2GB
Recent Uploads
parsed
Under the Hood: Data Pipeline (02-data-pipeline.md)
Upload Method: Presigned R2 URLs, 60 min TTL
Storage: Cloudflare R2, tenant-isolated prefixes
Parsing: Async via parse-dispatch-queue (consumer sketch after this list)
Formats: JPG, PNG, CSV, PDF, DOCX, XML, ZIP
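A minimal sketch of a parse-dispatch-queue consumer as a Cloudflare Workers queue handler; the message shape, binding name, and parsing step are assumptions (ambient types from @cloudflare/workers-types).

```typescript
// Hypothetical message body pushed onto parse-dispatch-queue after upload confirmation.
interface ParseJob {
  upload_id: string;
  tenant_id: string;
  key: string; // {tenant_id}/uploads/{upload_id}/{filename}
}

interface Env {
  UPLOADS: R2Bucket; // R2 bucket binding configured in wrangler.toml
}

export default {
  async queue(batch: MessageBatch<ParseJob>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        const obj = await env.UPLOADS.get(msg.body.key);
        if (!obj) {
          msg.ack(); // object is gone; nothing to parse
          continue;
        }
        // Format-specific parsing (CSV, PDF, ...) would run here,
        // then the upload row would be marked "parsed".
        msg.ack();
      } catch {
        msg.retry(); // transient failure: let the queue redeliver
      }
    }
  },
};
```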
Flow: Client → POST /uploads/presign → Direct PUT to R2 → POST /uploads/confirm → Queue: parse
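A client-side sketch of that flow; the endpoint payloads and field names (filename, size, upload_id, url) are assumptions for illustration.

```typescript
const BASE_URL = "https://api.example.com"; // hypothetical API host
const API_TOKEN = process.env.API_TOKEN ?? "";

async function uploadFile(file: File): Promise<string> {
  // 1. Ask the API for a presigned R2 URL.
  const presign: { upload_id: string; url: string } = await fetch(
    `${BASE_URL}/uploads/presign`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${API_TOKEN}`,
      },
      body: JSON.stringify({ filename: file.name, size: file.size }),
    }
  ).then((r) => r.json());

  // 2. PUT the bytes straight to R2; they never touch the Workers API.
  const put = await fetch(presign.url, { method: "PUT", body: file });
  if (!put.ok) throw new Error(`R2 PUT failed: ${put.status}`);

  // 3. Confirm the upload so the backend can enqueue parsing.
  await fetch(`${BASE_URL}/uploads/confirm`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_TOKEN}`,
    },
    body: JSON.stringify({ upload_id: presign.upload_id }),
  });

  return presign.upload_id;
}
```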
Files never pass through CF Workers (this sidesteps the 128 MB Worker memory limit). Multipart presigned URLs are used for large files. R2 path: {tenant_id}/uploads/{upload_id}/{filename}
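A sketch of minting those presigned URLs against R2's S3-compatible endpoint with the AWS SDK; the bucket name and env var names are assumptions, while the key pattern and 60 min TTL come from the spec above.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// R2 exposes an S3-compatible endpoint; credentials are R2 API tokens.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Presign a PUT for the tenant-isolated key, valid for 60 minutes.
export async function presignUpload(
  tenantId: string,
  uploadId: string,
  filename: string
): Promise<string> {
  const key = `${tenantId}/uploads/${uploadId}/${filename}`;
  return getSignedUrl(
    r2,
    new PutObjectCommand({ Bucket: "ml-uploads", Key: key }),
    { expiresIn: 60 * 60 } // 60 min TTL, matching the spec
  );
}
```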