Platform architecture

How the control plane, worker nodes, networking, builds, and monitoring fit together.

Written by Zoro

Last updated 3 days ago

This page replaces the legacy marketing Architecture article with terminology that matches the product: Organisation, Application, Environment, Service, Worker Node, and Deployment. For the conceptual stack, read How dFlow is structured first.

Control plane and worker nodes

  • Control plane (dFlow app): Where you manage Organisations, Applications, Environments, Services, and releases. On dFlow Cloud, dFlow hosts this for you. When you self-host, you run this stack on infrastructure you operate (commonly via Docker Compose; see Installation options under Self-Hosting dFlow in the sidebar).
  • Worker Nodes: Machines that execute Services for the Environments you attach them to. dFlow prepares nodes over SSH (for example, by installing Dokku for container-style app deployments). See the Worker Nodes concept.
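When self-hosting, the control plane commonly runs as a small Compose stack. The sketch below is illustrative only: the image names, service names, ports, and database choice are placeholders, not the stack dFlow actually ships; see Installation options under Self-Hosting dFlow for the real file.

```yaml
# Hypothetical docker-compose.yml for a self-hosted control plane.
# Image names, credentials, and environment variables are placeholders.
services:
  dflow-app:            # the dashboard / control plane
    image: example/dflow-app:latest
    environment:
      DATABASE_URI: postgres://dflow:secret@db:5432/dflow
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dflow
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: dflow
  traefik:              # reference proxy layer (see "Proxy, DNS, and TLS")
    image: traefik:v3
    ports:
      - "80:80"
      - "443:443"
```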

Server access and provisioning

For nodes you connect directly, dFlow uses SSH, so you do not have to install a permanent agent on every server. Provisioning and integrations depend on your path (dFlow Cloud managed nodes vs self-hosted servers you add in the dashboard). When needed, troubleshoot connectivity from Worker Nodes and Compute and from Self-hosting troubleshooting under Self-Hosting dFlow in the sidebar.
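A quick way to rule out platform issues is to confirm that plain SSH works with the same user and key you gave dFlow. The host alias, address, user, and key path below are examples:

```
# ~/.ssh/config entry for manual connectivity checks (values are examples)
Host worker-1
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/dflow_worker_key
```

Then `ssh worker-1 true` (or `ssh worker-1 'docker info'`) verifies both reachability and the key; if that fails from your machine, the control plane will fail the same way.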

Proxy, DNS, and TLS

Traffic for your Services is routed through the platform proxy layer (Traefik in the reference Compose stack). You assign domains and TLS per Environment. For application traffic, see Domains and SSL (deployments) under Deployments and Operations in the sidebar; for the self-hosted control plane hostname, see Domains and SSL (self-hosting) under Self-Hosting dFlow in the sidebar.
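With Traefik as the proxy, routing and TLS are typically wired up through container labels. The keys below are real Traefik v2/v3 label names, but the router name, domain, port, and certificate resolver are placeholders; dFlow manages the equivalent configuration for you:

```yaml
# Illustrative Traefik labels on a Service container (values are examples)
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
  - "traefik.http.routers.myapp.entrypoints=websecure"
  - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
  - "traefik.http.services.myapp.loadbalancer.server.port=3000"
```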

Builds and deployments

Application Services are typically built as Docker images. dFlow supports Dockerfiles and build tooling such as Railpack where enabled. Builds run in the context of your Worker Nodes and Deployments, not as a separate, opaque remote CI step; see the Deployments concept.
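Any Service that builds from a standard Dockerfile fits this model. A minimal multi-stage example for a Node.js app (base images and port are illustrative, not a dFlow requirement):

```dockerfile
# Build stage: install dependencies and compile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only what the app needs
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]
```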

Scaling and additional compute

To run more capacity, attach additional Worker Nodes to the right Environments or provision managed nodes on dFlow Cloud (provider and regions depend on your plan). You stay in the same Application / Environment / Service model instead of hand-rolling orchestration.

Monitoring and logs

Worker Nodes can surface host and workload metrics. Where installed, the product integrates Netdata for deeper node metrics, alongside built-in views for status and logs; for task-level detail, open Logs under Deployments and Operations in the sidebar and the Worker Nodes overview under Worker Nodes and Compute in the sidebar.
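If Netdata is installed on a node, its agent answers on port 19999 by default, which gives you a quick local health probe. This is a generic sketch, not a dFlow command; the host argument is an assumption:

```shell
# Probe a (possibly absent) Netdata agent. /api/v1/info is a real
# Netdata endpoint and 19999 its default port; prints "reachable"
# or "unreachable" instead of failing hard.
netdata_status() {
  if curl -fsS --max-time 2 "http://${1:-localhost}:19999/api/v1/info" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

netdata_status            # checks localhost by default
```

Run it on the node itself (for example over SSH) so you are probing the agent locally rather than through the proxy layer.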

Advanced: how the dashboard talks to the backend

The control plane UI uses Next.js Server Actions and tRPC over HTTP (plus Payload-backed APIs where applicable). There is no separate versioned customer REST API for the whole dashboard. If you are integrating or self-hosting at a deep level, read Reference overview under API, CLI, and Admin Reference in the sidebar and tRPC and backend reference under API, CLI, and Admin Reference in the sidebar.

Next steps