The problem with infrastructure monitoring is that most of the good tools are designed for large teams or large budgets — often both. Datadog, Grafana Cloud, New Relic — all excellent, all overkill for a small fleet of servers that you own and can SSH into directly.
We wanted something simpler: a single page that told us the state of everything we run, updated every 30 seconds, accessible from any browser on the LAN. No agents to install on each host. No third-party services. No monthly fees. Just a Node.js server that SSHes into each host, runs a few commands, and surfaces the results.
The Architecture
The dashboard is a Node.js/Express server running on Ubuntu 24.04 on our Proxmox cluster. It is managed by PM2 and serves a plain HTML/JavaScript frontend on port 80 via nginx.
Data collection works like this:
- SSH vitals — every 30 seconds, the server SSHes into each host using node-ssh and runs top -bn1, free -m, df -h, and uptime. Output is parsed and stored in memory.
- Pi-hole — the Pi-hole REST API returns query counts, block rates, and status. One HTTP call per cycle.
- ESXi — the ESXi SOAP API returns VM power states and host resource usage. We parse the SOAP XML response and normalise it into the same data shape as the SSH vitals.
- MikroTik — RouterOS API via a raw socket connection. We pull interface traffic stats and DHCP lease counts.
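The SSH-vitals step above can be sketched roughly as follows. This is a minimal illustration, not the dashboard's actual code: the username, key path, and names like COMMANDS and HOSTS are assumptions, and node-ssh's connect/execCommand calls are the only external API used.

```javascript
// Sketch of one SSH collection cycle. Hostnames, credentials and the
// COMMANDS list are illustrative, not taken from the real dashboard.
const COMMANDS = ['top -bn1', 'free -m', 'df -h', 'uptime'];
const store = new Map(); // hostname -> latest vitals snapshot

function makeSnapshot(host, outputs) {
  // Keep raw command output alongside a last-seen timestamp so the
  // frontend can flag stalled collection.
  return { host, outputs, lastSeen: Date.now() };
}

async function collectHost(host) {
  // Lazy require so the pure helpers above work without node-ssh installed.
  const { NodeSSH } = require('node-ssh');
  const ssh = new NodeSSH();
  await ssh.connect({
    host,
    username: 'monitor',                              // assumed account
    privateKeyPath: '/home/monitor/.ssh/id_ed25519',  // assumed key path
  });
  const outputs = {};
  for (const cmd of COMMANDS) {
    const { stdout } = await ssh.execCommand(cmd);
    outputs[cmd] = stdout;
  }
  ssh.dispose();
  store.set(host, makeSnapshot(host, outputs));
}

// Every 30 seconds, for each host (HOSTS is a hypothetical array):
// setInterval(() => HOSTS.forEach(h => collectHost(h).catch(console.error)), 30_000);
```

Keeping only the latest snapshot per host in a Map is deliberate — the dashboard shows current state, so there is no history to persist.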
All collected data is pushed to connected browser clients via Server-Sent Events (SSE). The frontend just listens on an event stream — no polling, no WebSocket handshake overhead.
What It Shows
The dashboard is divided into cards — one per host. Each card shows:
- CPU usage (current %, colour-coded by threshold)
- RAM usage (used / total)
- Disk usage (used / total per mount point)
- Uptime
- Last-seen timestamp (so you immediately know if collection has stalled)
There is also a summary row at the top for Pi-hole (queries today, percentage blocked), a network throughput graph for the MikroTik WAN interface, and a VM inventory panel from ESXi showing power state for each VM.
"The last-seen timestamp turned out to be the most useful thing on the page. If a host goes quiet, you know within 30 seconds."
What We Learned
SSH is a surprisingly good monitoring transport. It is authenticated, encrypted, and available on every Linux host by default. The latency overhead per command is negligible on a LAN. We briefly considered installing a lightweight agent (like Netdata or node_exporter) on each host and scraping it, but SSH is simpler to maintain — no agent to update, no port to open.
Server-Sent Events are underrated. For a read-only live data stream from server to browser, SSE is simpler than WebSockets and works through standard HTTP/1.1. No library needed on the client — just new EventSource('/stream').
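The browser side really is that small. A hedged sketch — the card markup, field names, and 60-second staleness threshold are illustrative, not the dashboard's actual frontend:

```javascript
// Pure helper: turn a vitals snapshot into card HTML. Structure and the
// 60-second staleness cutoff are illustrative choices.
function renderCard(s) {
  const stale = Date.now() - s.lastSeen > 60_000;
  return `<div class="card${stale ? ' is-stale' : ''}">` +
         `<h2>${s.host}</h2>` +
         `<p>CPU ${s.cpu}%</p>` +
         `<p>last seen ${new Date(s.lastSeen).toISOString()}</p>` +
         `</div>`;
}

// Wiring (browser only):
// const source = new EventSource('/stream');
// source.addEventListener('vitals', (e) => {
//   const s = JSON.parse(e.data);
//   document.getElementById('card-' + s.host).outerHTML = renderCard(s);
// });
```

Because SSE reconnects automatically on connection loss, the client needs no retry logic of its own.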
Parsing shell output is fragile. We have broken the top parser twice by running it against hosts with different locales. The fix was to always set LANG=C before the command and to add defensive parsing with fallbacks rather than assumptions about output format.
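The defensive style looks something like this sketch for free -m — return nulls on unexpected input instead of throwing, and always force the locale (the helper name is ours, not from the dashboard):

```javascript
// Defensive parse of `free -m` output, run remotely as `LANG=C free -m`
// so the "Mem:" header row is predictable across hosts.
function parseFreeM(stdout) {
  const line = stdout.split('\n').find((l) => l.startsWith('Mem:'));
  if (!line) return { totalMb: null, usedMb: null }; // format drifted: degrade, don't crash
  const fields = line.trim().split(/\s+/); // Mem: total used free ...
  const totalMb = Number(fields[1]);
  const usedMb = Number(fields[2]);
  return {
    totalMb: Number.isFinite(totalMb) ? totalMb : null,
    usedMb: Number.isFinite(usedMb) ? usedMb : null,
  };
}
```

A null field renders as "unknown" on the card, which is far better than one bad host taking down the whole collection cycle.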
PM2 is worth it. The dashboard has been running continuously for over a year. PM2 handles the process lifecycle — restarts on crash, log rotation, startup on boot. We have never had to manually restart it.
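For reference, a PM2 setup of this shape can be captured in an ecosystem file — this is a minimal sketch with illustrative names, not our actual config (boot persistence comes from pm2 startup followed by pm2 save, and log rotation from the pm2-logrotate module):

```javascript
// ecosystem.config.js — minimal PM2 config sketch; app name, script
// path, and memory limit are illustrative.
module.exports = {
  apps: [
    {
      name: 'lan-dashboard',
      script: 'server.js',
      autorestart: true,            // restart on crash
      max_memory_restart: '200M',   // guard against slow leaks
    },
  ],
};
```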
Could You Use This?
The dashboard is built for our specific infrastructure and is not packaged as a general tool. But the pattern — SSH-based vitals collection, SSE push to browser, plain HTML frontend — is simple enough to replicate for any small fleet of Linux servers.
If you are managing a handful of servers and want a lightweight visibility layer without a monitoring platform subscription, this approach works well. We are happy to help set something similar up for your infrastructure — get in touch.