Nobody plans for a pandemic. But sometimes the problem you solve for an entirely different reason turns out to be exactly the right solution when one arrives.
This is the story of a contact centre infrastructure build we did for a large Cape Town operator. The brief had three parts: scale fast, keep the cost per seat down, and keep the floor running through load shedding. South Africa's rolling power cuts were cutting into billable hours every week. Whatever we deployed had to work when the grid did not. What we built for that problem ended up being far more important than anyone anticipated.
The South African Problem Nobody Talks About
Running a contact centre at scale is an infrastructure problem dressed up as a people problem. You have hundreds of agents, each needing a working station — a device that can reliably place and receive VoIP calls with clear two-way audio, connect to the internal systems, and not fall apart after months of daily use.
In South Africa, there is an extra layer on top of that: load shedding. Running a floor of desktop PCs through rolling power cuts requires serious UPS infrastructure: heavy battery banks, significant upfront cost, and a window of maybe thirty to forty minutes before you are sending everyone home anyway. A desktop PC draws anywhere from 150 to 300 watts under load. Multiply that across hundreds of seats and keeping the operation alive through Stage 4 becomes a costly engineering problem in its own right. The conventional hardware options made that problem worse, not better.
A Raspberry Pi 3 draws around five watts. The entire agent station — Pi, USB audio adapter, monitor — could run for hours on a modest UPS that a desktop PC would drain in minutes. For a contact centre billing clients by the hour, the difference between staying live through a two-hour outage and shutting the floor down is the difference between a profitable day and an expensive one.
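To make the arithmetic concrete, here is a back-of-envelope runtime calculation. The figures are illustrative assumptions, not measurements from the deployment, but they show why the gap mattered:

```python
# Back-of-envelope UPS runtime. All numbers are illustrative assumptions.
BATTERY_WH = 168            # e.g. a small 600 VA UPS with 2 x 12 V 7 Ah batteries
INVERTER_EFFICIENCY = 0.85  # typical for a line-interactive UPS

def runtime_hours(load_watts: float) -> float:
    """Hours the battery bank can carry a given load."""
    return (BATTERY_WH * INVERTER_EFFICIENCY) / load_watts

desktop_seat = 250   # desktop PC under load, mid-range of the 150-300 W band
pi_seat = 5 + 25     # Pi 3 plus a frugal monitor

print(f"Desktop seat: {runtime_hours(desktop_seat) * 60:.0f} minutes")  # ~34 minutes
print(f"Pi seat:      {runtime_hours(pi_seat):.1f} hours")              # ~4.8 hours
```

The same modest UPS that gives a desktop seat its thirty-to-forty-minute window carries a Pi station through a full load shedding slot.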
The other options were not competitive either. Commercial IP handsets from the main vendors were expensive per unit — fine for a small office, painful at scale. Full desktop PCs were overkill: bulky, power-hungry, slow to image in bulk, and loaded with failure points we did not need. Cheap handsets looked appealing on paper and were unreliable in practice.
The power case alone would have justified the switch. The cost case made it obvious. The portability turned out to be worth more than either — but that part of the story came later.
We needed something silent, small, robust, fast to image and deploy, inexpensive enough to replace without a budget conversation, and capable of running through a load shedding cycle without dropping a single call. The Raspberry Pi 3 was the answer.
Building the Custom Image
The goal was a device that would boot directly into a working agent station — no setup, no configuration, nothing for an agent or floor manager to get wrong. Plug it in, wait thirty seconds, and it is ready to take calls.
We started with a stripped-down Ubuntu Server base, removing every package and service that was not needed. The system was locked down: no desktop file manager, no general-purpose browser, no terminal access for agents. The softphone, configured with the agent's extension and pointing at the internal FreePBX server, launched automatically on boot and registered before the login screen appeared.
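The build itself was scripted rather than done by hand. As a rough sketch of the shape of it (the package names, the softphone binary and the unit file below are stand-ins, not the real build list):

```python
#!/usr/bin/env python3
"""Image-hardening sketch, run as root on the master image.
Package, service and binary names are illustrative stand-ins."""
import subprocess

UNNEEDED_PACKAGES = ["pcmanfm", "chromium-browser", "lxterminal"]  # hypothetical
UNNEEDED_SERVICES = ["bluetooth.service", "avahi-daemon.service"]  # hypothetical

# Strip everything the station does not need.
for pkg in UNNEEDED_PACKAGES:
    subprocess.run(["apt-get", "purge", "-y", pkg], check=False)
for unit in UNNEEDED_SERVICES:
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)

# Run the softphone as a system service so it registers to the PBX
# before any login screen appears ("agent-softphone" is a stand-in
# for the actual client).
UNIT = """[Unit]
Description=Agent softphone
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/agent-softphone --register
Restart=always

[Install]
WantedBy=multi-user.target
"""
with open("/etc/systemd/system/agent-softphone.service", "w") as f:
    f.write(UNIT)
subprocess.run(["systemctl", "enable", "agent-softphone.service"], check=True)
```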
Audio was the hardest part to get right. The Pi's onboard audio was not up to contact centre standards: usable on a quiet desk but inconsistent, and on a noisy open-plan floor it caused real problems. We moved to USB audio adapters, which gave us consistent, clean audio across all units. Getting echo cancellation tuned correctly for a room full of agents all on calls simultaneously took more iterations than expected, but once it was dialled in, the call quality was indistinguishable from a commercial handset.
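For readers wanting a starting point: on a PulseAudio stack, a reasonable assumption for an image of this vintage, the WebRTC echo canceller can be inserted between the USB adapter and the softphone along these lines (the device names are examples; list the real ones with `pactl list short sources`):

```python
import subprocess

# Load PulseAudio's echo-cancel module so the softphone sees a cleaned
# microphone and speaker pair. Master device names below are examples.
subprocess.run([
    "pactl", "load-module", "module-echo-cancel",
    "aec_method=webrtc",
    "source_master=alsa_input.usb-0d8c_USB_Audio-00.mono-fallback",   # hypothetical
    "sink_master=alsa_output.usb-0d8c_USB_Audio-00.analog-stereo",    # hypothetical
    "source_name=agent_mic",
    "sink_name=agent_out",
], check=True)
```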
Each Pi station cost a fraction of commercial IP phone alternatives — and unlike a handset, it was also a full Linux computer that we controlled completely. No vendor firmware, no lock-in, no update that could break the softphone configuration.
Once the master image was ready, deploying a new station meant imaging an SD card from the master — a process that took minutes — and dropping it into a Pi with a USB audio adapter and headset. We built a small imaging rig that could flash multiple cards in parallel. Rolling out forty new stations took a morning, not a week.
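The rig itself did not need to be anything clever. A minimal parallel-flashing sketch, with assumed image and device paths, looks something like this:

```python
#!/usr/bin/env python3
"""Flash several SD cards in parallel from the master image.
Run as root. Device paths are examples; confirm them with lsblk first,
because dd will happily overwrite the wrong disk."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

MASTER = "/images/agent-station.img"          # assumed path to the master image
CARDS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # one entry per card reader on the rig

def flash(device: str) -> str:
    subprocess.run(
        ["dd", f"if={MASTER}", f"of={device}", "bs=4M", "conv=fsync", "status=none"],
        check=True,
    )
    return device

with ThreadPoolExecutor(max_workers=len(CARDS)) as pool:
    for done in pool.map(flash, CARDS):
        print(f"{done}: flashed")
```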
The Deployment
The stations were mounted under desks or sat flat next to monitors. The entire unit — Pi, audio adapter, power supply — fit in a small box. The footprint on the desk was minimal. Agents plugged in their headset and that was it.
On the network side, each Pi got a fixed IP address via DHCP reservation, connected to the internal VLAN and routed through the FreePBX server. Call routing, IVR, recording and reporting all worked through the same infrastructure we had built for the floor. No special handling was needed for the Pi stations versus any other endpoint.
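Maintaining hundreds of reservations by hand gets error-prone quickly, so the natural move is to generate them from an inventory. A sketch in ISC dhcpd syntax, using placeholder MACs on the Raspberry Pi Foundation's b8:27:eb prefix and a hypothetical subnet:

```python
#!/usr/bin/env python3
"""Generate DHCP host reservations for the agent stations from a
MAC-address inventory. ISC dhcpd syntax shown as one example; the
MACs and subnet here are placeholders."""

STATIONS = {
    "agent-001": "b8:27:eb:00:00:01",
    "agent-002": "b8:27:eb:00:00:02",
}
SUBNET_PREFIX = "10.20.30."  # hypothetical agent VLAN
FIRST_HOST = 101

for offset, (name, mac) in enumerate(sorted(STATIONS.items())):
    print(
        f"host {name} {{ hardware ethernet {mac}; "
        f"fixed-address {SUBNET_PREFIX}{FIRST_HOST + offset}; }}"
    )
```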
The system ran reliably. When something broke, it was almost always the SD card — a known weak point for the Pi under sustained daily use. Our fix was simple: we kept a stock of pre-imaged spare cards. Swap the card, done in under two minutes, agent back on the floor.
A Desktop Built for One Job
The OS was Ubuntu Server with a lightweight window manager sitting on top — nothing more than the system needed to function. But the detail that mattered most was what we built above the OS: the desktop itself. We designed it from scratch.
A company-branded background. A single toolbar at the bottom of the screen. Three applications — and nothing else.
The softphone. Pre-configured to the agent's extension. Registered to the PBX before the login screen appeared. Agents picked up calls; they did not set anything up.
The client for the centralised application server. Every CRM screen, every customer record, every tool the agent used lived on that server, not on the device in their hands.
The browser. For knowledge base access and web-based tools, locked to permitted domains. Not a path to the open internet, not a download manager, not a vulnerability waiting to happen.
There was nothing else to click. No file manager, no settings panel, no system tray with a hundred notifications. An agent sitting down for the first time would find the environment self-explanatory in under thirty seconds, because there was nothing to explain. The entire surface area of the interface was the job.
Secure From the First Packet
The VPN tunnel was the very first thing the system brought up — before the desktop loaded, before the softphone registered, before the agent touched the keyboard. By the time anyone sat down, the device was already inside the corporate network over an encrypted tunnel. From the agent's perspective, it was invisible. From ours, it meant every call, every screen refresh, every data request travelled through an encrypted channel — regardless of what network the device happened to be sitting on.
This was not a VPN you connected to after logging in. It was not an optional client in the system tray. It was a fundamental part of the boot sequence, handled before the user session started. The device was either connected and secure, or it was not on the network at all. There was no in-between state.
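A boot gate like that can be surprisingly small. Here is a sketch of the idea, assuming OpenVPN and a tun0 interface rather than whatever the production build actually used:

```python
#!/usr/bin/env python3
"""Boot-sequence gate: block until the VPN tunnel is up before the
desktop session is allowed to start. A sketch assuming OpenVPN with a
client profile named "agent" and a tun0 interface."""
import os
import subprocess
import sys
import time

TUNNEL_IFACE = "tun0"
TIMEOUT_SECONDS = 120

# Bring the tunnel up ("agent" is a hypothetical profile name).
subprocess.run(["systemctl", "start", "openvpn-client@agent"], check=True)

deadline = time.monotonic() + TIMEOUT_SECONDS
while time.monotonic() < deadline:
    if os.path.isdir(f"/sys/class/net/{TUNNEL_IFACE}"):
        sys.exit(0)  # tunnel is up: let the session start
    time.sleep(1)

sys.exit(1)  # no tunnel, no network: the session does not start
```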
We locked the user account down hard. No administrator access, no package manager, no terminal, no ability to install or run unsigned software. The agent's account had write access to nothing that mattered. There were no local documents, no download folder worth mentioning, no persistent browser history with anything sensitive in it. This was not about distrust of the people using the stations — it was about eliminating a surface for things to go wrong. Viruses need a foothold. We gave them nowhere to land.
We did not run antivirus software on the stations. Not because we forgot — because the architecture made it unnecessary. A locked-down system with no writable application paths, no ability to execute arbitrary code, and no persistent local storage is simply not a meaningful target.
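For illustration, two of the measures that close off those footholds, sketched as a script even though in practice they belong baked into the master image (the username is hypothetical):

```python
#!/usr/bin/env python3
"""Lockdown sketch for the agent account. Illustrative only; in the
real build these settings were part of the image, not applied live."""
import subprocess

AGENT = "agent"  # hypothetical username

# No sudo membership: no administrator access, no package manager.
subprocess.run(["deluser", AGENT, "sudo"], check=False)

# Mount the home directory as tmpfs with noexec/nosuid: nothing an
# agent saves survives a reboot, and nothing dropped there can execute.
with open("/etc/fstab", "a") as f:
    f.write(f"tmpfs /home/{AGENT} tmpfs rw,noexec,nosuid,nodev,size=64m 0 0\n")
```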
If It Goes Missing, It Goes Nowhere
The security model answered a question nobody had thought to ask at the start of the project: what happens when a device goes missing?
A stolen Pi station was a hardware loss of a few hundred rand. That was it. There was no customer data on the device — call recordings lived on the server, CRM data lived on the server, credentials were never stored locally in any recoverable form. An attacker who got hold of a station had a small ARM computer with a locked-down Ubuntu image on it. Without the VPN certificate — which was tied to the device identity and revocable remotely in seconds — it could not connect to anything meaningful.
We could kill a compromised device from the management server in the time it took to make a phone call. The device became a brick the moment access was revoked. Not a data breach risk. Not an investigation. A brick.
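Assuming an OpenVPN server with an easy-rsa PKI, which is one common way to build this rather than a confirmed detail of the deployment, the kill switch is only a handful of commands:

```python
#!/usr/bin/env python3
"""Kill switch sketch: revoke a missing station's client certificate.
Assumes an OpenVPN server with an easy-rsa PKI; the paths, unit name
and device naming are illustrative."""
import shutil
import subprocess
import sys

device = sys.argv[1]                   # e.g. "agent-042"
EASYRSA_DIR = "/etc/openvpn/easy-rsa"  # assumed PKI location

# Revoke the certificate and regenerate the certificate revocation list.
subprocess.run(["./easyrsa", "--batch", "revoke", device], cwd=EASYRSA_DIR, check=True)
subprocess.run(["./easyrsa", "gen-crl"], cwd=EASYRSA_DIR, check=True)

# Publish the fresh CRL where the server's crl-verify directive points.
shutil.copy(f"{EASYRSA_DIR}/pki/crl.pem", "/etc/openvpn/crl.pem")

# Recent OpenVPN versions re-check the CRL on each new connection; a
# reload makes the cut-off immediate.
subprocess.run(["systemctl", "reload", "openvpn-server@agents"], check=True)
```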
This combination — no local data, encrypted tunnel required to function, hardware access controls tied to revocable certificates — meant that the physical security posture of the station was essentially irrelevant. You could leave one on a bus and the worst outcome was replacing the hardware. Everything that mattered was somewhere else.
March 2020
When the South African government announced lockdown, the contact centre — like every other business in the country — had hours to figure out what to do with hundreds of employees who could no longer come to work. For most businesses, that meant operations stopping. For this one, it meant something different.
Every agent's station was a small box about the size of a thick paperback book. We told them: unplug your headset, unplug your audio adapter, unplug the power cable. Put the Pi in your bag. Take it home.
At home, they plugged back in. The Pi booted, the softphone registered to the PBX over a VPN connection, and they were taking calls from their kitchen table within minutes. The transition that would have taken other businesses weeks of emergency IT work happened in an afternoon — because the infrastructure had been built, without knowing it, to be portable.
Hundreds of agents kept working and kept earning during the hardest months of the pandemic, not because of any pandemic planning, but because a decision made to survive load shedding turned out to solve a problem nobody saw coming.
What We Learned
The first lesson is about building for the constraints you actually face. The brief included load shedding resilience — not as a nice-to-have, but as a hard operational requirement. Keeping a contact centre floor live through a Stage 4 outage on affordable UPS infrastructure is simply not possible with conventional desktop hardware. Solving that problem pointed directly to low-power, compact, self-contained stations. Everything else that made the lockdown response possible — the portability, the bag-and-go simplicity — was a downstream consequence of solving the power problem first.
The second lesson is about cost: building on commodity hardware with open-source software keeps the price per seat low and the total cost of ownership manageable. That case held up on day one and it held up through the pandemic.
But the most important lesson is about control. When you own the entire stack — the OS image, the softphone configuration, the PBX, the network — you can respond to situations that nobody planned for. You can push a configuration change to every station simultaneously. You can adapt because you understand every layer. There is no vendor to call, no locked firmware to wait on, no support contract with exclusions buried in the small print.
The contact centre industry in South Africa was slow to move to remote work before COVID. Many operators scrambled in 2020 because their infrastructure assumed agents would always be in a building. This one did not scramble — because years earlier, someone asked a different question: what happens when the lights go out?
We have carried that into every infrastructure project since. Build for the grid you have. Build compact. Build for control. You do not know what your infrastructure will need to survive in two years — but in South Africa, you can be reasonably sure that at some point, it will need to survive without power.
Need to deploy contact centre infrastructure that survives load shedding, supports remote work, and does not cost a fortune per seat? Get in touch — we have done it at scale.