Sometime around mid-2024, after the fifth server that month where we forgot to configure DKIM and spent an hour wondering why confirmation emails were bouncing, we decided to write a deployment script. A proper one. Not a bash file with 40 lines and a prayer, but a Python system that takes a bare Ubuntu VPS and turns it into a production-ready web server in under 20 minutes. With mail. With SSL. With firewall rules. With backups. With a health check that catches misconfigurations before we send any traffic to it.
Eight months and roughly 7,000 lines of code later, that script deploys every server we operate across 9 LATAM markets for our media buying operation. This article explains why we built it, what it does, and what we learned about server configuration that most teams, whether in media buying, SEO, or paid traffic, never think about until something breaks at 2 AM on a Tuesday.
The Real Cost of Manual Server Setup in Media Buying
Our operation runs proprietary digital assets: landing pages, content hubs, comparison sites. Each lives on its own VPS with its own domain, its own SSL certificate, its own mail configuration. When you manage a handful of these, manual setup is annoying but survivable. When you operate dozens across multiple countries, it becomes the bottleneck that slows down everything else.
We tracked our manual deployment process for a month. The numbers were ugly.
Average time to deploy one server from scratch: 2 hours and 40 minutes. That included installing packages, configuring nginx, setting up PHP-FPM, obtaining SSL, configuring Postfix with DKIM, setting up file transfer access, configuring the firewall, and running basic checks. Two hours and forty minutes of an operator's time, doing the same sequence of commands with minor variations, on every single server.
Error rate on manual deploys: roughly 1 in 4 had something misconfigured. Missing DKIM record. Wrong PHP memory limit. Firewall rule that blocked the mail port. SSL certificate requested before DNS propagated, so certbot failed silently and nobody noticed until a week later when the temporary cert expired. Each error cost another 30 to 60 minutes to diagnose and fix.
That math works out to about 15 hours per month on server setup alone. For a small media buying team, that is a full quarter of one person's working time spent on repetitive infrastructure work instead of campaign optimization, content creation, or market research.
The worst part was not the time. It was the inconsistency. Two servers configured by the same person a week apart would have different PHP settings, different firewall rules, different mail configurations. When something went wrong on one server, the debugging process was unique to that server because the configuration was unique. No two servers were identical.
What We Built: One Script, 40 Packages, 15 Minutes
The system is a single Python script using the Fabric library for SSH automation. One configuration block at the top (server IP, domain, DNS provider credentials, a few feature flags) and a single command to run it. The script connects to the VPS over SSH, installs everything, configures everything, tests everything, and produces a deployment report with all credentials and connection details.
Here is what runs, in order, on every deployment:
System preparation. OS update, hostname configuration based on the domain, package installation. We install roughly 40 packages in a single apt command: nginx, PHP-FPM with the specific version we choose, Postfix, OpenDKIM, certbot, fail2ban, UFW, and all PHP extensions a modern site needs. The script pre-configures Postfix through debconf before installation to prevent the interactive prompts that break unattended installs. After installation, the package cache is cleaned automatically, saving 200 to 500 MB of disk space on small VPS instances.
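The debconf pre-seeding step can be illustrated with a fragment like the following. The exact selections the script seeds are not published; these are the standard Postfix debconf questions, and example.com is a placeholder for the deployed domain:

```
# Fed to debconf-set-selections before apt installs Postfix, so the
# interactive "mail configuration type" dialog never appears.
postfix postfix/main_mailer_type select Internet Site
postfix postfix/mailname string example.com
```

With these answers seeded and apt run under DEBIAN_FRONTEND=noninteractive, Postfix installs without a single prompt.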
PHP-FPM with auto-tuning. The script reads the server's total RAM and calculates optimal worker counts. A 512 MB VPS gets 5 workers. A 1 GB server gets 10. A 4 GB server gets 50. Memory limit, OPcache, upload limits, execution timeouts: all set conditionally based on whether we are deploying a static PHP site or WordPress. The pool runs under a generated name with its own socket, not the default www pool.
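The tuning rule can be sketched as follows. The article gives three data points (512 MB, 1 GB, 4 GB); the interpolation between them and the function name are assumptions:

```python
def fpm_max_children(ram_mb: int) -> int:
    """Pick a pm.max_children value from total server RAM.

    Tiers mirror the figures in the article: 512 MB -> 5 workers,
    1 GB -> 10, 4 GB -> 50. Values between the published points
    are an assumption (roughly linear above 1 GB).
    """
    if ram_mb <= 512:
        return 5
    if ram_mb <= 1024:
        return 10
    # scale linearly from 10 workers at 1 GB toward 50 at 4 GB
    return min(50, round(10 + (ram_mb - 1024) * 40 / 3072))
```

The computed value then lands in the generated pool file as pm.max_children alongside the pool's own socket path.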
SFTP for file management. We moved away from traditional FTP entirely. SFTP through SSH means one less service running, one less port open, and encrypted file transfer by default. The script creates a dedicated user with the web root as home directory, generates a strong password, and produces a FileZilla-compatible XML config file that we import with one click.
The entire sequence, from SSH connection to deployment report, completes in 14 to 18 minutes depending on the VPS provider's package mirror speed.
DNS Through API and SSL That Does Not Break
DNS configuration by hand means logging into a registrar panel, creating A records, MX records, SPF TXT records, and later going back to add DKIM and DMARC records after the mail server generates its keys. Each of those steps is a copy-paste opportunity for errors. One wrong character in a DKIM TXT record and your outgoing mail fails DKIM verification, silently.
The script supports three DNS modes: Cloudflare API, Namecheap API, and local BIND9. For Cloudflare and Namecheap, it creates all records automatically: A record pointing to the server IP, MX record for mail, SPF record, and later DKIM and DMARC records. No panel logins. No copy-pasting TXT values. The script reads the generated DKIM public key and pushes it directly to the DNS API.
SSL certificates (Let's Encrypt or ZeroSSL) are obtained with retry logic and DNS propagation checks. Before requesting the certificate, the script verifies that the domain actually resolves to the server's IP by querying Google DNS, Cloudflare DNS, and Quad9. If DNS has not propagated yet, the script waits and retries instead of letting certbot fail with a cryptic error. The auto-renewal timer is configured automatically.
This alone saved more failed deployments than any other single feature. In our manual process, about 30% of SSL failures were caused by requesting the certificate before DNS had propagated, a problem that simply disappears when the script checks first.
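The propagation gate can be sketched as pure logic, with the actual DNS queries injected as a callable; the function names are hypothetical, and the resolver IPs are the public addresses of the three services the script queries:

```python
import time

RESOLVERS = ("8.8.8.8", "1.1.1.1", "9.9.9.9")  # Google, Cloudflare, Quad9

def dns_propagated(expected_ip, answers):
    """True once every resolver has returned the expected A record.

    `answers` maps a resolver IP to the set of IPs it answered with
    (an empty set if the query failed or returned nothing).
    """
    return all(expected_ip in answers.get(r, set()) for r in RESOLVERS)

def wait_for_dns(expected_ip, query, attempts=10, delay=30):
    """Poll until all resolvers agree, instead of letting certbot fail.

    `query` is a zero-argument callable returning the answers dict;
    injecting it keeps the gating logic testable without network access.
    """
    for attempt in range(attempts):
        if dns_propagated(expected_ip, query()):
            return True
        if attempt < attempts - 1:
            time.sleep(delay)
    return False
```

Only after this returns True does the script hand the domain to certbot.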
Mail That Passes Every Inbox Filter
Getting mail to work is easy. Getting mail to actually reach inboxes requires four things configured correctly. Most deployment tutorials cover at most two of them.
Postfix handles sending and receiving. The script writes the entire main.cf from a template with the correct domain, hostname, TLS certificates, and relay restrictions. It also adds anti-spam rules: HELO validation, sender domain verification, recipient checks. These four lines reject about 40% of incoming junk before the message body is even transferred. Mailbox size is capped at 500 MB per account to prevent disk overflow from spam floods.
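The exact four rules are not published, but a plausible main.cf fragment implementing HELO validation, sender domain verification, and recipient checks (all standard Postfix parameters), with the 500 MB mailbox cap included, looks like this:

```
# HELO validation: reject clients that will not identify themselves properly
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, reject_unknown_helo_hostname
# Sender verification: the envelope sender's domain must actually resolve
smtpd_sender_restrictions = reject_non_fqdn_sender, reject_unknown_sender_domain
# Recipient checks: never relay for strangers
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_non_fqdn_recipient
# 500 MB cap per mailbox to survive spam floods
mailbox_size_limit = 524288000
```

Because these restrictions fire during the SMTP dialogue, junk is rejected before the message body is transferred, which is what makes the 40% figure cheap.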
DKIM signing through OpenDKIM. The script generates a 2048-bit RSA key pair, configures the signing tables, and pushes the public key to DNS automatically. Every outgoing email gets a cryptographic signature that receiving servers can verify. Without DKIM, Gmail and Outlook route your messages straight to spam.
SPF and DMARC records go into DNS alongside DKIM. SPF tells receiving servers which IPs are authorized to send mail for the domain. DMARC tells them what to do when SPF or DKIM fails. Together with DKIM, they form the authentication trio that modern email infrastructure requires.
Dovecot IMAP with submission ports allows sending and receiving email through a standard mail client like Thunderbird. The script configures IMAPS on port 993, submission on port 587 with STARTTLS, and SMTPS on port 465. It generates a Thunderbird autoconfig XML file so the mail client picks up all settings automatically: just enter email and password.
After setup, the script tests mail delivery to each mailbox and checks that messages arrive in the Maildir. It also tests whether outbound port 25 is open; many cloud providers block it by default to prevent spam. If it is blocked, the deployment report includes a ready-to-send request to the hosting provider asking them to unblock it.
Nginx and PHP: The Configuration Nobody Talks About
Most deployment guides stop at installing nginx and PHP, maybe setting the memory limit. There are several production settings that matter for both security and stability that rarely get mentioned in tutorials.
Nginx production configuration. Not the Ubuntu default with server_tokens on and no security headers. Our setup includes HTTPS redirect with HSTS, HTTP/2, X-Frame-Options, X-Content-Type-Options, gzip or brotli compression, and a catch-all server block that rejects requests made directly to the IP address instead of the domain. The SSL configuration includes session tickets, DH parameters, and modern cipher suites for both TLS 1.2 and 1.3. Certbot's auto-renewal path is explicitly whitelisted so certificate renewal does not break on the dotfile deny rule.
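The catch-all block that rejects direct-to-IP requests can be sketched like this. Exact values are assumptions, and ssl_reject_handshake requires nginx 1.19.4 or newer:

```nginx
# Default server: anything arriving without our domain in the
# Host header / SNI gets dropped, not served.
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;
    ssl_reject_handshake on;  # close TLS handshakes with no matching SNI
    return 444;               # nginx-specific "close the connection" code
}
```

The security headers (HSTS, X-Frame-Options, X-Content-Type-Options) then live only in the real server block for the domain, where they belong.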
PHP open_basedir restricts which directories PHP can access. We lock it down to the web root, /tmp, and the PHP system library path. If someone exploits a vulnerability in a PHP script, they cannot read system files like /etc/passwd, mail server configs, or SSH keys. One line in the FPM pool config, zero performance cost, significant reduction in damage radius.
disable_functions blocks dangerous PHP functions. For static sites: exec, system, shell_exec, passthru, proc_open, and popen are all disabled. For WordPress deployments: proc_open and popen stay enabled because WP-CLI and several plugins need them. The rest stay blocked. This prevents a compromised PHP script from executing system commands.
Session isolation. PHP sessions go into a per-pool directory under /var/lib/php/sessions/ with 700 permissions. Each site has its own session storage. Cross-site session leakage is impossible even if multiple sites share a server.
Upload and execution limits are set differently depending on the site type. WordPress gets 128 MB upload limit, 120-second execution timeout, and 5000 max_input_vars (WordPress menus with many items need this). Static PHP sites get 32 MB uploads, 30-second timeout, and 1000 max_input_vars; tighter limits mean a smaller attack surface.
OPcache gets 128 MB of shared memory, validates file timestamps every 2 seconds instead of on every request, and caches up to 10,000 files. The realpath cache is set to 4 MB instead of the much smaller PHP default; this change alone cuts include/require overhead measurably on WordPress sites with dozens of plugins.
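Taken together, the pool-level hardening described above might look like this in the generated FPM pool file. Pool name and paths are placeholders, and since OPcache memory is shared across pools in PHP, those settings belong in php.ini rather than the pool:

```ini
; pool file, e.g. /etc/php/8.2/fpm/pool.d/site.conf (static-site profile)
[site]
php_admin_value[open_basedir] = /var/www/site:/tmp:/usr/share/php
php_admin_value[disable_functions] = exec,system,shell_exec,passthru,proc_open,popen
php_admin_value[session.save_path] = /var/lib/php/sessions/site
php_admin_value[upload_max_filesize] = 32M
php_admin_value[post_max_size] = 32M
php_admin_value[max_execution_time] = 30
php_admin_value[max_input_vars] = 1000

; php.ini (system-wide; OPcache cannot be tuned per pool)
opcache.memory_consumption = 128
opcache.revalidate_freq = 2
opcache.max_accelerated_files = 10000
realpath_cache_size = 4M
```

For a WordPress deployment the same template would relax the limits (128M uploads, 120s timeout, 5000 max_input_vars) and drop proc_open and popen from disable_functions.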
Firewall, Fail2ban, and File Access Isolation
UFW firewall starts with a deny-all incoming policy. Only ports actually needed are opened: SSH, HTTP, HTTPS, SMTP. If the full mail stack is enabled, submission (587), SMTPS (465), and IMAPS (993) are added. If using traditional FTP instead of SFTP, port 21 and the passive range open. Nothing else. The firewall activates early in the deployment β before most services are even installed β so the server is never fully open during configuration.
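The resulting port policy is small enough to express as a function. The flag names are placeholders, and the FTP passive range is elided:

```python
def ufw_ports(full_mail: bool = False, legacy_ftp: bool = False) -> set:
    """Ports to open on a deny-all UFW baseline (mirrors the article)."""
    ports = {22, 25, 80, 443}      # SSH, SMTP, HTTP, HTTPS
    if full_mail:
        ports |= {465, 587, 993}   # SMTPS, submission, IMAPS
    if legacy_ftp:
        ports.add(21)              # plus a passive range, elided here
    return ports
```

Everything not in the returned set stays closed; the script applies this set with ufw allow rules before most services are installed.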
Fail2ban monitors authentication logs and automatically blocks IPs after failed login attempts. Jails are configured for SSH, nginx HTTP auth, and nginx bot detection. When the full mail stack is enabled, additional jails activate for Dovecot (IMAP brute force) and Postfix SASL (SMTP auth brute force). Logs rotate weekly with 7-day retention.
File descriptor limits are raised to 65,535 for both nginx and PHP-FPM through systemd service overrides. The Ubuntu default of 1,024 is sufficient for light traffic but causes "too many open files" errors under load, a problem that is extremely annoying to debug when it happens at scale because the error is intermittent and depends on the concurrent connection count.
Swap is auto-sized based on server RAM (2x for servers under 1 GB, 1x for 2 to 4 GB, capped at 4 GB for larger servers). Swappiness is set to 15 instead of the Ubuntu default of 60, which means the kernel keeps PHP-FPM workers in RAM instead of swapping them to disk at 40% memory usage.
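The sizing rule as a sketch. The 1 to 2 GB band is not specified in the text, so treating it like the 2 to 4 GB tier is an assumption:

```python
def swap_size_mb(ram_mb: int) -> int:
    """Swap size from RAM, mirroring the article's tiers:
    2x RAM under 1 GB, 1x RAM for 2-4 GB, capped at 4 GB.
    The 1-2 GB band is assumed to follow the 1x rule.
    The script pairs this with sysctl vm.swappiness=15.
    """
    if ram_mb < 1024:
        return ram_mb * 2
    return min(ram_mb, 4096)
```

So a 512 MB VPS gets a 1 GB swap file, a 2 GB server gets 2 GB, and an 8 GB server is capped at 4 GB.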
36 Automated Checks Before We Send Traffic
The deployment ends with an automated health check that verifies every component. Not just "did nginx start": functional verification of actual behavior.
Nginx: running, and the configuration syntax passes nginx -t. PHP-FPM: running, and the socket file exists. An HTTP request to localhost with the correct Host header returns a 200. An HTTPS request to the actual domain verifies the SSL certificate. Postfix: running, postfix check validates the mail config, and port 25 is listening. DKIM, SPF, and DMARC records are queried through Google DNS to confirm they are globally visible.
For SFTP: the SSH subsystem is enabled, the user exists with the correct home directory, and β when sshpass is available β an actual SFTP login and directory listing is performed.
Swap active, swappiness at 15, nginx file descriptor limits at 65,535 (read from /proc, not from config, so it is the actual runtime value), apt cache cleaned, certbot renewal timer active, unattended security upgrades enabled, fail2ban running, automatic backups scheduled.
When the full mail stack is on: Dovecot running, port 993 listening, ports 587 and 465 listening, Thunderbird autoconfig accessible through HTTPS.
If anything fails, the report flags it with the specific check that failed and what the expected value was. We have caught misconfigured DNS records, failed SSL issuance, dead PHP-FPM pools, and unresponsive mail services β all before a single visitor hit the server. Finding these problems during deployment takes seconds. Finding them after traffic is running costs hours and lost conversions.
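The reporting pattern can be sketched as a generic runner: each check returns an observed value that is compared with the expected one, and only mismatches make it into the report. Names and structure here are assumptions, not the script's actual API:

```python
def run_checks(checks):
    """Run (name, probe, expected) triples and collect failures.

    `probe` is a zero-argument callable returning the observed value
    (e.g. reading /proc for the live file-descriptor limit). A failure
    report names the check, the expected value, and what was seen.
    """
    failures = []
    for name, probe, expected in checks:
        try:
            observed = probe()
        except Exception as exc:  # a dead service should fail, not crash
            observed = f"error: {exc}"
        if observed != expected:
            failures.append(f"{name}: expected {expected!r}, got {observed!r}")
    return failures
```

An empty return value means the server is safe to receive traffic; anything else goes to the top of the deployment report.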
Six Months of Automated Deploys: The Numbers
We have been running the automated system since late 2024. The comparison to our manual process:
Deployment time dropped from 2 hours 40 minutes to 14 to 18 minutes. The longest part is package installation, which depends on the VPS provider's mirror speed. Everything else (DNS, SSL, mail, nginx, PHP, firewall, health check) runs in under 5 minutes combined.
Error rate went from 25% to effectively zero. The health check catches everything. We have not had a "forgot DKIM" or "wrong PHP version" incident since we started using the script. When errors do occur, they are environmental (outbound port 25 blocked by the provider, DNS propagation slower than expected) and the script reports them clearly instead of failing silently.
Server rotation became trivial. When we need to move a site to a new IP, the process is: spin up a new VPS, run the script, upload the site via SFTP, switch DNS. Under 30 minutes total. Before automation, this was a half-day project that nobody wanted to start.
The script also generates a complete deployment report with every credential, IP address, file path, and configuration parameter. No more searching through terminal history to find the FTP password set three weeks ago. Every server has a timestamped report file with everything needed to manage it.
The investment, several hundred hours of development over multiple months, paid for itself within the first month of regular use. Not from some abstract efficiency metric, but from concrete hours reclaimed and concrete errors eliminated. Every server we deploy starts from the same hardened, tested, documented baseline. No exceptions. No shortcuts. No "I will configure the firewall later."