One VM on Azure, provisioned with OpenTofu and configured with Ansible.
tofu apply provisions the network, VM, Key Vault, and monitoring.
ansible-playbook installs and hardens the server.
Nothing is done manually.
Ask Claude anything about this project's architecture, security design, or how it was built.
| Host | |
| --- | --- |
| Domain | jrickey.cc |
| Region | Azure South Central US |
| Protocol | HTTP/2 · TLS 1.3 only |
| Certificate | Let's Encrypt (auto-renew) |

| Virtual Machine | |
| --- | --- |
| App VM | Standard_B2pts_v2 · ARM64 |
| OS | Ubuntu LTS (arm64) |
| Auth | SSH key only (Ed25519), no passwords |

| Network | |
| --- | --- |
| App VM | Public IP, internet-facing |

| Automation Stack | |
| --- | --- |
| Provisioning | OpenTofu |
| Configuration | Ansible |
| Secrets | Azure Key Vault (auto-generated, runtime fetch) |
| OpenTofu state | Azure Blob Storage (shared, locked) |
| Monitoring | Azure Monitor + email alerts |
| DNS | Cloudflare DNS · Azure DNS zone provisioned via OpenTofu |
| AI interface | MCP over Streamable HTTP |
- SSH hardened: key-only auth (Ed25519), no passwords, root login disabled, max 3 auth attempts.
- fail2ban: repeated failed SSH attempts trigger automatic IP bans.
- Unattended-upgrades applies security patches automatically.
- Zero-trust secrets: API keys auto-generated by OpenTofu, stored in Azure Key Vault, fetched at runtime via managed identity. Never typed or stored in code.
- Azure Monitor alerts on CPU above 85% for 5 minutes.
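The SSH hardening above maps directly onto a handful of `sshd` directives. A minimal sketch (the drop-in file name is an assumption; the actual playbook may template the full `sshd_config` instead):

```
# /etc/ssh/sshd_config.d/hardening.conf (file name is an assumption)
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
MaxAuthTries 3
```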
MCP (Model Context Protocol) is an open standard that lets AI models call tools on external servers. This deployment runs one MCP server (mcp-infra) on the app VM, accessible to the owner via Claude Code.
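MCP messages are JSON-RPC 2.0; with the Streamable HTTP transport, the client POSTs each message to the server's MCP endpoint. A minimal `tools/list` request (this server's endpoint path and auth scheme are not shown here) looks roughly like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```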
A sentinel script runs every 5 minutes via systemd timer on the app VM. It checks service health, verifies the nginx config checksum against the post-deploy baseline, and auto-restarts any failed service. All events are written to the sentinel log.
- nginx, ask-app, mcp-infra active via systemctl
- nginx config checksum matches post-deploy baseline
- Auto-restarts failed services, logs all events
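The checksum half of that check reduces to "record a hash post-deploy, recompute it on every timer tick." A self-contained sketch (the paths are stand-ins for `/etc/nginx/nginx.conf` and the stored baseline on the VM, not the real sentinel script):

```shell
# Sketch of the sentinel's drift check (stand-in paths, not the VM's).
set -eu
conf=$(mktemp)
echo "server { listen 80; }" > "$conf"

# Post-deploy: record the baseline checksum.
baseline=$(sha256sum "$conf" | awk '{print $1}')

# Timer fires: recompute and compare.
current=$(sha256sum "$conf" | awk '{print $1}')
if [ "$current" = "$baseline" ]; then
    status="ok"
else
    status="drift"
fi
echo "nginx config check: $status"
rm -f "$conf"
```

On the real VM a mismatch would be logged and investigated rather than silently overwritten, since drift there means the config changed outside the deploy pipeline.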
```bash
# Creates VNet, subnets, NSGs, VMs, Key Vault, Monitor alerts,
# and writes inventory.ini + group_vars with the live public IP.
tofu apply -auto-approve
```
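The shared, locked state mentioned above lives in Azure Blob Storage, configured through the `azurerm` backend. A sketch of what that block could look like (the resource group, storage account, and key names are assumptions; blob-lease locking is automatic with this backend):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"     # assumption
    storage_account_name = "tfstateacct"    # assumption
    container_name       = "tfstate"
    key                  = "infra.tfstate"
  }
}
```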
```bash
# Installs and hardens the VM: nginx, TLS, fail2ban,
# unattended-upgrades, MCP servers. All tasks are idempotent.
# API keys fetched from Azure Key Vault at boot via managed identity.
ansible-playbook site.yml -i inventory.ini
```
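Idempotent here means re-running the playbook changes nothing that is already in the desired state. A representative task for the hardening packages named earlier (a sketch; the real playbook's task names and layout may differ):

```yaml
# Installing is a no-op on re-runs: apt reports "ok" when the
# packages are already present, so nothing is reinstalled.
- name: Install fail2ban and unattended-upgrades
  apt:
    name:
      - fail2ban
      - unattended-upgrades
    state: present
    update_cache: yes
```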
```hcl
resource "random_password" "mcp_api_key" {
  length  = 64
  special = false
}

resource "azurerm_key_vault_secret" "mcp_api_key" {
  name         = "mcp-api-key"
  value        = random_password.mcp_api_key.result
  key_vault_id = azurerm_key_vault.kv.id

  # Never typed or stored outside Key Vault.
  # VM fetches at boot via managed identity (IMDS).
}
```
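For the VM to fetch that secret, its managed identity needs read access to the vault. One way to grant it (a sketch; the VM resource name `app` is an assumption, and an RBAC role assignment would work equally well):

```hcl
# Grants the VM's system-assigned identity read-only access
# to secrets in the vault.
resource "azurerm_key_vault_access_policy" "vm" {
  key_vault_id = azurerm_key_vault.kv.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = azurerm_linux_virtual_machine.app.identity[0].principal_id

  secret_permissions = ["Get"]
}
```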
```yaml
# Chicken-and-egg: certbot needs port 80 open to
# validate the domain, but the final nginx config
# references the cert that doesn't exist yet.

- name: Deploy HTTP-only nginx config
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  when: not cert_stat.stat.exists  # skip if cert already exists

- name: Obtain SSL certificate
  command: >
    certbot certonly --webroot -w /var/www/html
    -d {{ app_domain }} --non-interactive
  args:
    creates: /etc/letsencrypt/live/{{ app_domain }}/fullchain.pem

- name: Deploy HTTPS nginx config  # now the cert exists
  template:
    src: nginx_ssl.conf.j2
    dest: /etc/nginx/nginx.conf
```
```bash
# fetch-secrets.service runs before nginx and ask-app.
# Uses IMDS to get a token, then pulls from Key Vault.
# Writes to /run/ (tmpfs) — never touches persistent disk.
# Note: IMDS requires the api-version query parameter.
TOKEN=$(curl -sf -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net" \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['access_token'])")

printf 'ANTHROPIC_API_KEY=%s\n' "$(get_secret anthropic-api-key)" \
  > /run/secrets/ask-app.env
chmod 600 /run/secrets/ask-app.env
```
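The ordering ("runs before nginx and ask-app") and the tmpfs target are both expressible in the unit file itself. A sketch of what `fetch-secrets.service` could look like (directive choices are assumptions; `RuntimeDirectory=secrets` is what creates `/run/secrets`):

```ini
[Unit]
Description=Fetch runtime secrets from Azure Key Vault
Wants=network-online.target
After=network-online.target
Before=nginx.service ask-app.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/fetch-secrets.sh
RuntimeDirectory=secrets
RuntimeDirectoryMode=0700

[Install]
WantedBy=multi-user.target
```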