Summary
In this post I will walk through how I set up my network and servers in order to run this blog and other services.
I chose to build my own ‘cloud’ because
- I wanted to learn about networking and DevOps
- The costs of running servers on DigitalOcean or AWS were more than I wanted to spend each month
- It was fun. I like building things, and interacting with the physical world brings me joy
My current infrastructure runs on 3 Raspberry Pis in my apartment. I followed Pulumi’s Playbook for Kubernetes for organizing the resources into different ‘stacks’ which I will go through in detail. You can see all the code in my GitHub repo.

Table of Contents
- Hardware: Everything I bought
- Identity Stack: Users, roles, keys, and permissions
- Managed Infrastructure Stack: Resources needed to run the cluster
- Cluster Stack: Cluster deployment
- Cluster Services Stack: Cluster wide resources
- App Services Stack: App specific resources
- Managed Apps Stack: Apps developed by someone else (e.g. Grafana)
- Apps Stacks: Apps developed by me (e.g., this blog)
- CI/CD: Automated deployment using GitHub actions
Hardware
- 3 x Raspberry Pi 4 8GB
- 3 x Raspberry Pi POE Hat
- 3 x 32GB flash drive
- 3 x CAT 6 1 foot ethernet cables
- 1 x 8 port POE Network Switch
- 1 x 6U Server Cabinet
- 1 x 1U Raspberry Pi Rack Mount
- 1 x 1U Rack Mount Power Strip
Stacks
The reason for splitting up the infrastructure into different stacks is to limit the potential blast radius of errors. Typically the lower stacks (where the network, cluster, etc. are deployed) will change the least and the higher stacks (where applications are deployed) will change the most. This helps isolate potential errors when making changes.
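For example, a higher stack can read the outputs of a lower stack through a Pulumi stack reference instead of managing those resources itself (the same pattern shows up in the Cluster stack below). A minimal sketch:
import * as pulumi from '@pulumi/pulumi'

// Read outputs exported by a lower stack without taking ownership of its resources
const managedInfrastructure = new pulumi.StackReference('ameier38/managed-infrastructure/prod')

// e.g. the DNS zone ID exported by the Managed Infrastructure stack
export const zoneId = managedInfrastructure.requireOutput('andrewmeierDotDevZoneId')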
Identity Stack
This stack defines the users, roles, permissions, and keys that will be used by the other stacks in order to make changes to managed resources. In my case I am using AWS for key management, S3 buckets, container registries, and sending email notifications, so I have defined roles and permissions in order to deploy those resources.
In a fresh AWS account I first manually created a GitHub identity provider which will allow GitHub actions to assume AWS roles in my account. You can read more about how this is configured in the GitHub documentation. Next I manually created an identity-deployer role with the following managed permissions:
- IAMFullAccess
- AWSKeyManagementServicePowerUser
and the following trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "400689721046"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::400689721046:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:ameier38/infrastructure:*"
        }
      }
    }
  ]
}
This role has permissions to manage IAM resources and KMS keys, and the trust policy allows the role to be assumed by users in my account and by GitHub actions running in the ameier38/infrastructure repo.
Next I created an admin user with the following policy attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::400689721046:role/identity-deployer"
    }
  ]
}
This allows the admin user to assume the identity-deployer role. I added the admin user keys to the ~/.aws/credentials file on my computer.
[admin]
aws_access_key_id={ACCESS_KEY}
aws_secret_access_key={SECRET_KEY}
I then added the identity-deployer role to the ~/.aws/config file so I could assume this role in order to deploy the stack.
[profile admin]
region=us-east-1
output=json
[profile identity-deployer]
role_arn=arn:aws:iam::400689721046:role/identity-deployer
source_profile=admin
With the identity-deployer role and admin user created, I can then create the identity stack using the Pulumi CLI.
pulumi new aws-typescript
In the new Pulumi project I created the encryption key which will be used by the other stacks to encrypt configuration secrets.
import * as aws from '@pulumi/aws'
const pulumiKey = new aws.kms.Key('pulumi')
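The key ID from this resource is what the other stacks pass to the --secrets-provider flag when they are created, so it helps to export it. This is a sketch; the export name is mine, not necessarily what the repo uses:
// Export the KMS key ID so it can be plugged into `--secrets-provider "awskms://{pulumi key id}?region=..."`
export const pulumiKeyId = pulumiKey.keyId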
Next I created the infrastructure-deployer role.
import * as aws from '@pulumi/aws'
import * as identityProvider from './identityProvider'
import * as config from '../config'
const infrastructureDeployer = new aws.iam.Role('infrastructure-deployer', {
name: 'infrastructure-deployer',
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [
// Trust principals in this account to assume role
{
Effect: 'Allow',
Action: 'sts:AssumeRole',
Principal: {
AWS: config.accountId
}
},
// Allow `infrastructure` repo actions to assume role
{
Effect: 'Allow',
Action: 'sts:AssumeRoleWithWebIdentity',
Principal: {
Federated: identityProvider.githubIdentityProviderArn
},
Condition: {
StringEquals: {
'token.actions.githubusercontent.com:aud': 'sts.amazonaws.com'
},
StringLike: {
'token.actions.githubusercontent.com:sub': 'repo:ameier38/infrastructure:*'
}
}
}
]
}
})
Then I attached a policy to the infrastructure-deployer role (imports shown here for completeness; the key and role module paths are assumptions).
import * as aws from '@pulumi/aws'
import * as pulumi from '@pulumi/pulumi'
import * as config from '../config'
import * as key from './key'   // assumed module exporting pulumiKey
import * as role from './role' // assumed module exporting the deployer role name
new aws.iam.RolePolicy('infrastructure-deployer', {
name: 'infrastructure-deployer',
role: role.infrastructureDeployerName,
policy: {
Version: '2012-10-17',
Statement: [
// Allow usage of `pulumi` key
{
Effect: 'Allow',
Action: [
'kms:Decrypt',
'kms:Encrypt'
],
Resource: key.pulumiKey.arn
},
// Allow management of cloudflared ecr repository
{
Effect: 'Allow',
Action: 'ecr:GetAuthorizationToken',
Resource: '*'
},
{
Effect: 'Allow',
Action: [
'ecr:*'
],
Resource: pulumi.interpolate `arn:aws:ecr:${config.region}:${config.accountId}:repository/cloudflared-*`
},
// Allow management of ameier38-public bucket
{
Effect: 'Allow',
Action: [
's3:*',
],
Resource: [
'arn:aws:s3:::ameier38-public',
'arn:aws:s3:::ameier38-public/*'
]
},
// Allow `infrastructure-deployer` role to manage email service
{
Effect: 'Allow',
Action: [
'ses:*',
],
Resource: '*'
},
]
}
})
I can then deploy the stack by first assuming the identity-deployer role and using the Pulumi CLI.
$env:AWS_PROFILE="identity-deployer"
pulumi up
Managed Infrastructure Stack
This stack includes resources needed to run and connect to the Kubernetes cluster. In order to create the stack I first added the infrastructure-deployer role to the ~/.aws/config file.
[profile infrastructure-deployer]
role_arn=arn:aws:iam::400689721046:role/infrastructure-deployer
source_profile=admin
Then I assumed the infrastructure-deployer role and created the stack using the pulumi encryption key created in the identity stack.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi new aws-typescript --secrets-provider "awskms://{pulumi key id}?region=us-east-1"
Kubernetes API Tunnel
Since I am running Kubernetes on Raspberry Pis in my apartment I need some way to connect to the Kubernetes API from outside my home network (such as GitHub Actions for deploying applications). This is made possible using Cloudflare Tunnels. A tunnel is a persistent connection between a daemon application (called cloudflared) and the nearest Cloudflare datacenter. Cloudflare creates a public IP address for each tunnel and then proxies requests to that IP address to the cloudflared daemon. The cloudflared daemon then forwards the request to a service based on rules you specify. The below diagram from Cloudflare’s documentation illustrates this well.

I set up the cloudflared daemon on the Raspberry Pi running the master Kubernetes node. You can see this later in the Cluster Stack.
Using Pulumi I can create the tunnel and configure the credentials.
import * as cloudflare from '@pulumi/cloudflare'
import * as pulumi from '@pulumi/pulumi'
import * as random from '@pulumi/random'
const tunnelSecret = new random.RandomPassword('k8s-api-tunnel', {
length: 32
})
export const k8sApiTunnel = new cloudflare.ArgoTunnel('k8s-api', {
accountId: cloudflare.config.accountId!,
name: 'k8s-api',
secret: tunnelSecret.result.apply(s => Buffer.from(s).toString('base64'))
})
export const k8sApiTunnelCredentials = pulumi.all([
k8sApiTunnel.accountId,
k8sApiTunnel.id,
k8sApiTunnel.name,
k8sApiTunnel.secret
]).apply(([accountId, tunnelId, tunnelName, tunnelSecret]) => {
return JSON.stringify({
AccountTag: accountId,
TunnelID: tunnelId,
TunnelName: tunnelName,
TunnelSecret: tunnelSecret
})
})
The tunnel credentials format is not well documented, but you can see an example by running cloudflared tunnel create and inspecting the credentials file it writes.
DNS Records
With the tunnel defined, I then created a friendly DNS record to point to the tunnel. I use this domain in my kubeconfig file in order to connect to the Kubernetes API using kubectl.
import * as cloudflare from '@pulumi/cloudflare'
const andrewmeierDotDev = new cloudflare.Zone('andrewmeier.dev', {
zone: 'andrewmeier.dev'
})
new cloudflare.ZoneSettingsOverride('andrewmeier.dev', {
zoneId: andrewmeierDotDev.id,
settings: {
ssl: 'strict'
}
})
export const andrewmeierDotDevZoneId = andrewmeierDotDev.id
export const andrewmeierDotDevDomain = andrewmeierDotDev.zone
Next I created the DNS record k8s.andrewmeier.dev to point to the tunnel.
import * as cloudflare from '@pulumi/cloudflare'
import * as pulumi from '@pulumi/pulumi'
import * as tunnel from './tunnel'
import * as zone from './zone'
export const k8sApiRecord = new cloudflare.Record('k8s.andrewmeier.dev', {
zoneId: zone.andrewmeierDotDevZoneId,
name: 'k8s',
type: 'CNAME',
value: tunnel.k8sApiTunnel.cname,
proxied: true
})
Identity Provider
For applications that I don’t want to expose to the public I can also use Cloudflare to authenticate requests with an identity provider. I chose to use GitHub as the identity provider but there are other options as well (such as Google).
I first created a GitHub OAuth application and then created the Cloudflare Access Identity Provider in Pulumi using the client ID and client secret provided by the GitHub OAuth application.
const githubIdentityProvider = new cloudflare.AccessIdentityProvider('github', {
name: 'github',
type: 'github',
accountId: cloudflare.config.accountId!,
configs: [{
clientId: config.githubConfig.clientId,
clientSecret: config.githubConfig.clientSecret
}]
})
Access Application
In order to authenticate access to the Kubernetes API, I must create a Cloudflare Access Application associated with the DNS record that I created above.
import * as cloudflare from '@pulumi/cloudflare'
import { githubIdentityProvider } from './accessIdentityProvider'
import { k8sApiRecord } from './record'
export const k8sApi = new cloudflare.AccessApplication('k8s-api', {
name: 'Kubernetes API',
domain: k8sApiRecord.hostname,
accountId: cloudflare.config.accountId,
allowedIdps: [ githubIdentityProvider.id ],
autoRedirectToIdentity: true,
type: 'self_hosted'
})
This way any request to k8s.andrewmeier.dev will first need to authenticate with GitHub.
Access Policy
With the authentication configured I now need to configure authorization. This is accomplished using a Cloudflare Access Policy. Because I also want to access the Kubernetes API when running GitHub actions, I first need to create an Access Service Token.
import * as cloudflare from '@pulumi/cloudflare'
export const githubServiceToken = new cloudflare.AccessServiceToken('github', {
name: 'GitHub',
accountId: cloudflare.config.accountId!
})
Then I can create the access policies to authorize my email address (authenticated with GitHub) and systems using the service token.
import * as cloudflare from '@pulumi/cloudflare'
import { k8sApi } from './accessApplication'
import { githubServiceToken } from './serviceToken'
import { email } from '../config'
new cloudflare.AccessPolicy('k8s-api-user-access', {
name: 'Kubernetes API User Access',
precedence: 1,
accountId: cloudflare.config.accountId,
applicationId: k8sApi.id,
decision: 'allow',
includes: [{
emails: [ email ]
}]
})
new cloudflare.AccessPolicy('k8s-api-github-access', {
name: 'Kubernetes API GitHub Access',
precedence: 2,
accountId: cloudflare.config.accountId,
applicationId: k8sApi.id,
decision: 'non_identity',
includes: [{
serviceTokens: [ githubServiceToken.id ]
}]
})
I can now deploy the stack by assuming the infrastructure-deployer role then using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi up
Cluster Stack
This stack defines the Kubernetes cluster. I decided to host the cluster myself instead of using a managed cluster for a few reasons:
- I wanted to learn more about networking
- It is cheaper
There are three nodes in the cluster, each of which is a Raspberry Pi. In order to run Kubernetes I am using k3s, which is a lightweight version of Kubernetes. Running k3s on Raspberry Pis is great because they don’t take up much space (the entire rack fits in my coat closet), they don’t use much power (can run on PoE), and they are cheap (I bought them before the chip crisis).
Once the Pis are set up and running, I can install k3s and cloudflared using the Pulumi Command package. I first created the stack using the key created in the identity stack.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi new aws-typescript --secrets-provider "awskms://{pulumi key id}?region=us-east-1"
Then I configured the connections to each of the Raspberry Pis and retrieved the Kubernetes API tunnel information exported from the Managed Infrastructure stack.
import * as pulumi from '@pulumi/pulumi'
import * as command from '@pulumi/command'
export const env = pulumi.getStack()
const managedInfrastructureStack = new pulumi.StackReference('ameier38/managed-infrastructure/prod')
export const k8sApiTunnelId = managedInfrastructureStack.requireOutput('k8sApiTunnelId')
export const k8sApiTunnelCredentials = managedInfrastructureStack.requireOutput('k8sApiTunnelCredentials')
export const k8sApiTunnelHost = managedInfrastructureStack.requireOutput('k8sApiTunnelHost')
const rawConfig = new pulumi.Config()
export const privateKey = rawConfig.requireSecret('privateKey')
export const masterConn: command.types.input.remote.ConnectionArgs = {
host: 'ameier-1',
port: 22,
user: 'root',
privateKey: privateKey
}
export const agent1Conn: command.types.input.remote.ConnectionArgs = {
host: 'ameier-2',
port: 22,
user: 'root',
privateKey: privateKey
}
export const agent2Conn: command.types.input.remote.ConnectionArgs = {
host: 'ameier-3',
port: 22,
user: 'root',
privateKey: privateKey
}
Then I created the scripts required to install cloudflared and k3s.
import * as command from '@pulumi/command'
import * as pulumi from '@pulumi/pulumi'
import * as config from './config'
// ref: https://developers.cloudflare.com/cloudflare-one/tutorials/kubectl/
const installCloudflaredScript = pulumi.interpolate `
echo "Installing cloudflared"
set -e
echo "Creating config directory"
mkdir -p /etc/cloudflared
echo "Writing credentials"
cat << EOF > /etc/cloudflared/credentials.json
${config.k8sApiTunnelCredentials}
EOF
echo "Writing config"
cat << EOF > /etc/cloudflared/config.yml
tunnel: ${config.k8sApiTunnelId}
credentials-file: /etc/cloudflared/credentials.json
ingress:
  - hostname: ${config.k8sApiTunnelHost}
    service: tcp://localhost:6443
    originRequest:
      proxyType: socks
  - service: http_status:404
EOF
echo "Downloading cloudflared"
curl -sfLO https://github.com/cloudflare/cloudflared/releases/download/2022.3.4/cloudflared-linux-arm64
echo "Updating cloudflared permissions"
chmod +x cloudflared-linux-arm64
echo "Moving cloudflared to bin"
mv cloudflared-linux-arm64 /usr/local/bin/cloudflared
if [ ! -f /etc/systemd/system/cloudflared.service ]
then
echo "Installing cloudflared service"
cloudflared service install
echo "Starting cloudflared service"
systemctl start cloudflared
else
echo "Restarting cloudflared service"
systemctl restart cloudflared
fi
`
const installK3sMaster = new command.remote.Command('install-k3s-master', {
connection: config.masterConn,
create: 'curl -sfL https://get.k3s.io | sh -'
})
new command.remote.Command('install-cloudflared', {
connection: config.masterConn,
create: installCloudflaredScript,
triggers: [ installCloudflaredScript ]
})
const readKubeconfig = new command.remote.Command('read-kubeconfig', {
connection: config.masterConn,
create: 'cat /etc/rancher/k3s/k3s.yaml'
}, { dependsOn: installK3sMaster })
export const kubeconfig =
pulumi
.all([readKubeconfig.stdout, config.k8sApiTunnelHost])
.apply(([kubeconfig, host]) => kubeconfig.replace('127.0.0.1', host))
const readToken = new command.remote.Command('read-token', {
connection: config.masterConn,
create: 'cat /var/lib/rancher/k3s/server/node-token'
}, {dependsOn: installK3sMaster })
const token = readToken.stdout.apply(token => token.replace('\n', ''))
for (const [i, conn] of [config.agent1Conn, config.agent2Conn].entries()) {
new command.remote.Command(`install-k3s-agent-${i}`, {
connection: conn,
create: pulumi.interpolate `curl -sfL https://get.k3s.io | K3S_URL="https://${config.masterConn.host}:6443" K3S_TOKEN="${token}" sh -`,
triggers: [token]
})
}
Then I can assume the infrastructure-deployer role (needed to access the KMS key in order to decrypt secrets) and deploy the cluster.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi up
I must run this on the local network in order to access the Raspberry Pis. I could set up SSH with cloudflared running on each Pi but have not done it yet 😃
Connecting
In order to connect to the cluster using kubectl I have to configure a few things. First I need to export the kubeconfig file from the Cluster stack.
pulumi stack output --show-secrets kubeconfig > ~/.kube/kubeconfig
Then I have to configure the kubeconfig file to use a proxy.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    proxy-url: socks5://localhost:1234
    server: https://k8s.andrewmeier.dev:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: andrewmeier
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: ...
    client-key-data: ...
Note that I configured the tunnel in the installation script with proxyType: socks and pointed it to the Kubernetes API at tcp://localhost:6443, as shown below.
tunnel: ${config.k8sApiTunnelId}
credentials-file: /etc/cloudflared/credentials.json
ingress:
  - hostname: ${config.k8sApiTunnelHost}
    service: tcp://localhost:6443
    originRequest:
      proxyType: socks
  - service: http_status:404
Then on my laptop I connect to the tunnel using the cloudflared CLI.
cloudflared access tcp --hostname=k8s.andrewmeier.dev --url=localhost:1234
This sets up a proxy server on my laptop which directs requests from localhost:1234 to k8s.andrewmeier.dev. Above, I configured the kubeconfig to use this proxy. In a different shell, when I first run a kubectl command, a browser window will open asking me to authenticate with GitHub (I set up an access policy for the Kubernetes API in the Managed Infrastructure stack). Additional details about using kubectl with Cloudflare Tunnels can be found in the Cloudflare documentation.
To make this easier, I added a function to my PowerShell profile that connects to the tunnel as a Job.
function start-ameier-k8s-tunnel {
$env:KUBECONFIG="C:\Users\andy\.kube\ameier-kubeconfig"
Start-Job -Name ameier-k8s-tunnel -ScriptBlock { cloudflared access tcp --hostname=k8s.andrewmeier.dev --url=localhost:1234 }
}
Then before running my kubectl commands I just need to make sure to run start-ameier-k8s-tunnel first.
Cluster Services Stack
This stack defines all the cluster-wide services that are used for running and monitoring applications. I first created the stack using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi new aws-typescript --secrets-provider "awskms://{pulumi key id}?region=us-east-1"
Next, because this stack will connect to the Kubernetes cluster, I need to configure the stack to use the kubeconfig file that I created above.
cat ~/.kube/kubeconfig | pulumi config set --secret kubernetes:kubeconfig
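Setting kubernetes:kubeconfig this way is enough for the default Kubernetes provider, but the same value can also be wired into an explicit provider. A minimal sketch, assuming the secret was set exactly as above:
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'

// Read the kubeconfig stored with `pulumi config set --secret kubernetes:kubeconfig`
const k8sConfig = new pulumi.Config('kubernetes')
const kubeconfig = k8sConfig.requireSecret('kubeconfig')

// An explicit provider that can be passed to resources via `{ provider: k8sProvider }`
export const k8sProvider = new k8s.Provider('k8s', { kubeconfig })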
Reverse Proxy
I use Traefik for the reverse proxy, which manages request routing. Traefik ships with k3s so there is not much to set up. In my case I want to use Cloudflare’s Full Strict SSL, which requires the use of a Cloudflare origin certificate to terminate TLS.
import * as cloudflare from '@pulumi/cloudflare'
import * as pulumi from '@pulumi/pulumi'
import * as tls from '@pulumi/tls'
import * as config from '../config'
export const originCertPrivateKey = new tls.PrivateKey('origin-cert', {
algorithm: 'RSA'
})
const originCertRequest = new tls.CertRequest('origin-cert-request', {
privateKeyPem: originCertPrivateKey.privateKeyPem,
subject: {
commonName: config.andrewmeierDotDevDomain,
organization: 'andrewmeier.dev'
}
})
export const originCert = new cloudflare.OriginCaCertificate('origin-cert', {
csr: originCertRequest.certRequestPem,
requestType: 'origin-rsa',
hostnames: [
config.andrewmeierDotDevDomain,
pulumi.interpolate `*.${config.andrewmeierDotDevDomain}`
]
})
Next I configured Traefik to use this certificate for terminating TLS.
import * as k8s from '@pulumi/kubernetes'
import { originCert, originCertPrivateKey } from '../cloudflare/originCertificate'
// Traefik is deployed as part of k3s
const originCertSecret = new k8s.core.v1.Secret('origin-cert', {
metadata: { namespace: 'kube-system' },
stringData: {
'tls.crt': originCert.certificate,
'tls.key': originCertPrivateKey.privateKeyPem
}
})
// Enables Cloudflare Full Strict SSL
new k8s.apiextensions.CustomResource('tls-store', {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'TLSStore',
metadata: {
name: 'default',
namespace: 'kube-system'
},
spec: {
defaultCertificate: {
secretName: originCertSecret.metadata.name
}
}
})
Tunnel
Similar to the Kubernetes API tunnel in the Managed Infrastructure stack, I created a second tunnel that routes application traffic into the cluster.
import * as cloudflare from '@pulumi/cloudflare'
import * as pulumi from '@pulumi/pulumi'
import * as random from '@pulumi/random'
const k8sTunnelSecret = new random.RandomPassword('k8s-tunnel-secret', {
length: 32
})
export const k8sTunnel = new cloudflare.ArgoTunnel('k8s', {
accountId: cloudflare.config.accountId!,
name: 'k8s',
secret: k8sTunnelSecret.result.apply(s => Buffer.from(s).toString('base64'))
})
export const k8sTunnelCredentials = pulumi.all([
k8sTunnel.accountId,
k8sTunnel.id,
k8sTunnel.name,
k8sTunnel.secret
]).apply(([accountId, tunnelId, tunnelName, tunnelSecret]) => {
return JSON.stringify({
AccountTag: accountId,
TunnelID: tunnelId,
TunnelName: tunnelName,
TunnelSecret: tunnelSecret
})
})
I then created a deployment for the cloudflared daemon and configured it to forward requests from the tunnel to the Traefik proxy.
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import * as repository from '../aws/repository'
import * as tunnel from '../cloudflare/tunnel'
import * as config from '../config'
const identifier = 'cloudflared'
const cloudflaredConfig = pulumi.interpolate `
tunnel: ${tunnel.k8sTunnel.id}
credentials-file: /var/secrets/cloudflared/credentials.json
metrics: 0.0.0.0:2000
no-autoupdate: true
ingress:
  - hostname: ${config.andrewmeierDotDevDomain}
    service: http://traefik.kube-system
  - hostname: '*.${config.andrewmeierDotDevDomain}'
    service: http://traefik.kube-system
  - service: http_status:404
`
const cloudflaredSecret = new k8s.core.v1.Secret(identifier, {
metadata: { namespace: 'kube-system' },
stringData: {
'config.yaml': cloudflaredConfig,
'credentials.json': tunnel.k8sTunnelCredentials
}
})
const registrySecret = new k8s.core.v1.Secret(`${identifier}-registry`, {
metadata: { namespace: 'kube-system' },
type: 'kubernetes.io/dockerconfigjson',
stringData: {
'.dockerconfigjson': repository.cloudflaredDockerconfigjson
}
})
const labels = { 'app.kubernetes.io/name': identifier }
new k8s.apps.v1.Deployment(identifier, {
metadata: {
name: identifier,
namespace: 'kube-system'
},
spec: {
replicas: 1,
selector: { matchLabels: labels },
template: {
metadata: {
labels: labels,
annotations: {
'prometheus.io/scrape': 'true',
'prometheus.io/path': '/metrics',
'prometheus.io/port': '2000',
}
},
spec: {
imagePullSecrets: [{
name: registrySecret.metadata.name
}],
containers: [{
name: identifier,
image: repository.cloudflaredImageName,
args: [
'tunnel',
'--config', '/var/secrets/cloudflared/config.yaml',
'run'
],
livenessProbe: {
httpGet: { path: '/ready', port: 2000 },
failureThreshold: 1,
initialDelaySeconds: 10,
periodSeconds: 10
},
volumeMounts: [{
name: 'cloudflared',
mountPath: '/var/secrets/cloudflared',
readOnly: true
}]
}],
volumes: [{
name: 'cloudflared',
secret: { secretName: cloudflaredSecret.metadata.name }
}],
nodeSelector: { 'kubernetes.io/arch': 'arm64' }
}
}
}
})
This will allow me to use the Traefik IngressRoute to direct traffic to services running in the cluster.
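As a sketch of what that looks like (names here are illustrative; the actual routes live in the app stacks below), a hypothetical IngressRoute for a whoami service might be:
import * as k8s from '@pulumi/kubernetes'

// Hypothetical example: route whoami.andrewmeier.dev to a `whoami` Service in the cluster
new k8s.apiextensions.CustomResource('whoami-route', {
  apiVersion: 'traefik.containo.us/v1alpha1',
  kind: 'IngressRoute',
  metadata: { namespace: 'andrewmeier' },
  spec: {
    entryPoints: ['web'],
    routes: [{
      kind: 'Rule',
      match: 'Host(`whoami.andrewmeier.dev`)',
      services: [{
        kind: 'Service',
        name: 'whoami',
        namespace: 'andrewmeier',
        port: 80 // assumed service port
      }]
    }]
  }
})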
Monitoring
I use Prometheus for monitoring applications. It is easy to expose metrics endpoints using client libraries such as prometheus-net. Then you just need to annotate the pods and Prometheus will take care of scraping each one (see the annotation sketch after the chart below). I used the Helm support in the Pulumi Kubernetes package to deploy Prometheus.
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import * as namespace from './namespace'
const chart = new k8s.helm.v3.Chart('prometheus', {
chart: 'prometheus',
version: '15.5.3',
fetchOpts: { repo: 'https://prometheus-community.github.io/helm-charts' },
namespace: namespace.monitoringNamespace,
values: {
serviceAccounts: {
alertmanager: { create: false },
pushgateway: { create: false }
},
alertmanager: { enabled: false },
pushgateway: { enabled: false }
}
})
const internalHost =
pulumi.all([chart, namespace.monitoringNamespace]).apply(([chart, namespace]) => {
const meta = chart.getResourceProperty('v1/Service', namespace, 'prometheus-server', 'metadata')
return pulumi.interpolate `${meta.name}.${meta.namespace}.svc.cluster.local`
})
const internalPort =
pulumi.all([chart, namespace.monitoringNamespace]).apply(([chart, namespace]) => {
const spec = chart.getResourceProperty('v1/Service', namespace, 'prometheus-server', 'spec')
return spec.ports[0].port
})
export const internalUrl = pulumi.interpolate `http://${internalHost}:${internalPort}`
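And this is roughly what the scrape annotations mentioned above look like on a pod template; the cloudflared deployment earlier uses the same pattern (the port here is just an example):
// Pod-template annotations that Prometheus' default kubernetes-pods scrape job picks up
const scrapeAnnotations = {
  'prometheus.io/scrape': 'true',
  'prometheus.io/path': '/metrics',
  'prometheus.io/port': '8080' // example: whatever port the app serves metrics on
}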
I use Promtail and Loki to aggregate the logs from all the applications. Promtail takes care of scraping the pod logs and sending them to Loki in the format it expects. Loki is nice as it integrates with Grafana and uses the same query structure as Prometheus. I also used Helm to deploy Loki and Promtail. You can see the rest of the code on GitHub.
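To give a flavor of what those Helm deployments look like, here is a rough sketch that mirrors the Prometheus chart above (chart values and versions are omitted here and are not the exact ones in the repo):
import * as k8s from '@pulumi/kubernetes'
import * as namespace from './namespace'

// Loki stores the logs and exposes a Prometheus-style query API that Grafana can use
new k8s.helm.v3.Chart('loki', {
  chart: 'loki',
  fetchOpts: { repo: 'https://grafana.github.io/helm-charts' },
  namespace: namespace.monitoringNamespace
})

// Promtail runs on each node, tails the pod logs, and pushes them to Loki.
// The Loki push URL is set through the chart values (omitted here).
new k8s.helm.v3.Chart('promtail', {
  chart: 'promtail',
  fetchOpts: { repo: 'https://grafana.github.io/helm-charts' },
  namespace: namespace.monitoringNamespace
})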
Again, like with the previous stacks, I can deploy the stack by assuming the infrastructure-deployer role and using the Pulumi CLI. I also need to connect to the tunnel so that the stack can connect to the Kubernetes API.
$env:AWS_PROFILE="infrastructure-deployer"
start-ameier-k8s-tunnel
pulumi up
App Services Stack
This stack is used for any application-specific resources such as databases and DNS records. In my case I do not use any databases at the moment, so it is just used for DNS and configuring access to internal services. Again, I first created the stack using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi new aws-typescript --secrets-provider "awskms://{pulumi key id}?region=us-east-1"
Next I created the DNS records for the applications which all point to the hostname of the tunnel created in the Cluster Services stack (this tunnel routes requests to Traefik).
import * as cloudflare from '@pulumi/cloudflare'
import * as config from '../config'
export const traefikRecord = new cloudflare.Record('traefik.andrewmeier.dev', {
zoneId: config.andrewmeierDotDevZoneId,
name: 'traefik',
type: 'CNAME',
value: config.k8sTunnelHost,
proxied: true
})
export const whoamiRecord = new cloudflare.Record('whoami.andrewmeier.dev', {
zoneId: config.andrewmeierDotDevZoneId,
name: 'whoami',
type: 'CNAME',
value: config.k8sTunnelHost,
proxied: true
})
export const grafanaRecord = new cloudflare.Record('grafana.andrewmeier.dev', {
zoneId: config.andrewmeierDotDevZoneId,
name: 'grafana',
type: 'CNAME',
value: config.k8sTunnelHost,
proxied: true
})
export const andrewmeierRecord = new cloudflare.Record('andrewmeier.dev', {
zoneId: config.andrewmeierDotDevZoneId,
name: '@',
type: 'CNAME',
value: config.k8sTunnelHost,
proxied: true
})
Next I created Cloudflare Access Applications for each of the applications for which I want to authenticate user access.
import * as cloudflare from '@pulumi/cloudflare'
import * as record from './record'
import * as config from '../config'
export const traefik = new cloudflare.AccessApplication('traefik', {
name: 'Traefik',
domain: record.traefikRecord.hostname,
allowedIdps: [ config.githubIdentityProviderId ],
autoRedirectToIdentity: true,
accountId: cloudflare.config.accountId,
logoUrl: config.logoUrl,
httpOnlyCookieAttribute: false
})
export const whoami = new cloudflare.AccessApplication('whoami', {
name: 'Whoami',
domain: record.whoamiRecord.hostname,
allowedIdps: [ config.githubIdentityProviderId ],
autoRedirectToIdentity: true,
accountId: cloudflare.config.accountId,
logoUrl: config.logoUrl,
httpOnlyCookieAttribute: false
})
export const grafana = new cloudflare.AccessApplication('grafana', {
name: 'Grafana',
domain: record.grafanaRecord.hostname,
allowedIdps: [ config.githubIdentityProviderId ],
autoRedirectToIdentity: true,
accountId: cloudflare.config.accountId,
logoUrl: config.logoUrl,
httpOnlyCookieAttribute: false
})
Then I created the access policies to authorize my email address for each application.
import * as cloudflare from '@pulumi/cloudflare'
import * as app from './accessApplication'
import * as config from '../config'
new cloudflare.AccessPolicy('traefik-user-access', {
name: 'Traefik User Access',
precedence: 1,
accountId: cloudflare.config.accountId,
applicationId: app.traefik.id,
decision: 'allow',
includes: [{
emails: [ config.email ]
}]
})
new cloudflare.AccessPolicy('whoami-user-access', {
name: 'Whoami User Access',
precedence: 1,
accountId: cloudflare.config.accountId,
applicationId: app.whoami.id,
decision: 'allow',
includes: [{
emails: [ config.email ]
}]
})
new cloudflare.AccessPolicy('grafana-user-access', {
name: 'Grafana User Access',
precedence: 1,
accountId: cloudflare.config.accountId,
applicationId: app.grafana.id,
decision: 'allow',
includes: [{
emails: [ config.email ]
}]
})
As before, I assume the infrastructure-deployer role and deploy the stack using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi up
Managed Apps Stack
This stack is used for any third-party applications that I want to deploy. I am currently using Grafana to visualize logs and metrics and whoami to debug routing and requests. I also expose the Traefik Dashboard in this stack. As before, I first created the stack using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
pulumi new aws-typescript --secrets-provider "awskms://{pulumi key id}?region=us-east-1"
I also need to add the kubeconfig file since I will need to connect to the Kubernetes cluster.
cat ~/.kube/kubeconfig | pulumi config set --secret kubernetes:kubeconfig
Next I created the Grafana resources. I use Helm again to deploy the Grafana application and use the Traefik IngressRoute CRD to route any requests to grafana.andrewmeier.dev to the Grafana service.
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import * as random from '@pulumi/random'
import * as config from '../config'
const identifier = 'grafana'
const rawAdminPassword = new random.RandomPassword('admin-password', {
length: 20
})
export const adminPassword = rawAdminPassword.result
const secret = new k8s.core.v1.Secret(identifier, {
metadata: { namespace: config.monitoringNamespace},
stringData: {
user: 'admin',
password: adminPassword
}
})
const chart = new k8s.helm.v3.Chart(identifier, {
chart: 'grafana',
version: '6.24.1',
fetchOpts: { repo: 'https://grafana.github.io/helm-charts' },
namespace: config.monitoringNamespace,
values: {
// Use old version to provision notifiers
image: { tag: '8.2.7' },
testFramework: { enabled: false },
persistence: {
inMemory: { enabled: true }
},
admin: {
existingSecret: secret.metadata.name,
userKey: 'user',
passwordKey: 'password'
},
'grafana.ini': {
server: {
root_url: pulumi.interpolate `https://${config.grafanaHost}`,
},
smtp: {
enabled: true,
host: 'email-smtp.us-east-1.amazonaws.com:587',
user: config.smtpUserAccessKeyId,
password: config.smtpUserSmtpPassword,
from_address: '[email protected]'
},
users: {
auto_assign_org_role: 'Admin'
},
'auth.proxy': {
enabled: true,
header_name: 'Cf-Access-Authenticated-User-Email',
header_property: 'email'
}
},
datasources: {
'datasources.yaml': {
apiVersion: 1,
datasources: [
{
name: 'Prometheus',
type: 'prometheus',
url: config.prometheusUrl,
access: 'proxy',
isDefault: true
},
{
name: 'Loki',
type: 'loki',
url: config.lokiUrl,
access: 'proxy',
jsonData: { maxLines: 1000 }
}
]
}
},
notifiers: {
'notifiers.yaml': {
notifiers: [
{
name: 'email-notifier',
type: 'email',
uid: 'email1',
org_id: 1,
is_default: true,
settings: { addresses: config.email }
}
]
}
}
}
})
const internalPort =
pulumi.all([chart, config.monitoringNamespace]).apply(([chart, namespace]) => {
const spec = chart.getResourceProperty('v1/Service', namespace, identifier, 'spec')
return spec.ports[0].port
})
new k8s.apiextensions.CustomResource(`${identifier}-route`, {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'IngressRoute',
metadata: { namespace: config.monitoringNamespace },
spec: {
entryPoints: ['web'],
routes: [{
kind: 'Rule',
match: pulumi.interpolate `Host(\`${config.grafanaHost}\`)`,
services: [{
kind: 'Service',
name: identifier,
namespace: config.monitoringNamespace,
port: internalPort
}]
}]
}
})
Grafana plays nicely with proxy authentication, and I can configure it to automatically log in the user by specifying that it should use the Cf-Access-Authenticated-User-Email header, which Cloudflare Access will set after authenticating the user.
Next I created the resources for whoami and the Traefik dashboard. You can see the rest of this code on GitHub.
I can deploy the stack by assuming the infrastructure-deployer role, connecting to the Kubernetes API tunnel, and using the Pulumi CLI.
$env:AWS_PROFILE="infrastructure-deployer"
start-ameier-k8s-tunnel
pulumi up
Apps Stacks
Lastly, each app gets its own stack which is used to deploy the application into the cluster. For instance, this blog has its own stack which you can see on GitHub. The main difference with the blog stack is that it has a separate blog-deployer role which can be assumed by GitHub actions from the blog repo.
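That role follows the same pattern as the infrastructure-deployer role above. A rough sketch, assuming the blog lives in an ameier38/blog repository (the repo name and policy details here are illustrative, not the exact code from the repo):
import * as aws from '@pulumi/aws'
import * as identityProvider from './identityProvider'

// Deployer role that only GitHub Actions runs from the blog repo can assume
new aws.iam.Role('blog-deployer', {
  name: 'blog-deployer',
  assumeRolePolicy: {
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: 'sts:AssumeRoleWithWebIdentity',
      Principal: { Federated: identityProvider.githubIdentityProviderArn },
      Condition: {
        StringEquals: { 'token.actions.githubusercontent.com:aud': 'sts.amazonaws.com' },
        // Hypothetical repo name for this blog
        StringLike: { 'token.actions.githubusercontent.com:sub': 'repo:ameier38/blog:*' }
      }
    }]
  }
})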
CI/CD
In order to deploy the stacks that connect to the Kubernetes cluster from GitHub, I need to set up GitHub actions to connect to the Kubernetes API tunnel before running the Pulumi CLI. Because I can’t use the browser to log in, this is where I use the service token that I created in the Managed Infrastructure stack. In my GitHub workflow file, I added a step which connects to the cloudflared tunnel using the managed Docker image.
- name: Start Tunnel
  run: |
    docker run \
      -d \
      -p 1234:1234 \
      cloudflare/cloudflared:2022.3.2 \
      access tcp \
      --hostname=k8s.andrewmeier.dev \
      --url=0.0.0.0:1234 \
      --service-token-id=${{ secrets.tunnel-token-id }} \
      --service-token-secret=${{ secrets.tunnel-token-secret }}
Also, in order to assume the infrastructure-deployer AWS role, I am using GitHub as a trusted OIDC provider to assume the role with AssumeRoleWithWebIdentity. I configured this in the Identity stack. I can then use the aws-actions/configure-aws-credentials GitHub action to assume the role.
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-region: us-east-1
    role-to-assume: arn:aws:iam::400689721046:role/infrastructure-deployer
    role-session-name: github
Conclusion
In this post I covered how I created my personal “cloud” that I use to run my apps and services. It has a lot of pieces, but once it is set up it rarely changes. The other nice thing is that it only requires the up-front cost of purchasing the equipment; the ongoing cost is near zero as I am using the free tiers of AWS and Cloudflare.
I have also found that this structure scales nicely, and I use effectively the same setup for my work environments. As more people contribute to the infrastructure, having separate stacks gives me confidence that deployment errors will be less likely.
I hope you find this useful!