Production-ready infrastructure
Key infrastructure elements:
— Cloud or self-managed database
— Collect application and cluster logs
— Collect, alert, and visualize cluster and application metrics
— Vulnerability scanning and policy management
Recommended Kubernetes cluster configuration:
Small and medium workloads — 3 nodes × 4 vCPU, 16 GB RAM
Large workloads — 3 nodes × 8 vCPU, 64 GB RAM
Toolkit required for development and deployment:
— Cloud provider CLI and SDK, depending on your cloud provider (AWS, GCP, Azure, etc.)
— kubectl — connection and cluster management
— Helm — Kubernetes package manager
Optional — development and delivery tooling.
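A quick way to check that the core toolkit is installed and reachable (any recent versions should work):
kubectl version --client
helm version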
Database
Managed solution
Self-managed solution
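All manifests below use the prod namespace. If it does not exist yet, create it first:
kubectl create namespace prod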
First step — create a volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-master-data
  namespace: prod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 300Gi
  # Depends on your cloud provider. Use SSD volumes.
  storageClassName: managed-premium
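Apply the manifest and check that the claim is created (the file name pvc.yaml is just an example). Depending on your storage class, the claim may stay Pending until the first pod mounts it:
kubectl apply -f pvc.yaml
kubectl get pvc db-master-data -n prod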
Next, create all the required configs: postgresql.conf, container parameters, and credentials.
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-pg-config
  namespace: prod
data:
  postgres.conf: |-
    listen_addresses = '*'
    shared_buffers = '2GB'
    max_wal_size = '4GB'
    pg_stat_statements.max = 500
    pg_stat_statements.save = false
    pg_stat_statements.track = top
    pg_stat_statements.track_utility = true
    shared_preload_libraries = 'pg_stat_statements'
    track_io_timing = on
    wal_level = logical
    wal_log_hints = on
    archive_command = 'wal-g wal-push %p'
    restore_command = 'wal-g wal-fetch %f %p'
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  namespace: prod
data:
  PGDATA: /data/pg
  POSTGRES_DB: postgres
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
  namespace: prod
type: Opaque
data:
  POSTGRES_PASSWORD: cG9zdGdyZXM=
  POSTGRES_USER: cG9zdGdyZXM=
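Values in a Secret's data section must be base64-encoded; the ones above decode to postgres. To encode your own credentials:
echo -n 'postgres' | base64  # cG9zdGdyZXM=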
Now we can create a database StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prod-db-master
  namespace: prod
spec:
  replicas: 1
  serviceName: db
  selector:
    matchLabels:
      service: db
  template:
    metadata:
      labels:
        service: db
    spec:
      volumes:
        - name: db-pg-config
          configMap:
            name: db-pg-config
            defaultMode: 420
        - name: db-dshm
          emptyDir:
            medium: Memory
        - name: db-data
          persistentVolumeClaim:
            claimName: db-master-data
      containers:
        - name: main
          image: healthsamurai/aidboxdb:14.2
          ports:
            - containerPort: 5432
              protocol: TCP
          envFrom:
            - configMapRef:
                name: db-config
            - secretRef:
                name: db-secret
          volumeMounts:
            - name: db-pg-config
              mountPath: /etc/configs
            - name: db-dshm
              mountPath: /dev/shm
            - name: db-data
              mountPath: /data
              subPath: pg
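Apply the manifest (the file name is an example) and wait until the master pod is up:
kubectl apply -f db-statefulset.yaml
kubectl -n prod rollout status statefulset/prod-db-master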
Create master database service
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: prod
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    service: db
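To check that the database is reachable through the service, you can start a temporary psql client pod (the postgres:14 image is an arbitrary choice; the user and database names come from the configs above):
kubectl -n prod run pg-client --rm -it --image=postgres:14 -- \
  psql -h db.prod.svc.cluster.local -U postgres -d postgres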
Replica installation involves the same steps but requires additional configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-replica
  namespace: prod
data:
  PG_ROLE: replica
  PG_MASTER_HOST: db-master
  PG_REPLICA: streaming_replica_streaming
  PGDATA: /data/pg
  POSTGRES_DB: postgres
Recommended backup policy — full backup every week, incremental backup every day.
Alternative solutions
A set of tools that provide HA PostgreSQL with failover and switchover, plus automated backups.
Aidbox
Create a ConfigMap with all the required configuration and database connection settings
apiVersion: v1
kind: ConfigMap
metadata:
  name: aidbox
  namespace: prod
data:
  AIDBOX_BASE_URL: https://my.box.url
  AIDBOX_BOX_ID: aidbox
  AIDBOX_FHIR_VERSION: 4.0.1
  AIDBOX_PORT: '8080'
  AIDBOX_STDOUT_PRETTY: all
  BOX_INSTANCE_NAME: aidbox
  BOX_METRICS_PORT: '8765'
  PGDATABASE: aidbox
  PGHOST: db.prod.svc.cluster.local # database address
  PGPORT: '5432' # database port
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  AIDBOX_ADMIN_ID: <admin_login>
  AIDBOX_ADMIN_PASSWORD: <admin_password>
  AIDBOX_CLIENT_ID: <root_client_id>
  AIDBOX_CLIENT_SECRET: <root_client_password>
  AIDBOX_LICENSE: <JWT-LICENSE> # JWT license from the Aidbox user portal
  PGPASSWORD: <db_password> # database password
  PGUSER: <db_user> # database username
Aidbox Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aidbox
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      service: aidbox
  template:
    metadata:
      labels:
        service: aidbox
    spec:
      containers:
        - name: main
          image: healthsamurai/aidboxone:latest
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8765
              protocol: TCP
          envFrom:
            - configMapRef:
                name: aidbox
            - secretRef:
                name: aidbox
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 10
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 10
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          startupProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 5
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 4
To verify that Aidbox started correctly, you can check the logs:
kubectl logs -f <aidbox-pod-name>
Create the Aidbox k8s service
apiVersion: v1
kind: Service
metadata:
  name: aidbox
  namespace: prod
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    service: aidbox
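Before exposing Aidbox publicly, you can smoke-test the service with a port-forward; /health is the same endpoint the probes above use:
kubectl -n prod port-forward service/aidbox 8080:80
curl http://localhost:8080/health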
Ingress
Ingress NGINX controller
helm upgrade \
  --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
CertManager
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Use the latest available cert-manager version if a newer one exists
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.10.0 \
  --set installCRDs=true
Configure the ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: hello@my-domain.com
    preferredChain: ''
    privateKeySecretRef:
      name: issuer-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: nginx # Ingress class name
Ingress resource
Now you can create a k8s Ingress for the Aidbox deployment
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aidbox
  namespace: prod
  annotations:
    acme.cert-manager.io/http01-ingress-class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - my.box.url
      secretName: aidbox-tls
  rules:
    - host: my.box.url
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: aidbox
                port:
                  number: 80
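Once the Ingress is applied, cert-manager should pick it up and issue the TLS certificate into the aidbox-tls secret; you can watch its progress:
kubectl -n prod get certificate
kubectl -n prod describe certificate aidbox-tls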
Now you can test the Ingress:
curl https://my.box.url
Logging
Aidbox supports integration with the following systems: ElasticSearch and DataDog.
ElasticSearch integration
Configure the Aidbox and ElasticSearch integration:
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  ...
  AIDBOX_ES_URL: http://es-service.es-ns.svc.cluster.local
  AIDBOX_ES_AUTH: <user>:<password>
  ...
DataDog integration
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  ...
  AIDBOX_DD_API_KEY: <Datadog API Key>
  ...
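Environment variables from a ConfigMap or Secret are only read at startup, so restart the Aidbox pods after changing them:
kubectl -n prod rollout restart deployment/aidbox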
Monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
Create Aidbox metrics service
apiVersion: v1
kind: Service
metadata:
  name: aidbox-metrics
  namespace: prod
  labels:
    operated: prometheus
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8765
  selector:
    service: aidbox
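You can verify that the metrics endpoint responds before wiring up Prometheus:
kubectl -n prod port-forward service/aidbox-metrics 8765:80
curl http://localhost:8765/metrics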
Create a ServiceMonitor config for scraping metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/component: metrics
    release: kube-prometheus
    serviceMonitorSelector: aidbox
  name: aidbox
  namespace: kube-prometheus
spec:
  endpoints:
    - honorLabels: true
      interval: 10s
      path: /metrics
      targetPort: 8765
    - honorLabels: true
      interval: 60s
      path: /metrics/minutes
      targetPort: 8765
    - honorLabels: true
      interval: 10m
      path: /metrics/hours
      targetPort: 8765
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      operated: prometheus
Alternatively, you can specify the Prometheus scrape configuration directly:
global:
  external_labels:
    monitor: 'aidbox'
scrape_configs:
  - job_name: aidbox
    scrape_interval: 5s
    metrics_path: /metrics
    static_configs:
      - targets: [ 'aidbox-metrics.prod.svc.cluster.local:8765' ]
  - job_name: aidbox-minutes
    scrape_interval: 30s
    metrics_path: /metrics/minutes
    static_configs:
      - targets: [ 'aidbox-metrics.prod.svc.cluster.local:8765' ]
  - job_name: aidbox-hours
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /metrics/hours
    static_configs:
      - targets: [ 'aidbox-metrics.prod.svc.cluster.local:8765' ]
Alternative solutions
Export the Aidbox Grafana dashboard
Additional monitoring
System monitoring:
PostgreSQL monitoring:
Alerting
Alert rules
An alert for long-running HTTP requests with p99 latency above 5s over a 5m interval:
alert: SlowRequests
for: 5m
expr: histogram_quantile(0.99, sum(rate(aidbox_http_request_duration_seconds_bucket[5m])) by (le, route, instance)) > 5
labels: {severity: ticket}
annotations:
  title: Long HTTP query execution
  metric: '{{ $labels.route }}'
  value: '{{ $value | printf "%.2f" }}'
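With the kube-prometheus-stack installed above, one way to deliver this rule is a PrometheusRule resource. The sketch below makes assumptions: the resource name is arbitrary, and the release label must match whatever your Prometheus instance actually selects:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: aidbox-alerts
  namespace: kube-prometheus
  labels:
    release: kube-prometheus
spec:
  groups:
    - name: aidbox
      rules:
        - alert: SlowRequests
          for: 5m
          expr: histogram_quantile(0.99, sum(rate(aidbox_http_request_duration_seconds_bucket[5m])) by (le, route, instance)) > 5
          labels: {severity: ticket}
          annotations:
            title: Long HTTP query execution
            metric: '{{ $labels.route }}'
            value: '{{ $value | printf "%.2f" }}'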
Alert delivery
Alertmanager configuration template for Telegram:
global:
  resolve_timeout: 5m
  telegram_api_url: 'https://api.telegram.org/'
route:
  group_by: [alertname, instance]
  # Default receiver
  receiver: <my-ops-chat>
  routes:
    # Mute the watchdog alert
    - receiver: empty
      match: {alertname: Watchdog}
receivers:
  - name: empty
  - name: <my-ops-chat>
    telegram_configs:
      - chat_id: <chat-id>
        api_url: https://api.telegram.org
        parse_mode: HTML
        message: |-
          <b>[{{ .CommonLabels.instance }}] {{ .CommonLabels.alertname }}</b>
          {{ .CommonAnnotations.title }}
          {{ range .Alerts }}{{ .Annotations.metric }}: {{ .Annotations.value }}
          {{ end }}
        bot_token: <bot-token>
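Before deploying the configuration, you can validate it with amtool, which ships with Alertmanager (the file name is an example):
amtool check-config alertmanager.yaml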
Security
Vulnerability and security scanners:
Kubernetes Policy Management:
Advanced: