Health Checking of Istio Services

Kubernetes liveness and readiness probes offer three different options:

  1. Command
  2. TCP request
  3. HTTP request

This guide shows how to use these approaches in Istio with mutual TLS enabled.

Command and TCP type probes work with Istio regardless of whether or not mutual TLS is enabled. The HTTP request approach requires different Istio configuration with mutual TLS enabled.
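For reference, a command-type liveness probe looks like the following (a minimal sketch; the actual probe used in this guide lives in the liveness-command.yaml sample, and the specific command shown here is illustrative):

```yaml
livenessProbe:
  exec:
    command:        # illustrative command; the sample may use a different one
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
```

Because the kubelet executes the command inside the container, no network request crosses the sidecar, which is why this probe type works whether or not mutual TLS is enabled.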


Liveness and readiness probes with command option

First, you need to configure health checking with mutual TLS enabled.

To enable mutual TLS for services, you must configure an authentication policy and a destination rule. Follow these steps to complete the configuration:

Run the following command to create a namespace:

$ kubectl create ns istio-io-health
  1. To configure the authentication policy, run:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "default"
      namespace: "istio-io-health"
    spec:
      mtls:
        mode: STRICT
    EOF
  2. To configure the destination rule, run:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: "default"
      namespace: "istio-io-health"
    spec:
      host: "*.default.svc.cluster.local"
      trafficPolicy:
        tls:
          mode: ISTIO_MUTUAL
    EOF

Run the following command to deploy the service:

$ kubectl -n istio-io-health apply -f <(istioctl kube-inject -f @samples/health-check/liveness-command.yaml@)

Check the pod status to verify that the liveness probes work. Repeat the command a few times to confirm that the RESTARTS count remains at zero:

$ kubectl -n istio-io-health get pod
NAME                             READY     STATUS    RESTARTS   AGE
liveness-6857c8775f-zdv9r        2/2       Running   0           4m

Liveness and readiness probes with HTTP request option

This section shows how to configure health checking with the HTTP request option when mutual TLS is enabled.

The Kubernetes HTTP health check request is sent from the kubelet, which does not have an Istio-issued certificate to present to the liveness-http service. Therefore, when mutual TLS is enabled, the health check request fails.

We have two options to solve the problem: probe rewrites and separate ports.

Probe rewrite

This approach rewrites the application PodSpec readiness/liveness probe so that the probe request is sent to the Pilot agent. The Pilot agent then redirects the request to the application, strips the response body, and returns only the response code.
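Conceptually, the injector turns a probe that targets the application port into one that targets the agent's status endpoint. The following sketch illustrates the transformation; the exact rewritten port and path are internal details of the sidecar agent and may vary between releases:

```yaml
# Probe as authored in the pod spec:
livenessProbe:
  httpGet:
    path: /foo
    port: 8001

# Probe after rewriting by the sidecar injector (illustrative):
livenessProbe:
  httpGet:
    path: /app-health/liveness-http/livez
    port: 15020
```

Because the agent answers the probe over plain HTTP and forwards it to the application locally, the kubelet no longer needs an Istio certificate.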

This feature is enabled when installing with the default profile. If you find that the profile used to install Istio does not have it enabled, you have two ways to enable the rewrite of the liveness HTTP probes:

Enable globally via install option

Install Istio with --set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=true.

Alternatively, update the configuration map of Istio sidecar injection:

$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's/"rewriteAppHTTPProbe": false/"rewriteAppHTTPProbe": true/' | kubectl apply -f -

The installation option and the configuration map above each instruct the sidecar injection process to automatically rewrite the Kubernetes pod spec, so health checks are able to work under mutual TLS. There is no need to update your app or pod spec yourself.

Use annotations on pod

Rather than installing Istio with different options, you can annotate the pod with sidecar.istio.io/rewriteAppHTTPProbers: "true". Make sure you add the annotation to the pod resource because it will be ignored anywhere else (for example, on an enclosing deployment resource).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveness-http
spec:
  selector:
    matchLabels:
      app: liveness-http
      version: v1
  template:
    metadata:
      labels:
        app: liveness-http
        version: v1
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
      - name: liveness-http
        image: docker.io/istio/health:example
        ports:
        - containerPort: 8001
        livenessProbe:
          httpGet:
            path: /foo
            port: 8001
          initialDelaySeconds: 5
          periodSeconds: 5

This approach allows you to enable the health check prober rewrite gradually on each deployment without reinstalling Istio.

Re-deploy the liveness health check app

The instructions below assume you enabled the feature globally via the installation option. The annotation approach works the same way.

$ kubectl create ns istio-same-port
$ kubectl -n istio-same-port apply -f <(istioctl kube-inject -f @samples/health-check/liveness-http-same-port.yaml@)
$ kubectl -n istio-same-port get pod
NAME                             READY     STATUS    RESTARTS   AGE
liveness-http-975595bb6-5b2z7c   2/2       Running   0           1m

Separate port

Another alternative is to use a separate port for health checking and regular traffic.

Run these commands to re-deploy the service:

$ kubectl create ns istio-sep-port
$ kubectl -n istio-sep-port apply -f <(istioctl kube-inject -f @samples/health-check/liveness-http.yaml@)

Wait a minute, then check the pod status to make sure the liveness probes work: the value in the RESTARTS column should be 0.

$ kubectl -n istio-sep-port get pod
NAME                             READY     STATUS    RESTARTS   AGE
liveness-http-67d5db65f5-765bb   2/2       Running   0          1m

Note that the image in liveness-http exposes two ports: 8001 and 8002 (source code). In this deployment, port 8001 serves the regular traffic while port 8002 is used for liveness probes.
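The relevant part of that deployment can be sketched as follows (a minimal sketch; see the liveness-http.yaml sample for the full spec, and note that the probe path and timing values here are illustrative):

```yaml
ports:
- containerPort: 8001   # regular traffic, served through the sidecar with mutual TLS
livenessProbe:
  httpGet:
    path: /ping         # illustrative path; check the sample for the actual value
    port: 8002          # dedicated health-check port, not part of the service
  initialDelaySeconds: 5
  periodSeconds: 5
```

Since the health-check port is not declared in the service, traffic to it is not subject to the mutual TLS requirement, so the kubelet can probe it directly.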


Cleanup

Remove the namespaces created in the steps above, along with the authentication policy, destination rule, and deployments they contain:

$ kubectl delete ns istio-io-health istio-same-port istio-sep-port