Main Product Category | Agents
Sub Category | Configuration
Objective
This article guides you through using Istio filters and Kubernetes labels to deploy Traceable's tracing and platform agents. The end result is an architecture similar to the one depicted in the image below.
Prerequisites
- An existing Kubernetes cluster with the Istio service mesh installed. Traceable supports Istio 1.7+.
- Existing Java applications deployed to the cluster
- A Traceable auth token
- Helm 3 - This guide assumes Kubernetes resources are managed with Helm 3. Alternative ways of installing Traceable in a Kubernetes cluster can be found here.
Kubernetes
Installing the Traceable Platform Agent (TPA)
The TPA is installed in its own namespace on the same cluster as the services you wish to protect. It aggregates spans, normalizes them, and redacts any sensitive information before forwarding them to the Traceable SaaS platform.
- export NAMESPACE=<NAMESPACE_TO_PROTECT>
- export TOKEN=<token>
- export ENV=<dev|test|stage|prod>
- helm repo add traceableai https://helm.traceable.ai
- helm repo update
- helm install --namespace traceableai traceable-agent traceableai/traceable-agent --create-namespace --set token=$TOKEN --set environment=$ENV --set "injector.propagationFormats={B3}"
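If you prefer not to pass --set flags on the command line, the same settings can be kept in a Helm values file. This is a sketch only; it assumes the chart accepts these keys at the top level, mirroring the --set flags used in this guide:

```yaml
# values.yaml (sketch) — mirrors the --set flags used in this guide
token: <token>            # your Traceable auth token
environment: dev          # one of: dev, test, stage, prod
injector:
  propagationFormats:
    - B3
```

It would then be applied with: helm install --namespace traceableai traceable-agent traceableai/traceable-agent --create-namespace -f values.yaml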
- Verify - ensure no "ERROR" logs are written during the TPA install:
- kubectl get po -n traceableai
- kubectl logs traceable-agent-xxxx -n traceableai
- Note: the following ERROR message is expected and should arguably be an INFO message:
- ERROR(1): 2021/09/20 20:14:24 libopa.go:156: Error while fetching policy: Get "http://localhost:8181/v1/policies/remote-bundle/traceable/http/request/policy.rego": dial tcp 127.0.0.1:8181: connect: connection refused
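A quick way to separate real failures from the expected libopa message is to filter it out before counting errors. The sample log text below is illustrative; in practice you would pipe `kubectl logs traceable-agent-xxxx -n traceableai` into the same filter:

```shell
# Sample agent log output (illustrative); in practice this comes from
# `kubectl logs traceable-agent-xxxx -n traceableai`.
logs='INFO(1): 2021/09/20 20:14:20 starting collector
ERROR(1): 2021/09/20 20:14:24 libopa.go:156: Error while fetching policy: dial tcp 127.0.0.1:8181: connect: connection refused'

# Count ERROR lines, ignoring the known-benign libopa policy-fetch message.
echo "$logs" | grep ERROR | grep -v 'libopa.go' | wc -l   # prints 0
```

A non-zero count here would indicate an error other than the expected one and is worth investigating before proceeding.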
Istio
Note: if you have installed the Istio control plane to a namespace other than the default istio-system, replace all occurrences of "istio-system" below with the namespace you have deployed the Istio control plane to.
- Configure Zipkin:
- istioctl install --set profile=<your profile> -y --set meshConfig.enableTracing=true --set meshConfig.defaultConfig.tracing.sampling=100 --set meshConfig.defaultConfig.tracing.zipkin.address=agent.traceableai:9411
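The same mesh configuration can be expressed as an IstioOperator overlay and applied with istioctl install -f. This is a sketch that carries over the exact values from the istioctl flags above:

```yaml
# overlay.yaml (sketch) — equivalent to the istioctl --set flags above
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
        zipkin:
          address: agent.traceableai:9411
```

Keeping this in a file makes the tracing configuration easier to review and reapply than a long command line.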
- Label the namespace where the istio-ingressgateway is deployed (typically istio-system):
- kubectl label ns istio-system traceableai-inject-tme=enabled
- Patch the istio-ingressgateway deployment:
- kubectl patch deployment.apps/istio-ingressgateway -p '{"spec": {"template": {"metadata": {"annotations": {"tme.traceable.ai/inject": "true"}}}}}' -n istio-system
- Enable the Envoy filter for the istio-ingressgateway:
- kubectl patch deployment.apps/istio-ingressgateway -p '{"spec": {"template": {"metadata": {"labels": {"traceableai-istio": "enabled"}}}}}' -n istio-system
- Create the Envoy filter in the istio-system namespace:
- helm install traceableai-istio traceableai/traceableai-istio -n istio-system
- kubectl rollout restart deployment istio-ingressgateway -n istio-system
- Verify no error logs are written during the Traceable Tracing Agent deployment. The istio-ingressgateway pod should show 2/2:
- kubectl get po -n istio-system
- kubectl logs istio-ingressgateway-xxxxx -n istio-system -c tme
- kubectl logs istio-ingressgateway-xxxxx -n istio-system -c istio-proxy
- Verify the Envoy filter is running:
- kubectl get envoyfilters.networking.istio.io -n istio-system
- Verify no error logs are written during the Traceable Platform Agent deployment:
- kubectl get po -n traceableai
- kubectl logs traceable-agent-xxx -n traceableai
Access an external HTTP service
Because the TPA is external to the Istio service mesh, you will likely need to enable access to it using a ServiceEntry. Depending on your network policies, you may need to add one or both of the following ServiceEntry objects.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: tpa
spec:
  hosts:
  - agent.traceableai
  ports:
  - number: 4317
    name: zipkin
    protocol: HTTP
  - number: 8181
    name: opa
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: tpa-saas
spec:
  hosts:
  - api.traceable.ai
  ports:
  - number: 443
    name: traceable
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
EOF
JVM
Installing the Tracing Agent
The Tracing Agent is the span exporter. It can either be integrated directly with your application (Docker image, startup command, etc.) or automatically injected by Kubernetes. The Tracing Agent copies each HTTP request to a background thread, where the request is used to construct a span. That span is then sent to the Traceable Platform Agent (TPA). In a Kubernetes environment, the TPA should be deployed to a separate namespace on the same cluster as the application being instrumented with the Tracing Agent.
Auto Injection
- Install the TPA
- Choose a method to implement the "auto-injection" of the Traceable Java agent:
- Patch a specific deployment:
- kubectl patch deployment.apps/<APP_NAME> -p '{"spec": {"template": {"metadata": {"annotations": {"java.traceable.ai/inject": "true"}}}}}' -n $NAMESPACE
- Or label an entire namespace:
- kubectl label namespace $NAMESPACE traceableai-inject-java=enabled
- Restart the Java application deployment:
- kubectl rollout restart deployment <DEPLOYMENT_NAME> -n $NAMESPACE
- Verify the Java deployment has the Traceable Tracing Agent attached:
- kubectl get po -n traceableai
- kubectl logs traceable-agent-xxx -n traceableai
- You should see a message similar to the following:
- {"level":"info","time":"2021-09-19T20:00:52.540Z","message":"Will execute injection on pod","injector":"java","pod_name":"<APP_NAME-xxxx>","namespace_name":"<NAMESPACE>"}
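To confirm that injection ran for a particular pod, you can filter the TPA logs for the injector message. The sample line below mirrors the log format shown above, with hypothetical pod and namespace names; in practice you would pipe `kubectl logs traceable-agent-xxx -n traceableai` into the same filter:

```shell
# Sample TPA injector log line (hypothetical pod/namespace names); in practice
# this comes from `kubectl logs traceable-agent-xxx -n traceableai`.
line='{"level":"info","time":"2021-09-19T20:00:52.540Z","message":"Will execute injection on pod","injector":"java","pod_name":"demo-app-6d5f7c9b4-x2k9p","namespace_name":"demo"}'

# Keep only java-injector injection events for the namespace of interest.
echo "$line" | grep '"message":"Will execute injection on pod"' | grep '"injector":"java"' | grep '"namespace_name":"demo"'
```

If the filter produces no output for your pod's namespace, injection did not run; re-check the annotation or namespace label from the steps above.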
-
Manual Injection
- Install the TPA
- Enable external access to the TPA
- Choose the best download option for your environment
- Configure - to configure the Tracing Agent, set the HT_CONFIG_FILE environment variable to point to a YAML configuration file.
Update your Dockerfile (example)
FROM openjdk:11-jdk
ENV HT_CONFIG_FILE config.yaml
ARG JAR_FILE=app.jar
COPY ${JAR_FILE} app.jar
COPY config.yaml config.yaml
COPY javaagent.jar javaagent.jar
ENTRYPOINT ["java", "-javaagent:/javaagent.jar", "-Dserver.port=8080", "-jar", "/app.jar"]
Example config.yaml
serviceName: <REPLACE WITH SERVICE NAME>
reporting:
  endpoint: http://agent.traceableai:4317
  traceReporterType: ZIPKIN
opa:
  endpoint: http://agent.traceableai:8181
propagationFormats:
  - B3
- Rebuild and redeploy your containers to Kubernetes