NATS access to a secure K8s cluster using Leaf Nodes and Secure WebSockets
What
Demonstration of how to connect a NATS leafnode to a NATS cluster using secure websockets.
Why
Part of the NATS TLS handshake is in plain text. This results in a major inconvenience if the NATS cluster has to be exposed via a secure ingress, an OpenShift secure route, etc. There are a couple of ways to alleviate the issue:
- Poke a hole in the cluster by exposing an insecure NodePort
- Install TLS enabled proxies (HAProxy, Envoy, etc.) inside and outside the cluster to provide a secure tunnel for NATS traffic
- Employ an edge NATS server (leaf node) that communicates securely with the main NATS cluster
How
The NATS “way” is to enable WebSockets + TLS on the main NATS cluster. The leaf node then communicates with the main NATS cluster over Secure WebSockets (WSS).
For more details, the source code can be found at https://github.com/balamuru/nats-k8s-leafnode-websocket
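As a rough sketch of the configuration involved (the host name, ports, and certificate paths below are illustrative assumptions, not taken from the linked repo), the main cluster enables a TLS-protected WebSocket listener plus leaf node support, and the leaf node points a remote at it over wss://:

```conf
# hub.conf -- main NATS cluster (illustrative values)
port: 4222

websocket {
  port: 443
  tls {
    cert_file: "/etc/nats/certs/server-cert.pem"
    key_file:  "/etc/nats/certs/server-key.pem"
  }
}

# accept leaf node connections
leafnodes {
  port: 7422
}
```

```conf
# leaf.conf -- edge NATS server (illustrative values)
port: 4223

leafnodes {
  remotes [
    # connect to the hub's WebSocket listener over TLS
    { url: "wss://nats.example.com:443" }
  ]
}
```

Because the leaf node dials out over ordinary WSS, only the TLS-terminated WebSocket port needs to be exposed by the ingress.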
Solus Linux
Here’s a shout out to Solus, the primary OS on my personal workstation for over 5 trouble-free years, and the one I prefer over Ubuntu, Arch, macOS, CentOS and several other *nix variants.
https://www.gamingonlinux.com/2021/07/a-chat-with-joshua-strobl-of-the-solus-linux-distribution
Installing Istio on K3s (minimal Kubernetes)
Recent versions of K3s deploy the Traefik load balancer by default.
It’s a great thing to have out of the box … unless you intend to install Istio on the cluster, in which case the Istio Ingress Gateway ports will conflict. The issue and solution follow.
vinayb@carbon ~ $ kubectl get po -n=istio-system
NAME READY STATUS RESTARTS AGE
istiod-56874696b5-fd4tf 1/1 Running 0 3m55s
svclb-istio-ingressgateway-kbcvq 0/5 Pending 0 3m39s ===========> hanging pods
istio-egressgateway-585f7668fc-pgrp6 1/1 Running 0 3m39s
istio-ingressgateway-8657768d87-7g9rx 1/1 Running 0 3m39s
vinayb@carbon ~ $ kubectl describe po svclb-istio-ingressgateway-kbcvq -n=istio-system
Name: svclb-istio-ingressgateway-kbcvq
Namespace: istio-system
Priority: 0
Node: <none>
Labels: app=svclb-istio-ingressgateway
controller-revision-hash=754c7b499c
pod-template-generation=1
svccontroller.k3s.cattle.io/svcname=istio-ingressgateway
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: DaemonSet/svclb-istio-ingressgateway
Containers:
lb-port-15021:
Image: rancher/klipper-lb:v0.1.2
Port: 15021/TCP
Host Port: 15021/TCP
Environment:
SRC_PORT: 15021
DEST_PROTO: TCP
DEST_PORT: 15021
DEST_IP: 10.43.56.82
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ccf9r (ro)
lb-port-80:
Image: rancher/klipper-lb:v0.1.2
Port: 80/TCP
Host Port: 80/TCP
Environment:
SRC_PORT: 80
DEST_PROTO: TCP
DEST_PORT: 80
DEST_IP: 10.43.56.82
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ccf9r (ro)
lb-port-443:
Image: rancher/klipper-lb:v0.1.2
Port: 443/TCP
Host Port: 443/TCP
Environment:
SRC_PORT: 443
DEST_PROTO: TCP
DEST_PORT: 443
DEST_IP: 10.43.56.82
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ccf9r (ro)
lb-port-31400:
Image: rancher/klipper-lb:v0.1.2
Port: 31400/TCP
Host Port: 31400/TCP
Environment:
SRC_PORT: 31400
DEST_PROTO: TCP
DEST_PORT: 31400
DEST_IP: 10.43.56.82
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ccf9r (ro)
lb-port-15443:
Image: rancher/klipper-lb:v0.1.2
Port: 15443/TCP
Host Port: 15443/TCP
Environment:
SRC_PORT: 15443
DEST_PROTO: TCP
DEST_PORT: 15443
DEST_IP: 10.43.56.82
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ccf9r (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-ccf9r:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ccf9r
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule op=Exists
node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m8s default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. =========================> port conflict
Warning FailedScheduling 4m8s default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
You can address this in one of two ways:
- Disabling Traefik at the cluster level during startup ( https://rancher.com/docs/k3s/latest/en/faq/ ), OR
- Deleting the Traefik deployment and service objects and bouncing the Istio ingress gateway pod
kubectl delete deploy traefik -n kube-system
kubectl delete svc traefik -n kube-system
kubectl delete po <ingress-gateway-pod> -n istio-system
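For completeness, the install-time alternative (the first option above) looks like this on recent K3s versions; note that older releases used the flag "--no-deploy traefik" instead:

```shell
# Install K3s without the bundled Traefik (run on server nodes).
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
```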
vinayb@carbon ~ $ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-jm2dp 0/1 Completed 0 61d
kube-system metrics-server-86cbb8457f-6smlc 1/1 Running 14 61d
kube-system local-path-provisioner-5ff76fc89d-57vfc 1/1 Running 71 61d
keda keda-operator-968bbb969-9g62x 1/1 Running 14 61d
keda keda-operator-metrics-apiserver-6db5fd7f95-8w5tp 1/1 Running 14 61d
kube-system k8dash-87fd8b696-d696g 1/1 Running 14 61d
kube-system coredns-854c77959c-68fbx 1/1 Running 14 61d
istio-system istiod-56874696b5-fd4tf 1/1 Running 0 8m26s
istio-system istio-egressgateway-585f7668fc-pgrp6 1/1 Running 0 8m10s
istio-system istio-ingressgateway-8657768d87-7g9rx 1/1 Running 0 8m10s
istio-system svclb-istio-ingressgateway-kbdrb 5/5 Running 0 4s ===> pod is up
How Containers Work
While I’m aware of the underlying concepts (namespaces for entity isolation and cgroups for setting limits), it’s a bit harder to actually explain these concepts concisely yet clearly. Here is a rare blog that illustrates them in an all-stuff, no-fluff fashion: https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
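To make the namespace half of that concrete, every process’s namespace memberships are directly visible under /proc on Linux, and its cgroup membership sits right next to them:

```shell
# Each process belongs to one namespace of each type (uts, pid, net, mnt, ...).
# The symlinks below expose the namespace inodes; two processes that share a
# namespace show the same inode number here.
ls -l /proc/self/ns/

# The cgroup side (resource limits) of the same process:
cat /proc/self/cgroup
```

A container is, loosely, just a process whose entries here differ from the host’s.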
Handling degradation in ML models
It is important to realize that AI / ML models, while extremely powerful in their own right, deal with ever-changing data characteristics. This can lead to degraded model performance over time. The following link addresses some of the factors at play: https://towardsdatascience.com/why-machine-learning-models-degrade-in-production-d0f2108e9214
GCP Technology Cheat Sheet
I came across this. It’s a great way to make sense of the ever-evolving smorgasbord of Google Cloud offerings: https://github.com/gregsramblings/google-cloud-4-words/blob/master/DarkPoster-medres.png
What the Heck is a Helm Operator ?
At least, that’s what I was asking myself when I first explored the concept. In reality, it’s not that complicated (although numerous blogs and articles would have you think so). It is, in essence, a way to express a Helm Chart as a Kubernetes resource or “Kind”. Here is a simple tutorial that I wrote up at https://github.com/balamuru/helm-operator-example.
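To illustrate the idea, here is roughly what a chart-as-resource looks like using the Flux v1 Helm Operator’s HelmRelease kind (the chart coordinates and values below are made up for illustration; the tutorial above walks through its own example):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app            # illustrative name
  namespace: default
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com/   # hypothetical chart repository
    name: my-app
    version: 1.0.0
  values:                 # overrides the chart's default values
    replicaCount: 2
```

The operator watches these resources and reconciles the corresponding Helm release, so installs and upgrades become ordinary kubectl apply operations.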
Hope you find this useful.
Duo Two Factor Authentication Java Client
I wasn’t very happy with parts of the Duo Security REST API documentation (relating to auth) or the provided Java client, so I wrote my own. It’s hosted at https://github.com/balamuru/duo-client-java-spring-client . Hope this helps someone.
Handling currency searches with SOLR
We needed to handle currency symbols in search queries without compromising the functionality of the StandardTokenizer.
Working off some prior Stack Overflow info, this is how we accomplished the needful 🙂
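As a sketch of one common technique (the field type name and the symbol-to-word mapping below are illustrative, not necessarily the exact schema used), a char filter can rewrite currency symbols into searchable words before the StandardTokenizer ever sees them:

```xml
<!-- schema.xml fragment (illustrative) -->
<fieldType name="text_currency" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- rewrite "$" to the word "dollar" before tokenization, so the
         StandardTokenizer (which would drop "$") still behaves normally -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="\$" replacement=" dollar "/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Because the rewrite happens in a char filter, the tokenizer and downstream filters are untouched, which is what preserves the standard behavior for non-currency text.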