Your Environment
Calico version 3.25.1
EKS with Kubernetes 1.23, using the kube-proxy and CoreDNS addons.
I believe I have this worked out. I needed to remove the CoreDNS and kube-proxy addons and deploy, then add them back and re-deploy. When monitoring the endpoints afterward, kube-dns does indeed switch over to the Calico network.
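One way to confirm the switch-over described above is to check whether the kube-dns endpoint IPs now fall inside the Calico pool rather than the old VPC CNI range. A minimal sketch, assuming the endpoint JSON below is a hypothetical example of what `kubectl get endpoints kube-dns -n kube-system -o json` would return (the CIDR is the `default-ipv4-ippool` from this issue):

```python
import ipaddress
import json

# The default-ipv4-ippool CIDR reported in this issue.
CALICO_POOL = ipaddress.ip_network("192.168.0.0/16")

def endpoint_ips(endpoints_json: str) -> list:
    """Extract all addresses from an Endpoints object serialized as JSON."""
    obj = json.loads(endpoints_json)
    return [
        addr["ip"]
        for subset in obj.get("subsets", [])
        for addr in subset.get("addresses", [])
    ]

def all_in_pool(ips, pool=CALICO_POOL):
    """True when every endpoint IP belongs to the Calico IP pool."""
    return all(ipaddress.ip_address(ip) in pool for ip in ips)

# Hypothetical kube-dns Endpoints object after the addons were re-deployed.
sample = json.dumps({
    "subsets": [
        {"addresses": [{"ip": "192.168.12.5"}, {"ip": "192.168.77.20"}]}
    ]
})

ips = endpoint_ips(sample)
print(ips, all_in_pool(ips))  # both addresses fall inside 192.168.0.0/16, so this prints True
```

If the endpoints were still on the VPC CNI network, `all_in_pool` would come back False, which matches the failure mode before the addons were re-deployed.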
Migrating from Amazon AWS VPC CNI to Calico.
I removed the VPC CNI addon, then followed the tutorial here: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/managed-public-cloud/eks
I am running EKS in a multi-AZ, two-subnet deployment.
I set hostNetwork: true on the aws-load-balancer-controller.
Finally, I set the max pods on my node group and changed the node group name to roll all the nodes over to a new group.
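For reference, the hostNetwork change above is a one-line addition to the controller's pod template. A sketch of the relevant fragment (the surrounding Deployment fields are the usual chart defaults, not taken from this issue):

```yaml
# Fragment of the aws-load-balancer-controller Deployment spec
spec:
  template:
    spec:
      hostNetwork: true   # run the controller on the node network so it can reach the API server
```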
Expected Behavior
Pod-to-pod networking should work across all nodes within a namespace.
Current Behavior
Pod-to-pod networking appears to work only between pods running on the same node.
I have an Airflow service running (installed with Helm); this creates the following pods:
pgbouncer
webserver
triggerer
statsd
redis
scheduler
Anytime these pods spin up on the same node, things work. But when they spin up on different nodes (multi-AZ), DNS resolution breaks. For example, when the webserver spins up on a different node than the pgbouncer, it hangs on:
connect to airflow-pgbouncer.airflo (172.20.227.96) port 6543 failed.
That ip address looks incorrect to me, as the ip pool is:
default-ipv4-ippool 192.168.0.0/16 nat=true ipipmode=never vxlanmode=crossSubnet disabled=false disableBgpExport=false selector=all()
Would I expect that to resolve to a 192.168.0.0/16 IP on lookup? Is there additional Calico config to enable this that I am missing?
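The pool membership of the resolved address is easy to check. A small sketch, with one labeled assumption: 172.20.0.0/16 is a common EKS default for the *service* CIDR, which is not confirmed anywhere in this issue — if it holds here, the lookup is returning the service's ClusterIP rather than a pod-pool address, which would point at forwarding to the backing pod rather than DNS itself:

```python
import ipaddress

addr = ipaddress.ip_address("172.20.227.96")          # address the webserver resolved
calico_pool = ipaddress.ip_network("192.168.0.0/16")  # default-ipv4-ippool from this issue
# Assumption: a common EKS default service CIDR; not confirmed in this issue.
service_cidr = ipaddress.ip_network("172.20.0.0/16")

print(addr in calico_pool)    # False: not an address from the Calico pod pool
print(addr in service_cidr)   # True: consistent with a service ClusterIP
```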
Context
Working to migrate away from AWS VPC CNI to Calico