[Kubernetes] Service & DNS
* The relationship between a Pod and a Service is formed with labels and label selectors.
** Reference: Service | Kubernetes (kubernetes.io) — an abstraction for exposing an application running on a set of Pods as a network service, without modifying the application to use an unfamiliar service discovery mechanism.
* There are two ways to expose a service outside a Kubernetes cluster: Service and Ingress.
Service & DNS
- ClusterIP
- SessionAffinity
- Named Port
- Multi Port
- Service Discovery
- SD using environment variables
- SD using DNS
- NodePort
- LoadBalancer
- MetalLB
- External Name
- Ingress
Service - ClusterIP
myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
spec:
  selector:           # Pod selector
    app: web
  ports:
  - port: 80          # Service port
    targetPort: 8080  # target (Pod) port
kubectl create -f .
kubectl get svc myweb-svc
kubectl describe svc myweb-svc
kubectl get endpoints myweb-svc
kubectl run nettool -it --image ghcr.io/c1t1d0s7/network-multitool --rm
* The --rm option is added so that the Pod is deleted automatically once the app/container terminates.
> curl x.x.x.x    # the ClusterIP of the Service resource
> host myweb-svc
> curl myweb-svc
Session Affinity
: session pinning (a given client is always routed to the same Pod)
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-ses
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
kubectl get svc,ep,rs,pod
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes      ClusterIP   10.233.0.1      <none>        443/TCP   5d23h
service/myweb-svc-ses   ClusterIP   10.233.46.122   <none>        80/TCP    5m35s

NAME                      ENDPOINTS                                                AGE
endpoints/kubernetes      192.168.100.100:6443                                     5d23h
endpoints/myweb-svc-ses   10.233.90.31:8080,10.233.90.32:8080,10.233.90.33:8080   5m35s

NAME                       DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-rs   3         3         3       63s

NAME                 READY   STATUS    RESTARTS   AGE
pod/myweb-rs-49zwx   1/1     Running   0          62s
pod/myweb-rs-4k4ql   1/1     Running   0          62s
pod/myweb-rs-m2lq5   1/1     Running   0          63s
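* How long a client stays pinned can be tuned with sessionAffinityConfig; a minimal sketch (10800 seconds, i.e. 3 hours, is the default timeout):
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # how long the same client keeps being routed to the same Pod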
Named Port
: assigns a name to a port
myweb-rs-named.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-named
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
      - name: myweb
        image: ghcr.io/c1t1d0s7/go-myweb
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web8080
myweb-svc-named.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-named
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: web8080  # the port name can be used instead of a port number
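* A quick check that the name is resolved (a sketch): describe shows the named target port while the Endpoints carry the numeric port.
kubectl describe svc myweb-svc-named   # TargetPort: web8080/TCP, Endpoints: <POD-IP>:8080, ...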
Multi Port
myweb-rs-multi.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-multi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
      - name: myweb
        image: ghcr.io/c1t1d0s7/go-myweb
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
myweb-svc-multi.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-multi
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 443
    targetPort: 8443
    name: https
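* A quick way to verify both ports were registered (a sketch):
kubectl get svc myweb-svc-multi         # PORT(S) should show 80/TCP,443/TCP
kubectl get endpoints myweb-svc-multi   # the Pod IPs are listed for both target ports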
Service Discovery
: how an application finds the server it needs to connect to
SD using environment variables
Every Pod, at startup, is given environment variables for the Services that exist at that point in time.
# env | grep MYWEB
MYWEB_SVC_PORT_80_TCP_PORT=80
MYWEB_SVC_PORT_80_TCP_PROTO=tcp
MYWEB_SVC_PORT_80_TCP=tcp://10.233.25.106:80
MYWEB_SVC_SERVICE_HOST=10.233.25.106
MYWEB_SVC_PORT=tcp://10.233.25.106:80
MYWEB_SVC_SERVICE_PORT=80
MYWEB_SVC_PORT_80_TCP_ADDR=10.233.25.106
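* These variables can be used directly inside the Pod; a sketch using the variables above:
curl http://$MYWEB_SVC_SERVICE_HOST:$MYWEB_SVC_SERVICE_PORT/
Note that only Services that already existed when the Pod started are injected; Services created later require DNS (below) or a Pod restart.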
SD using DNS
host -v myweb-svc
kube-dns(pod=coredns-X)
Creating a Service registers an FQDN for the Service name with the DNS server.
[service name].[namespace].[object type].[domain]
myweb-svc.default.svc.cluster.local
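* Thanks to the search domains in the Pod's resolv.conf, shorter forms also resolve; a sketch (assuming the default cluster domain cluster.local):
curl http://myweb-svc                              # within the same namespace
curl http://myweb-svc.default                      # from another namespace
curl http://myweb-svc.default.svc.cluster.local    # full FQDN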
nodelocal DNS
- with the NodeLocal DNS cache enabled
Pod -- dns → 169.254.25.10 (node-cache process) = DNS Cache Server → coredns SVC (kube-system NS) → coredns POD

* Reference: Using NodeLocal DNSCache in Kubernetes clusters | Kubernetes (kubernetes.io)
- without the NodeLocal DNS cache
Pod -- dns → coredns SVC(kube-system NS) → coredns POD
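* Which path is taken is determined by the Pod's /etc/resolv.conf; with the NodeLocal cache enabled it typically looks roughly like this (values differ per cluster):
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5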
Service - NodePort
: built on top of ClusterIP; additionally opens a port on each node → reachable from outside the cluster
svc.spec.type
- ClusterIP : load balancer used inside the cluster
- NodePort : access point from outside the cluster
- LoadBalancer : load balancer accessed from outside the cluster
NodePort range: 30000-32767
myweb-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-np
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31313
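* Once applied, the service is reachable on any node IP at port 31313 from outside the cluster; a sketch (assuming 192.168.100.100 is one of the node IPs, as elsewhere in these notes):
curl http://192.168.100.100:31313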
Service - LoadBalancer
: LoadBalancer = (external) LoadBalancer + NodePort + ClusterIP
L4 LB
myweb-svc-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31313
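* EXTERNAL-IP stays <pending> until a load-balancer implementation (e.g. MetalLB below) assigns an address; a sketch of the check:
kubectl get svc myweb-svc-lb      # wait for EXTERNAL-IP to change from <pending> to an address
curl http://<EXTERNAL-IP>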
MetalLB - Addon
(MetalLB open source)
* Reference: MetalLB, bare metal load-balancer for Kubernetes (metallb.universe.tf)
~/kubespray/inventory/mycluster/group_vars/k8s-cluster/addons.yml
...
metallb_enabled: true
metallb_speaker_enabled: true
metallb_ip_range:
  - "192.168.100.240-192.168.100.249"
...
metallb_protocol: "layer2"
...
~/kubespray/inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_proxy_strict_arp: true
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b
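* After the playbook finishes, a LoadBalancer Service should receive an address from the configured pool; a sketch (192.168.100.240, the first pool address, is assumed):
kubectl get svc myweb-svc-lb      # EXTERNAL-IP should fall inside 192.168.100.240-249
curl http://192.168.100.240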
Service - ExternalName
: configures a DNS CNAME so that a specific service outside the cluster can be reached from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: weather-ext-svc
spec:
  type: ExternalName
  externalName: www.naver.com
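* Inside the cluster the Service name simply resolves as a CNAME to the external host; a sketch from the nettool Pod used earlier:
host weather-ext-svc      # returns a CNAME pointing to www.naver.com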
Ingress
: L7 LB (a load balancer at the application layer = ALB)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing
spec:
  rules:
  - host: '*.encore.xyz'
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myweb-svc-np
            port:
              number: 80
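* Depending on the cluster, the Ingress may also need to select its controller via ingressClassName; a sketch assuming the NGINX ingress controller is installed under the class name "nginx":
spec:
  ingressClassName: nginx
  rules:
  ...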
Testing access to a virtual domain
Method 1 (most recommended)
curl --resolve www.encore.xyz:80:192.168.100.100 http://www.encore.xyz
Method 2
/etc/hosts
...
192.168.100.100 www.encore.xyz
curl http://www.encore.xyz
Method 3 (commonly used)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing
spec:
  rules:
  - host: '*.nip.io'
    ...
kubectl replace -f myweb-ing.yaml
curl http://192-168-100-100.nip.io
Ingress example
hello:one image
Dockerfile
FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html
index.html
<h1> Hello One </h1>
hello:two image
Dockerfile
FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html
index.html
<h1> Hello Two </h1>
docker image build -t X/hello:one .
docker image build -t X/hello:two .
docker login
docker push X/hello:one
docker push X/hello:two
RS
one-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: one-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-one
  template:
    metadata:
      labels:
        app: hello-one
    spec:
      containers:
      - name: hello-one
        image: c1t1d0s7/hello:one
        ports:
        - containerPort: 80
          protocol: TCP
two-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: two-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-two
  template:
    metadata:
      labels:
        app: hello-two
    spec:
      containers:
      - name: hello-two
        image: c1t1d0s7/hello:two
        ports:
        - containerPort: 80
          protocol: TCP
one-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: one-svc-np
spec:
  type: NodePort
  selector:
    app: hello-one
  ports:
  - port: 80
    targetPort: 80
two-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: two-svc-np
spec:
  type: NodePort
  selector:
    app: hello-two
  ports:
  - port: 80
    targetPort: 80
hello-ing.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /  # URL rewrite: /one -> /, /two -> /
spec:
  rules:
  - host: '*.nip.io'
    http:
      paths:
      - path: /one
        pathType: Prefix
        backend:
          service:
            name: one-svc-np
            port:
              number: 80
      - path: /two
        pathType: Prefix
        backend:
          service:
            name: two-svc-np
            port:
              number: 80
kubectl create -f .
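* With everything created, each path should reach its own backend through the rewrite; a sketch (assuming the ingress controller is reachable at 192.168.100.100):
curl http://192-168-100-100.nip.io/one    # expected: <h1> Hello One </h1>
curl http://192-168-100-100.nip.io/two    # expected: <h1> Hello Two </h1>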
Readiness Probe
Pods are registered as targets in the Service's Endpoints resource based on their health check.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
      - name: myweb
        image: ghcr.io/c1t1d0s7/go-myweb:alpine
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp/ready
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
kubectl create -f .
watch -n1 -d kubectl get po,svc,ep
kubectl exec <POD> -- touch /tmp/ready
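* While /tmp/ready is absent the probe fails and the Pod is left out of the Endpoints; touching the file adds it, and removing it takes it back out (a sketch):
kubectl get endpoints myweb-svc-lb     # the Pod's IP appears only after /tmp/ready exists
kubectl exec <POD> -- rm /tmp/ready    # the probe fails again → the Pod is removed from the Endpoints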
* Reference: [Kubernetes] Calico CNI 동작원리 이해하기 (velog.io)