Starting around noon on 2021/07/28, every job that uses kubectl to run something in a pod began failing.
Running a kubectl command from my own PC returned error: You must be logged in to the server (Unauthorized)
Running kubectl again from my machine then gave The connection to the server 192.168.10.190:6443 was refused - did you specify the right host or port?
Checking the kubelet logs with journalctl -u kubelet, it was exiting abnormally with the message part of the existing bootstrap client certificate is expired: 2021-07-27 06:24:58 +0000 UTC.
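As an aside (not part of the original log): one way to confirm which certificates have expired is to print each certificate's notAfter date with openssl. The sketch below generates a throwaway certificate in a temp directory so it can run anywhere; on the actual control plane you would point PKI_DIR at /etc/kubernetes/pki instead.

```shell
# Sketch: print the expiry (notAfter) date of every .crt in a pki directory.
# PKI_DIR here is a temp dir holding a freshly generated self-signed cert so
# the example is self-contained; on a real node, use PKI_DIR=/etc/kubernetes/pki.
PKI_DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$PKI_DIR/demo.key" -out "$PKI_DIR/demo.crt" -days 365 2>/dev/null

for crt in "$PKI_DIR"/*.crt; do
  printf '%s: ' "$crt"
  openssl x509 -enddate -noout -in "$crt"   # prints "notAfter=<date>"
done
```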
Recovery procedure
Exactly the same as last year's procedure.
Take a backup
sudo su -
mkdir ~/k8s_backup_20210728
cp -rva /etc/kubernetes ~/k8s_backup_20210728/
# cd /etc/kubernetes/pki
# mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/k8s_backup_20210728/
# kubeadm init phase certs all --apiserver-advertise-address 192.168.10.190
W0728 01:03:46.960780    4926 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Using the existing "sa" key
# cd /etc/kubernetes
# mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/k8s_backup_20210728/
# kubeadm init phase kubeconfig all
W0728 01:05:49.334781    5092 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
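The notes stop at regenerating the kubeconfig files. A typical follow-up (an assumption on my part, not taken from the log above) is to restart kubelet and refresh the local admin kubeconfig; the sketch below only copies admin.conf when it actually exists, so it is safe to dry-run on any machine.

```shell
# Assumed follow-up step, not from the original log: refresh ~/.kube/config
# from the regenerated admin.conf. On the control plane you would typically
# also run `sudo systemctl restart kubelet` so it reloads kubelet.conf.
KUBECONFIG_SRC=${KUBECONFIG_SRC:-/etc/kubernetes/admin.conf}
if [ -f "$KUBECONFIG_SRC" ]; then
  mkdir -p "$HOME/.kube"
  cp "$KUBECONFIG_SRC" "$HOME/.kube/config"
  STATUS=refreshed
else
  STATUS=skipped   # not on a control-plane node
fi
echo "kubeconfig refresh: $STATUS"
```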
I had completely misunderstood this. In terms of the config above, I assumed that logs arriving on port 5400 would be interpreted only by the definitions in this file. In fact, no matter where a log arrives from, it passes through the settings in every file under conf.d, so each file apparently has to be written so that it filters nothing unless the log is actually its target. In short: you gain one if statement per log type. That is painful. So, see the article below...
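One common way around the "every file in conf.d sees every log" behavior, if this is rsyslog (an assumption; the ruleset name and output path below are made up for illustration), is to bind a dedicated ruleset to the port-5400 input. Messages received on that input then only run that ruleset's rules, instead of traversing every conf.d file:

```
# Hypothetical rsyslog sketch: scope rules to one input via a ruleset.
module(load="imtcp")
input(type="imtcp" port="5400" ruleset="port5400")

ruleset(name="port5400") {
  action(type="omfile" file="/var/log/port5400.log")
  stop   # stop processing these messages here
}
```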
## Browsing/Identification ###
   local master = no
   preferred master = no

# Change this to the workgroup/NT-domain name your Samba server will part of
   server role = standalone server
   netbios name = chinachu
   workgroup = WORKGROUP
   server string = %h server (Samba, Ubuntu)
[global]

## Browsing/Identification ###
   local master = no
   preferred master = no

# Change this to the workgroup/NT-domain name your Samba server will part of
   netbios name = filesrv
   workgroup = WORKGROUP
   server string = %h server (Samba, Ubuntu)
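These look like fragments of the [global] section of smb.conf. After editing, a quick way to catch typos (assuming the standard testparm tool from the samba package and the Debian/Ubuntu default config path) is something like:

```shell
# Hypothetical check: validate smb.conf syntax with testparm, if available.
SMB_CONF=${SMB_CONF:-/etc/samba/smb.conf}
if command -v testparm >/dev/null 2>&1 && [ -f "$SMB_CONF" ]; then
  testparm -s "$SMB_CONF"
  STATUS=checked
else
  STATUS=skipped   # samba not installed on this machine
fi
echo "smb.conf check: $STATUS"
```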
class ApplicationController < ActionController::Base
  before_filter :set_request_store

  def set_request_store
    RequestStore.store[:request] = request
  end
end
Code for using the value that was set:
RequestStore.fetch(:request) { nil }
RequestStore.fetch(:request) # raises a "no block given" error if the value cannot be fetched
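Why the second call can raise: RequestStore.fetch behaves roughly like `store[key] ||= yield` (my paraphrase of the gem's behavior, not its exact source), so when nothing is stored and no block is given, the `yield` raises LocalJumpError. A self-contained sketch of that logic:

```ruby
# Minimal stand-in for RequestStore.fetch (assumption: roughly `store[key] ||= yield`).
def fetch_like(store, key)
  store[key] ||= yield
end

store = {}
fetch_like(store, :request) { nil }   # block given: fine, stores/returns nil

begin
  fetch_like(store, :request)         # nothing stored and no block -> yield raises
rescue LocalJumpError => e
  puts "raised: #{e.message}"
end
```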