Importing a Container Image into a DV

Goal: unified image management, i.e. use a single container image registry (Harbor) to manage both virtual machine images and container images.

The container image was built in the previous step. Importing it into a DV requires an image registry such as Harbor or a plain registry; Harbor is used as the example here. First, make sure the container image has been pushed to the registry:

ctr -n k8s.io images push --platform linux/amd64 --user admin:Harbor12345 -k registry.demo.com/vmidisk/ubuntu22.04:latest
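To double-check that Harbor actually serves the image, it can be pulled back with TLS verification skipped (just a sanity check; the credentials and the -k flag match the push above):
ctr -n k8s.io images pull --user admin:Harbor12345 -k registry.demo.com/vmidisk/ubuntu22.04:latest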
Because Harbor uses a self-signed (untrusted) certificate, the certificate also needs to be stored in a ConfigMap so that it can be referenced from the DV:
kubectl create configmap registry-demo-certs --from-file=/etc/containerd/certs.d/registry.demo.com/ca.crt
registry.demo.com can also be listed in CDI's insecureRegistries configuration:
kubectl patch cdi cdi --patch '{"spec": {"config": {"insecureRegistries": ["registry.demo.com"]}}}' --type merge
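Both settings can be confirmed afterwards, for example (jsonpath is just one convenient output format):
kubectl get configmap registry-demo-certs -o yaml
kubectl get cdi cdi -o jsonpath='{.spec.config.insecureRegistries}'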
To use the Harbor registry from inside Kubernetes, the Harbor domain name also has to be added to the CoreDNS ConfigMap so that it can be resolved within the cluster. The edited ConfigMap looks like this:
kubectl get cm -n kube-system coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes demo.com in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
           192.168.59.251 registry.demo.com
           # fall through so other names are still resolved by the remaining plugins
           fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-03-25T05:25:21Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "4390968"
  uid: 80e05125-d912-4442-8810-ab8e68b3f20a
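Because the Corefile enables the reload plugin, CoreDNS picks up the change automatically. Resolution can be verified from inside the cluster with a throwaway pod (the busybox image and tag are only an example):
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.36 -- nslookup registry.demo.com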

Next, create a Secret holding the Harbor credentials and a DataVolume that imports the image from the registry:
---
apiVersion: v1
kind: Secret
metadata:
  name: endpoint-harbor-secret
  labels:
    app: containerized-data-importer
type: Opaque
#data:
#  accessKeyId: "YWRtaW4="   # optional: user name (e.g. admin), base64 encoded without a trailing newline (echo -n admin | base64)
#  secretKey: "SGFyYm9yMTIzNDU="   # optional: password (e.g. Harbor12345), base64 encoded without a trailing newline
stringData:
  accessKeyId: "admin"
  secretKey: "Harbor12345"
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-image-ubuntu2204
spec:
  source:
    registry:
      # note the URL format: the docker:// prefix followed by the image reference
      url: "docker://registry.demo.com/kubevirt/vmidisk:ubuntu22.04"
      secretRef: endpoint-harbor-secret
      certConfigMap: registry-demo-certs
  pvc:
    storageClassName: "ceph-hdd-block"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    ## Block volume mode is supported
    volumeMode: Block

Note: volumeMode defaults to Filesystem and can be set to Block; see the volume modes documentation.
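Apply the manifests and watch the import (the file name is arbitrary; CDI creates an importer pod, typically named after the DV, that pulls the image and writes it into the PVC):
kubectl apply -f dv-image-ubuntu2204.yaml
kubectl get dv dv-image-ubuntu2204 -w
kubectl get pods | grep importer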

Create a VM to verify the import. Since this is only a test, no separate PVC is created; the VM directly uses the PVC that backs the DV above:

---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-ubuntu2204
  namespace: default
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: my-ubuntu2204
      #annotations:
      #  ovn.kubernetes.io/ip_address: 10.244.10.203
    spec:
      domain:
        cpu:
          cores: 1
          model: host-passthrough
        memory:
          guest: 1Gi
        devices:
          disks:
            - name: root-disk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 1024M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: root-disk
          persistentVolumeClaim:
            claimName: dv-image-ubuntu2204
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              hostname: my-ubuntu2204
              ssh_pwauth: True
              timezone: Asia/Shanghai
              disable_root: false
              chpasswd:
                list: |
                  root:123456
                expire: False
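Apply the VirtualMachine and check that it boots (the file name is arbitrary; virtctl must be installed for the console command):
kubectl apply -f my-ubuntu2204.yaml
kubectl get vm,vmi
virtctl console my-ubuntu2204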