Migration steps
Note: if OpenStack and KubeVirt share the same Ceph cluster and the same pool, you only need to create a PVC on top of the existing RBD image.
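A minimal sketch of that shared-Ceph shortcut, assuming the ceph-csi RBD driver is installed: a statically provisioned block-mode PV that points at the existing Cinder RBD image, bound to a PVC. The PV name, cluster ID, and secret name/namespace are placeholders for your environment; the image name and size follow the example later in this document.

```yaml
# Illustrative only: static PV/PVC over an existing Cinder RBD image via ceph-csi.
# <cluster-id> and the nodeStageSecretRef are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-node01-pv
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 200Gi
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: rbd.csi.ceph.com
    # For a statically provisioned volume, volumeHandle is the RBD image name
    volumeHandle: volume-6e2fe3f2-f5a1-4204-8326-60f0b717c28f
    volumeAttributes:
      clusterID: <cluster-id>
      pool: volumes
      staticVolume: "true"
      imageFeatures: layering
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-csi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8s-node01
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  resources:
    requests:
      storage: 200Gi
  volumeName: k8s-node01-pv
```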
The steps below cover the case where OpenStack and KubeVirt use different Ceph clusters.

1. Export the VM from OpenStack (in practice, export the VM's RBD image from Ceph)
In the OpenStack console, find the VM to be migrated and note the ID of its volume, as shown in the screenshot below. Then locate the matching RBD image in Ceph by that volume ID and export it.

## Find the RBD image that matches the volume ID (Cinder names images "volume-<id>")
# rbd ls volumes | grep 6e2fe3f2-f5a1-4204-8326-60f0b717c28f
volume-6e2fe3f2-f5a1-4204-8326-60f0b717c28f
## Export the RBD image; export time depends on the actual disk size
# rbd export volumes/volume-6e2fe3f2-f5a1-4204-8326-60f0b717c28f k8s-node01.raw
## The required pvc-size can be read from: qemu-img info k8s-node01.raw
# qemu-img info k8s-node01.raw
image: k8s-node01.raw
file format: raw
virtual size: 200 GiB (214748364800 bytes)
disk size: 198 GiB
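Note that the PVC must be at least the image's virtual size, and a decimal `200G` (200 × 10⁹ bytes) is smaller than `200 GiB`, so prefer the `Gi` suffix. A small sketch of rounding the virtual size up to whole GiB; the byte count is the "virtual size" from the `qemu-img info` output above:

```shell
# Round a raw image's virtual size (bytes) up to whole GiB for the PVC request.
virtual_bytes=214748364800          # from qemu-img info (virtual size)
gib=1073741824                      # 1 GiB in bytes
pvc_gib=$(( (virtual_bytes + gib - 1) / gib ))
echo "pvc-size: ${pvc_gib}Gi"       # prints: pvc-size: 200Gi
```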
2. Import the image into KubeVirt. One option is to upload it straight into a new PVC through CDI's upload proxy with virtctl:
# upload-k8s-node01.sh
virtctl image-upload --uploadproxy-url=https://15.98.129.113 \
--insecure \
--pvc-name=k8s-node01 \
--pvc-size=200Gi \
--storage-class=ceph-hdd-block \
--access-mode=ReadWriteOnce \
--block-volume \
--image-path=./k8s-node01.raw
Alternatively, import via a CDI DataVolume: place the exported raw file somewhere reachable over HTTP, then apply the manifest below.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: k8s-node01
  namespace: default
spec:
  source:
    http:
      url: http://192.168.171.102:1080/mnt/k8s-node01.raw
  pvc:
    storageClassName: "ceph-hdd-block"
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 200Gi
    volumeMode: Block
Finally, create a VirtualMachine that boots from the imported PVC:
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: k8s-node01
  namespace: default
spec:
  #running: true
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: k8s-node01
      annotations:
        ovn.kubernetes.io/ip_address: 192.168.171.30
        ovn.kubernetes.io/logical_switch: provider
    spec:
      #nodeSelector:
      #  kubernetes.io/hostname: linux9f100
      dnsConfig:
        nameservers:
          - 223.5.5.5
          - 114.114.114.114
      domain:
        cpu:
          cores: 8
          model: host-passthrough
        memory:
          guest: 16Gi
        devices:
          disks:
            - name: root-disk
              disk:
                bus: virtio
              cache: writeback
          interfaces:
            - name: default
              model: virtio
              bridge: {}
          rng: {}
        resources:
          requests:
            memory: 16Gi
          limits:
            memory: 16Gi
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 10
      volumes:
        - name: root-disk
          persistentVolumeClaim:
            claimName: k8s-node01