Storage
-
PVE Enabling IOMMU for Hardware Passthrough and Troubleshooting Passthrough Errors
I. Introduction: What is Hardware Passthrough? Hardware passthrough (Intel VT-d; VMware calls the equivalent feature DirectPath I/O) allows a virtual machine to access physical PCI functions on the platform directly through the I/O Memory Management Unit (IOMMU). Simply put, it allows the host…
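As a rough illustration (not taken from the article excerpt), enabling the IOMMU on an Intel-based Proxmox host usually involves a kernel parameter, the VFIO modules, and a verification step; the paths and parameters below are the standard Debian/GRUB ones and assume GRUB rather than systemd-boot:

    # /etc/default/grub: add the IOMMU flag to the kernel command line
    # (intel_iommu=on for Intel CPUs; recent AMD platforms enable their IOMMU by default)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # Apply the bootloader change and load the VFIO modules used for passthrough
    update-grub
    printf '%s\n' vfio vfio_iommu_type1 vfio_pci >> /etc/modules
    reboot

    # After rebooting, confirm the IOMMU is active and groups exist
    dmesg | grep -e DMAR -e IOMMU
    ls /sys/kernel/iommu_groups/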
-
Proxmox VE Hyper-converged Cluster Service Remains Uninterrupted while Adding New Configurations (Disks)
For a Proxmox VE hyper-converged cluster with five nodes, two Ceph pools have been created: one is a high-speed NVMe storage pool, and the other is a large-capacity SATA storage pool. Now, there is a need to replace all the…
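A minimal sketch of bringing a replacement disk into the cluster as a new OSD, one node at a time (the device name /dev/sdX and the exact sequence are assumptions, not quoted from the article):

    # On the node that received the new disk: clear it, then create the OSD
    ceph-volume lvm zap /dev/sdX --destroy
    pveceph osd create /dev/sdX

    # Watch the rebalance; VMs keep running as long as health returns to HEALTH_OK
    ceph -s
    ceph osd df tree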
-
How to Handle NODE FAILURE in a Proxmox Cluster?
PVE1, PVE2, and PVE3 form a cluster using Ceph storage. PVE1 has failed and now needs to be removed from the cluster.
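As a hedged outline of how a dead node is usually dropped from a Proxmox/Ceph cluster (the node name and OSD ID below are placeholders for whichever node actually failed):

    # From a surviving node: confirm the remaining nodes still have quorum, then remove the dead one
    pvecm status
    pvecm delnode pve1

    # If the failed node also hosted Ceph services, clean up its monitor and OSDs
    pveceph mon destroy pve1                       # only if pve1 ran a monitor
    ceph osd out <osd-id>
    ceph osd purge <osd-id> --yes-i-really-mean-it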
-
Handling Proxmox VE Hyper-Converged Cluster Ceph OSD Disk Full Issue
Identify which nodes host the full OSDs, then delete stopped or unneeded virtual machines on those nodes to free up disk space.
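One way to locate the affected OSDs and hosts before cleaning up, sketched with standard Ceph/Proxmox commands (the VM ID below is purely illustrative):

    # Show which OSDs are nearfull/full and which host each one lives on
    ceph health detail
    ceph osd df tree        # the %USE column, grouped per host

    # Free space by removing unneeded guests on those nodes, then re-check usage
    qm destroy <vmid>       # <vmid> is a placeholder for a stopped, unneeded VM
    ceph df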
-
Proxmox VE Hyperconverged Cluster Configuration Update Without Service Interruption
A five-node Proxmox VE hyperconverged cluster has been set up with two Ceph pools. One is a high-speed NVMe storage pool, and the other is a large-capacity SATA storage pool. Now, the existing SATA disks need to be removed and…
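For the removal side, a rough sketch of draining one SATA OSD at a time so client I/O is never interrupted (the OSD ID 12 is an assumed example):

    # Mark the OSD out and let Ceph migrate its placement groups elsewhere
    ceph osd out 12
    ceph -s                          # wait until rebalancing completes and health is OK

    # Only then stop and destroy the OSD before physically pulling the disk
    systemctl stop ceph-osd@12
    pveceph osd destroy 12 --cleanup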
-
Destroying a Ceph Pool in a Proxmox VE Hyper-converged Cluster
Destroying a Ceph pool involves two major steps: destroying the pool itself and destroying the associated Ceph OSDs. If the OSD destruction step is skipped, the system will keep throwing errors when the cluster’s servers are rebooted without the hard drives.
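A minimal sketch of those two steps with the Proxmox CLI (the pool name and OSD ID are placeholders; the matching entry in /etc/pve/storage.cfg may also need to be removed):

    # Step 1: destroy the pool
    pveceph pool destroy sata-pool

    # Step 2: destroy each OSD that is being retired along with its disk
    ceph osd out 7
    systemctl stop ceph-osd@7
    pveceph osd destroy 7 --cleanup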