With the release of PVE 3.0, the Proxmox VE web interface no longer requires Apache.
Instead of a standard web server, Proxmox now uses a new event-driven API server called 'pveproxy', which listens on TCP port 8006 and serves content over HTTPS using a self-signed certificate.
Proxying pveproxy behind NGINX prevents direct access to the event-driven API server and lets the administrator (optionally) add a second layer of HTTP authentication, expose the admin panel on the standard HTTPS port, and use his own SSL certificates.
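A minimal NGINX reverse-proxy sketch of that setup. The server name and certificate paths are placeholders; only port 8006 comes from the text above. Note that pveproxy itself speaks HTTPS, and the noVNC console needs WebSocket upgrades forwarded:

```nginx
server {
    listen 443 ssl;
    server_name pve.example.com;                 # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/pve.crt;  # your own certificate
    ssl_certificate_key /etc/nginx/ssl/pve.key;

    # Optional second authentication layer in front of the Proxmox login
    auth_basic           "Proxmox VE";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        # pveproxy listens on 8006 and only speaks HTTPS (self-signed cert)
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        # Required for the noVNC console (WebSocket upgrade)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```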
The Proxmox VE installer does not support installing from scratch onto a software RAID device (MD RAID).
Here is a simple guide covering this kind of setup through some post-install work.
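The post-install work typically means converting an existing single-disk install into a mirror. A hedged sketch, assuming the installed disk is /dev/sda and an identical second disk /dev/sdb (device names are assumptions, not from the text; adapt to your hardware and back up first):

```shell
# Copy the partition table from the installed disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Create a degraded RAID1 array using only the new disk for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# After copying the data onto /dev/md0 and updating the bootloader,
# add the original partition to complete the mirror
mdadm --add /dev/md0 /dev/sda1

# Watch the rebuild
cat /proc/mdstat
```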
Installing VMware Tools can be tricky: much of the information found online (including on VMware's website) is incomplete or not specific to the Ubuntu distribution.
This simple procedure describes how to install VMware Tools v4 on Ubuntu 12.04 LTS Server.
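The classic tarball install can be sketched as follows; the exact tarball filename varies by VMware release, and the mount point is an assumption:

```shell
# Choose "Install VMware Tools" in the vSphere/VMware client first,
# then mount the virtual CD inside the guest
sudo mount /dev/cdrom /mnt

# The installer needs a compiler and kernel headers to build its modules
sudo apt-get install build-essential linux-headers-$(uname -r)

# Unpack the installer tarball (filename differs per version)
tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp

# Run the Perl installer, accepting the defaults
cd /tmp/vmware-tools-distrib
sudo ./vmware-install.pl -d
```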
Ordinarily, the loss of quorum after one out of two nodes fails will prevent the remaining node from continuing (if both nodes have one vote).
Special configuration options can be set to allow the one remaining node to continue operating if the other fails.
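With the cman-based cluster stack used by Proxmox VE 3.x, those options live in /etc/pve/cluster.conf. A hedged fragment (the cluster name and config_version are placeholders):

```xml
<?xml version="1.0"?>
<cluster name="mycluster" config_version="2">
  <!-- two_node="1" together with expected_votes="1" lets the single
       surviving node keep quorum when its peer fails -->
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"
        two_node="1" expected_votes="1"/>
  <!-- clusternode entries for both members follow here -->
</cluster>
```

Note that without fencing, a two-node setup like this is exposed to split-brain if the cluster link fails while both nodes are still alive.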
Running Proxmox VE in a two-node cluster configuration with LVM on top of DRBD makes it possible to have filesystem data redundancy and to support online VM migration between host controllers.
DRBD refers to block devices designed as a building block to form high availability (HA) clusters.
This is done by mirroring a whole block device via an assigned network. DRBD can be understood as “Network-Based RAID1”.
For detailed information please visit Linbit.
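A hedged sketch of a DRBD resource definition for such a two-node setup; the resource name, hostnames, IP addresses, and backing devices are all placeholders:

```
resource r0 {
    protocol C;                   # synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;      # backing device (placeholder)
        address   10.0.0.1:7788;  # dedicated replication link (placeholder)
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

An LVM physical volume is then created on /dev/drbd0 and the resulting volume group is made available to Proxmox as shared storage on both nodes.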
After rebooting a controller, you may find that DRBD is not working properly: the rebooted node reports "Failure: (104) Can not open backing device." and its DRBD disk state (ds) is "Diskless".
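One common cause (an assumption here, not stated above) is that LVM scans and activates the physical volume on the raw backing device at boot, holding it open so DRBD cannot attach it. A hedged sketch of the usual remedy, assuming /dev/sdb1 is the backing device, /dev/drbd0 the replicated device, and r0 the resource name:

```shell
# In /etc/lvm/lvm.conf, make LVM scan only the DRBD device and
# ignore the raw backing device (device names are placeholders):
#   filter = [ "a|/dev/drbd0|", "r|/dev/sdb1|" ]

# Rebuild the initramfs so the filter applies at early boot
update-initramfs -u

# Re-attach the backing device on the Diskless node
drbdadm attach r0

# Verify the disk state returns to UpToDate
cat /proc/drbd
```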