I did a fresh install of 1.2.0RC10 and 1.2.0RC11 on 2 SSDs in RAID 1, but I can't save config.
My system is running in UEFI mode.
RC11 is getting old, please retry with the latest rolling: https://downloads.vyos.io/rolling/current/amd64/vyos-1.2.0-rolling%2B201901040337-amd64.iso
Hi Syncer,
I just tested this with 1.2.0 LTS and the problem persists.
My tests consisted of two installs on 2 SSD disks with no partition tables: the first in BIOS mode and the second in EFI mode.
In BIOS mode, the "install image" command did not generate errors, and the system was then able to boot normally from the RAID array.
In EFI mode, the "install image" command did not generate errors either, but the RAID 1 array was no longer detected after rebooting.
The system rebooted in LiveCD mode.
Here are the messages that were displayed at boot time in EFI mode during the first and subsequent reboots:
Loading, please wait...
mdadm: WARNING /dev/sda3 and /dev/sda appear to have very similar superblocks.
      If they are really different, please --zero the superblock on one
      If they are the same or overlap, please remove one from the DEVICE list in mdadm.conf.
mdadm: No arrays found in config file or automatically
mount: mounting /dev/sda1 on /live/persistence/ failed: No such device
mount: mounting /dev/sda on /live/persistence/ failed: No such device
rmdir: '/live/persistence/': Device or resource busy
mount: mounting /dev/sdb1 on /live/persistence/ failed: No such device
rmdir: '/live/persistence/': Device or resource busy
Welcome to Debian GNU/Linux 8 (jessie)!
...
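For anyone comparing notes: a quick way to see which device actually carries the md metadata that the warning complains about is mdadm --examine (the device names below are simply the ones from my setup):
mdadm --examine /dev/sda
mdadm --examine /dev/sda3
If both print a superblock with the same Array UUID, the metadata exists on the whole disk as well as on the partition, which would explain why the warning is triggered.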
Right after running "install image" in EFI mode, before the first reboot, I mounted the RAID array on /mnt, and all the installed files and config were there.
I booted sys-rescue-cd 6.0 in EFI mode from a USB stick.
I could mount the RAID array without any problem.
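For reference, this is roughly what I did from the rescue environment (just a sketch; the md device name may differ on your system, check /proc/mdstat; md0 is simply what it came up as for me):
cat /proc/mdstat
mdadm --assemble --scan
mount /dev/md0 /mnt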
So it is really a matter of adding whatever is missing so that the array gets detected correctly at boot.
I'm experiencing this exact issue as well. Brand new install from the rolling release (vyos-1.2.0-rolling+201905140337-amd64.iso).
System is a Supermicro SYS-5018D-FN4T. The RAID 1 is using 2x 150 GB Intel DC S3500 SSDs.
I've already tried setting acpi=off and rootdelay=15, with no effect.
It looks like the check that determines whether 'mdadm --zero-superblock' is needed does not work.
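Conceptually the check should do something like this (just a sketch, not the actual installer code; sda is only an example device):
# if an old superblock is still present on the raw disk, wipe it before creating the new array
if mdadm --examine /dev/sda >/dev/null 2>&1; then
    mdadm --zero-superblock /dev/sda
fi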
Also, I'll try to get the config in with update-initramfs so that the array does not come up as md127.
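On a plain Debian system that would be roughly the following (a sketch only; paths may differ inside the VyOS image):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
so that the array definition ends up in the initramfs and it assembles under its proper name instead of md127.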
Working on it.
@jmlccdmd What is strange is the superblock on /dev/sda; without UEFI we use sda1 for the RAID, and with UEFI we use sda3.
Could you clear the superblock from sda and sdb and maybe try again?
mdadm --stop /dev/sda
mdadm --stop /dev/sda1
mdadm --stop /dev/sda3
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb
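To double-check that the superblocks are really gone before the next attempt, something like
mdadm --examine /dev/sda
mdadm --examine /dev/sdb
should report that no md superblock was detected on either disk.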
@etfeet you also get:
mdadm: WARNING /dev/sda3 and /dev/sda appear to have very similar superblocks.
If they are really different, please --zero the superblock on one.
If they are the same or overlap, please remove one from the DEVICE list in mdadm.conf.
Maybe you can then also try the steps from my post above?
(latest rolling)
Just wanted to add that I'm seeing this issue on 1.2.6 LTS as well, running on a Dell R220 II with dual Samsung 840s. I was not seeing it on rolling, nor if I switch to MBR boot.
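In case it helps when comparing results: a quick way to confirm which mode the box actually booted in is to check for the EFI sysfs directory, e.g.
[ -d /sys/firmware/efi ] && echo UEFI || echo BIOS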