#StackBounty: #boot #dual-boot #uefi #20.04 #bootloader boot options disappear after installing windows in UEFI mode?

Bounty: 50

I had Ubuntu 19.10 on my system in Legacy boot mode. I installed Windows 10 in UEFI mode, and now the boot options are missing at startup.

I tried the following things to fix the boot:

  • Ran the Boot-Repair utility; it failed with the error: "Please enable a repository containing the [grub-efi-amd64-signed] packages"
  • Created a UEFI (EFI System) partition and tried installing GRUB on it manually; that also didn't work.
  • Logged into Windows and tried the EasyBCD utility; there, in the "Add New Entry" option, Linux is disabled.

None of these worked, and I cannot log into Ubuntu now. However, I have a bootable Ubuntu USB handy, and my home is mounted on a separate partition.

Is there a way to fix the boot loader without losing any data?
If reinstalling is the only option, will using the "Reinstall Ubuntu" option (instead of "Something else") fix the bootloader? Please point me to any useful link I can follow to reinstall.
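Before resorting to a reinstall, reinstalling GRUB from the live USB is usually enough. A hedged sketch of the standard chroot procedure (the partition names /dev/sdXn and /dev/sdXm are placeholders for the Ubuntu root partition and the EFI System Partition; an internet connection in the live session is assumed, and the apt step also addresses the grub-efi-amd64-signed error that Boot-Repair reported):

```shell
# From the Ubuntu live session: chroot into the installed system and
# reinstall GRUB for UEFI. Replace /dev/sdXn (Ubuntu root) and
# /dev/sdXm (EFI System Partition) with your real partitions.
sudo mount /dev/sdXn /mnt
sudo mount /dev/sdXm /mnt/boot/efi
for d in /dev /dev/pts /proc /sys /run; do sudo mount --bind "$d" "/mnt$d"; done
# If name resolution fails inside the chroot, copy resolv.conf in first:
# sudo cp /etc/resolv.conf /mnt/etc/resolv.conf
sudo chroot /mnt apt-get install --reinstall grub-efi-amd64-signed shim-signed
sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi
sudo chroot /mnt update-grub
```

After a reboot, the firmware boot menu should list "ubuntu" again, and the os-prober run by update-grub should re-add the Windows entry.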


Get this bounty!!!

#StackBounty: #boot #bios #hdmi #uefi #displayport Can't get into the BIOS/UEFI or see boot output on DisplayPort, but can on HDMI …

Bounty: 100

Per the title, this is the extent of my problem. When booting this system, there is no output on the screen (or even a signal – the monitor goes to sleep) until Windows comes up and shows me the login screen. Aside from this annoying BIOS/UEFI issue, the computer works normally.

If I connect a monitor via HDMI instead of DisplayPort, then I’m able to see the BIOS/UEFI as expected, boot messages, and so forth. However, due to my workstation setup (HDMI on my monitor is connected to another system), I want to stay on DisplayPort if at all possible.

Further, apparently randomly, a 1-long, 3-short beep code is generated on cold boots, which indicates an inability to detect the GPU. (This beep does not happen on every cold boot; it occurs roughly 60% of the time.)

However, even on boots where this beep code happens, eventually Windows starts, displays the login screen, and the computer works otherwise normally.

This problem started when I upgraded from a GTX 970 to a GTX 1070. The 970 did not have this problem; the 1070 and 2080 do.

System Specs

  • ASUS Maximus IX Hero (BIOS version 1301, 3/14/2018)
  • I7-7700K @ 4.2GHz
  • 32 GB RAM
  • GeForce RTX 2080 SUPER (previously a GeForce 1070, and before that a 970)
  • Windows 10 Pro

What I have tried

  • Setting the Compatibility Support Module to Auto/Enabled/Disabled (source, fixed the problem for someone else)
  • Setting PEG as the primary display device in the UEFI/BIOS.
  • Installing Windows 10 in UEFI mode (by installing with the CSM set to enabled or auto)
  • Disconnecting all devices from the computer aside from the keyboard and display
  • An entirely new GPU (Twice!)
  • Another DisplayPort monitor
  • Another DisplayPort cable
  • Modifying the deep sleep, OD, and refresh rate overclock on the monitor.
  • Every combination of the above three items
  • Upgrading the GPU firmware on the 1070 (source, references “blank screens on boot until the OS loads”)

How do I get the BIOS/UEFI output to show up on my one and only monitor?


Get this bounty!!!

#StackBounty: #boot #dual-boot #grub2 #kernel #uefi Kernel panic: not syncing try passing init= option. Also for liveUSB

Bounty: 50

TLDR: A BIOS flash resulted in kernel panics about a missing init, including for the live USB, i.e. no way to address this with Boot-Repair. How do I find init / rebuild the MBR?

I'm having a nightmare today. I attempted to flash my BIOS using a file from the motherboard manufacturer's website, which 'worked' but then boot-looped. I then tried to flash back to my previous BIOS version, but that's not available on the site, so I had to go back a few versions and then back up. The BIOS manager seemed to save one of the later versions, preventing updating; it's a mess. Anyhow, I was at v42a, then 50a, then 31, 42d, and now finally 41 stable.

If I try to boot into my normal install (xubuntu 5.6.5-050605-generic) it hangs at:

killed
run-init: can’t execute ‘/bin/init’: no such file or directory
/bin/sh: 0: can’t access tty; job control turned off

for a while then

end kernel panic – not syncing: Attempted to kill the idle task! kernel
offset: # from # (relocation range: #)

This is AFTER I added “init=/init” to the kernel options in grub, during the previous boot attempt, but I didn’t add anything this time.

Trying the same kernel with recovery mode (again, no init specified) hangs at a "new serial device found" message for my 5-year-old mouse. This is the first time I've seen this issue.

Trying kernel 5.3.0.46 nonrecovery: black screen, nothing happens. Rebooted. No response. Power cycled.

5.6.5-050605 recovery: sits at “Loading initial ramdisk…” indefinitely.

5.6.5-050605 recovery, “init=/init” at end of linux line: gets past ramdisk line,

end Kernel panic – not syncing: Requested init /init failed (error
-2).

So I guess my init line is wrong.

5.6.5-050605 recovery, “init=/usr/lib/systemd/systemd” at end of linux line:

kernel panic: attempted to kill idle task error

Various threads recommend disabling secure boot but I don’t see that it’s enabled. Many more suggest options which require booting into the system (!) or into a liveCD. I made a liveUSB of xubuntu 20.04, try Xubuntu without installing (safe graphics):

end kernel panic – not syncing: No working init found. Try passing init= option to kernel

But this is a liveUSB?? Prior lines of interest were

Initramfs unpacking failed: no cpio magic

This suggests that the problems are from windows trying to fix itself and messing up grub/mbr but I’m not sure. I don’t know why that would affect a liveUSB…

This suggests it might be a problem with my liveUSB build and that I should try Rufus instead of Tuxboot. Just tried that:

Some bad messages (went by too fast), then it gets to the xubuntu splash screen with the rotating circle, which stops at maybe 15%. This happened before with my previous liveUSB attempt (17.04). Then upon retry it hangs at the same messages as before: no cpio magic, no working init. Other options include inspecting the UEFI firmware, which hangs and reboots. Before I had the chance to select a different option on the third try, it had selected the default option (try xubuntu).

Initramfs unpacking failed: decoding failed. sda no caching mode page
found BUG: bad page state in process plymouthd general protection
fault 0000 #1 SMP NOPTI Comm plyouthd not tainted 5.4.0-26-generic
30-ubuntu (& lots more)

I don’t have the 5.4.0-26 kernel installed on the machine, though maybe that's the liveUSB kernel.

Other grub2 options bkpbootx64.efi:

Failed to open EFIBOOTgrub64.efi – Not found
Failed to load image EFIBOOTgrub64.efi Not found
Start_image returned Not Found

fwupx64:

System BootOrder not found. initializing defaults

fwupx64.efi: reverts to grub2 menu

Trying linux 4.14 generic recovery mode:

fixing recursive fault but reboot is needed

Possibly from “unable to handle kernel null pointer deference at (null)”

Rebooted, same kernel recovery mode: panic not syncing fatal exception in interrupt, kernel offset.

Retried liveUSB recovery mode:

Comm swapper/8 tainted 5.4.0-26-generic

then pauses for ages. Not sure if it’s doing anything. Eventually rebooted.
While waiting I looked into the Initramfs unpacking decoding issue, some recommend adding nomodeset to the kernel entry but it’s enabled by default in the liveUSB’s safe graphics entry.

Trying that again:

Bug: unable to handle page fault for address: #
PF supervisor write access in kernel mode
pf: error_code(0x)003) – permission violation
not tainted kernel this time
kernel panic not syncing attempted to kill the idle task, kernel offset

Found out that CSM was the secure boot setting. Disabled that and tried booting into normal Ubuntu (non-liveUSB). Pauses for ages at [end trace], then continues; made some progress:

gave up waiting for root file system device. Common problems:

ALERT! uuid=# does not exist. Dropping to a shell!
(initramfs)

Possibly I have a bad superblock.

I didn't touch anything, and when I looked back it was at the idle-task kernel panic error again. Rebooted; missing-init kernel panic error. Tried the default "try xubuntu without installing" liveUSB option; it black-screened before I could hit enter (BIOS still buggy?). Rebooted. Tried the same, faster. "No working init" error. So maybe the CSM/secure boot change has fixed the initramfs error but reduced it to only the missing-init issue, which still affects the liveUSB. Trying liveUSB safe graphics just to be sure: black-screened. Power-cycled, tried again, "no working init" bug.

AHCI already enabled for SATA (i.e. not RAID). Disabled csm AGAIN in bios (not sure why it reverted) and tried normal boot again. Gets further (coloured text),

FAILED to mount NFSD config filesystem
Dependency failed for NFS mount daemon, NFS server & services, NFSv4 ID_name mapping services.

Waits for ages after "reached target local encrypted volumes". Ages, like forever. Rebooted. So maybe this is my final bug? Edit: liveUSB (incl. safe graphics) still fails with the init-location error; normal Ubuntu in recovery mode waits ages after "attached scsi disk", then "gave up waiting for suspend/resume device", then the kill-init error.

If anyone has any ideas I would be very grateful indeed. LiveUSB solutions & Installed solutions fail because of the init location bug.

I can’t enter commands anywhere other than kernel options and (once) an initramfs busybox.

Thanks so much in advance.

This also killed my dualboot win10 install but I figure if I can get xubuntu installed, I can worry about windows then.

I also tried Boot Repair liveUSB (tested working on another machine):

Unable to handle null pointer dereference at #

Oops: 0002 [#1] SMP

Tainted 4.13.0-16-generic ubuntu (& lots more output)

Kernel panic, not syncing, fatal exception in interrupt, shutting down cpus with MMI

Trying BootRepair with ide=nodma:

screenshot1

Trying BootRepair again with default settings, says BootRepairDisk for a second in blue then:

screenshot2

(A big part of why this bug report/help request is such a nightmarish mess is that the error message is regularly different the second time I do the same thing.)

Same again, blackscreen, no output, twice. Power cycled. Again, back to default error:

Screenshot3

Retrying default ubuntu, which has these settings:

Screenshot4

Results in the “attempted to kill init” flavour of init fail:

Screenshot5

Retrying, hung in BIOS. It has black-screened a few times, rebooting and not getting to the BIOS. Given these issues were almost certainly caused by me flashing the BIOS and scrambling the MBR/init locations, I'm loath to suspect hardware issues, but could this be a motherboard problem? I've had rare but intermittent hangs since I built the machine and have RMA'd a few memory sticks, but maybe it's other hardware?

Default ubuntu, safe mode (“kill idle task” fail):

screenshot6

Ubuntu 4.14.174 recovery:

screenshot7

Completely out of ideas. Bounty added. Please help, anyone.

Edit 2020-04-28

I bought a SATA connector, plugged the SSD (which boots both Linux & Win10 on two partitions) into my working laptop, backed up my files, and can now investigate the drive. But I'm not really sure what I'm looking for in terms of signs that things are wrong. grub.cfg's root UUID matches the drive name listed in Thunar. /sbin/init is a symlink to lib/systemd/systemd, which doesn't open with Mousepad for investigation. No reason to assume this is 'wrong'? With GParted I can 'check' the boot partition: check and repair filesystem (FAT32). Not sure whether this is a good idea? Did it; it performed fine. Ran TestDisk:

testdisk

Looks fine? Installed Boot Repair, but it doesn’t look like there’s an option to analyse the secondary drive’s boot sector, and I don’t want to risk messing with my laptop’s boot sector. Pastebin for boot-repair check. Perhaps the most important looking bit:

=> No boot loader is installed in the MBR of /dev/sdb.

Note: sda = working laptop without separate boot partition. sdb = external SSD from the failed machine with separate boot partition (sdb1). sdb2=xubuntu, sdb3=win10. But:

Boot sector info: No errors found in the Boot Parameter Block.

However, the UUID that I believe the boot process is looking for is aeb0822c-0854-4d06-aa9d-33986c319666, which is sdb2, the xubuntu partition, and NOT 34CA-81B4, the sdb1 boot partition. But I could be, and probably am, 100% wrong about this. Then again, sdb1's grub says to look for the long UUID on sdb2, so that's probably fine.
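The UUID cross-check described above can be scripted from the laptop without touching its own boot sector; a sketch using the device names from the note (sdb1 = boot/EFI partition, sdb2 = xubuntu root):

```shell
# UUID of the xubuntu root partition (sdb2) ...
ROOT_UUID=$(sudo blkid -s UUID -o value /dev/sdb2)
echo "root UUID: $ROOT_UUID"
# ... and check that the grub.cfg on the boot partition (sdb1)
# actually refers to it.
sudo mount /dev/sdb1 /mnt
grep -Rq "$ROOT_UUID" /mnt && echo "UUID referenced on the boot partition" \
    || echo "UUID NOT referenced - grub.cfg points elsewhere"
sudo umount /mnt
```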

Windows not detected by os-prober on sdb3

Sounds bad. But a later problem. However:

OS#2: Ubuntu 19.10 on sdb2

OS#3: Windows on sdb3

In partitions, sdb1 says “notbiosboot”

Suggested repair: The default repair of the Boot-Repair utility would
purge (in order to fix packages) and reinstall the
grub-efi-amd64-signed of sda1, using the following options:
sdb1/boot/efi, Additional repair would be performed:
unhide-bootmenu-10s fix-windows-boot use-standard-efi-file
restore-efi-backups

I REALLY don’t want to mess with sda1 AT ALL.

Final advice in case of suggested repair: Please do not forget to make
your BIOS boot on sdb1/efi/…/grub*.efi file!

Sounds promising?

Edit: later that night

While putting the SSD back in, I noticed that the CPU/DRAM lights were blinking on some boot attempts. After some googling, this, plus the constant changes in boot problems above, sluggish booting, regular hangs at the BIOS, black screens etc., got me thinking that maybe it's a hardware fault concealed by my BIOS flashing. Indeed, I'd had intermittent hangs since I built the rig. Removed two memory sticks and it booted, very sluggishly, stopping for long periods at random points in the boot log. Now typing from that machine. Will attempt to debug the hardware; I suspect the motherboard.


Get this bounty!!!

#StackBounty: #dual-boot #partitioning #grub2 #uefi Remove grub2 core.img (having 2) and resize /boot/efi

Bounty: 50

Some time ago I installed Ubuntu alongside Win10. I am a total newbie, so please excuse my maybe stupid questions.

I think I might have made some mistakes while installing Ubuntu. In fact, I had to install it twice, and I might have done bad things to my partitions.

This is my current table:

(screenshot of the partition table)

My problems:

1. Why do I have two grub2 core.img partitions? Can I remove one (the latter)?

I have 5.62 GiB unallocated behind the second grub2 core.img, and I would like to merge it, together with the space freed from the img, into 0n1p6, which will be my separate /home.

2. /boot/efi is too small, so I can’t update my firmware

fwupdmgr get-devices

XPS 13 9380 System Firmware
  DeviceId:             6c24a747f97668873b761558e322398a91dbf394
  Guid:                 ce945437-7358-49f1-95d8-6b694a10a755
  Plugin:               uefi
  Flags:                internal|require-ac|supported|registered|needs-reboot
  Version:              0.1.10.0
  VersionLowest:        0.1.10.0
  VersionFormat:        quad
  Icon:                 computer
  Created:              2020-04-26
  UpdateState:          success
  UpdateError:          /boot/efi does not have sufficient space, required 33,6 MB, got 25,7 MB

What would be a good way to give /boot/efi more space?
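Before repartitioning, it may be worth checking what is actually consuming the ESP; a sketch (nvme0n1 is a placeholder for your disk):

```shell
# Current usage of the EFI System Partition.
df -h /boot/efi
# What is taking the space? Old firmware capsules under EFI/ can be large.
sudo du -sh /boot/efi/EFI/*
# Partition layout around the ESP (replace nvme0n1 with your disk).
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/nvme0n1
```

If cleanup is not enough, the usual route is to boot a live USB, shrink the partition adjacent to the ESP with GParted, grow or recreate the ESP (FAT32, flagged esp/boot), and copy the old contents back. Back up the ESP's contents first.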


Get this bounty!!!

#StackBounty: #memory #cpu #motherboard #uefi #ddr UEFI configuration for DDR4 memory on ASRock X99 Extreme3

Bounty: 200

I once over-estimated my knowledge of hardware (especially of combining motherboard and memory) and bought 4x 16GB GeIL EVO Potenza DDR4-3000 DIMM CL16 for an ASRock X99 Extreme3. I tried to configure it myself, but I never got the fourth memory module to be recognized, and the Ubuntu system thus had only 12 of 16 GB of memory available.

Then I brought the PC to a professional for some repairs and asked that the configuration be checked. He indeed succeeded in configuring the UEFI to make all 4 modules work, at a maximum frequency of approximately half of the rated 3000 MHz (I think 1600).

Now, I left the PC turned off for some months and the UEFI settings got erased; I forgot that that happens… I'd like to find these settings again, to save myself the trip to the store and maybe learn something about memory configuration.

The product details from the seller's homepage (translated):

Model name:     EVO Potenza
Capacity:   16GB
# of modules:   4x
Capacity of each module:    4096MB
Type of memory:     DDR4-3000
JEDEC Norm:     PC4-24000U
Memory type:    unbuffered
Norm:   DIMM
Memory interface:   DDR4
Max. Frequency:     3000MHz
Voltage:    1.35V
Connection:     288-pin
Latency (CL):   CL16
RAS to CAS Delay (tRCD):    16
Ras Precharge Time (tRP):   16
Row Active Time (tRAS):     36
Features:   XMP 2.0 Support

I have the following options available in the UEFI (version 3.70):

BCLK Frequency
DRAM Voltage
DRAM Reference Clock
DRAM Frequency (Auto | DDR4-800 | DDR4-1066 | DDR4-1333 | DDR4-1600)

Primary Timing Options:
CAS# Latency (tCL)
RAS# to CAS# Delay (TRCD)
RAS# Precharge Time (tRP)
RAS# Active Time (tRAS)
Command Rate (CR)

Secondary Timing Options:
Write Recovery Time (tWR)
Refresh Cycle Time (tRFC)
RAS to RAS Delay (tRRD)
RAS to RAS Delay (tRRD_L)
Write to Read Delay (tWTR)
Write to Read Delay (tWTR_L)
Read to Precharge (tRTP)
Four Activate Window (tFAW)
CAS Write Latency (tCWL)

Third Timing Options and Advanced Settings
[I can list them if that helps]

I already tried setting all the options available from the product details, which still leaves only 3 of 4 modules recognized.

The CPU is an Intel(R) Xeon(R) E5-2603 v3 @ 1.60GHz. The OS is now Ubuntu 18.04 (in case that matters; the memory is already not recognized in the UEFI).
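Before hunting through the timing menus, it can help to confirm from Linux exactly which slot the firmware is dropping. A sketch using standard tools (the exact field names vary by board and dmidecode version):

```shell
# Show every DIMM slot the firmware reports, populated or empty.
sudo dmidecode -t memory | grep -E 'Locator:|Size:|Speed'
# How much memory the kernel actually sees.
free -g
```

Since the modules advertise XMP 2.0 support, loading an XMP profile in the UEFI (if this board exposes one) would normally be the intended way to apply the rated 16-16-16-36 @ 1.35 V settings; given that your DRAM Frequency option tops out at DDR4-1600, the manual fallback would be DDR4-1600 with those primary timings.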

"16GB of RAM installed, ~12GB usable" was marked as a duplicate of a question regarding Windows, which has never been involved in my system, or of an erroneous connection of the memory or CPU socket, which I believe is unlikely, since the problem is identical to the situation before the first fix, which now needs to be found again (unfortunately, and through my own fault).


Get this bounty!!!

#StackBounty: #ubuntu #uefi #boot-loader #bios Flashed Bios and can only boot ubuntu 18.04 recovery mode

Bounty: 50

Back Story

I wanted to install minikube, and that wanted VirtualBox, and that gave errors complaining about AMD-V (hardware virtualization) being unavailable, even after I found it in the BIOS and turned it on. Wisdom on the web said that my BIOS version (F1) had a bug in AMD-V, and that any version after F3 was also broken with respect to virtualization and didn't work well with 1950X processors (being more focused on the 29xx line). So… I flashed to F3j.

Problem

Now when I boot, GRUB comes up just fine and gives me the expected list of options (Ubuntu, Advanced options for Ubuntu, memtest, Windows 10). Windows works, and if I go into the Ubuntu advanced options and select a recovery image, it boots fine. However, if I allow the timeout or select the default Ubuntu option, it boots to a purple screen with no progress (sometimes), or to very fast scrolling text that stalls out on a message about "clocksource: Switched to clocksource tsc". During this dead boot state neither the mouse nor the keyboard (Razer LED-lit models) lights up, and Ctrl-Alt-Del has no effect. The only recourse is to hold the power button for several seconds to force a reboot.

What I’ve tried

My searching here and elsewhere on the web suggests that this might be because EFI variables were being used and flashing removed them. These reports, however, all mention other motherboards, so I am not confident this is the problem. Still, in recovery mode I find that efibootmgr won't run, complaining that "EFI variables are not supported on this system". So I've been trying to get that enabled… I loaded up the BIOS and tried fiddling with the following options:

  1. I changed Storage Boot Option Control from the Legacy setting to UEFI Only
  2. I switched Boot Option 1 and Boot Option 2, and there appears to be no difference as to whether Samsung 960 or UEFI 5.0 is first
  3. I changed the BBS Priorities to put UEFI first.
  4. Reading of issues with IOMMU settings I also tried moving this from auto to enabled

None of these has had any noticeable effect.
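For what it's worth, the "EFI variables are not supported on this system" message usually means the kernel itself was started in legacy/CSM mode (so flipping BIOS options afterwards won't expose efivars to an already-booted recovery session), or simply that efivarfs isn't mounted. A quick check, as a sketch:

```shell
# Was the running kernel started via UEFI at all?
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
    # efibootmgr needs efivarfs mounted; mount it if missing.
    mountpoint -q /sys/firmware/efi/efivars || \
        sudo mount -t efivarfs efivarfs /sys/firmware/efi/efivars
    sudo efibootmgr -v
else
    echo "booted in legacy/CSM mode - efibootmgr cannot work from here"
fi
```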

System Details

  • Ubuntu 18.04 LTS dual booting via GRUB with Windows 10 successfully for the last 2 years.
  • Rev 1.0 Gigabyte Aorus Gaming 7 X399 motherboard
  • Ryzen ThreadRipper 1950x cpu
  • Samsung 960 NVMe 500GB main drive
  • Samsung 970 nvme secondary drive
  • 48GB mem, 1080ti card, etc peripherals probably not relevant…

The Question(s)

  • Has anyone upgraded the BIOS on this hardware set (or at least on an Aorus X399) successfully? If so, did you have this problem, and if so, how did you get around it?
  • Do any Linux boot gurus out there have ideas on how to get things back to normal?

I took a backup of the bios (using the quick boot tool in the bios) so restoring that might get me back to normal booting, but then I am still stuck on the Minikube install.

Update: after trying a BBS order with the Samsung 960 first (the normal boot drive), I am now getting this when attempting a normal boot from GRUB:
(screenshot)
The windows boot and boot from recovery/resume continue to work

Also note: the UEFI: Verbatim 5.0 boot option is the memory stick I used to load the installer (I didn't realize that was the name attached to my memory stick; I thought it was some built-in thing).


Get this bounty!!!

#StackBounty: #centos #uefi CentOS 8 not booting going to dracut resume mode

Bounty: 50

I am running CentOS 8 as a VMware guest OS. It had been running for some time; today I rebooted the machine, but it goes into dracut rescue mode and is not able to activate the OS LVM volumes.

Any help fixing this error?

Update 1:

It drops to the dracut shell; I have to run these two commands and exit the shell:

lvm vgscan
lvm vgchange -ay

After this, the server boots. Not sure why LVM is not getting activated during boot.

(screenshot)
Thanks

update 1/9/2020

Booting from the old kernel gives the same issue; here are the rdsosreport lines.

# more /run/initramfs/rdsosreport.txt
+ cat /lib/dracut/dracut-049-10.git20190115.el8
dracut-049-10.git20190115.el8
+ cat /proc/cmdline
+ sed -e 's/(ftp://.*):.*@/1:*******@/g;s/(cifs://.*):.*@/1:*******@/g;s/cifspass=[^ ]*/cifspass=*******/g;s/iscsi:.*@/iscsi:******@/g;s/rd.iscsi.password=[^ ]*/rd.iscsi.password=*
*****/g;s/rd.iscsi.in.password=[^ ]*/rd.iscsi.in.password=******/g'
BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-80.7.1.el8_0.x86_64 root=/dev/mapper/cl-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap net.ifnames=0 biosdevname=0 ipv6.disable=0 a
udit=1 loglevel=7 systemd.log_level=debug

During boot it drops to this prompt; after activating LVM, the OS comes up.

Here is the diskinfo:

+ blkid
/dev/sda1: UUID="CA2D-9F26" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="d999eb85-785f-467a-9d99-62d285374df7"
/dev/sda2: UUID="d6d4c6f0-a259-4a2d-9bb4-da691f85f40c" TYPE="ext4" PARTUUID="b6015c77-0900-4228-927f-47cf670d7ab5"
/dev/sda3: UUID="9XRB6B-pJI3-8K9l-1vOi-rQ8r-4ghe-pQw7q7" TYPE="LVM2_member" PARTUUID="8a37820e-352e-47e5-b4a7-890d43078415"
+ blkid -o udev
ID_FS_UUID=CA2D-9F26
ID_FS_UUID_ENC=CA2D-9F26
ID_FS_TYPE=vfat
ID_FS_PARTLABEL=EFI System Partition
ID_FS_PARTUUID=d999eb85-785f-467a-9d99-62d285374df7

ID_FS_UUID=d6d4c6f0-a259-4a2d-9bb4-da691f85f40c
ID_FS_UUID_ENC=d6d4c6f0-a259-4a2d-9bb4-da691f85f40c
ID_FS_TYPE=ext4
ID_FS_PARTUUID=b6015c77-0900-4228-927f-47cf670d7ab5

ID_FS_UUID=9XRB6B-pJI3-8K9l-1vOi-rQ8r-4ghe-pQw7q7
ID_FS_UUID_ENC=9XRB6B-pJI3-8K9l-1vOi-rQ8r-4ghe-pQw7q7
ID_FS_TYPE=LVM2_member
ID_FS_PARTUUID=8a37820e-352e-47e5-b4a7-890d43078415
+ ls -l /dev/disk/by-id /dev/disk/by-partlabel /dev/disk/by-partuuid /dev/disk/by-path /dev/disk/by-uuid
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root  9 Jan  9 21:45 ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root 10 Jan  9 21:45 lvm-pv-uuid-9XRB6B-pJI3-8K9l-1vOi-rQ8r-4ghe-pQw7q7 -> ../../sda3

/dev/disk/by-partlabel:
total 0
lrwxrwxrwx 1 root root 10 Jan  9 21:45 EFIx20Systemx20Partition -> ../../sda1

/dev/disk/by-partuuid:
total 0
lrwxrwxrwx 1 root root 10 Jan  9 21:45 8a37820e-352e-47e5-b4a7-890d43078415 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jan  9 21:45 b6015c77-0900-4228-927f-47cf670d7ab5 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan  9 21:45 d999eb85-785f-467a-9d99-62d285374df7 -> ../../sda1

/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root  9 Jan  9 21:45 pci-0000:02:01.0-ata-1 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jan  9 21:45 pci-0000:03:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan  9 21:45 pci-0000:03:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jan  9 21:45 pci-0000:03:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan  9 21:45 pci-0000:03:00.0-scsi-0:0:0:0-part3 -> ../../sda3

Here are the udev messages:

systemd-udevd[409]: passed 821 byte device to netlink monitor 0x559ccca0e790
systemd-udevd[381]: passed 240 byte device to netlink monitor 0x559ccc9e0e20
systemd-udevd[423]: Successfully forked off '(spawn)' as PID 579.
systemd-udevd[423]: Process '/sbin/modprobe -bv sg' failed with exit code 1.
systemd-udevd[423]: passed 240 byte device to netlink monitor 0x559ccca4bbf0
systemd-udevd[381]: passed 269 byte device to netlink monitor 0x559ccc9e0e20
systemd[1]: dev-disk-byx2dpath-pcix2d0000:02:01.0x2datax2d1.device: Changed dead -> plugged
systemd[1]: dev-disk-byx2did-atax2dVMware_Virtual_SATA_CDRW_Drive_00000000000000000001.device: Changed dead -> plugged
systemd[1]: dev-sr0.device: Changed dead -> plugged
systemd[1]: sys-devices-pci0000:00-0000:00:11.0-0000:02:01.0-ata3-host3-target3:0:0-3:0:0:0-block-sr0.device: Changed dead -> plugged
systemd-udevd[423]: Successfully forked off '(spawn)' as PID 580.
systemd-udevd[423]: created db file '/run/udev/data/b11:0' for '/devices/pci0000:00/0000:00:11.0/0000:02:01.0/ata3/host3/target3:0:0/3:0:0:0/bloc

systemd[1]: dev-disk-byx2dpath-pcix2d0000:02:01.0x2datax2d1.device: Installed new job dev-disk-byx2dpath-pcix2d0000:02:01.0x2datax2d1.de$

systemd[1]: dev-disk-byx2did-atax2dVMware_Virtual_SATA_CDRW_Drive_00000000000000000001.device: Installed new job dev-disk-byx2did-atax2dVMwa$
1.device/nop as 54
systemd[1]: dev-sr0.device: Installed new job dev-sr0.device/nop as 55
systemd[1]: sys-devices-pci0000:00-0000:00:11.0-0000:02:01.0-ata3-host3-target3:0:0-3:0:0:0-block-sr0.device: Installed new job sys-devices-pci0$
rget3:0:0-3:0:0:0-block-sr0.device/nop as 56
systemd[1]: sys-devices-pci0000:00-0000:00:11.0-0000:02:01.0-ata3-host3-target3:0:0-3:0:0:0-block-sr0.device: Job sys-devices-pci0000:00-0000:00$
0:0-block-sr0.device/nop finished, result=done
systemd[1]: dev-sr0.device: Job dev-sr0.device/nop finished, result=done
systemd[1]: dev-disk-byx2did-atax2dVMware_Virtual_SATA_CDRW_Drive_00000000000000000001.device: Job dev-disk-byx2did-atax2dVMware_Virtual_SAT$
inished, result=done
systemd[1]: dev-disk-byx2dpath-pcix2d0000:02:01.0x2datax2d1.device: Job dev-disk-byx2dpath-pcix2d0000:02:01.0x2datax2d1.device/nop finis$

systemd-udevd[423]: passed 844 byte device to netlink monitor 0x559ccca4bbf0
dracut-initqueue[425]: Scanning devices sda3  for LVM logical volumes centos/root centos/swap
dracut-initqueue[425]: inactive '/dev/cl/swap' [7.91 GiB] inherit
dracut-initqueue[425]: inactive '/dev/cl/home' [<40.50 GiB] inherit
dracut-initqueue[425]: inactive '/dev/cl/root' [50.00 GiB] inherit
kernel: random: fast init done
dracut-initqueue[425]: Volume group "centos" not found
dracut-initqueue[425]: Cannot process volume group centos
dracut-initqueue[425]: Volume group "centos" not found
dracut-initqueue[425]: Cannot process volume group centos
systemd[1]: systemd-udevd.service: Got notification message from PID 381 (WATCHDOG=1)
systemd[1]: systemd-udevd.service: Got notification message from PID 381 (WATCHDOG=1)
systemd[1]: systemd-journald.service: Got notification message from PID 245 (WATCHDOG=1)
dracut-initqueue[425]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[425]: Scanning devices sda3  for LVM logical volumes centos/root centos/swap
dracut-initqueue[425]: inactive '/dev/cl/swap' [7.91 GiB] inherit
dracut-initqueue[425]: inactive '/dev/cl/home' [<40.50 GiB] inherit
dracut-initqueue[425]: inactive '/dev/cl/root' [50.00 GiB] inherit
dracut-initqueue[425]: Volume group "centos" not found
dracut-initqueue[425]: Cannot process volume group centos
dracut-initqueue[425]: Volume group "centos" not found
dracut-initqueue[425]: Cannot process volume group centos
dracut-initqueue[425]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[425]: Warning: dracut-initqueue timeout - starting timeout scripts
kernel: random: crng init done
kernel: random: 7 urandom warning(s) missed due to ratelimiting
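One detail worth noting in the rdsosreport above: the kernel command line asks dracut for rd.lvm.lv=centos/root and rd.lvm.lv=centos/swap, but the volumes it actually finds are /dev/cl/swap, /dev/cl/home, and /dev/cl/root, i.e. volume group cl, which matches the repeated "Volume group "centos" not found" errors. Assuming that mismatch is the cause, a sketch of a fix once the system is up (grubby is the standard tool on CentOS 8; argument names taken from the log):

```shell
# Replace the wrong VG name in the kernel arguments for every installed kernel.
sudo grubby --update-kernel=ALL \
    --remove-args="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap" \
    --args="rd.lvm.lv=cl/root rd.lvm.lv=cl/swap"
# Keep /etc/default/grub consistent for future kernel installs.
sudo sed -i 's/rd.lvm.lv=centos\//rd.lvm.lv=cl\//g' /etc/default/grub
```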


Get this bounty!!!

#StackBounty: #linux #windows-10 #installation #uefi #manjaro I can't install Linux (+ Brick, HELP!)

Bounty: 50

I’m trying to install Linux on this computer:

  • MB: ASRock Z75 Pro3
  • CPU: I3-3220
  • GPU: AMD Radeon HD 6800
  • RAM: 8GB
  • BIOS: UEFI American Megatrends ICN 2.00, 2013/10/9
  • SMBIOS version: 2.7.
  • SSD with two HDD in RAID as a slave
  • Windows 10

I’ve done the following:

  • I disabled SecureBoot
  • I disabled Fastboot
  • I have checked the ISO of Manjaro
  • I made the bootable USB again (Rufus and Etcher) with 3 different USB sticks.
  • I started with restart+shift (win) and selecting the UEFI USB Manjaro installation (and also the Legacy before)
  • I tried with Puppy Linux also and get similar results (when it is loading kernels in the installation process I get a black screen)
  • I tried with Puppy in DD mode (Rufus)
  • I checked that the disk is in a GPT mode
  • I tried with ArcoLinux, bricking the PC (more info below)

All I get is the Manjaro installation menu; I select language and time zone, and then when I select the Manjaro installation option, I get a black screen. In the best cases, I could see an ASRock logo for milliseconds on the screen (I think after a fast reboot).

Now, after trying to install ArcoLinux as @vxp suggested, I have bricked my computer. I get a black screen from the very beginning; I can't even see the BIOS messages.

Any idea?


Get this bounty!!!

#StackBounty: #linux #windows-10 #installation #uefi #manjaro I can't install Linux (UEFI, SecureBoot disabled)

Bounty: 50

I’m trying to install Linux on a computer with Windows 10 and a UEFI system (ASRock). Currently I have Win10 on an SSD, with two HDDs in RAID as a slave.

  • I disabled SecureBoot
  • I disabled Fastboot
  • I have checked the ISO of Manjaro
  • I made the bootable USB again (Rufus and Etcher) with 3 different USB sticks.
  • I started with restart+shift (win) and selecting the UEFI USB Manjaro installation (and also the Legacy before)
  • I tried with Puppy Linux also and get similar results (when it is loading kernels in the installation process I get a black screen)
  • I tried with Puppy in DD mode (Rufus)
  • I checked that the disk is in a GPT mode
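Two of the checks in the list above (ISO integrity and GPT layout) can be made explicit from any Linux shell; a sketch with placeholder file and device names:

```shell
# Verify the downloaded ISO against its published checksum
# (file names here are placeholders for the real ISO and .sha256 file).
sha256sum -c manjaro-example.iso.sha256
# Confirm the target disk really carries a GPT partition table.
lsblk -o NAME,PTTYPE,SIZE /dev/sda
```

If the installer menu appears but the screen goes black once the kernel starts loading, a common reversible test is to press E on the menu entry and append nomodeset to the kernel line.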

All I get is the Manjaro installation menu; I select language and time zone, and then when I select the Manjaro installation option, I get a black screen. In the best cases, I could see an ASRock logo for milliseconds on the screen.

Any idea?


Get this bounty!!!