It will cause a critical error during boot if a device fails to mount and hasn’t been given the nofail
mount option, which is not included in defaults.
For more details, look in the fstab(5)
man page, and for even more detail, the mount(8)
man page.
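As a rough sketch, an entry for a non-critical data drive could look something like this (the UUID and mount point are made up, and the timeout option is just one way to shorten the wait for a drive that isn’t there):

    # secondary data drive: nofail lets boot continue if it's missing,
    # x-systemd.device-timeout caps how long systemd waits for the device
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2

With plain defaults and no nofail, the same entry makes a missing drive a boot-stopping error.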
Found that out for myself when my external hard drive enclosure (with a formatted drive in it) wasn’t turned on and the PC booted into recovery mode, even though it wasn’t the primary drive. I had just copy-pasted the options from my root partition, thinking I could take a shortcut instead of reading the documentation.
There are probably other ways a borked fstab can cause a failure to boot, but that’s just the one I know of from experience.
It’s a ‘failsafe’: if part of the system depends on that drive mounting and the mount fails, don’t continue. Not the expected default, but it probably made sense at some point. Like a “if the brakes are broken, don’t allow the truck to start” type of failsafe.
Yeah, and the default is smart? How is it supposed to know whether that mount is critical or not at that point? The alternative is for it to silently fail and wait for something else to break instead of failing gracefully? I feel like I’m growing more and more petty and matching the language of systemd haters, but just think about it for a few minutes???