Help, my disk array has one dead member

Xen list at xenhideout.nl
Wed Apr 5 07:29:08 UTC 2017


Liam Proven schreef op 26-03-2017 16:35:
> On 26 March 2017 at 16:00, Xen <list at xenhideout.nl> wrote:
>> 
>> Windows cannot include RAID on the boot disk.
> 
> So?

My aim was to give Windows a speed boost without using an SSD for it.

Ideally, hardware RAID is something neither Windows nor Linux can see.

Firmware RAID is something Windows potentially can see but usually 
doesn't.

Firmware RAID is something Linux can see, and you can either ignore it 
or use the arrays as constructed (if it works, lol; I haven't really 
tried it yet).

I know that in the past my two RAID arrays from the AMD RAID were 
recognised and usable by openSUSE, but Kubuntu only saw one of them 
for some reason.

It's a bit ugly how the RAID arrays show up in Ubuntu/Linux (those 
cryptic device and partition names).
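
For what it's worth, this is roughly how I would expect to poke at a 
firmware (fake) RAID set from Linux; just a sketch, and the set names 
(isw_*, pdc_* and the like) depend entirely on the controller:

  # list firmware RAID sets defined by the BIOS (dmraid package)
  sudo dmraid -s

  # activate them as device-mapper devices under /dev/mapper/
  sudo dmraid -ay

  # newer Intel (IMSM) sets are handled by mdadm instead
  sudo mdadm --examine --scan
  sudo mdadm --assemble --scan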

>> I guess I am wrong about it, the Microsoft Dynamic Disks page is a
>> morass without much clear information. It was said that Windows 7
>> cannot boot from a dynamic disk with more than 1 volume.
> 
> Part of the reason is that the "Dynamic Disks" functionality in
> Windows 2000 Server and later isn't Microsoft code. It is a
> licensed-in version of the Veritas storage manager.

Okay. Still, you said to use Windows software RAID if using Windows, 
and not some firmware RAID. Intel RAID seems to be a lot better than 
AMD's, but I really can't say. You would suppose that a vendor like 
HighPoint would create something more usable, but I'm not really sure.


>> If I converted this disk to dynamic I would lose the Linux partition
>> on it ;-). I don't know what would happen to it. There is so much
>> unclarity. I guess Linux can read dynamic disks? There is GRUB code
>> that handles them.
> 
> I think so. I haven't tried and would not recommend it.

I didn't mean that I would use a Windows RAID solution and then do 
something with it in Linux at the same time.

I meant that I would just create 2 identical disks with identical 
partitions in that LDM scheme, not touch them in Windows, and then use 
those partitions in Linux as device-mapped block devices. As long as 
you don't repartition them or use them in Windows, you can use them as 
regular block devices. Windows won't know what's on them and won't 
care. It's just a different partition table style.
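
Something like this is what I have in mind, assuming the ldmtool 
utility (from libldm) is what exposes the LDM volumes; the device 
names are only illustrative:

  # scan all block devices for LDM (dynamic disk) metadata
  sudo ldmtool scan

  # map every discovered LDM volume to a device-mapper target
  sudo ldmtool create all

  # the volumes then appear as ordinary block devices
  ls /dev/mapper/ldm_vol_*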

This information seems to be current:



Windows 2000, XP, and Vista use a new partitioning scheme. It is a
complete replacement for the MSDOS style partitions. It stores its
information in a 1MiB journalled database at the end of the physical
disk. The size of partitions is limited only by disk space. The
maximum number of partitions is nearly 2000.

Any partitions created under the LDM are called "Dynamic Disks". There
are no longer any primary or extended partitions. Normal MSDOS style
partitions are now known as Basic Disks.

If you wish to use Spanned, Striped, Mirrored or RAID 5 Volumes, you
must use Dynamic Disks. The journalling allows Windows to make changes
to these partitions and filesystems without the need to reboot.

Once the LDM driver has divided up the disk, you can use the MD driver
to assemble any multi-partition volumes, e.g. Stripes, RAID5.

To prevent legacy applications from repartitioning the disk, the LDM
creates a dummy MSDOS partition containing one disk-sized partition.
This is what is supported with the Linux LDM driver.

A newer approach that has been implemented with Vista is to put LDM on
top of a GPT label disk. This is not supported by the Linux LDM driver
yet.


Since I do not use GPT disks in Windows, I'm sure I would be able to 
do this, but I would have to test. (So annoying, having to test 
everything all the time, and which system to use for it?)

Oh, I know: I could clone a disk in Linux, run the dynamic disk 
conversion on it in Windows, then boot Linux from it and see whether it 
can still see the partition.

But it's always more work ;-).
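
The test would look something like this; a rough sketch, with /dev/sdX 
and /dev/sdY standing in for the hypothetical source and target disks:

  # clone the whole disk, including the partition table
  sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync

  # after converting the clone to a dynamic disk in Windows,
  # check from Linux whether the partitions are still visible
  sudo partprobe /dev/sdY
  lsblk /dev/sdY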


>> There is probably no cross-platformness whatsoever regarding the RAID
>> part. I wonder if I can create a Windows Dynamic RAID and then turn
>> one of the subvolumes into LVM raid.
> 
> Doubt it. I *definitely* would not try it.

If the Linux MD driver can read them (the partitions themselves), then 
you can even use Windows RAID directly. But the LDM driver apparently 
supports the partitions directly as well. So you can either use Windows 
RAID or put some Linux RAID on top. The first will be usable by 
Windows, the second only by Linux.

Typically, though, I am scared of whatever crap Microsoft pulls these 
days. I don't like PowerShell, I didn't like .NET, I don't like many of 
the things they keep introducing; now it's Cortana and Edge and other 
crap I don't want. I used to be a Windows developer until .NET, but 
that aside.

So using Windows RAID, while apparently perfectly possible if there are 
no boot issues, just seems quite a bit scary.

> I think Linux can read it but possibly not make Linux-native
> partitions inside it. I don't know, I have not and would not try
> except for experimental research.

If you turn it into a PV you can always do with it whatever you like. 
If you can even assemble the arrays using MD, you can then also put a 
Linux filesystem on top of the Windows RAID ;-).
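
Roughly what I mean, as a sketch; the /dev/mapper/ldm_vol_... name is 
just an assumed example of what ldmtool might create:

  # use an LDM-exposed volume as an ordinary LVM physical volume
  sudo pvcreate /dev/mapper/ldm_vol_Volume1
  sudo vgcreate vg_ldm /dev/mapper/ldm_vol_Volume1
  sudo lvcreate -n data -l 100%FREE vg_ldm

  # and a Linux filesystem on top of it
  sudo mkfs.ext4 /dev/vg_ldm/data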

Experimental, perhaps, yes, but it seems this information ought to be 
out there by now; why it would still need to be considered 
"experimental" is beyond me.

>> At least this was true for Windows Server 2008. It is hard to find
>> updated information.
> 
> That's essentially the Server edition of Vista. Server 2008 R2 was
> Windows 7 Server.

Okay.

> Windows 8 and later replace the Veritas code with a new in-house LVM
> system called Storage Spaces.
> 
> https://blogs.msdn.microsoft.com/b8/2012/01/05/virtualizing-storage-for-scale-resiliency-and-efficiency/

Thanks for the heads up. (I have so little interest in these things...).

"In Windows 8, you cannot boot from a space. As an alternative, you can 
continue to use dynamic volumes for booting. At release, we will offer 
guidance on how you can add appropriately partitioned system/boot disks 
(with dynamic volumes) to a pool."

That's awesome.

In Microsoft nomenclature the "system" disk is the actual boot 
disk/partition and the "boot" disk is the actual system disk/partition.

Hahahahahha. Idiots. Haha.

>> See I just think the entire Windows solution is a big mess.
> 
> It is somewhat coherent inside its own space.

I guess. But that's the problem with many things: there is no 
interoperability at all, which ties you to the platform to a larger 
extent than ever before.

Also, they create complex solutions that are just not fundamentally 
sound, if you know what I mean. They are mixtures of layers and 
techniques, and maybe on the surface it works well, but you cannot take 
it apart and understand it easily.

> Do not mix Windows LVM with Linux LVM. In fact, in general, do not mix
> advanced storage (any of them, ZFS, LVM, XFS, JFS, Veritas, anything)
> on the same physical drive. At all, ever.

I have once mixed ZFS with LVM for 2 seconds, to give it a shot.

I just don't like ZFS much or its commands.

Both XFS and ext4 are considered good "partners" of LVM, though.

The LVM developers' goal is to focus on these two filesystems.

Besides, I have LVM running on a virtual hard disk inside a ZFS cluster 
;-).

But then, that uses virtualization too.

>> If GRUB can actually read LDM partitions then I'm sure I can boot
>> from it in Linux, but can Linux also read those partitions? In other
>> words, can I create whatever I like out of a LDM volume and have
>> Windows not interfere with it?
> 
> Don't know. Don't even try.

You're pretty bossy, aren't you? This attitude leads to a lack of 
knowledge. Apparently no one knows this stuff.

I don't like LDM partitions myself, but that is mostly because I would 
be tied to the Windows platform for maintenance. The benefit of Linux 
is always that it is better at doing any partition work than Windows 
ever was, including even some commercial tools. Ugh. So hard to use at 
times.

Then again, some Windows tools can move partitions around; Linux can't.

Not being able to use Linux tools would be a real bad thing.

>> You realize those RocketRAID cards sell for some €100 to €130 euros
>> right (or more).
> 
> Yes. Cheap. Also crappy. Do not use.

Compared to a $50 motherboard with RAID built in, these dedicated cards 
are *supposed* to offer something more, you know.

I mean, between a $5 RAID solution and a $100 or $130 RAID solution, 
there ought to be a difference.

But maybe we're just getting scammed.

> For hardware RAID, controllers, look at adding a zero to that price.

We're getting scammed here. It shouldn't be that hard to create 
something functional.

>>> DO NOT USE FIRMWARE RAID IN LINUX. AT ALL, EVER, UNDER ANY
>>> CIRCUMSTANCES.
>> 
>> All of your screaming isn't going to do much good. If it's the only
>> way to be cross-platform.
> 
> It is never the only way.

So the alternative is Windows RAID, apparently, which kinda sucks. But 
then, if it's a Windows partition, perhaps that's okay; you only need 
to be able to read/assemble it.

What I was looking at was using LVM RAID while using a firmware solution 
for Windows.

So: Windows sees the two disks as sitting in a RAID 0 stripe. It thinks 
the entire disks are striped and sees only one logical volume. This is 
asking for trouble lol. Okay, this can't be done.

The issue is ... well. Creating a second disk only for use by Linux to 
put some RAID on it. Then Windows won't have anything.

The only way to do both Windows RAID and Linux RAID on the same disks 
is if Windows can boot off striped disks (in my case), while it can't 
read the Linux parts anyway (the time when ext filesystems were 
readable in Windows is long gone, I guess), and Linux then does its own 
thing in the space Windows leaves behind; but this means moving the 
disks to LDM.

If you create a firmware stripe, and if Linux is even capable of 
booting that or booting into it, the individual disks just won't be 
usable on their own as they would be with RAID 1 or even RAID 10. No, 
that's incorrect: only with RAID 1.

With RAID 0+1, not even the individual disks would be readable; only 
the individual stripe sets would be, but I digress.

So you cannot have a stripe using firmware that works only in Windows 
and not in Linux, and you cannot have a stripe in Windows that is 
usable in Linux without using LDM.

A stripe set contains only one partition table (at the start of the 
first disk), so there is not really anything possible with firmware 
RAID unless you use it in Linux entirely the same way you do in 
Windows.

So there are only two solutions:

- use the stripe completely
- use LDM disks with a Windows stripe for Windows and/or a Linux
  stripe for Linux.

Everything gets more complicated all the time this way. KISS.

> Do not do it. You are _asking_ to lose data.

I lost data because I cloned a disk in Linux using dd, and LVM would 
then see competing UUIDs for the same disks/volumes and would randomly 
replace a used PV with an unused PV while the system was running, 
resulting in enormous data corruption.

It stopped doing that as of the version in Ubuntu 16.10, at least, but 
Ubuntu 16.04 still has that behaviour.

Great.

I don't need Windows for data loss ;-). Or firmware raid.

I just need dd and behind-the-scenes processes that mess with my data 
;-).
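
If I ever do that again, I believe the way to avoid the duplicate-UUID 
mess is to give the clone new LVM identities before anything activates 
it, or to keep LVM from scanning it at all; a sketch, with /dev/sdY as 
the hypothetical cloned disk:

  # rewrite the clone's PV/VG UUIDs and give the VG a new name
  sudo vgimportclone --basevgname vg_clone /dev/sdY1

  # or exclude the clone from scanning with a filter in
  # /etc/lvm/lvm.conf:
  #   filter = [ "r|^/dev/sdY.*|", "a|.*|" ]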

>> Hardware cards as well as firmware BIOS typically have superior user
>> interfaces. The AMD software was also exceptionally good, it just
>> bugged out in the most important operation. I haven't seen anything
>> from Linux that was really friendly; the mdadm interface is worse
>> than what existed before; you can't use it without a cheatsheet
>> telling you what to do.
> 
> This stuff is complicated. You are doing things that are complicated.
> If you can't handle that, then don't do it. Buy a cheap NAS like a
> Synology or something and let it worry about it for you.

Those cards are more expensive than a Synology NAS and yet you still 
call them cheap crap.

But Linux is just notorious for not having good user interfaces, that's 
all. If someone would really write a good management console as an 
ncurses application, it would easily be solved, but we keep messing 
about with specifics and don't focus on user experience. In other 
words: 20,000 versions with new features, but nothing that actually 
improves the user experience of the existing ones.

Even Clonezilla is just so crappy, and it is already at major version 
3. This is supposed to be a tool that is about data safety, and the GUI 
is SO BAD that half the time you need to rerun the command if you're 
like me. Their GUI doesn't have a "back" button, and the script doesn't 
even check for the existence of the prerequisites you select in the GUI 
(such as pigz or pixz), and you can't select pxz, but that aside. It 
uses commands with the wrong command-line parameters, etc. etc.

That's our main cloning tool in that sense and it is just *so bad*.

> For lower-budget clients, I have successfully used devices such as
> Thecus to inexpensively attach a few terabytes of RAID to a
> peer-to-peer LAN:
> 
> http://www.thecus.com/product.php?PROD_ID=8

I understand. I have a Synology. I am just not using it for RAID. I 
used this, or wanted to use this, for speed and fun in a certain sense.

>> So ehm. Linux software raid would be awfully nice if it was actually
>> usable.
> 
> I've been using it for 20 years, both on my own systems and on live
> customer ones. It works fine. It isn't easy, no, but neither is
> walking a tightrope between the 2 towers of the Petronas building. (I
> was going to say the World Trade Centre, as Philip Petit actually did
> that, but it's not there any more.)

:). You are ignoring the fact that if there were a good user interface, 
all of that wouldn't be as complicated. Just look at the other thread 
that unfolded; how difficult is it? It doesn't need to be that 
difficult.

No, but Linux people focus on whether "cat file | program" or "program 
< file" is the right way to do things.

If we keep stuck in that kind of debate forever, we will never improve.

> With asking to do this with dual-boot, you are asking to ride a
> unicycle across it. And if you want to mix 2 different software RAID
> systems, you are asking to do it blindfolded.
> 
> Don't.

You're making me want to do it when you put it like that :).

> I am not claiming to be an expert. I have done it and still have live
> systems using it now. I know what I'm talking about.
> 
> Don't.

Well, that's what they say about firmware RAID anyway. But Synology 
RAID devices also get into a lot of trouble when disks suddenly start 
to drop.

My AMD raid would just randomly drop devices too though. Completely 
unacceptable.

But a firmware RAID card, or even a non-firmware (true hardware) RAID 
card, should solve that. If it doesn't, something is wrong...

>> I would try (and I might try) (and I will try) LVM RAID which is
>> almost the same as regular mdadm software RAID now.
> 
> But more difficult, in my direct personal experience.

You might be correct. I just don't like the mdadm tools. It seems there 
were better tools before mdadm, or at least ones that would be more 
logical to me, but I never used them. I have only created a few arrays 
on a Debian system and don't know how to maintain them.
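
For reference, the two variants I keep comparing look roughly like 
this; just a sketch with assumed partition names:

  # mdadm: classic software RAID 1 across two partitions
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda2 /dev/sdb2

  # LVM RAID: the same mirroring done inside a volume group
  sudo pvcreate /dev/sda2 /dev/sdb2
  sudo vgcreate vg0 /dev/sda2 /dev/sdb2
  sudo lvcreate --type raid1 -m 1 -L 100G -n root vg0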

>> LVM is just a bitch to work with if you do the more advanced stuff
>> as well. They also don't care *too* much about fail-safe operation.
> 
> I don't recommend Linux LVM at all, to be honest. If I wanted such
> advanced storage, I'd look to ZFS, probably, and today that means
> using TrueOS, i.e. FreeBSD.

It's the only toy system a home user can use really ;-).

> Or, if the customer has the budget, a NetApp or some other grown-up
> storage solution.

Ok.



