Smart Array 200i driver
Does anyone have any other ideas? Am I on the right track with the cache battery? The storage controller is integrated, so I can't just swap it out, and I have been unsuccessful trying to get my hands on updated firmware from HP since the server is out of warranty and has no active Care Pack. This will mean that the controller is in read-cache-only mode, and if it still fails, it means the integrated controller has gone belly up.
It just sits on "Initializing" the Smart Array controller for about 5 minutes and then fails. So, you're saying that I can get a standard non-integrated controller and it will work? Do you know if there is a way to get my hands on the firmware? The E and Ei seem to be the same product, one integrated and the other an add-on card.
I ordered the E controller and was able to get the server up and running. Thanks for all the help! In the end, the system disk would not boot, so we re-installed our original set of 5 drives.
Our intent at this time was to return to normal operating status, lick our wounds, and go home. Nope: the RAID controller configuration was lost. We had to set up our two RAID arrays and their subordinate logical drives from scratch.
No idea why. So now we had two brand-new RAID arrays, each with a single logical drive. Totally blank drives, not even partitioned. The only option at this point was to perform a system restore using our backup tapes. My first thought is that a ton of your problems come from that: the need for similar hardware, the inability to test, and having to take on risk just to test. I'd solve that problem too before looking elsewhere.
Virtualize and keep all of your data on the enterprise gear where it is safest. This setup is designed to be expensive and fragile. It's not why the system died here, but it is the worst possible use of five drives. Or nearly so. But whatever you do, don't recreate that storage. A normal DR test involves powering down or turning off applications.
A DR test that involves yanking drives implies that it isn't a real DR test, as you are using some of the 'failed' gear for the test AND triggering major disaster possibilities, as you found out.
RAID is not designed for this type of use. Yes, it should work, but you are asking a lot of it and taking a ton of unnecessary chances. Yes, but I don't know of any hardware controller that offers that. The Ei most definitely does not.
There are ways to do it if you were running software RAID instead. It should not have, but you are dealing with competing setups, and if anything went wrong with the first set of drives, the second set overwriting the controller's data would wipe out the protection that the controller normally offers. If everything worked perfectly, you'd be okay.
But you are reducing the system's ability to deal with issues in this scenario. This creates a problem when you try to recover a bare-metal config: the array has to be created and the drives put in the correct positions before you boot, or the config can get hosed because the names don't match.
You may be able to use the HP Array Configuration Utility to put your volumes back together, or you may need to speak to HP and find out if you need to be using a different controller (i.e. a P-series) for what you are trying to do.
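If it helps, here is a minimal sketch (my own illustration, not from HP's docs; the slot number and output path are assumptions) of dumping the controller's current layout with hpacucli from a Linux host, so you have a record of which logical drives exist and which bays their physical drives sit in before any disks get moved around.

```python
#!/usr/bin/env python3
"""Minimal sketch, assuming a Linux host with HP's hpacucli utility installed.
Dumps the Smart Array configuration (arrays, logical drives, and the bay
position of every member physical drive) to a file kept off the array, so the
layout can be checked or recreated by hand after a rebuild. The slot number
and output path are illustrative assumptions, not values from this thread."""
import subprocess

ACU = "hpacucli"   # HP Array Configuration Utility CLI
SLOT = 0           # controller slot; adjust for your server
OUTFILE = "/tmp/smartarray-config.txt"

def show_config(slot: int) -> str:
    # 'ctrl slot=N show config detail' prints each array, its logical drives,
    # and the port/box/bay of every member physical drive.
    result = subprocess.run(
        [ACU, "ctrl", f"slot={slot}", "show", "config", "detail"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    config = show_config(SLOT)
    with open(OUTFILE, "w") as fh:
        fh.write(config)
    print(f"Saved controller layout to {OUTFILE}")
```

With that record kept somewhere off the array, you at least know which disk belongs in which bay before you start putting volumes back together.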
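On the earlier point about software RAID: Linux md writes its metadata to every member disk, so a drive set can be examined and reassembled regardless of which controller or port the disks come back on. A minimal sketch, assuming a Linux host with mdadm; the device names are examples, not from this thread.

```python
#!/usr/bin/env python3
"""Minimal sketch, assuming a Linux host with mdadm and example device names.
Software RAID (md) keeps its array metadata on every member disk, so the
members can be inspected and the array reassembled from that on-disk metadata
alone, no matter which controller the drives are attached to."""
import subprocess

MEMBERS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example devices only

# Print each member's on-disk RAID superblock: array UUID, level, device
# role, and last update time.
for dev in MEMBERS:
    subprocess.run(["mdadm", "--examine", dev], check=True)

# Reassemble any arrays whose members are present, using the on-disk metadata.
subprocess.run(["mdadm", "--assemble", "--scan"], check=True)
```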
In my own defense, our system was originally set up by a consultant. I have to admit, I like isolating the users' files on the 'data' disk from the system files, but perhaps there are better ways to manage that. One alternative we are considering is adding a NAS drive for all user data. Our intent was to simulate a total hardware loss.
This box also serves our main applications. Thus we put the main drives on the shelf where they would be safe, at least that was the plan! Then we installed 5 blank drives. Time to rebuild from scratch or attempt to restore from backups: we chose backups, and the rest is history.
How is it a 'Disaster Recovery' if you have merely lost an application? Isn't that just an application crash and re-install? I realize the level of impact depends on 'which' application and the level of dependency on that application, but it is still an isolated fault, isn't it?
Yes, we used the 'failed' server PC, but we at least pretended it was brand spanking new. In light of the fact that, as John Pohlman pointed out, the array configurations are stored on the drives and not in the controller, I can see how mixing the drives (2 original drives for array A and 3 new drives for array B) may have corrupted the array config. My error here was in not reading the manual or docs for the controller and not having a complete understanding of how my hardware really worked. I'm beginning to see a common thread here: I should have done more research.
AND I should re-visit our recovery path. Perhaps trying to duplicate our hardware after a total loss is not the best plan.
Have to re-visit this. I do full-scale COB testing all the time. When we do that, we yank the network connection to the datacenter and see if the alternate location picks up. Not one piece of shared hardware.
It both simulates the ability to fail over and puts the original environment at no risk. What you are doing is good in some ways, but you are creating a ton of risk, as you've seen. It isn't just a matter of logically turning a route back on. Enterprise drives shouldn't be powered down to cool off, let alone removed and moved around unless absolutely necessary.