What a ******* nightmare... aka what happened to the server
#1
Well, it's been a stressful few days; the new host showed themselves to be moronically incompetent. I managed to keep a pretty level head during the whole thing, but I'm still not convinced you catch more flies with honey. Back to my old philosophy: the squeaky wheel gets the grease!
Sunday the server crashed about 10:30; the host rebooted it, and 20 minutes later it crashed again. Around 3:30 they concluded it was a hardware failure and that the server would be put in the queue for testing. Since it was following a holiday weekend, they said things were a little backed up. Sunday night they started testing and said they would test the drives first; by Monday afternoon the first drive in the RAID array had tested good and they were testing the second. Tuesday afternoon it was verified both drives were good, and rather than swap the good drives into a new server, they made me wait while they tested the memory. Wednesday it was determined the RAM was bad and it was replaced. When they attempted to bring the server back online, they discovered they had crashed the drives, and they FINALLY decided to start over with a new server and restore from backups. By some miracle they managed to do this quickly, and things were restored around 6:45 PM Eastern on Wednesday.
It's been a stressful few days. All I gotta say is I'm glad gold made some big moves; it kept my mind off this mess for a little while.
#4
Well, I don't really need a VM, but yeah, I definitely need to find another host. The issue is I've paid these guys for a year (just paid them in November), so I can't really walk away from a year's worth of payment; I'm going to have to fight it out with them to get my money back. I'm working on it, we'll see. That leads to the next issue of budget: depending on what I can get out of them in compensation or a refund after arguing, I'll go from there as far as moving DCs. Hardware is basically the only thing I rely on the DC for; I do everything else remotely (reboots, updates, etc.), so it's usually not an issue. In fact, we've only had one issue in the past 8 years.
#10
That would be my argument for getting the money back, not walking away from it.
I guess it would depend on how the legal documents were worded when making that argument, however.