Today I found a lot of connectivity issues on my colocated server.
It was rather surprising to discover just how continuous the stream of downtime was.
I was even more amazed to discover that I could likely accomplish the same job for $59.95 on a 512/512 connection. It’s a little slower, but in the end the result would be so much better.
Anyway, the outages started at midnight last night and finished at 10pm tonight. The long stretch from 4pm to 10pm was the most concerning, as during that time I was missing email, I was missing visitors to my website, and the only thing I could do was sit there and scream F*$K Y@U at the dirty SOB’s who can’t keep a cable lit for a single day.
I was really, really annoyed.
Then I started thinking about what I could do to make the situation better. Well, there’s a lot I can do. I can tell the current mob that it’s great they house me, but that the people they get their inbound from are very disappointing and need a good kick in the N$ts, because I am simply not going to tolerate unstable connectivity on a site designed to be... A MONITORING WEBSITE.
Or. I can do something else. I can instead monitor web hosts, as I have been planning to for some time now, and publish web hosting statistics. In the case of Koala VoIP, that approach forced them to become a more reliable provider (ignore the recent news that they are going down the drain financially), and perhaps publishing the real downtime statistics, accurately, would force a change here too!
Or. I can take the system elsewhere and see if things look better without the current provider.
Or. I can install an ADSL2+ connection at the data centre my box is housed at, paying line rental and for an internet connection that exists purely for downtime purposes, so that I can always access it. But that’d be stupid, because if I did that, I’d just host it in my own home; it’d have the same quality and the same stats, just without the network connectivity issues.
Or, I can remain thankful for all they have done for me in getting this into the data centre, and accept that the upstream provider still needs a kick in the nuts, as they just changed the backhaul provider to WCG, an ex-supplier of services that they are now back with... Weird, but maybe someone has learnt something.
Anyway, the issue was resolved after many hours of downtime (not impressed), and I suspect I might just wait it out now and see if things improve after the change back to the other provider that was causing issues.
A Whirlpool user highlighted something very important though: where’s the redundancy? Why on earth have ONE link to the internet? Why not two? I could do the same from home for about $50 a month for the second connection, and if it went down, take the routing back to the Netshape connection.
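Just to make it concrete, here’s roughly what I mean by taking the routing back over to the other link: a minimal Python sketch of a health-check failover, assuming a Linux box with iproute2 and two upstream interfaces. The interface names, gateway addresses and probe host are placeholders for illustration, not my actual setup, and it’d need to run as root.

```python
#!/usr/bin/env python3
"""Minimal failover sketch: if the primary link stops answering pings,
point the default route at the backup connection instead.

All interface names and addresses below are placeholders, not a real setup."""

import subprocess
import time

PRIMARY_IF, PRIMARY_GW = "eth0", "203.0.113.1"   # assumed primary link
BACKUP_IF, BACKUP_GW = "eth1", "198.51.100.1"    # assumed backup link
PROBE_HOST = "8.8.8.8"                           # any reliably reachable IP

def link_is_up(interface: str) -> bool:
    """Ping the probe host out of a specific interface; True if it answers."""
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", "-I", interface, PROBE_HOST],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def use_link(interface: str, gateway: str) -> None:
    """Replace the default route so outbound traffic uses the given link."""
    subprocess.run(["ip", "route", "replace", "default",
                    "via", gateway, "dev", interface], check=True)

while True:
    if link_is_up(PRIMARY_IF):
        use_link(PRIMARY_IF, PRIMARY_GW)  # primary healthy: route via it
    else:
        use_link(BACKUP_IF, BACKUP_GW)    # primary dead: fail over to backup
    time.sleep(60)                        # re-check every minute
```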
I suppose data is a bit more expensive in a data centre, but on the other hand, the costs come down to your costing model. If it costs $1/Mbit for each connection, fine, say it works out to $3.20/GB for the sake of redundancy; I doubt anyone would say no to that. It’s a little more, but for the sake of reliability, I’m sure everyone would pay it.
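Back-of-envelope only, and the numbers are assumptions: treating the $3.20/GB above as the doubled, two-link rate, I’ve assumed $1.60/GB for a single link and an arbitrary monthly traffic volume just to show the size of the premium.

```python
# Back-of-envelope sketch only: the rates and traffic volume are assumptions,
# not the provider's actual pricing.
SINGLE_LINK_PER_GB = 1.60   # assumed single-link rate, $/GB
REDUNDANT_PER_GB = 3.20     # assumed doubled rate for two links, $/GB
TRAFFIC_GB = 50             # assumed monthly traffic, purely illustrative

def monthly_cost(gb: float, rate_per_gb: float) -> float:
    """Monthly data cost at a flat per-GB rate."""
    return gb * rate_per_gb

single = monthly_cost(TRAFFIC_GB, SINGLE_LINK_PER_GB)
redundant = monthly_cost(TRAFFIC_GB, REDUNDANT_PER_GB)
print(f"single link: ${single:.2f}/month")
print(f"two links:   ${redundant:.2f}/month")
print(f"premium for redundancy: ${redundant - single:.2f}/month")
```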
I have no real idea of what 100Mbit of connectivity costs, but if it’s something like what they are paying now, then you can surely double up and operate two links, multihomed (with routes published on both links). Or better, load balance: two links, each provisioned at half the maximum capacity required. If all is up and fine, the traffic goes in and out of two links that together carry the load comfortably; if a link goes down, the other one is a little saturated, but sure enough, online.
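To put some rough numbers on that, here’s a toy illustration of the half-capacity idea; the 50 Mbit per-link provisioning and the 80 Mbit peak load are made-up figures purely to show the trade-off, not anything the provider actually runs.

```python
# Toy illustration only: capacities and offered load are assumed figures.
LINK_CAPACITY_MBIT = 50.0   # each link at half of a ~100 Mbit requirement
OFFERED_LOAD_MBIT = 80.0    # assumed peak traffic in and out of the box

def per_link_utilisation(load_mbit: float, links_up: int) -> float:
    """Utilisation of each working link when the load is split evenly."""
    return (load_mbit / links_up) / LINK_CAPACITY_MBIT

for links_up in (2, 1):
    u = per_link_utilisation(OFFERED_LOAD_MBIT, links_up)
    state = "comfortable" if u <= 1.0 else "saturated, but still online"
    print(f"{links_up} link(s) up: {u:.0%} per-link utilisation ({state})")
```

With both links up, each sits at 80% utilisation; lose one and the survivor runs at 160% of its capacity, so things get slow, but the box stays reachable.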
When you kick the other mob in the n$ts and they bring their link back, you end up with full capacity again.
Anyway, that’s how I’d be doing it, to benefit from both routing paths, redundancy, and better stability. It’s so bloody obvious I’m amazed they don’t do that already.
It’s good to be back, not so good that we spent many hours down! Let’s hope things are on the up from here, with a constant stream of reliability!
Enjoy!