M3 Server
I tried to check my domains this morning and they are all down. I went to email the host and can't even get in to do that. Grrrrr
Yeah I'm down too, not happy at all. :(
Traceroute to m3server.com (216.77.84.13), 30 hops max, 40 byte packets
 1  * * *
 2  10.247.92.1 (10.247.92.1)  13.088 ms  13.929 ms  11.44 ms
 3  c-66-176-2-205.se.client2.attbi.com (66.176.2.205)  12.765 ms  38.605 ms  10.556 ms
 4  c-66-176-1-253.se.client2.attbi.com (66.176.1.253)  9.649 ms  12.524 ms  111.227 ms
 5  12.119.94.9 (12.119.94.9)  10.867 ms  10.19 ms  13.697 ms
 6  gbr4-p100.ormfl.ip.att.net (12.123.218.66)  16.858 ms  15.769 ms  17.668 ms
 7  tbr1-p012301.attga.ip.att.net (12.122.2.129)  26.775 ms  24.963 ms  25.296 ms
 8  tbr2-cl1.wswdc.ip.att.net (12.122.10.69)  43.368 ms  37.683 ms  42.428 ms
 9  ggr2-p390.wswdc.ip.att.net (12.123.9.85)  54.427 ms  37.078 ms  40.646 ms
10  att-gw.washdc.level3.net (192.205.32.42)  46.196 ms  79.582 ms  41.407 ms
11  so-1-1-0.bbr2.washington1.level3.net (64.159.3.65)  40.75 ms  46.445 ms  50.443 ms
12  ae-0-0.bbr2.atlanta1.level3.net (64.159.1.46)  50.99 ms  51.336 ms  92.316 ms
13  so-5-0-0.gar1.atlanta1.level3.net (4.68.96.22)  54.677 ms  so-6-0-0.gar1.atlanta1.level3.net (4.68.96.10)  54.031 ms  112.092 ms
14  bellsouth-te.gar1.level3.net (67.72.8.42)  55.875 ms  52.269 ms  54.56 ms
15  axr00asm-1-0-0.bellsouth.net (65.83.236.3)  53.375 ms  55.241 ms  53.418 ms
16  ixc00bhm-6-0-1.bellsouth.net (65.83.237.41)  69.437 ms  61.713 ms  57.443 ms
17  207.203.159.89 (207.203.159.89)  61.618 ms  70.744 ms  60.523 ms
18  68.152.224.106 (68.152.224.106)  62.122 ms  67.726 ms  62.697 ms
19  * * *
they are down AGAIN? that is not good at all and seems to be happening too often :(
yep, here too :( This is really getting old
Is Dantheman no longer with them?
I don't host with them but have seen way too many posts about them being down in the last few months, that sucks for everybody :( hope you all get back up soon
Fuckers. I have a hard enough time trying to make sales without this shit.
Yeah this is just wonderful huh. Did another storm take out the whole state or whatever it was last time?
Maybe the dog ate the router. Well fuck
Hmmm.. I was planning on going dedicated in the near future, M3 Server looked cool, but they seem to be down a lot lately...
Appears to be back up. And not a second too freaking soon : /
Yeah, back up, but I want a better response than the total lack of one from last time.
I've been with them for close to four years now and don't want to have to move three servers, but this is my income that is at risk. LadyB - They are usually active on your board. Have you heard anything now that you are back up? What has become of Dantheman?
Cleo - I haven't worked over there for at least 2 and a half years so I dunno lol.. but I'll go check. Last time I called M3 a couple of months ago Danny answered the phone, so if he's gone it's been since then.
Gotta agree with you though.. it's a pain in the ass to move everything, but if this keeps up I'll certainly do it : /
I asked about Danny because I haven't seen him come up on my ICQ for some time. He did post over at Oprano that he was going on vacation, and as far as I know he hasn't been heard from since.
Just checked.. not a single word mentioned about it over there. lol
I see that Dantheman posted over on VNWR
Anyone know what this actually means in plain English? Quote:
Cleo - lol I saw that too and had to ask him wtf it meant.
My best guess is: "something in our system broke and we're checking to see if some asshole did it to us (malicious activity), or if it was just a glitch (abnormal occurrence)."
There are a few reasons for packet storms.
One possibility is a Denial of Service attack launched from one of the machines. Easy to find: just look for the guy maxing out his connection, and clip that port.

Another is faulty hardware or faulty firmware in the switch. This one can be quite annoying to track down. It's one of those "log everything and wait to see if it happens again" situations.

Yet another possibility is a network that is designed 'flat'. While not the most efficient, it is the easiest to add machines to and the easiest to move machines around when repairing/testing. Spanning trees can only handle so many MAC addresses. As they get overused, the spanning tree throws out the old ones using an LRU (Least Recently Used) method. If you have >1024 MAC addresses behind a spanning tree and someone does a scan of your network, hitting every machine and doing requests, the rebuild of the spanning tree could take the switch to 100%, at which point it would stop routing packets. Or if everyone gets spidered by Google or whois.sc at roughly the same time, it could have the same effect as a network scan. Or it could be a guy with a zombie on his home PC that decided it was time to scan that segment for vulnerabilities. Not an easy thing to track down. :)
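The LRU churn described above can be sketched as a toy model (the 1024-entry capacity and the MAC/port values are invented for illustration; this is not any particular switch's implementation):

```python
from collections import OrderedDict

class MacTable:
    """Toy model of a switch's address-learning table with LRU eviction."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.table = OrderedDict()  # MAC -> port, least recently used first
        self.evictions = 0

    def learn(self, mac, port):
        if mac in self.table:
            self.table.move_to_end(mac)      # refresh: most recently used
        elif len(self.table) >= self.capacity:
            self.table.popitem(last=False)   # evict the least recently used
            self.evictions += 1
        self.table[mac] = port

# A scan touching more hosts than the table holds forces constant
# eviction and relearning -- the thrashing described in the post.
switch = MacTable(capacity=1024)
for i in range(2000):
    switch.learn(f"00:16:3e:00:{i // 256:02x}:{i % 256:02x}", port=i % 24)

print(len(switch.table))   # pinned at capacity: 1024
print(switch.evictions)    # every host past the first 1024 evicted one: 976
```

With the table full, every new address costs an eviction, so a sweep of the whole segment keeps the switch relearning instead of forwarding.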
I would have never guessed that Sparky would have such a handle on such things :)
Sparky always gets to the point quickly :D I think he gets it from Gabbo
So shouldn't something like this get noticed and fixed before it results in 3+ hours of downtime?
I've never really fully understood what Spanning Tree is, but I do know that it wreaks havoc on Mac networks using AppleTalk and that we usually disable it when we can.
hehe, ok, ethernet 101
Ethernet is a collision detection (CSMA/CD) network scheme. The way it works, Machine A sends traffic to Machine B; Machines C, D, E, F, and G all hear those broadcasts but ignore the traffic because it isn't destined for them. If Machine A and Machine C send traffic at the same time, each will wait a random amount of time and retransmit -- until the packet goes through. This is a half-duplex circuit. Half-duplex = bad. :)

So the engineers said, let's develop a switch, but to avoid collisions (collisions beget retries, retries = latency, latency = worse performance) we'll design a system that only sends traffic to the machine or wire that it is destined for. There are two major designs for switches: cut-through, and store and forward. I think 99% of the switches today are store and forward. With either, a machine can run full duplex without having to worry about collisions between the machine and the switch. With a cut-through switch, the switch itself would detect the collision and signal the retry, but it was like a virtual relay that would jumper the wire so that traffic would flow directly. Store and forward would buffer the request and send it so that the collision would be avoided. Of course, if you have 1 internet connection and 23 machines hooked to a switch, that first port has a lot of contention and probably does have collisions, but overall the rest of the network runs faster because the machines are not listening to traffic unless it is destined for them.

But you need more than 23 machines, so you buy another switch and connect it to port 2. Now you have to figure out where to send the traffic, so each MAC address does a little broadcast, and the proxy ARP tells the first switch, "hey, I have these 8 MAC addresses," and the first switch says, "OK, anything destined for those 8 gets sent down port 2's wire for the other switch to handle." Well, pretty soon a large enough network will have more than 1024 MAC addresses to contend with.

The 'core' switch must then figure out where to send that traffic, but if it exceeds the number of entries allowed in the spanning tree, it will spend quite a bit of time discovering where those machines are. That's sort of the basic theory behind it.
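The learn-or-flood behavior sketched above can be shown in a few lines (a simplified model with invented MACs and port numbers, not any real switch's firmware):

```python
# Toy sketch of the MAC learning and forwarding described above.
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_to_port = {}  # learned: source MAC -> arrival port

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame goes out of."""
        self.mac_to_port[src_mac] = in_port          # learn the sender
        if dst_mac in self.mac_to_port:              # known destination:
            return [self.mac_to_port[dst_mac]]       # send down one wire only
        # Unknown destination: flood every port except the arrival port,
        # so everyone hears it -- just like the shared-wire case.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("A", "B", in_port=0))  # B unknown -> flood [1, 2, 3]
print(sw.handle_frame("B", "A", in_port=1))  # A learned  -> [0]
```

Once both sides have been learned, traffic flows only on the two wires involved, which is the whole point of switching over a shared segment.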
Sparky, why don't you just unveil your time machine and become a billionaire already :D Or is Gabbo holding you back? He does tend to get jealous sometimes..... (and your knowledge amazes me, what do you really do for a living??) It has to be, or once was, illegal :D
Quote:
The switch sends configuration BPDUs to communicate and compute the spanning-tree topology. A MAC frame conveying a BPDU sends the switch group address to the destination address field. All switches connected to the LAN on which the frame is transmitted receive the BPDU. BPDUs are not directly forwarded by the switch, but the information contained in the frame can be used to calculate a BPDU by the receiving switch and, if the topology changes, instigate a BPDU transmission.

The answer to your question, Cleo, is no, there might not be a warning of a problem, because the switch is doing exactly what it's supposed to do: recalculate the tree. However, it is possible to forge BPDUs and cause the main, or *root*, switch to constantly update the tree, resulting in a form of DoS attack. My servers stopped sending traffic about 4 this morning, and I doubt Corey was making changes to the network at that time of the morning. Considering that and the way Danny's post was worded, I'll bet it was the upstream provider having trouble.
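The forged-BPDU attack works because spanning tree elects as root whichever bridge advertises the lowest bridge ID, and any "better" BPDU forces a recomputation. A minimal sketch of that election (the priorities and MAC addresses below are made up for illustration):

```python
# Each BPDU carries a bridge ID: (priority, MAC address).
# Lower compares as better, so min() picks the root bridge.
def best_root(bpdus):
    return min(bpdus)

# Two legitimate switches at the common default priority 32768;
# the tie breaks on the lower MAC address.
legit = [(32768, "00:aa:bb:cc:dd:01"), (32768, "00:aa:bb:cc:dd:02")]
print(best_root(legit))   # -> (32768, "00:aa:bb:cc:dd:01")

# An attacker injecting a BPDU with priority 0 instantly "wins" root;
# repeating this with varying IDs keeps every switch recomputing the
# tree instead of forwarding traffic -- the DoS described above.
forged = legit + [(0, "de:ad:be:ef:00:01")]
print(best_root(forged))  # -> (0, "de:ad:be:ef:00:01")
```

This is why features that filter BPDUs on access ports exist: an end-station port should never be allowed to claim root.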
While I cannot address M3server directly, as I am not aware of their business or situation, I can tell you from experience that many of the "big adult hosts" are really just subbing from other companies, using shared techs and shared facilities. They are often 50 - 100 server boxes basically in the corner of someone else's data center. In other cases, they are merely reselling the products of another company for margin.

That means that they are more often at the mercy of the acts of other people, and more often than not unaware of problems on "their network" because it really isn't their network. All hosts have issues from time to time; networks are not the easiest things to keep 100% when you are adding and removing hardware and making other changes on a regular basis. Some hosts handle these situations better than others, and others will fail. The quality of a host is often in how they handle the times when they are down or facing network issues.

Alex
Powered by vBulletin® Version 3.8.1
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
© Greenguy Marketing Inc