
Cogent

I’ve not posted here for a while as I’ve been insanely busy, but today those lovely people at Cogent shook me out of my blogging slumber with a shocking display that I simply had to share.

Cogent, for those who do not know, is a very large Tier 1 ISP, known mostly for its many peering disputes with other ISPs. It has become so bad that, in the UK at least, they are basically a complete no-go zone.

I’ve previously dealt with Cogent when a client signed up for a few Mbit of Cogent bandwidth on the basis of a £5/Mbit pricing structure; they soon realised that you get what you pay for. Even between racks in the same data centre, traffic could not pass directly without first hopping over to Europe. I attempted to resolve this at the time with Cogent and the other ISP, and both confirmed that it was essentially a waste of time. Cogent said they can’t speak to the ISP in question since most of the UK ISP industry can’t stand them, and the other ISP basically laughed out loud about my client being ‘suckered’ into buying Cogent bandwidth.

This is confirmed elsewhere: search the Renesys blog for Cogent and you’ll find a lot of information, mostly bad news. From an article on their blog about Cogent in the UK:

Firstly, Cogent has a fairly serious Europe problem right now. They
have been aggressively attacking the European market for a few years
now and making some solid headway. They bought a couple of carriers
(Lambdanet Spain and France, Carrier1 in Germany among them), ruthlessly
integrated them and then proceeded to undersell the market by a factor
of 50-80%. This has made them many enemies.

As a result of this approach to business, Cogent has much less
effective peering in Europe than do many of its larger competitors.
Most of the European PTTs refuse to peer with Cogent anywhere on the
European continent. Recently, some large US carriers (among them
Level (3) ) seem to have adopted a similar approach. This means that
when Cogent sells capacity in Europe, it is forced to drag that
traffic back to the US to hand it off to its peers here. Of course
that means that if the ultimate destination is European, the traffic
has to travel back. This is a burden on both Cogent and the European
carrier and, of course, the customers on both sides. But it’s
unlikely to change because of just how much hate there is for Cogent
among European networkers.

This basically confirms my experience with Cogent, and that of many people I have spoken to. As such, if you choose to buy from Cogent you will basically be forced to:

  • Buy a lot of other bandwidth, since if you’re hoping to serve UK customers, Cogent is a terrible sole carrier to have
  • Invest in extra hardware, extra admin time, more complex routing infrastructure and additional overhead for your teams
  • Live forever at the mercy of everyone who hates Cogent; you will find yourself randomly falling off the internet, randomly de-peering from vast swaths of it, and basically the whole thing will be a pain in the behind

For these reasons, everyone I know with Cogent bandwidth uses them as a last-resort backup carrier: they are cheap and basically shit, but OK enough to fall back on when everything else has failed.

Over the last few years Cogent has contacted me directly via email to attempt to sell their wares; the threads always end with me saying something along these lines:

Furthermore we’d prefer to use companies who do not directly
contact us with marketing material, please remove us from your lists
for future contact.

Today one of my clients again got a mention on TechCrunch, which resulted in more spam from Cogent, again to an email address totally unrelated to my business activities and not listed in whois records for the client or anything like that. The salesperson even had the nerve to copy the email quoted above into his mail to me, asking if I could have a conference with him.

My response was the usual: no, we don’t deal with spammers; you were told to leave us alone, now please stop bothering us. That resulted in an amazingly pushy email from the salesperson, quoted below:

No doubt that writing when being asked not to is, well, borderline. That
said, it is both of our responsibilities to make sure that all options
are explored. You need to confirm that you are aware of all vendors
information, and mine includes getting it out there. 

[…]

Admittedly, this is difficult to resolve via email. However, if I didn’t
think that we could compliment your service, I wouldn’t persist.

This is just amazing; this person really thinks he can presume to tell me what my responsibilities are, what I need to do, and that I have to indulge his blatant b/s.

After I again pointed out that they had been asked to stop mailing me, and that they were mailing a private email address held by a UK citizen and as such under data protection law they must stop contacting me when asked, they once again mailed me demanding further information about my customers. They really are on par with simple Viagra spammers.

Does anyone really think this kind of heavy handed tactic gets them business?

The worst part of it is that the ISP who currently provides a large part of our bandwidth is a Cogent customer; Cogent salespeople do not think twice about approaching clients of their clients and trying to undercut them, effectively trying to steal their customers’ business away from them.

Why would any business support such a company? I would not; I would effectively be negligent in my duties to my clients if I ever recommended these clowns for anything, since they are just a nightmare waiting to happen.

Devolo dLAN Homeplug Networking

I live in a pretty typical London two-storey house; my study is upstairs, with the TV etc. downstairs. Until now I’ve just used a Wireless N router to get connectivity downstairs, but it’s proven to be less than reliable. Additionally, my ADSL router was upstairs, on an extension rather than the main socket, which is a recipe for disaster.

I’ve considered many options, long cables and all sorts of things like that. Today, while wandering through PC shops trying to find a decent USB reader, I again noticed the HomePlug devices and thought I’d give them a try.

I bought 3 of the Devolo dLAN 200 AVeasy units. They are 200Mbps (maximum) devices and support all sorts of fancy things like AES encryption and basically an ACL of sorts, allowing only certain devices to talk to each other; you can essentially create a VLAN by giving groups of devices different passwords.

At first I was fairly sceptical but figured it was worth a shot; I am glad to say the devices totally exceeded my wildest expectations.

Installation was a breeze: pop them into the wall, plug in the cables and it all just works. Of course it is not secured by default, so I went digging through their site; the docs and so forth are pretty crap to say the least, but I found software for Linux, Windows and OS X to manage the units. Each device has a security ID on the back; you just type the IDs for all your devices into the app and provide a password, which gets used to secure the network with AES.

I have now moved my router and firewall machine downstairs to the main socket, and the ADSL is now much more stable; I have moved the WiFi router downstairs too, via the Devolo units. Overall the whole setup just works great, and even my Xbox is working again after my old wireless bridge died.

I use a 1Gbit switch on my LAN and get around 0.3ms ping times in general; if I ping a device on the other end of the Devolo units, ping times are around 4ms, and transfer speeds over the units are around 7MB/sec when using scp. These figures are very respectable and much better than I had hoped for when considering these devices in the past.

At roughly 50 GBP per unit and the sacrifice of a wall socket it’s a pretty expensive solution (other manufacturers apparently make units with a pass-through power socket so you don’t waste one), but for me this has proven to be an excellent solution and has completely sorted out my network reliability issues.

Layeredtech’s thanks to old customers

I have been a customer of Layered Tech for years; at present I have only 2 machines there, but at times I’ve had 7 or 8. One of my machines is pretty old, I think I got it circa 2002 or so, and it’s been doing well on the same hardware ever since.

Yesterday I received the following email from them:

Layered Tech is committed to being the leader of the Hosted
Infrastructure market by providing our customers with the best products
backed by the best service.  In an effort to improve our customer
experience, we have determined that a small number of existing servers
will need to be relocated from their current data center.  As you are
receiving this message, we have identified that you have one or more
servers in the in area of the Savvis facility that will need to be
moved.  It is our intention to minimize any interruption in service and
we will do our best to work within predetermined time frames that are
convenient to you.

Due to the form factor (chassis type) of
this server, we will need to migrate your data to a new server. We will
work with you so that the impact is as minimal as possible.  

Below
are the servers that are affected by this migration.  Please respond to
this message acknowledging the need to relocate your server(s).  At
that point, we will move this ticket to our Operations Department where
we will work with you on a migration schedule.

From reading this you might assume they will assist you with the migration, and that this is notice of an impending change, perhaps a month or two from now.

In reality, no: they will not help you migrate your data. They want you to take out a contract for a new machine and then migrate your data yourself, something which even at best will take 5 to 10 hours on oldish machines like this.

They do not offer any compensation, and when pressed on that point only offered 1 month. The cherry on the cake is that all of this has to be done within 18 days from now; in effect they are terminating your old machine, forcing you to take a new one, and doing it with less than the agreed 30 days’ notice. Like it or not.

The salesperson who has been coordinating this from their side is incredibly unhelpful and frankly useless; only after much pushing back from me did I even get a hint that anything other than do-it-yourself migration was an option, and at this point I am still waiting for details.

This kind of disregard for customers is typical of large hosting centres: they have thousands of customers, and their heavy-handed treatment is acceptable to them because at worst they’ll lose a fraction of a percent of their customers. Being unhelpful really does pay off for them, since most people will probably just take this crap.

This is shockingly poor service, if you value your data, avoid Layeredtech.

flashpolicyd 2.0

I wrote a multi-threaded server for Adobe Flash policy requests; some background from Adobe:

Since policy files were first introduced, Flash Player has
recognized /crossdomain.xml as a master location for URL policy files. However, prior to version 9,0,115,0,
Flash Player did not recognize a fixed master location for socket policy files.
Flash Player 9,0,115,0 introduces a concept of socket master policy files,
which are served from the fixed TCP port number 843.

If your application currently requires /crossdomain.xml files to work properly while making socket requests, you should pay close attention to this: as of the latest version of Flash Player, your application will STOP working if you do not run a policy server. You need to read this link and take appropriate action.
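The protocol itself is tiny: the player connects to port 843, sends the literal string `<policy-file-request/>` followed by a null byte, and expects the policy XML back, also null-terminated. As a minimal sketch (not the actual flashpolicyd code; single-threaded, no timeouts, and the wide-open `domain="*"` policy should be narrowed in production), a compliant responder in Ruby looks roughly like:

```ruby
require 'socket'

# The canonical Adobe socket policy document. "*" allows any domain
# and any port; restrict both for a real deployment.
POLICY_XML = <<-XML
<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="*" to-ports="*" />
</cross-domain-policy>
XML

def serve_policy(port = 843)   # port 843 needs root; use a high port to test
  server = TCPServer.new(port)
  loop do
    client = server.accept
    begin
      request = client.readpartial(1024)  # player sends "<policy-file-request/>\0"
      client.write(POLICY_XML + "\0") if request.start_with?("<policy-file-request/>")
    rescue EOFError, Errno::ECONNRESET
      # client went away without sending a request; ignore it
    ensure
      client.close
    end
  end
end
```

Note the trailing null byte on the response; the player will not accept the policy without it.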

The servers that Adobe provides are not full featured, so I set out to write one; the resulting server can be found on my wiki at http://www.devco.net/pubwiki/FlashPolicyd and I will regularly post updates here.

A quick feature list:

  • Serves the policy XML on port 843 using the Adobe protocol
  • Multi-threaded for performance, though limited by Ruby’s green threads; testing did not find this to be a problem
  • Supports logging to a log file; debug mode can be toggled on the fly via Unix signals
  • Debugging information such as thread lists can be dumped using Unix signals
  • Adjustable frequency of status messages showing the number of clients served, how many were problematic, and the current connection count

The tarball includes the main daemon, a Red Hat compatible init script, a standard Nagios monitoring script and a Puppet module for installation on a Red Hat machine.
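The Nagios script in the tarball isn’t reproduced here, but a check for a policy server essentially just speaks the protocol once and verifies the reply. A hypothetical sketch of that idea (the function name, defaults and messages are mine, not the actual plugin’s):

```ruby
require 'socket'
require 'timeout'

# Returns [exit_code, message] in the Nagios convention:
# 0 = OK, 2 = CRITICAL.
def check_flashpolicy(host, port = 843, wait = 5)
  reply = Timeout.timeout(wait) do
    sock = TCPSocket.new(host, port)
    sock.write("<policy-file-request/>\0")   # the Adobe request string
    data = sock.read                          # server closes after replying
    sock.close
    data
  end
  if reply && reply.include?("<cross-domain-policy")
    [0, "OK: policy served from #{host}:#{port}"]
  else
    [2, "CRITICAL: unexpected reply from #{host}:#{port}"]
  end
rescue StandardError => e
  # connection refused, timeout, reset etc. all count as CRITICAL
  [2, "CRITICAL: #{e.class}: #{e.message}"]
end
```

In a plugin you would `puts` the message and `exit` with the code so Nagios can interpret the result.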

You need just a basic Ruby install; I suggest Ruby newer than 1.8.1 due to bugs in the Logger class in 1.8.1.

Version 2 can be downloaded at http://www.devco.net/code/flashpolicyd-2.0.tgz

Adventures with Ruby

Some more about my continuing experiences with Ruby. In my last post I said:

the language does what you’d expect and as you’ll see in my next post
spending a week with it on and off is enough to write a capable multi
threaded socket server.

As it turns out I quickly lived to regret saying that.  Soon after I hit publish I started running into some problems with the very same socket server.

A bit of background: Adobe has changed how things work, moving away from the previous crossdomain.xml file served over HTTP for cross-domain authorization to a new model that requires you to run a special TCP server on port 843, serving up XML over a special protocol. I won’t go into how brain-dead I think this is; suffice to say I needed to run one of these for a client. Adobe does provide servers for this, but they have issues. I chose the simplest of their examples, Perl under xinetd, and quickly discovered that it has no concept of timeouts, or of anything that doesn’t speak its protocol. The end result is that you just end up with an ever-growing number of Perl processes hanging around for ages.

I took this as a challenge to write something real in Ruby, using it as a learning experience as well, so I set out to write a multi-threaded server for this. At first glance it looks almost laughably trivial: the Ruby standard library includes GServer, a very nice class that does the hard work of thread management for you; you just inherit from it, supply the logic for your protocol and let it do the rest. Awesome.

I wrote this, put in logging, option parsing and all the various bits I needed, and tested it locally: 10 concurrent workers doing 200,000 requests, served in no time at all with limited CPU impact. I then wrote RC scripts, config files and all that, and deployed it at my client.

Real soon after deploying it I noticed the wheels coming off a bit. Out of curiosity, I had put in some regular logging that would print lines like:

Jun 23 08:23:37 xmpp1 flashpolicyd[7610]: Had 10042 clients of which 285 were bogus. Uptime 0 days 14 hours 2 min. 23 client(s) connected now.

Note how that line claims to have 23 connections at present? That’s complete b/s: I added the ability to dump the actual created threads, and there just weren’t enough threads for 23 clients; the TCP stack agreed. It turns out GServer has issues handling bad network connections (my clients are on GPRS, modems and all sorts), and it seems threads die without GServer removing them from its list of active connections.

This would be a small problem, except that GServer uses the connection count to decide whether you’ve hit its max connections setting. While I could just set that to some huge figure, it does indicate there’s a memory leak: the array grows forever. Not to mention it left me with a bad taste in my mouth over the quality of my new and improved solution.

Naturally I gave up on GServer. I didn’t feel like installing all sorts of gems on the servers, so figured I’d just write my own thread handling. While it’s not trivial, it’s by far not the most complex thing I’ve ever done, and in this case I was happy with a bit of wheel reinventing for the sake of learning.
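For reference, the shape of such a hand-rolled replacement might be as follows. This is my own sketch of the pattern (class, method names and the placeholder policy body are invented, not the actual flashpolicyd code); the key points are a hard per-client timeout so dead GPRS connections get reaped, and a mutex-guarded thread list so the connection count stays honest:

```ruby
require 'socket'
require 'thread'
require 'timeout'

# Accept loop with one thread per client. Threads are tracked in a
# Mutex-guarded array and always remove themselves on exit, so the
# count cannot drift the way GServer's did.
class PolicyServer
  def initialize(port, client_timeout = 10)
    @server  = TCPServer.new(port)
    @timeout = client_timeout
    @threads = []
    @lock    = Mutex.new
  end

  def start
    Thread.new do
      loop do
        client = @server.accept
        @lock.synchronize do                      # add under the lock so the
          @threads << Thread.new(client) do |c|   # child can't remove itself first
            begin
              Timeout.timeout(@timeout) { handle(c) }  # bogus clients get reaped
            rescue Timeout::Error, StandardError
              # timed out or spoke garbage; just drop the connection
            ensure
              c.close rescue nil
              @lock.synchronize { @threads.delete(Thread.current) }
            end
          end
        end
      end
    end
  end

  def connections
    @lock.synchronize { @threads.size }   # accurate live count
  end

  private

  def handle(client)
    if client.readpartial(1024).start_with?("<policy-file-request/>")
      client.write("<cross-domain-policy/>\0")  # placeholder policy body
    end
  end
end
```

Adding the new thread to the list inside the same critical section that creates it avoids a race where a very fast client could finish and try to deregister before it was ever registered.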

I chose the Ruby standard library’s Logger for logging, and even added the ability to alter the log level on the fly by sending signals to the process, which is very nice. I was able to re-use much of the option parsing code etc. from my previous attempt, so this only took a few hours.
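The on-the-fly log level trick is just a signal handler flipping Logger’s level. A minimal sketch of the pattern (the choice of SIGUSR1 is mine; the real daemon’s signals may differ):

```ruby
require 'logger'

# Flip between INFO and DEBUG whenever the process receives SIGUSR1,
# so a running daemon can be put into debug mode without a restart.
logger = Logger.new($stdout)
logger.level = Logger::INFO

Signal.trap("USR1") do
  logger.level =
    (logger.level == Logger::DEBUG ? Logger::INFO : Logger::DEBUG)
end
```

From a shell you would then toggle a running daemon with `kill -USR1 <pid>`.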

I did the development on my Mac using TextMate (the really kick-arse Mac text editor that has nice Ruby support); the Mac is on Ruby 1.8.6. I intended to run this on RHEL 4 and 5, which have Ruby 1.8.1 and 1.8.5 respectively, so I was really setting myself up for problems all of my own making.

It turns out Logger has a bug, fixed in revision 6262 without any useful svn log message, that only bit me on the RHEL 4 machine. It would open the initial log correctly with line buffering enabled, but once it rotated the log, the new log and all subsequent ones wouldn’t have line buffering. Which in my case meant log lines showing up once every 5 hours!

This sucks a lot, and it’s unlikely that Red Hat will backport such a small little thing; since RHEL 4 will be around until 2012, I guess I’ll just have to patch it myself or move this server to RHEL 5, something I planned to do anyway.
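On an affected Ruby, one blunt workaround is to force sync back on the underlying IO yourself. This pokes at Logger’s private internals (`@logdev`), so treat it as a fragile hack, and note you would have to re-apply it after every rotation, which is why patching Ruby itself is the cleaner fix:

```ruby
require 'logger'

# Reach into Logger's internals and force line buffering on the current
# log IO. Logger::LogDevice exposes the IO via #dev, but @logdev itself
# is private state, so this may break across Ruby versions.
def force_sync(logger)
  dev = logger.instance_variable_get(:@logdev)
  dev.dev.sync = true if dev && dev.dev.respond_to?(:sync=)
end

logger = Logger.new("/tmp/app.log", 10, 1_048_576)  # rotate at ~1MB, keep 10
force_sync(logger)
logger.info("this line is flushed to disk immediately")
```

Modern Rubies set sync on the log device themselves, so this is only needed on the buggy 1.8.1 Logger.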

So something that should have been fairly trivial turned into a bit of a pain. It’s not really Ruby’s fault that I am using 1.8.1 when much newer versions are out, but it’s not nice regardless. At the end of it all, my flash policy server is working really well and handling clients perfectly, with no leaking or anything bad:

I, [2008-06-26T23:02:36.607920 #22532]  INFO — : -604398464: Had 15611 clients of which 423 were bogus. Uptime 0 days 13 hours 41 min. 0 connection(s) in use now.

Those bogus clients are ones that time out or otherwise never complete a request; these were the ones that tripped up GServer in the past.

Once I’m done documenting it, I’ll be releasing the flash policy server here.