The Toolserver now has a paid admin: Wikimedia Deutschland contracted River Tarnell to look after the Toolserver, starting February 1. As you probably know, River has been an integral part of the Toolserver team from the very beginning, and has done a great job as a volunteer. However, when doing system administration in your spare time, the more annoying jobs tend to be left undone – which may lead to unpleasantness or even downtime every now and then. A fixed number of hours dedicated to the Toolserver and a protocol for emergency situations will now help us to get the Toolserver’s availability to at least 99%.
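To put that 99% figure in perspective, here is a quick back-of-the-envelope calculation (a sketch only – these numbers are not any kind of formal commitment):

```python
# Back-of-the-envelope downtime budget implied by an availability target.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours(availability: float) -> float:
    """Maximum yearly downtime (in hours) allowed by an availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

print(downtime_hours(0.99))   # 99% still allows about 87.6 hours (~3.65 days) a year
print(downtime_hours(0.999))  # 99.9% would allow only about 8.76 hours
```

So even at "only" 99%, the occasional unpleasant surprise still fits in the budget – the point is to stop losing whole days at a time.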
Contracting a paid admin should also benefit the other Toolserver administrators, as they can now rely on River to look after day-to-day operations and to coordinate further administration work. Volunteer work will of course remain an essential component in the operation of the Toolserver, as is the case with all Wikimedia projects.
Wikimedia Deutschland wants to thank everyone who works on the Toolserver, all admins and users, for their great contributions. We look forward to a future of even greater success for the Toolserver project!
Some months ago, the Wikimedia Foundation approved a $40,000 grant to Wikimedia Deutschland, for the purpose of improving Toolserver reliability. We’ve now implemented the first part of this plan: redundant NFS and LDAP.
When we first proposed the grant, the plan (which you can read more about at the above link) was to purchase 3 database servers, which we would use to provide a redundant backup for the 3 current servers. However, before we made the purchase, we realised that for the same amount of money, we could purchase 2 database servers, 2 smaller servers and a disk array. The Foundation approved this change, and that’s what we ended up buying.
The purpose of the two small servers and the array was to provide redundant service for NFS and LDAP. These services are critical to the platform’s operation; if either is offline, the entire platform is down. Previously, both were hosted on a single server (hyacinth), which meant the entire Toolserver depended on that one server being up. As well as hurting reliability, this made it very difficult to do any maintenance on that server.
Now, however, the NFS and LDAP data is stored on the disk array, which is connected to two servers (turnera and damiana) running Solaris Cluster software. If one server breaks, or we need to do maintenance on it, the services are automatically moved to the other server, with no interruption in service. The array itself has two redundant, independent controllers, making failure quite unlikely.
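For the curious, day-to-day interaction with the cluster mostly goes through Sun Cluster’s command-line tools. The sketch below is purely illustrative – the resource-group name (nfs-rg) is made up, not our actual configuration:

```
# Show which node (turnera or damiana) currently hosts each resource group
clresourcegroup status

# Before planned maintenance, manually move a group to the other node;
# on a node failure, the cluster performs the same switch automatically
clresourcegroup switch -n damiana nfs-rg

# Check the health of the individual resources within the groups
clresource status
```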
The previous NFS/LDAP server, which is now idle, has exactly the same specification as a database server. We will be using this as the third redundant database (along with the two we purchased with the grant) to provide redundant access to the MySQL databases. More news on that later.
From the afternoon of August 24th until August 25th, the Toolserver was offline due to an unscheduled outage. Everything is now back online, and the technical details of the outage are documented here for anyone who’s interested.
The Wikimedia Foundation has approved a grant of $40,000 towards improving the Toolserver’s reliability. We requested the grant in April, and we are very happy it worked out. The background of the grant is that the most central feature of the Toolserver, live replication of the nearly 800 wiki databases, is far too shaky. If it breaks for a day or so, or we have any kind of corruption, we need to import a full new dump, causing days or weeks of outdated information for Toolserver users (and for the users of Toolserver users’ tools). It also means that during such times, there is no up-to-date off-site backup of the wiki databases.
To improve this situation, we plan to buy three new database servers, so we can keep two copies of each database, instead of just one. This way, one copy will remain available when the other breaks, and we will be able to fix things without too much interruption. The new servers will very likely be the same as our other newer database servers, namely, Sun Fire X4250s with 32GB RAM and 16 internal disks with 146 GB each. We hope to have these online some time in September or October. This should greatly improve the availability of live replication, and thus of any tools relying on real time information.
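For anyone wondering what “keeping two copies” means in practice: each extra copy is just another MySQL slave replicating from the same source. A minimal sketch of the standard setup follows, with placeholder host name, credentials, and log position – not our actual configuration:

```sql
-- On the new, second replica (all values are placeholders):
CHANGE MASTER TO
  MASTER_HOST = 'master.example.org',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- SHOW SLAVE STATUS then reports Seconds_Behind_Master, i.e. how far
-- this copy lags behind the live wikis.
SHOW SLAVE STATUS\G
```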
After the hardware installation was finished yesterday, River began to set up the servers that will run Solaris, while I began to set up cassini, which will run Debian.
After I had made it through TFTP network-installation hell again (it took me only two hours and a dozen reboots to get the installation started…), we discovered that the host doesn’t have a hardware RAID controller. A quick investigation showed that the OSM squid ortelius (which is nearly identical to cassini) and cassini were swapped during the hardware installation. Because cassini is supposed to run a small copy of the OSM database itself, using it without hardware RAID is not an option.
At the moment, we are unsure what the best way to handle the situation is, and how long it will take until cassini can be used.
Yesterday, Mark and Multichil were at our colo again and finished the hardware installation of our new servers.
The Toolserver cluster got two new boxes: daphe, which is going to replace zedler as the database host of the s2 cluster, and hyacinth, which is going to serve our /home directories.
They also installed new servers for the OpenStreetMap/Wikimedia cooperation: an OpenStreetMap toolserver named cassini, on which people can play with OSM data (like our Toolserver for Wikimedia), plus a squid proxy named ortelius and a database host named ptolemy – both to handle the load the Wikimedia projects will create once they use OpenStreetMap maps.
Finally, two terminal servers were installed to make it easier for the roots to handle broken servers.
Thanks to Mark and Multichil!
So, I’ve just blocked msnbot (which is, I assume, the search spider for Microsoft Live Search) from indexing the Toolserver. Many spiders, such as Google and Yahoo!, index our website every day, and cause no problems; in fact, we don’t even notice them. Msnbot is different. Specifically, it seems to have no rate limiting. Microsoft claim it will only request pages around once every 10 seconds; in reality, it was making 5-10 requests per second. Unfortunately, the page in question was a slow CGI script, and msnbot seemed to have obtained a list of every possible parameter it could pass to the script, which it then did, as fast as possible, until the web server was so overloaded it could hardly serve user requests:
wolfsbane up 53+10:55, 1 user, load 53.93, 55.49, 55.22
It doesn’t seem to have noticed that it’s blocked. It’s still hammering away as fast as it can, and getting nothing but 403 in reply. I’ve even added it to robots.txt, but it doesn’t seem to have noticed that either yet. Fortunately, our web server is quite fast at returning 403, so the load is looking much happier:
wolfsbane up 53+11:58, 4 users, load 0.68, 1.15, 3.44
After I blocked it, I tried to find a contact at Microsoft to report the problem to – as the spider clearly isn’t behaving like they expect, I thought they might appreciate a warning. Well, I can now report that Live Search really don’t want to be contacted. The closest thing I could find to a contact form, linked from the “troubleshooting problems with msnbot” page, had a list of categories for me to choose from. None of them was even slightly related to search. Some Googling suggested that “email@example.com” might work, but nope (“Returned mail: user unknown”). There’s a feedback link on the MSN front page, but who knows where that would go, and whether the feedback would ever reach someone who could deal with it? (Certainly not me, as they clearly state that they won’t reply to your feedback.)
I gave up in the end. If someone reading this happens to have a contact at Microsoft who would be interested in this issue, please feel free to let them know. Otherwise, I imagine Live Search users will just have to live without the Toolserver.
PS: I know msnbot (supposedly) supports the Crawl-Delay parameter in robots.txt. But given what I’ve seen today, I don’t particularly want to rely on this, even if it does, some day, reload our robots.txt.
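For completeness, the two layers of the block look roughly like this. First, the robots.txt entry (the Crawl-delay value is illustrative, and as noted, I’m not counting on msnbot honouring it):

```
User-agent: msnbot
Crawl-delay: 10
Disallow: /
```

And the enforced part – Apache 2.2-style access control that actually returns the 403s (the placement shown is illustrative, not our exact configuration):

```
SetEnvIfNoCase User-Agent "msnbot" blocked_bot
<Location />
    Order Allow,Deny
    Allow from all
    Deny from env=blocked_bot
</Location>
```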
Yesterday, Wikimedia Deutschland ordered five new servers, three of which will be added to the Toolserver cluster. They will be delivered (hopefully) in two to three weeks, and will go online perhaps a week or two after that. Here’s what the servers will be used for:
The first server will replace Zedler as our database server for the s2 cluster. Zedler is our oldest database server, and has lately been overloaded nearly constantly. We may keep Zedler around with a backup copy of s2, but it will mostly be idle. We’ll see what use we can put it to later on.
The second server will take over Hemlock’s duty of serving the home directories, and it will become the host system of the stable server. This means the stable server becomes virtualized (probably as a Solaris zone). Willow (the current server for stable projects) will then be free; we will probably make it into a second login server where users can run bots. This should take some load off Nightshade.
The third server is the “OpenStreetMap Toolserver”: it will be for the OpenStreetMap project what the Toolserver cluster has so far been for Wikimedia projects: a place to play with data and host bots and web applications. The Toolserver rules have been changed to accommodate this.
All in all, we will then have 12 servers in the Toolserver cluster.
The two remaining servers that were ordered yesterday will also be used for the OpenStreetMap project, but not in the context of the Toolserver. They will be used to integrate interactive maps from OpenStreetMap directly into Wikipedia articles. More information about the OSM integration project is available on meta.
So, for a while we used Apache Roller for the journal. Roller was something I’d used before (as a user, not an admin), and it could be integrated into our web SSO system, so it seemed like a good fit. Unfortunately, it didn’t work out so well. Roller is rather clunky to use, and has a few small bugs (mostly related to SSO) that made it somewhat unpleasant to use. Today, it decided not to allow a new user to log in, and wouldn’t provide any sort of useful error message besides Java stack traces. So, I decided it was time for a change.
The Toolserver journal is now running on WordPress. WordPress is much nicer to use, looks nicer for users, and has a large community of users. It was also very easy to set up, and allowed us to import the old posts from Roller. The only downside is the lack of an up-to-date LDAP integration plugin. However, I don’t think that should be too hard to write…
PS: In case Planet gets confused by the change and re-displays all our old posts – sorry!
So, a while ago, Sun donated some servers to Wikimedia. We managed to earmark one of them for the Toolserver, but for various reasons, it couldn’t be delivered to Amsterdam, so it ended up at our Tampa facility. It’s been sitting there ever since, until yesterday, when we managed to rack it and set up its OS and array. The server is a Sun Fire X4150, with 2 quad-core 2.8GHz CPUs, 32GB RAM, 4 146GB system disks, and an external array with 12 15,000 rpm 146GB SAS disks. Originally, it was meant to be a database server, but there’s not much point putting a single database server at Tampa, so now we’re looking for other uses.
The most obvious is to provide a US replica for cache.stable.toolserver.org, the caching proxy in front of the stable server; this will improve WikiMiniAtlas performance for North American users (the only tool currently using the cache). But that’s only a tiny load, so it would be a waste to use the entire server for just that. Other possibilities include moving our webapps (e.g. JIRA) there, which would free up some resources from the server hosting them in Amsterdam. No doubt, there are several other things we could use it for… please feel free to offer suggestions.