If you’re like many developers, you work from home either full-time or fairly regularly to avoid the interruptions and distractions of the office. One of the difficulties in doing so, even if you have a VPN that lets you reach the dev and test servers at the office (and it’s obviously worse if you work on your own and don’t have an office with servers to VPN into), is that VPNs can be extremely unwieldy, particularly in how they affect your local networking, and in many cases you need to fake domains that are not the domain your VPN gives you. Since most dev servers are shared, messing with the domain there is not a great idea, and can inadvertently make that server inaccessible via the VPN.
A common solution, especially popular over the last few years, is to create a virtual test server in VMware or VirtualBox on your dev desktop or laptop. Convenient though that is, performance is likely to be poor, especially on a laptop, since giving the VM enough memory to perform decently takes memory away from your dev environment (and if you use Eclipse, you know what a memory hog it can be). Another solution is to buy a server some company has retired, off eBay or from your local computer recycling depot, install some kind of environment (usually Linux), and bingo, you have a test server that doesn’t impact your dev machine.
The only issue with this is that since it’s a relatively common solution (as is buying such computers as cheap servers for small companies), there tends to be enough demand for used x64 machines to make them fairly expensive, especially if you’re writing heavily multithreaded code and want a test server with a decent number of cores to exercise the threading under heavy load.
A viable, though less popular, solution is to buy a used Sun/Oracle UltraSPARC server. The advantages, so long as you are writing in a properly virtualized language (i.e. anything that runs on a JVM, such as Java, Groovy, Jython or JRuby, or another virtualized or interpreted language such as PHP, Smalltalk, Perl or Python), include:
- You can get a machine with a huge number of cores and a massive amount of RAM for minimal cost.
- Even if you do manage to get an x64 system with a decent number of cores and a good amount of RAM for a good price, your ongoing electrical bill will be far higher than for an UltraSPARC of equivalent performance; by deploying UltraSPARC T-series servers throughout, the City of London reportedly avoided building four new power plants!
The disadvantages include:
- Most recent UltraSPARC systems do not come with a video card, so you can’t use any X-based GUI (or any GUI, really), even remotely.
- Oracle decided not to support pre-T-series UltraSPARC machines in the latest Solaris 11 release.
- Although the environment, particularly at the command line, is very Linux-like (and open-source package managers are available), it’s not exactly Linux, and there can be a few gotchas for those used to Linux environments.
- While Solaris itself is free, support isn’t, so you won’t get interim patches from Oracle unless you have a support contract. If you work for a bigger company, ask your sysadmin for your company login: many such companies do have Oracle support contracts (even if they don’t use Oracle machines, they often use Oracle software). You can, of course, upgrade when a new point release comes out without any support contract.
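On the package-management point: Solaris 11 uses IPS (the `pkg` tool) rather than the apt or yum commands Linux users expect, and without a support contract you only see the public release repository. A minimal session might look like the sketch below; the exact package names vary by Solaris release, so treat `developer/gcc` as an example rather than a guaranteed name.

```shell
# Solaris 11 uses IPS (Image Packaging System), not apt or yum.
# Show which package repository (publisher) this system pulls from:
pkg publisher

# Update all installed packages to the newest versions in that repository.
# With only the free public repo, that means point releases, not interim patches:
pkg update

# Install a single package; the FMRI below is an example and may
# differ between Solaris 11 releases:
pkg install developer/gcc
```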
One gotcha that most people used to x86/x64 machines will not find obvious concerns the Solaris filesystem, ZFS, and the notion of root pools. If you’re like me and find yourself with a T-series UltraSPARC machine, the first thing you’re going to do is install a fresh copy of Solaris, most likely using the text installer from a USB stick or DVD (downloadable with a free Oracle Technology Network account). If your machine, as is common, has two or four identical SAS disks, the default Solaris text install will only show you the first of those as an install candidate. This would be fine (you can create another ZFS pool from the other drives and mount it wherever you tend to use the most disk space), except that when you buy a used Sun machine, quite often the first disk has been wiped but the others have not, since the people doing the wiping are often themselves unfamiliar with Solaris and ZFS. The machine you just paid a couple of hundred dollars for (and that can be a machine with 16 or 32 cores and up to 128GB of RAM) was likely bought as a mission-critical machine by its original owner, which means the ZFS root pool (and other ZFS pools) are likely to be on mirrored drives.
The result of all this is that you choose to use the whole disk (say c0t0d0, usually the first available), go through the rest of the choices at the serial terminal console, and Solaris installs fine and reboots, allowing you to SSH into the machine over the network and disconnect that irksome USB serial cable.
You do some basic configuration, then install some package that asks you to reboot, or you simply decide to shut the machine down for the night (unless you like white noise while you sleep; UltraSPARCs tend to have relatively loud high-speed fans running all the time). When you restart the machine, if you’re watching it through the serial console (or the network management console, if you can be bothered to configure it), at a certain point in the boot process you’ll see a message similar to this one:
Sep 15 16:06:06 svc.startd: svc:/system/early-manifest-import failed with exit status 1.
Sep 15 16:06:06 svc.startd: system/filesystem/usr:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
Root password for system maintenance (control-d to bypass):
If you do enter your root password to get to a command prompt and type svcs to see which services are running, you’ll find most of them are offline. Worse, the usual fix (restoring the boot archive) doesn’t work, because there is no backup yet. Getting frustrated, you decide something went wrong with the Solaris install (or some package you installed hosed the system), and you go ahead and reinstall, only to find that on the second boot the same thing happens.
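For the record, here is roughly what you’d type at that maintenance prompt to survey the damage; `svcs` is the standard Solaris SMF status command the error message itself points you at:

```shell
# Ask SMF to explain which services failed and why:
svcs -xv

# List every service instance that is not currently online:
svcs -a | grep -v online
```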
The problem is that Solaris, after the first boot, looks for any ZFS pools and automatically mounts them at their configured mount points (there’s plenty of documentation on ZFS disk pools online, so I won’t go into that). If you’re unlucky enough that the first disk was mirrored, the other half of the mirror likely contains a (different version of) Solaris from the one you just installed. That disk gets mounted after the new boot disk, and since its mount point is usually /, it mounts over your new root filesystem, leading to the error.
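You can confirm this is what happened from the maintenance shell. The commands below are a sketch: depending on whether the previous owner’s pool was auto-imported, it will show up either in `zpool list`/`zfs list` or in the output of a bare `zpool import`, which scans attached disks for pools that are not yet imported.

```shell
# Pools currently imported (should be just your fresh install's rpool):
zpool list

# Where each ZFS dataset is mounted; look for a second dataset
# claiming / as its mountpoint:
zfs list -o name,mountpoint

# Pools found on attached disks but NOT imported; a leftover rpool
# from the previous owner's mirror may show up here instead:
zpool import
```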
The solution, while it takes a little patience, is this: once the install medium has booted, DON’T choose Install Solaris immediately; instead choose Shell and drop to a command prompt. From there you can follow the documentation to format each drive (this takes some patience; formatting a 73GB drive takes around two hours), then either create a new mirrored ZFS rpool (root pool) from the first two disks, or simply type exit to return to the installer and let it use the first disk as your rpool (this is my usual method, since I’m not concerned about the data in the event of disk failure, and Sun/Oracle machines tend to be limited on disk space unless you start adding external SCSI enclosures).
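As a faster alternative to low-level formatting (which only destroys the data, not the hours), you can instead destroy the stale pool from that same installer shell, which removes the ZFS labels so nothing gets auto-imported on the next boot. This is a sketch; the pool name and numeric id you’ll see are whatever the previous owner left behind:

```shell
# From the installer menu, choose "Shell" instead of "Install Solaris".

# Scan the disks for leftover pools; note the pool name and numeric id:
zpool import

# Import the old pool under a temporary name (using its id avoids any
# name clash with the rpool you're about to create), then destroy it.
# Replace 1234567890 with the id zpool import actually reported:
zpool import -f 1234567890 oldrpool
zpool destroy oldrpool

# Return to the installer and proceed normally:
exit
```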
Once Solaris is installed, you can create a regular ZFS pool from your remaining disks and choose a convenient mount point (make sure you first copy any files in that directory elsewhere, because mounting the ZFS pool will make them inaccessible). At this point you should be able to reboot quite happily and not see any svc messages telling you the system has gone into a maintenance mode you can’t recover from without reinstalling.
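Creating that data pool is a one-liner. The device names and mount point below are examples (check your actual disks with the format command); the mirrored form trades half your capacity for surviving a disk failure:

```shell
# Assuming the two remaining disks are c0t2d0 and c0t3d0, create a
# mirrored pool and mount it at /data in one step:
zpool create -m /data datapool mirror c0t2d0 c0t3d0

# Or, if you'd rather have capacity than redundancy, stripe them:
#   zpool create -m /data datapool c0t2d0 c0t3d0

# Verify the pool is healthy and mounted:
zpool status datapool
zfs list datapool
```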