The SSL certificate for this blog expired a few days ago; it took me a while to notice and get a new one signed by StartSSL. Now we’re back online!
A friend offered me the chance to get my hands on a Sunfire x4150 1U rack server for the cost of shipping. I snatched it up and a few days ago the machine arrived on my doorstep.
- 2 Xeon x5460 quad-core CPUs at 3.16GHz
- 8GB DDR2 ECC RAM (soon to be upgraded to 20GB)
- 8 146GB 10K RPM SAS drives attached to an Adaptec STK RAID controller card
- 16 40mm dual-rotor cooling fans, because if you’re going to sound like a jet engine, you might as well go all out.
- 2 redundant 650W hot-swappable PSUs
There’s not much to say about the server itself. It’s a rack server. It’s (exceptionally) loud. It draws a lot of power (500W under full load). It’s easy to work on and it’s built for high availability. (For example, 14 of the fans can be swapped out while the server is running, and it seems that the server has enough cooling capacity to keep running even if a few of the fans are offline.)
While setting it up, the Adaptec RAID controller whined that its battery was missing or damaged. The RAID controller has a battery-backed cache, which speeds up write operations considerably. When the OS requests a write, the controller immediately returns success, freeing up the OS to do other things, while in fact the controller is storing the write request in a DRAM cache until the disks can get around to it. A power outage at an inopportune time can wipe the temporary cache and nuke whatever write operations were pending. Most controllers have a battery or capacitor to power the cache through medium-duration power outages (a day or two).
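The trade-off can be sketched in a few lines of Python. This is a toy model of the idea, not how any real controller firmware works:

```python
class WriteBackCache:
    """Toy model of a battery-backed RAID write cache (not real firmware)."""

    def __init__(self):
        self.pending = []  # acknowledged writes still sitting in DRAM
        self.disk = []     # writes that have actually reached the platters

    def write(self, block):
        # The controller acks instantly; the OS is free to move on.
        self.pending.append(block)
        return "ok"

    def flush(self):
        # Eventually the disks catch up and the cache drains.
        self.disk.extend(self.pending)
        self.pending = []

    def power_loss(self):
        # With no battery, whatever was still in DRAM evaporates.
        self.pending = []
```

With a working battery, `power_loss()` wouldn’t wipe `pending`; the battery keeps the DRAM alive until power returns and the writes can be replayed.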
With the Adaptec’s battery out of commission, the write cache was disabled. Such slow. Wow performance tanked. Much complaining RAID controller. I popped the top lid off the server and removed the card; the Lithium-Ion battery was bloated. Dead, dead, dead.
Another friend/hardware sugar daddy found an HP P410 SmartArray controller and sent it my way as well. The P410 has an external battery pack; finding a place to tuck it in a 1U rack server that wasn’t designed for it was interesting. Right now it’s sitting on top of the motherboard. Fingers crossed that it won’t break off anything.
There’s one problem with the P410: it refuses to let me into its BIOS. I can’t enter its configuration utility to set up the array. The P410 is supposed to pop up a message prompting me to press F8, but it doesn’t. The card runs its Option ROM, takes its time “Initializing…”, and then immediately moves on to the next screen.
Fortunately, there’s a way around this. HP offers an offline Array Configuration Utility, which is a live CD that allows you to configure the controller. I downloaded an old version here (126.96.36.199 [22 Jun 2011], newer versions seem to have dropped support for the P410) and ran it. It seems you need a license key for RAID6/60, but I was happy with RAID10 + 2 spares, so I didn’t need a license.
Once I got the array configured, the controller detected the logical drive just fine at boot and soon I had ESXi installed.
My theory is that there’s an incompatibility between the x4150’s BIOS and the P410’s BIOS. The x4150 seems to grab the F8 key for its boot-device selection menu. I don’t know what the equivalent key is on HP servers. It’s possible a firmware update for the card would fix this, but I haven’t tried that yet.
Regardless, if you’re trying to use an HP P410 SmartArray controller in non-HP hardware, try the above configuration utility. There may also be OS-level tools you can use to manage your array after you’ve set it up for the first time.
So I went to the Intel Developer Forum this year. It was just as fun as last year, although this time I went with a few friends and we ducked out to explore San Francisco a little.
- I got another Intel Galileo board. (Yes, I’m aware the sensor page may be down. My server probably went offline at the campus.)
- We learned a little bit about I2C and got to play with the Intel Edison platform.
- I won an Asus ZenPad C 7 Android tablet. (Naturally, it uses Intel tech.) This is particularly cool to me; it’s my second modern Android tablet. (The first one I still need to blog about. Coming soon!)
There was a major focus on Intel RealSense technology this year. One notable example was an “infinite runner”-style video game on a cell phone connected to a large television. The cameras on the cell phone detected the player’s position in front of the television and used that data to control the player’s character: dodging left or right made your character dodge left or right on screen. It was impressive, especially considering that it was running on a cell phone, although the lag was noticeable. (The Intel employee had to swap out cell phones because the first one was getting too hot!)
There were also robots. Terrifying robots. A terrifying six-legged, four-foot-tall arachnid robot, plus a small army of its smaller brethren doing a synchronized dance. It was creepy, but in an awesome way. Then there was the Savioke Relay. If this is what the robot uprising looks like, I have bad news: we’re all doomed, because its cuteness is disarming. They gave it eyes. They gave it happy little digital eyes that somehow manage to convey more expression than certain people I know.
Don’t worry though, it has one weakness: carpet. Whoever did the interior decoration this year at Moscone put in some absurdly thick carpeting in places, and it was giving humans trouble. Now imagine a little top-heavy robot scooting around on wheels. Poor Savioke tried crossing from linoleum to carpet at full speed. Fortunately, he didn’t tip, and his sensors made him stop and proceed at a slower clip.
If Savioke is the next Skynet, I for one welcome our future robot overlords.
Update: After about a week, HughesNet pushed out a potential fix to my specific modem. Although I don’t know exactly what bits got flipped, the problem is now solved, and the fix will be pushed out to other subscribers’ modems in the future. Thank you, HughesNet!
This post is for the benefit of HughesNet tech support/engineering personnel. These are my troubleshooting notes.
Large (>1MB) downloads have been failing for me on certain URLs; smaller downloads occasionally fail as well. HTTPS fails more often than HTTP. This has been happening for at least a month, and other users are reporting the same issue.
Files that usually fail to download:
A third, which occasionally produces the problem, is an HTTP download:
Occasionally, GitHub repos will fail to fetch for the same reason. This is particularly annoying because Git offers no way to resume a partially-fetched repository.
This Git repo is known to do so when cloned using HTTPS or using the Git protocol:
Troubleshooting steps taken:
- Disable/Enable Web Acceleration (no change in HTTPS downloads, didn’t test HTTP download with Web Acceleration yet).
- Several modem reboots (soft reboots from the control panel and hard reboots by pulling the plug from the wall and waiting several minutes).
- Several router reboots (soft reboots from the control panel and hard reboots by pulling the plug from the wall and waiting several minutes).
- Router bypass (connecting a laptop directly to the modem).
- Cable swaps (replacing modem LAN cable with a known good one).
Problematic operating systems and browsers:
- Windows 7 (IE 11, Firefox, Chrome)
- Windows 8.1 (IE 11, Firefox, Chrome)
- Ubuntu Linux 14.04 (Firefox, wget)
- Ubuntu Linux 12.04 (wget)
- SuSE Linux 12.3 (wget, and yeah yeah, I know, 12.3 isn’t getting security updates anymore…)
IPv4 and IPv6 connections are both susceptible to failure.
A number of people on the HughesNet support forum have also reported issues with these files. Several of them are on HughesNet’s AMA IP gateway; another is on the Tucson, Arizona gateway. The HughesNet Community support thread is here.
The HTTPS downloads fail whether Web Acceleration is enabled or not.
Packet Captures (Wireshark format):
HughesNet alters TCP packets mid-flight. :( You can see this in the packet captures above. Something between my client and the server has taken it upon itself to act as a TCP endpoint. Not nice.
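One quick heuristic for spotting this in a capture (my own rule of thumb, not a rigorous test): a single host should stamp every packet of one connection with the same IP TTL, so if “the server’s” packets arrive with two distinct TTLs, two different devices are generating them.

```python
def multiple_senders(server_ttls):
    """Return True if the server-side packets of one TCP connection
    arrived with more than one distinct IP TTL -- a hint that a
    middlebox is terminating the connection and forging packets.

    server_ttls: the IP TTL field from each packet claiming to come
    from the server, in capture order.
    """
    return len(set(server_ttls)) > 1
```

In my captures, the SYN-ACK and the data segments showed exactly this kind of inconsistency.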
I’m not the biggest fan of social networking, but I’ve gotten into using my Twitter account recently. I’ve found a lot of great programming and security resources the past few days via my Twitter feed.
If you want to get ahold of me fast these days, Twitter is the way to do it.
I retired my old router a few days ago. (I might’ve cried.) For the past few years my router was a computer powered by a 900MHz Celeron CPU (512MB RAM, 2x gigabit PCI-X NICs, 2x 100Mbit NICs), and it worked great. It used a Compact Flash card as its hard drive via a CF->IDE adapter and could push 300Mbit/s across any two interfaces. It ran CentOS 6 and used packages like dnsmasq to provide various services on the network: dhcp/dhcpv6, dns, ntp, etc. I had built the router as a project to learn about Linux and networking, and I had chosen that old computer because it had a serial port for a 28.8k modem.
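For reference, the dnsmasq side of that setup needs only a handful of lines. Something along these lines (interface names, addresses, and ranges here are placeholders, not my actual config):

```
# /etc/dnsmasq.conf (sketch -- names and ranges are placeholders)
interface=eth1                              # LAN-facing interface
dhcp-range=192.168.1.100,192.168.1.200,12h  # IPv4 DHCP pool
dhcp-option=option:router,192.168.1.1       # default gateway to hand out
dhcp-option=option:ntp-server,192.168.1.1   # advertise the local NTP server
enable-ra                                   # IPv6 router advertisements
server=8.8.8.8                              # upstream DNS resolver
```

One daemon covering DHCP, DNS, and service advertisement is a big part of why dnsmasq is so popular on small routers.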
The problem was, it used a lot of electricity for what it did; my Kill-A-Watt meter reported about 45W idle and 65W when under full load, compared to 5W-10W for modern routers. So I found a Cisco-Linksys WRT160Nv3 router for $5 at a thrift store and slapped OpenWRT on it.
The WRT160Nv3 is at least mostly supported by OpenWRT; I built my own custom image for it, as I did for the RouterBoard RB411 a while back. I’m not confident that the BCM4716B0 Wireless N radio is fully supported: there are several drivers for Broadcom chipsets, and since I only needed G speeds (for a guest network), I selected the open-source b43 driver over the proprietary Broadcom wireless driver, out of principle. The Broadcom driver likely supports N speeds. There is a third driver (brcm80211) that I didn’t test; you might have more luck with it than I did. You can switch between all of these drivers in the OpenWRT menuconfig system.
I also added some goodies to the image: iftop, tcpdump, and LuCI (the OpenWRT web UI, which has spoiled me rotten). Building them into the image instead of installing them as packages saves space, which is important because this router has only 4MB of flash. I shaved off more space by disabling kernel debug symbols and removing a few other things, including opkg and ppp support.
Flashing was a minor challenge because the bootloader’s handling of TFTP flashing is broken. (This makes bricking a little easier, so be careful!) If flashing OpenWRT from the stock web UI doesn’t work, flashing DD-WRT onto the device and then flashing OpenWRT via DD-WRT’s UI will.
After installing, I set up each port on its own VLAN, one for each subnet, and configured each one so the device would be a drop-in replacement for the Celeron box. It took a day or so of tweaking but I’m finally happy with the configuration. I even have a spare port to spin up whenever I want, although currently it’s disabled.
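On the swconfig-era OpenWRT this router runs, a per-port VLAN looks roughly like this in /etc/config/network. The switch name, port numbers, and addresses vary by device, so treat these values as illustrative rather than my exact config:

```
# /etc/config/network (fragment; values are illustrative)
config switch_vlan
	option device 'switch0'
	option vlan '1'
	option ports '0 8t'        # port 0 untagged, CPU port tagged

config interface 'lan'
	option ifname 'eth0.1'     # VLAN 1 as seen by the CPU
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'
```

One `switch_vlan`/`interface` pair per port gives each physical port its own subnet, which is what makes the drop-in replacement work.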
So how well does it work?
The wireless connection is stable; I can push 16-20Mbit/s through it without issues, which is decent for Wireless G. The device can push about 50-60Mbit/s tops between interfaces before its little 300MHz CPU gets saturated. Considering that my Internet connection is 10Mbit/s max, and the things the other ports are connected to top out at about 40-50Mbit/s, it works out. For the price and the amount of electricity the device uses, I couldn’t be happier.
There are two caveats.
One: whoever designed the case probably didn’t take a course on thermodynamics. There are actual vents, which is nice (two of my older routers have nothing more than tiny slots), but they’re around the edge of a plastic dome that collects heat at the top. Oops! I wish I still had my Dremel tool. It hasn’t overheated yet, but just in case I raised it up on two mechanical pencils to allow more airflow.
Two: it’s silent. That’s usually a good thing, except the Celeron router wasn’t; it had a single moving part (the power supply fan) that made a comforting whisper 24/7. I’ve had that sound in my ear for the past 4 years whenever I sleep. Now I have to get used to barking dogs and the garbage truck going past at 3AM–or find something else to make an equivalently soft noise. Preferably something that doesn’t draw 45W of power constantly!
Every semester, some of my students come up to me and ask questions along the lines of, “How can I learn more about networks?” or “I’ve heard Ruby is cool, where should I start?” or “I feel like the class is just scratching the surface, how should I learn more?” If you’re one of these students, this post is for you!
A developer at IBM Watson Innovation Labs, Iheanyi Ekechukwu, recently wrote a great article intended for incoming Computer Science students at Notre Dame. It discusses the ways computer science students can expand their knowledge and build confidence in their skills as developers. He has excellent, concrete suggestions; you can (and should) read his article here.
The key? Learning outside of the classroom whenever possible. If you’re asking questions like those at the top of the article, the curriculum isn’t going to give you enough.
Now, don’t get me wrong, the computer science program at CSUS has its strengths. We have a strong set of electives in computer graphics, video game architecture, and artificial intelligence. Our capstone Senior Project imparts valuable experience in software engineering. But there is so much more to computer science than what is covered in our degree, and no four-year computer science program can possibly hope to do more than scratch the surface. If you want to learn more, you need to look beyond the classroom, and that’s exactly what Ekechukwu’s article describes. (Go read it!)
Learning outside the classroom can be challenging. It takes discipline and time, but it’s also great fun when you find a project or subject you enjoy. The experience you’ll get will far surpass what you’ll learn in class. Most of what I’ve learned I gathered from projects outside of classes, from building my own Linux-based router (to share a dialup modem connection–yeah, that was an experience) to writing a PHP/MySQL web app on my first job (and last I heard, they’re still using it).
So, to my students: I highly suggest reading Ekechukwu’s article. If you need ideas for side projects, drop me an email or visit during office hours; we can find a project that kindles your interests.
I cleaned up the Python source code for the Intel Galileo temperature sensor I blogged about earlier this summer, and pushed the source up to my GitHub account. The code is very brute-force and make-do (it was written in a very short period of time during a very crazy semester and hasn’t been touched since) but I think it will be more useful online than sitting on my drives.
The temperature sensor is a TMP36 analog temperature sensor. I followed the directions at Adafruit to assemble a circuit and hook it up to my Galileo Board 2. I used the same circuit diagram as shown–the Galileo Board series and the Arduino shown are electrically compatible–although I added a 0.1µF capacitor between VCC and GND, as suggested by a commenter at SparkFun and the TMP36 datasheet, to cut down on noise. (No, I’m not sure this is necessary, but the capacitor was cheap and the resulting measurements seemed accurate to within a degree or so. Maybe I should take some EEE classes next semester to find out what it’s actually doing?)
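The conversion from a raw analog reading to a temperature is simple: the TMP36 outputs 750 mV at 25 °C with a 10 mV/°C slope. A sketch of that math, assuming a 10-bit ADC with a 5 V reference (check your board’s actual resolution and reference voltage):

```python
def tmp36_celsius(adc_count, adc_max=1023, vref=5.0):
    """Convert a raw TMP36 ADC reading to degrees Celsius.

    TMP36 transfer function: 0.5 V offset, 10 mV per degree,
    so temperature = (voltage - 0.5) * 100.  The defaults for
    adc_max and vref assume a 10-bit ADC and a 5 V reference.
    """
    voltage = adc_count * vref / adc_max
    return (voltage - 0.5) * 100.0
```

For example, a raw reading of 153 corresponds to roughly 0.75 V, or about room temperature.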
The daemon itself is written in Python 2 and runs under Yocto Linux (the Intel-provided full-sized Linux SD card image for the Galileo Board 2). An LSB init script does a little configuration in /proc to give access to the analog input pins and then starts the daemon. The daemon listens on port 8080 for HTTP requests and serves up temperature data for the past 15 minutes, the past hour, the past day, and the past week, returning the data as a JSON feed that can be graphed by a library like MetricsGraphics.
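The full daemon is on GitHub; the core rolling-window idea can be sketched like this (the names here are mine, not the daemon’s):

```python
import json
import time
from collections import deque


class TemperatureLog:
    """Keep (timestamp, reading) pairs and serve time windows as JSON.

    A minimal sketch of the rolling-window idea only; the real daemon
    also handles HTTP, sampling, and multiple simultaneous windows.
    """

    def __init__(self, max_age=7 * 24 * 3600):
        self.samples = deque()
        self.max_age = max_age  # largest window: one week

    def record(self, celsius, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, celsius))
        # Discard anything older than the largest window.
        while self.samples and now - self.samples[0][0] > self.max_age:
            self.samples.popleft()

    def window_json(self, seconds, now=None):
        now = time.time() if now is None else now
        recent = [(t, c) for (t, c) in self.samples if now - t <= seconds]
        return json.dumps([{"time": t, "temp_c": c} for (t, c) in recent])
```

Serving the 15-minute feed is then just `window_json(15 * 60)`, and the day and week feeds are the same call with bigger windows.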
Instructions for setting up the daemon are on the GitHub page. You’ll need the Galileo SD Card Image to install Yocto Linux. The newest version, 1.0.4 at the time of writing, is much smaller than the version I have. You may need to go to the old downloads page and download version 1.0.3 if the newest version does not have Python available.
In case you missed it, the feeds from the project can be seen in graph form here.
For my next Unvanquished map, I’m doing something different, something more challenging than yet another space station that starts with the letter “S”. (No, making two of them wasn’t intentional…it just happened.) This map is a dystopian, futuristic urban center, something that hasn’t been covered yet in Unvanquished and wasn’t very common in its spiritual predecessor, Tremulous.
I call it “Freeway”. Because something something originality something.
The map is inspired by the Detroit city hub in Deus Ex: Human Revolution, which is one of my favorite games because it allows you to be sneaky and non-lethal. (Except in the parts where it doesn’t–the boss battles! Argh!) I’ve always liked skyscraper architecture, and my love-hate relationship with real-life cities is swinging more and more towards “love”, so a city was the natural choice for a new Unvanquished map.
The theme isn’t the only new thing here. I’m also doing something different development-wise. Most early Tremulous maps (including mine) were developed in heavy secrecy, which was great for authors that were worried that their ideas would be stolen but bad for actually making a releasable map. It was very common for a new mapper to show up on the forums, post some enticing screenshots, and then disappear before a build was ever uploaded. In addition, many authors (cough not me I swear cough) went months between updates. That’s a long time to go without gameplay feedback from the community; often the feedback was provided by a few select beta testers–a very small number of eyes.
Worse, most mappers only packaged the compiled map and not the original, editable .map file, which meant that updating the map to fix bugs or to take advantage of new engine capabilities became next to impossible over time, as the mappers lost access to the original files. We lost access to a lot of good Tremulous maps that way. There are very fun maps out there that have no source available, which means we can’t update them to work with the new game engine.
Unvanquished mappers have gotten pretty good about packaging the source along with the compiled map (the source file compresses well, being plain text, so there’s no technical reason not to), but I’d like to tackle the first problem too. Unvanquished is an open-source game and I’m one of its developers; it should follow that my maps should be developed openly as well. I plan to garner feedback from the Unvanquished developers and community in an iterative, prototypical fashion. I also plan to keep old versions of the map so people can go back and see the progress it’s made from its inception to its current form.
We programmers have a tool that’s great for this sort of thing: Git! The parts of a map that change most often are plain text (the shaders and the map source itself); the binary assets (textures, sounds) usually don’t change once they are added to the project, making Git an ideal choice for managing different versions of an Unvanquished map.
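The one wrinkle is telling Git which assets are binary so it never tries to diff or merge them. A .gitattributes sketch along these lines works (the extensions are examples; use whatever formats your map actually ships):

```
# .gitattributes (sketch -- extensions are examples)
# Map source and shader scripts are plain text: line-based diffs work.
*.map    text
*.shader text
# Textures and sounds are binary: store them as-is, never merge.
*.tga    binary
*.jpg    binary
*.ogg    binary
```

This keeps diffs readable for the files that actually change and prevents Git from mangling the ones that don’t.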
I’ve set up a repository on my GitHub account here. You can browse the Wiki for development updates and screenshots, which are contained in a separate branch. Viewing the screenshots from the earliest ones onward provides an inside look at my map development process, starting with the broad-stroked “sketch” of the main arena to where it is currently. There’s even an alpha to download, although it’s primitive at the moment. (Only the outdoor arena has any semblance of completion.)
I hope to see more Unvanquished mappers using an open development model like this.
My household uses surprisingly little electricity during the summer, probably because we don’t run the AC much. Air conditioning is expensive, especially with an elderly outdoor furnace/AC unit that was manufactured before Facebook was a thing. Without the AC’s 3.5kW draw, our electric bill stays nice and low most months–except when it doesn’t, seemingly without rhyme or reason.
While investigating the months of higher electricity usage, I noticed that the house draws about 420W on average–at night. I was surprised that our baseline usage was that high, so I bought myself a Kill-A-Watt meter to see what was going on. The device is dirt simple. Suppose you have a thing, and you’d like to measure its electricity draw. You plug the meter into a wall socket. You plug the thing into the meter. You push buttons to switch between amperage, wattage, overall power consumption (in kilowatt-hours), and time since the meter was plugged into the wall. Bam. Profit.
With the last two items, you can estimate the average electricity draw over a long period of time. Dividing the total consumption (in kilowatt-hours) by the time the equipment has been running (in hours) nets you the average electrical draw over that time period (in kilowatts). This is useful for devices with intermittent load, like refrigerators.

After some unscientific experiments, I got some numbers on instantaneous power draw from my old hardware:
- LCD monitors use about 30W apiece when on.
- Wireless routers use 3-5W of electricity when idle.
- Wired router (a Celeron desktop) uses 45W idle and 65W when under full CPU load.
- T43 laptop uses 5W in standby (not charging), 25W idle, and 50W under load.
- Desktops use about 5W when off/standby.
- i5 + nVidia GeForce 650TI box uses 45W idle and 90W playing Unvanquished.
- i7 + AMD Radeon HD 6950 box uses 99W idle (!), 240-260W playing Deus Ex: Human Revolution, and 300W (!!) playing BioShock Infinite.
- Core 2 Quad server with four hard drives uses 150W of electricity.
- Acer h340 NAS uses 56W under full load with the case fan at 100% and all four hard drives spinning.
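The averaging arithmetic described above (total kWh divided by elapsed hours) fits in one line; for instance, 12 kWh accumulated over 48 hours works out to a 250 W average:

```python
def average_draw_watts(kwh_total, hours):
    """Average power over a metering period.

    kWh divided by hours gives kilowatts; scale by 1000 for watts.
    """
    return kwh_total / hours * 1000.0
```

That average is usually more useful than any instantaneous reading for intermittent loads like refrigerators.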
Immediately I spotted some things that were using far more electricity than I had assumed. The Core 2 Quad box is one; in retrospect, the waves of heat pouring out the back of the case should’ve been a hint. I took its four hard drives and put them in the NAS, which draws a third of the power, and while its CPU is sloooooow, it does what I need it to do, so I can accept the loss in performance. The wired router is another device that could be replaced for savings. Finally, LCD monitors use more electricity than I realized; I’m making a concerted effort to turn them off when I’m not around and to configure their power-saving modes more aggressively.
I’m going to go back and measure every device I have using a more rigorous method, noting down phantom power (draw when off), standby power (when applicable), idle draw, and loaded draw (under various CPU, hard drive, and GPU workloads). Eventually I’ll get a table of hardware set up here with the various figures.