Category Archives: PC

Rise of the Centaur (2015 Documentary)

I recently happened to come across a documentary/movie on CPU manufacturer VIA (or rather, Centaur Technology), Rise of the Centaur.  Although it’s been out for a few months, I can’t find much mention of it across the internet, and it doesn’t even have an entry on IMDb.

Having some interest in CPU micro-architectures, and being in the mood for playing the sacrificial ‘beta tester’, I decided that I’d give it a try.  The movie is available on Vimeo for US$3 (rent for a week) or US$15 (“own”).  As I don’t particularly like streaming video, I went with the latter option.  US$15 is a bit more than I’d comfortably pay for a movie, especially one I can’t find feedback for, but it’s ultimately quite a reasonable amount.

CPU design is very much a technical topic, so I expected the documentary to avoid most of the low level details, to be palatable for a non-technical audience.  Then again, VIA CPUs aren’t particularly well known, even by those who assemble PCs, so maybe they target a slightly more technical crowd?

VIA x86 CPUs

VIA is perhaps the third (?) largest x86 CPU manufacturer, after the better-known Intel and AMD.  They’ve actually been around for a while and have produced a number of x86 chips over the years.  They’ve tended to focus on their own little niche (at least in the past 10 years or so): small form factor, low power designs.  But now, with both Intel and AMD spending big R&D bucks on the same thing, one does wonder how relevant VIA can stay in the area.

A number of x86 manufacturers have existed in the past, most of them now defunct, but VIA is an interesting exception.  Being able to stay in the game for so long, competing with companies many times its size, is certainly no small feat.  In fact, they introduced the Eden X4 CPU only a few months ago, so they’re definitely still alive and kicking.

From a technical standpoint, VIA’s recent CPUs seem fine.  I’ve toyed around with a VIA Nano CPU (released in 2009) before, which easily outperformed the Intel Atom (1st generation) at the time (though using more power).  But both Intel and AMD have started pouring resources into low power designs, making significant improvements over the last few years.  Intel’s Silvermont µarch (aka “Bay Trail” Atom) is a significant improvement over the old Atom (Bonnell µarch).  I haven’t been able to find many benchmarks on the recently released Eden X4, but it seems to be comparable to Silvermont.

Whilst perhaps technically sound, pricing on the VIA CPUs seems unfortunate.  Their CPU+motherboard combo starts at US$355, whilst a comparable Intel J1900 solution can be had for US$60.  Understandably, there are issues with economies of scale, in particular, high fixed costs.  Most of the cost in CPU manufacturing probably goes into R&D, a large fixed cost that Intel/AMD can amortize across larger volumes far more easily than VIA can.  Going up against this sort of natural monopoly highlights one of the major challenges for a small CPU manufacturer.  The Eden X4 may have some redeeming points in some niches, e.g. support for more RAM or for AVX2 instructions (no AMD CPU supports AVX2 yet!), but ultimately, I’d imagine this is a small target market.

With all that in mind, it would be somewhat interesting to see what plans VIA has to deal with these challenges, and how they intend to stay relevant in the market.

The Documentary

Rise of the Centaur gives an overview of the history of Centaur Technology (CT), the company itself and its aims/goals, along with some overview of the CPU design process, mostly delivered via staff interviews.  It shows some of the struggles CT faces, how they solve issues and how they go about processes such as testing, as well as an inevitable dose of corporate propaganda (from their HR recruitment policy to the notion of the little guy facing the multi-billion dollar Intel).  It’s probably what one would expect, more or less, from such a documentary.

Whilst it gives context to allow less technically knowledgeable viewers to understand the general idea, I think having some technical interest, particularly in CPUs, is definitely beneficial when watching the film.

Unfortunately, the film doesn’t explore much of CT’s future (or even current) plans, nor does it mention anything about its customers (i.e. who buys VIA chips?) or how successful their CPUs are (e.g. market share, performance, comparisons with current Intel/AMD chips) – something that’d be of interest, I’d think.  As far as plans go, I suppose some of this may be trade secret, and perhaps a small company has a lot more scope for suddenly changing direction.  But the original goal of CT making low-cost x86 CPUs no longer seems to hold, considering their relative prices, so I’d have liked a comment on that at least.

Overall it seems to be a decent documentary, providing a rare insight into what goes on inside a small CPU manufacturing firm competing with the likes of Intel.  It does lack some details that I would have liked addressed, though their selection of information is fair.  The documentary isn’t for everyone, but I think it does a fair job at targeting a reasonable chunk of the tech-minded audience.  As a software developer, it’s interesting to peek at what goes on on the other side, albeit just a peek without much technical depth.

TLS/SSL CPU Usage for Large Transfers

There have been many claims around the internet of SSL/TLS adding negligible CPU overhead (note: I’m only considering HTTPS here).  Most of these focus on many small transfers, typical of most websites, where performance is dominated by the handshake rather than the message encryption.  Large transfers over SSL may be less typical today, but as HTTPS becomes more pervasive, we’re likely to see them happen more often.

However, after seeing a CPU core get maxed out during a large upload*, I was interested in the performance impact of single large transfers.  Presumably any reasonable CPU should be fast enough to serve content to typical internet clients, even on a 1Gbps line, but how close are they to this?
* As it turns out, the reason for this was actually SSL/TLS compression, so if you’re reading this after seeing high CPU usage during SSL transfer and the figures don’t match empirical speeds, check that it isn’t enabled!

So I decided to run a few (relatively unscientific) tests on a few dedicated servers I happen to have access to at the moment.  The test is fairly simple – create a 1GB file and measure CPU usage over SSL.  Note that I’ll be measuring the usage of the client rather than the server, since the latter is a little more difficult to perform – presumably the client should give a decent ballpark of the CPU usage of the server.

Test Setup

The 1GB file was created using dd if=/dev/zero of=1g bs=1M count=1024

This file was served by nginx 1.4/1.6 on Debian 7. SSLv3 was disabled, as it seems to be out of favour these days, so the test is only over TLS.  I tested various cipher suites using the ssl_ciphers directive:

  • No SSL: just as a baseline (transfer over HTTP)
  • NULL-MD5: another baseline
  • ECDHE-RSA-AES256-GCM-SHA384: labelled “Default”, this seems to be the preferred cipher if you don’t give nginx an ssl_ciphers directive
  • RC4-MD5: clients may not accept this, but perhaps the fastest crypto/hashing combo that might be accepted (unless the CPU supports crypto h/w accel)
  • AES128-SHA: probably the fastest cipher likely accepted by clients
  • ECDHE-RSA-AES128-GCM-SHA256: labelled “AES128-GCM” (no-one has space to fit that in a table; oh, and why does this WordPress theme have a limited column width?!); this is likely just a faster version of Default
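For reference, forcing one of the suites above is a one-line change via the ssl_ciphers directive; a minimal sketch of the relevant nginx server block (certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert.pem;    # placeholder paths
    ssl_certificate_key /etc/nginx/cert.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;  # SSLv3 disabled
    ssl_ciphers         AES128-SHA;             # cipher suite under test
}
```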

The following commands were used for testing:

  • CPU benchmark:
    openssl speed [-decrypt] -evp [algorithm]
  • Wget download:
    time wget --no-check-certificate https://localhost/1g -O /dev/null
  • cURL download:
    time curl -k https://localhost/1g > /dev/null
  • cURL upload:
    time curl -kF file=@1g https://localhost/null.php > /dev/null

For checking CPU speed, the ‘user time’ measurement was taken from the time command.  I suspect wget uses GnuTLS whilst cURL uses OpenSSL for handling SSL.

I ran the test on 4 rather different CPUs:

  • VIA Nano U2250
    • Note that this is a single core CPU, so transfer speeds will be affected by the webserver performing encryption whilst the client does decryption on the same core
    • OpenSSL was patched to support VIA Padlock (hardware accelerated AES/SHA1/SHA256)
  • AMD Athlon II X2 240
  • Intel Xeon E3 1246v3
  • Marvell Armada 370/XP
    • A quad core ARMv7 CPU; quite a weak CPU, perhaps comparable to a Pentium III in terms of performance

CPU Benchmark

To get an idea of the speed of each CPU, I ran some hashing/encryption benchmarks using OpenSSL’s speed test.  The following figures are in MB/s, taken from the 8192-byte block size column.  CPUs across the top, ciphers down the side.

                        Nano    Athlon     Xeon   Armada
RC4                    235.80   514.05   943.37    98.84
MD5                    289.68   551.29   755.16   141.54
AES-128-CBC            899.14   227.45   854.48    50.05
AES-128-CBC (decrypt)  899.56   218.91  4871.77    48.61
AES-256-CBC            693.24   159.82   615.08    37.95
AES-256-CBC (decrypt)  696.25   162.48  3655.11    38.14
AES-128-GCM             51.38    68.63  1881.33    24.37
AES-256-GCM             41.61    51.48  1642.22    21.22
SHA1                   459.06   413.71   881.87   105.54
SHA256                 396.90   178.01   296.98    52.73
SHA512                 100.98   277.43   464.86    24.42

(decryption for RC4 and AES-GCM is likely the same as encryption, them being stream(-like) ciphers and all)

Test Results


  • wget doesn’t seem to like NULL-MD5 or “AES128-GCM”
  • Columns:
    • Transfer (MB/s): download/upload speed, probably not useful, but may be interesting
    • CPU Speed (MB/s): = 1024MB ÷ (user) CPU Time (s)
  • I’ve included pretty graphs for management type people who can’t read tables; the speed is log scale though, so stay awake!
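As a concrete example of the CPU Speed column, here’s the calculation for a hypothetical run (the 7.07s user time figure is made up for illustration):

```shell
# CPU Speed (MB/s) = file size (MB) / user CPU time (s)
# e.g. `time` reports 7.07s of user time for the 1024MB transfer:
awk 'BEGIN { printf "%.2f MB/s\n", 1024 / 7.07 }'
```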

VIA Nano U2250

             Wget download        cURL download        cURL upload
Cipher       Transfer  CPU Speed  Transfer  CPU Speed  Transfer  CPU Speed
No SSL        495      4129.03     457     1580.25      55.9    1706.67
NULL-MD5        –            –      57.1    144.88      40.4     155.06
Default        14.3      16.95      17.7     37.83      15.5      37.61
RC4-MD5        29.7      46.65      44      103.18      32       106.00
AES128-SHA     19.2      23.62      48.9     96.49      37.7     145.95
AES128-GCM      –            –      21       45.55      18.1      45.41

Speed Graph

AMD Athlon II X2 240

             Wget download        cURL download        cURL upload
Cipher       Transfer  CPU Speed  Transfer  CPU Speed  Transfer  CPU Speed
No SSL       1782     10240.00    1975    12800.00     404     13473.68
NULL-MD5        –            –     308      438.36     211       416.94
Default        40.7      41.07     46.9      49.55      43.1       49.35
RC4-MD5        86.6      88.43    263       346.88     189        340.43
AES128-SHA     59.1      60.00    118       127.11      98        130.15
AES128-GCM      –            –     55.8      65.56      56.7       64.65

Speed Graph

Intel Xeon E3 1246v3

             Wget download        cURL download        cURL upload
Cipher       Transfer  CPU Speed  Transfer  CPU Speed  Transfer  CPU Speed
No SSL       4854     32000.00    5970    32000.00    1363     51200.00
NULL-MD5        –            –     556      638.40     452       677.25
Default        88.6      88.55    997      1312.82     699      1422.22
RC4-MD5       182       185.91    514       587.16     420       587.16
AES128-SHA    128       128.00    556       643.22     449       664.94
AES128-GCM      –            –    1102     1497.08     723      1641.03

Speed Graph

Marvell Armada 370/XP

             Wget download        cURL download        cURL upload
Cipher       Transfer  CPU Speed  Transfer  CPU Speed  Transfer  CPU Speed
No SSL        223       882.76    182       544.68      44.4     403.15
NULL-MD5        –            –     44.3      62.48      25.7      60.24
Default         7.01       7.23    16        18.52      13.7      18.56
RC4-MD5        20.5       22.14    32.6      41.90      23.1      41.80
AES128-SHA      9.16       9.62    21.5      24.11      16.2      23.63
AES128-GCM      –            –     17.5      20.15      14.8      20.15

Speed Graph


On slower CPUs (okay, I’ll ignore the Armada here), it does appear that SSL can have a significant impact on CPU usage for single large transfers.  Even on an Athlon II, the effect can be noticeable if you’re transferring at 1Gbps – whilst the CPU can achieve it, if you’re using the CPU significantly for other purposes, you may find it to be a bottleneck.  On modern CPUs (especially those with AES-NI), though, the impact is relatively low, unless you’re looking at saturating a 10Gbps connection on a single connection (or CPU core).  It may be lower still once Intel’s Skylake platform comes out – though I suspect GCM modes will still be faster, it may help clients that don’t support TLS 1.2.

Cipher selection can be quite important in making things fast if you’re on a slower CPU, although most of the time it’s AES crypto with a choice of SHA/GCM for integrity checking, if you want client support.

The crypto library likely has a noticeable effect, but this wasn’t particularly tested (it appears that OpenSSL is usually faster than GnuTLS, but this is only a very rough guess from the results).

Oh and AES-GCM is ridiculously fast on CPUs with AES-NI.

Stuff I left out

  • Triple DES ciphers: cause they’re slow, and caring about IE6 isn’t popular any more
  • AES crypto without AES-NI on modern Intel CPUs: this is perhaps only useful for those running inside VMs that don’t pass on the capability to guests, but I cbf testing this
  • Ads throughout the post

Loud Seagate Drives

I don’t think Seagate has particularly had a good reputation for the noise of their hard drives, but I’m going to tell my story nonetheless.

A while ago, a 2TB WD Green drive of mine started developing bad sectors.  I performed a disk scan (which would’ve marked the bad sectors), fixed the corrupted files and hoped that more wouldn’t develop.  However, it wasn’t long before this inevitably occurred, so I went out and bought a 4TB Seagate to replace it.  Things were fine.

More recently, my 1.5TB WD Green drive met a similar fate (it seems that all my newer drives have been failing within a few years of use).  As it so happened, Dick Smith was running a Valentine’s Day special with a 15% discount on their hard drives, so I grabbed a 4TB Seagate Expansion drive for $170 to replace the old WD (it’s unlikely you’ll be able to find the drive at any store here for under $195, so that was an excellent price).

Plug the drive in, and you’re greeted with quite an audible powering up, followed by whirring.  At first, I thought there was a fan inside the enclosure, considering the noise and that there’s breather holes (more like a mesh) on the back, but I couldn’t feel any airflow, so concluded that the whirring is actually the drive itself.  The sound is definitely quite noticeable, louder than the rest of my PC and I can easily hear it from 3 metres away.  I have a 3TB WD Elements sitting right beside it which is almost silent – I can hear the drive when it spins up, but it’s still much quieter than this new Seagate.  Another thing that’s interesting is that, despite my internal 4TB Seagate having the same model number as the external, the internal drive seems pretty quiet; it’s possible that the case is blocking some noise, but even with it open, I can’t seem to hear the drive distinctively above the other noises in the case.

Now whilst I could just get used to the noise, I don’t really want to have to make that compromise.  On the other hand, I didn’t feel like going to the effort of returning the drive and then paying more for a replacement.  So I decided to try tweaking the drive’s AAM/APM settings to see what I could achieve.  Seagate conveniently doesn’t allow you to change the drive’s AAM (or they simply don’t support it, whatever), however APM is changeable.

Most search results on ‘Seagate’ with ‘APM’ seem to be people complaining about Seagate drives making audible noises when spinning down, where they’re looking to disable APM.  I’m a bit surprised that I can’t seem to find anyone complaining about normal operating noise of these when not even being accessed.  As I’m using this only as a backup drive, I don’t mind it spinning up only when it is actually accessed, so turning down the APM value, if it would stop the whirring, could work for me.

hdparm for Windows doesn’t seem to detect USB drives (side note: it’s interesting that they use /dev/sd[a-z] to identify drives, despite being on Windows), but I did eventually find that CrystalDiskInfo would set the APM for the drive.  Changing the default value of 128 to 16 seemed to do the trick – the drive would spin down soon after becoming idle, making it silent.  Success!

…except that the drive would reset its APM value whenever it lost power.  Urgh, what to do?

Turns out, the CrystalDiskInfo guys thought of this – the mysteriously worded “Auto AAM/APM Adaption” option basically makes CDI set the APM value of the drive every now and then (okay, it’s mentioned in the manual, but it’s not exactly easy to find).  This does mean that CDI has to stay running in the background, but as I have 16GB of RAM in this machine, I’m not too worried about that.

The drive does exhibit some “weird” behaviours (well, supposedly understandable, but still silly) – such as spinning up just before the PC goes into standby, then quickly spinning down.  Also, the Auto APM setting sometimes takes a while to kick in after resuming from hibernate.  As my backup routine is basically a scheduled disk sync, the drive spins up for short periods when this occurs, but it’s a tradeoff I’m willing to take.  One thing to note is that the drive seems to spin up on any activity, even reading SMART metadata; CDI, by default, polls the drive’s SMART info to check for issues, but it’s easy to disable automatic refreshing to avoid the drive whirring up every 30 minutes.

tl;dr if you’ve got a Seagate external, can’t stand the whirring, and don’t mind it being spun down when idle: install CrystalDiskInfo, turn down the APM, enable the auto APM setting, get CDI to load on startup and disable automatic polling of SMART info on the drive.


Side note: CrystalDiskInfo provides a “Shizuku Edition” of the application.  As I couldn’t find information on what the difference was from the Standard Edition, I ended up downloading both, curious about the ~65MB size difference.  Turns out, Shizuku is just an anthropomorphised mascot for the application, the size difference being mostly high resolution PNGs depicting her in the theme that comes packed with the Shizuku version (the ‘Ultimate’ version contains multiple copies at multiple resolutions – presumably ‘Simple’ and ‘Full’ don’t contain the higher resolution copies, although one wonders whether multiple copies were really necessary).  The devs even went to the effort of getting the character voiced, which means you get a cutesy voice warning you if your hard drive is about to die (assuming you know enough Japanese to take a stab at what’s being said).
Though despite my enjoyment of moe anime, I’m fine with the Standard Edition.  Hats off for the effort, nevertheless.

Side-side note: The voices from above were encoded using Opus, my favorite audio codec as someone interested in data compression.  Yay for not going MP3.  Now if only they could get those images down to a reasonable size…

Moved Server

Helloo~ another rare post from me!

Recently shifted my websites from my old dedicated server to this VPS server – a move which I’ve been too lazy to do for like 2 years.

The dedicated server was rather overkill for the website I’m running (originally had other plans, but didn’t follow them through) so have been paying too much for hosting for quite a while.

This new VPS is a Xen-based plan with 1GB RAM, 30GB HDD and 1.5TB/mo transfer from ChicagoVPS, using the awesome deal here.  I asked support to increase the space to 50GB, which they did for only $1.75/mo extra (awesomesauce).  They also agreed to supply a further prepayment discount if I switch to an annual billing cycle, which I plan to do soon.  I’ve been happy with speeds and I/O performance; the CPU is a Xeon X3450 (the Xeon equivalent of an i7 920), so pretty cool too.

Now the fun part: setting the thing up.  I was previously using CentOS 5 64-bit, but after using Debian, I somewhat like its setup better, so I decided on Debian 6 32-bit for this server.  As for the server stack software:

I’m running an nginx frontend proxying to an Apache backend with the PHP module.  I’ve historically had issues with CGI/FastCGI, which is why I decided to go with the more familiar Apache PHP module, although the last time I tried FastCGI was years ago.  nginx has been great, and it allows me to run a minimalist Apache, which works well for me.  I also get the advantage of accelerated proxy responses in XThreads, although I’ve removed all the big downloads I used to have, to fit in the 50GB disk space.

Unfortunately, unlike my other installs of Apache with the PHP module, Apache seemed to be leaking memory on this setup.  After tweaking a few PHP configuration variables, the leak seems to have magically gone away, though I don’t know why.  Nevertheless, I decided on a higher *SpareChildren configuration and a very low MaxRequestsPerChild to get around any possible memory leaks.  Apache itself only has 3 modules active (some configuration needed to be modified to accommodate this minimalist setup): mod_dir, mod_rewrite and mod_php5.
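The prefork settings in question look something like this (values are made up for illustration; note that under the prefork MPM the ‘spare children’ directives are actually named MinSpareServers/MaxSpareServers):

```apache
<IfModule mpm_prefork_module>
    StartServers          3
    MinSpareServers       3   # keep a few idle children ready
    MaxSpareServers       6
    MaxClients           20
    MaxRequestsPerChild  50   # recycle children early to contain any leak
</IfModule>
```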

I’ve also gotten nginx to send HTTP Expires headers, so pages will load faster (since Firefox won’t be sending conditional requests and waiting for HTTP 304 responses for static files).  Otherwise, configuring two servers is a bit more of an issue, especially with rewrite rules, but manageable.
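The Expires setup amounts to something like this in nginx (a sketch; the extension list and the 30-day lifetime are arbitrary choices):

```nginx
# Let browsers cache static files instead of revalidating (HTTP 304) each time
location ~* \.(css|js|png|jpg|gif|ico)$ {
    expires 30d;
}
```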

Database Server
Have decided to go with MariaDB instead of MySQL here.  As with MySQL, the MariaDB defaults are a bit overkill for a 1GB server, so my.cnf needs tweaking.  Unfortunately, whilst there are many MySQL tweaking articles out there, I didn’t find any for MariaDB – although the MySQL advice largely translates over, there are parts which don’t.  So configuration took a bit more time and effort to get right.

Whilst disabling InnoDB and tweaking buffers is probably enough for a standard MySQL setup which only runs MyISAM tables, MariaDB includes, and activates by default, a number of other plugins which probably need to be disabled (such as PBXT).  Aria, being the new internally used storage engine, cannot be disabled, and you need to remember to tweak down its default buffer size in addition to the MyISAM buffers.
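As an illustration, the sort of my.cnf trimming discussed here might look like the following (the variable names are real MariaDB/MySQL settings, but the values are arbitrary and your build’s plugin set may differ):

```ini
# my.cnf fragment for a small MyISAM/Aria-only box (illustrative values)
[mysqld]
skip-innodb                          # InnoDB not needed for MyISAM/Aria tables
key_buffer_size            = 32M     # MyISAM index buffer, down from default
aria_pagecache_buffer_size = 32M     # Aria has its own buffer (default 128M)
query_cache_size           = 8M
max_connections            = 30
```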

Speaking of Aria, I decided to switch all my tables to Aria format as it’s essentially an improved version of MyISAM anyway.  Everything seems smooth sailing so far.

As for database backups, I’ve decided to move away from the mysqldump command I’ve been using for so long.  Although I’d disabled table locking when dumping, so that the website didn’t lock up for 15 minutes during the dump, I’m not sure how appropriate that really is, not to mention that it seems like a lot of unnecessary load.  Considering alternatives, there seem to be only two: mysqlhotcopy, or a replicated slave which I can run mysqldump on.  The latter requires more configuration, so I’m considering the former.  However, mysqlhotcopy seems to lock all tables being dumped, which means the site locks up for about 30 seconds whilst the database gets copied.  I’m not really worried about the downtime, but the fact that requests queue up on the server and quickly chew through RAM is something I do have to take into consideration.  As the mybb_posts table will obviously be the one taking the longest, and locking it will only really affect new posts, it seems better to lock and copy individual tables, which will probably mean writing my own script (or calling mysqlhotcopy a few times).  There’s a slight possibility of data desynchronisation between tables, without referential integrity, but I’d presume this is somewhat rare.  Besides, if this really is an issue, it’s possible to group commonly used tables together.
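A per-table copy loop like the one contemplated above could be sketched as follows.  This is a dry-run sketch: the table names and backup destination are made up, and the generated commands are only printed rather than executed.

```shell
#!/bin/sh
# Generate one mysqlhotcopy invocation per table, so each table is only
# locked for the duration of its own copy (names are hypothetical).
DB="mybb"
DEST="/backup/mysql"

gen_cmds() {
    for t in mybb_posts mybb_threads mybb_users; do
        # mysqlhotcopy accepts a db_name./regex/ argument to select tables
        echo "mysqlhotcopy --addtodest ${DB}./^${t}\$/ ${DEST}"
    done
}

gen_cmds    # pipe into `sh` once happy with the generated commands
```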

Well the webserver/PHP and database server are the most exciting to configure since they’re the heart of website-server (trying not to say “webserver” again).  Went with postfix instead of sendmail, and the email configuration wasn’t as scary as I thought it would be.  Nothing else particularly worth mentioning otherwise…

Moving the server
Had originally planned to stagger the move.  I first moved one site over, so I could identify any issues (such as the Apache memory leak).  After that, moving everything else over went pretty quickly, even the EP database (well, I did move all the attachments over before closing down the forums; that, coupled with setting the domain’s TTL to 60 seconds, meant there wasn’t that much downtime).

Unfortunately, the EP tables were defaulting to latin1 encoding.  This caused an issue, as UTF-8 data was stored in them and the default encoding for this new server is UTF-8 – which meant hours of downtime, me staying up into the wee hours of the night repairing the encoding.  And then after I did that, I forgot to switch the users table back to textual formats (from binary fields), so no-one could actually log in.  Other bugs which I didn’t have before needed some nginx proxy tweaking, but otherwise, everything seems to be well.

Overall, the server never seems to go over 500MB RAM usage in normal situations, so I’m glad I got 1GB for plenty of headroom.  I’m also surprised at this relatively low memory usage, despite me being rather generous with buffer sizes, but I guess tweaking pays off.

Too Much Bandwidth (or maybe, just quota)

So, time for another pointless update on myself (well, I may as well post, otherwise this place would be entirely dead).

I’ve posted a number of times before about my internet connection, and you’ve probably figured that I’ll never shut up about it until something like the NBN comes along (if it ever will).  But anyway, this might be a bit of a turn.

Right now, I’m on a 1.5Mbps connection with 25GB peak downloads and 120GB off-peak (2am – 12pm) quota per month (if you’re wondering, the annoying slowdowns have since mysteriously vanished).  Exetel (my ISP) have decided to jack up prices by $10/month, so their lowest (non-shit) plan is now $50/month.  They have somewhat “compensated” by increasing quotas to 30GB + 180GB off-peak (which will become 2am – 2pm); however, I’m already finding it really difficult to use up my current quota.

I’ve looked around, but for 1.5Mbps connections, it seems there really isn’t much cheaper available (thanks to Telstra’s dominance in the area) – probably the most I could save would be $5/month which would also require bundling with a phone.  Oh well.

So, back to the issue of using up the quota.  I guess I don’t really have to, but I’ve developed this idea that I should, and despite telling myself it’s unnecessary, I’m always trying to find something to exhaust the bandwidth.  So yeah… downloading useless stuff.  It’s especially difficult for me, as I try to be conservative with bandwidth usage.  I’m really starting to run out of ideas over what I should do with the quota – perhaps I should convince myself not to bother with it (and save some electricity by not having the computer on at 2am downloading stuff).

PMPs – Why do People Ignore Compression?

One thing I notice with many portable devices is that companies sell higher capacity versions for exorbitant premiums, when flash memory really isn’t that expensive.  It seems to be less of an issue for players which include a (mini/micro)SDHC expansion slot, as you can effectively increase capacity with a cheap add-on card.

But despite this, it seems that many people really do pay these excessive premiums for this increased storage.  I sometimes do wonder how people fill up so much space, eg getting a 32GB player over a 16GB one.  Surely these people have lots of videos and music, probably more than they need, and obviously, a higher capacity player allows them to carry more on the same device.

Whilst this is fine for the majority who aren’t so technically inclined, I do wonder about the people who are more technically inclined, and how they overlook the other side of the equation.  For example:

Amount of music that can be stored = Storage capacity ÷ per song size

Now we want to be able to store more music (again, even if it’s a lot more than we need), but the general approach of simply upping storage capacity is only one part of the equation – most people, even more technically inclined people, seem to ignore the fact that you can also store more stuff by reducing the file sizes of media!

Admittedly, compressing stuff can take effort.  In fact, I’ve had a number of motivations that most probably never had, including the old days of me trying to fit MP3s on floppies, squish as much as I could out of my 4GB harddrive, squeeze music on a 256MB MP3 player, and packing videos onto my 1GB PSP memory stick.  However, with a bit of reading, it’s mostly sticking your music/videos into a batch converter and then copying everything across.  It’s slightly less convenient when you add stuff (you probably need to pass these through a converter too), though, personally, I’m used to doing this, so I don’t mind.

But does compression really yield much benefit?  From what I’ve seen, I’d say so.  It seems most people just dump their 128/192/256/320kbps MP3s (usually 320kbps, as this is a popular size in P2P) on the device, and that’s all they care about.  From the fact that most people cannot tell defects in 128kbps MP3s (let’s just say it’s LAME encoded), and my own listening tests, I’d say that most people cannot hear defects in 56-64kbps HE-AAC (encoded with NeroAAC).  Support for this format is limited (due to the difficulty of implementing SBR on embedded devices), though I believe Rockbox supports it, along with the latest iDevices (pre-late-2009 models do not support HE-AAC).  Next in line would be 80-96kbps Ogg Vorbis, if your player supports it.  In fact, I cannot personally hear defects in 128kbps Vorbis, so even audiophiles could get a big space saving by using higher bitrate Vorbis.  But support for Vorbis is surprisingly low, considering that it’s a royalty free codec.

For an audio format with a fair bit of support, there’s LC-AAC (aka “AAC”), which achieves similar quality to 128kbps MP3 at around 96-112kbps (using NeroAAC or iTunes).  Failing that, using LAME to encode MP3s with a variable bitrate can yield decent quality with average bitrates around 112kbps.

Now if we assume that the average song is a 320kbps MP3 and the listener really can’t hear defects in 128kbps MP3s, and the underlying player supports HE-AAC, we could get a massive 320-56 = 264kbps saving (82.5% smaller!) by being a bit smarter in storing our music.  This equates to being able to store over 5 times more music in the same amount of space.  But of course, this is an optimal situation, and may not always work.  Even if we’re more conservative, and say that the average MP3 is 192kbps, and the underlying player only supports LC-AAC, we can still get a 50% reduction in size by converting the 192kbps MP3 to 96kbps LC-AAC, which equates to a doubling in storage space.
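The arithmetic above can be sanity-checked with a one-liner (the 320kbps and 56kbps figures are the ones from the paragraph above):

```shell
# Space saving and capacity multiplier from re-encoding 320kbps MP3
# to 56kbps HE-AAC (assuming both are transparent to the listener)
awk 'BEGIN {
    old = 320; new = 56
    printf "saving: %.1f%%\n", (old - new) * 100 / old
    printf "capacity: %.1fx\n", old / new
}'
```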

Videos are perhaps more difficult to get right, as the parameters involved in video encoding are significantly more complex than for audio (also note that videos often include audio).  From what I’ve seen, significant space savings can be gained by encoding videos more intelligently, but it’s hard to provide rough figures, as most people do convert videos for their portable devices but use a wide variety of applications and settings.  For reference, I see a lot of >100MB PSP-encoded anime episodes; however, I can personally get them to around 30-40MB using an x264 CRF of 25 and a ~8MB audio stream (allowing me to easily store a 12 episode anime series on a 1GB stick, with plenty of space to spare).

So for those who don’t compress their media, maybe give it a bit of a shot and see what space savings you can get.  You may be surprised at how much 16GB can really store.


Why would anyone buy an iMac?

People who know me probably know that I’m a lot more anti-Apple than I am anti-Microsoft, but that’s beside the point here.

I was browsing some ads that got sent to my house today and saw one for an iMac (as Apple tightly controls prices, I would expect them to be similar across stores), and was seriously quite shocked at what was on offer.  The cheapest system had:

Intel i3 3GHz CPU
4GB RAM (probably DDR3)
500GB Harddisk
256MB ATI Radeon HD 4670 GPU
21.5in screen
MacOSX 10.6

All for AU$1598!  To put this in perspective, my current computer, which I bought in 2008 when the AUD crashed, cost me less and is still more powerful than the above.  This is what I paid:

Intel Core2Quad Q6600 [$295] (FYI: a C2D E8500 was about $285 at the time – comparison with i3)
4GB DDR2 1066MHz Kingmax RAM [$95]
640GB Samsung F1 7200rpm HDD [$89]
512MB ATI RadeonHD 4670 GPU [$119]
Gigabyte EP45-DS4P motherboard [$199] (that’s a rather expensive motherboard BTW)
Antec NSK6580 case with 430W Earthwatts PSU [$128]
Logitech Desktop 350 (basic kb+mouse) [$22]

…which totals $947.  If we add in a 21.5in screen [probably under $200 at the time], a DVD burner [around $30 at the time], and even a copy of Windows (around $200), it’s still significantly cheaper than the iMac today, even disregarding the fact that the AUD was worth 60% of what it’s worth today relative to the USD.  Oh, and yes, my system pretty much beats the iMac in every way, not to mention it’s far more customisable and not as locked down as anything Apple makes.
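For what it’s worth, the totals check out (part prices as listed above; the screen, burner and Windows extras use the rough figures above):

```python
parts = {
    "Core2Quad Q6600": 295,
    "4GB DDR2 1066MHz Kingmax RAM": 95,
    "640GB Samsung F1 HDD": 89,
    "512MB Radeon HD 4670": 119,
    "Gigabyte EP45-DS4P motherboard": 199,
    "Antec NSK6580 + 430W PSU": 128,
    "Logitech Desktop 350 kb+mouse": 22,
}
base = sum(parts.values())     # 947
extras = 200 + 30 + 200        # screen + DVD burner + Windows, rough figures
print(base, base + extras)     # 947 1377 -- still well under AU$1598
```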

Okay, Apple’s stuff is absurdly expensive; this is probably nothing new.  From what I’ve heard, people may buy Apple stuff for its design.  But is the design really any good?  I personally don’t think so.

Our Uni recently replaced all library computers with iMacs (a different model to the one advertised, so I may be a little misinformed here) and I really don’t like their design in a number of ways.  After using one for a while, these are my thoughts so far:

The Screen and Machine

  • It’s big, heavy and somewhat cumbersome.  It appears you can only tilt the screen forward and backwards.  Although most screens (especially cheaper ones) don’t seem to be terribly adjustable, I much prefer the Dells in the IT labs, where you can adjust the height, swivel horizontally and rotate the screen itself on the stand.
  • It’s glossy.  I don’t know why on earth people make glossy screens.  If I wanted to see my own face, I’d look in a mirror.  If I wanted to see that bright light behind me, which is reflecting off this stupid glossy screen, I’d look directly at it (but I wouldn’t, I’m not that stupid).  But when I’m looking at a screen, I want to see what’s actually on there.
  • I can’t seem to find any controls on the screen.  Maybe there’s some on the back, but I didn’t look too much.  Not that screen controls should be on the back anyway.
  • USB ports.  The last time I used a computer which didn’t have USB ports at the front was made about 10 years ago.  Apple helps you bring back those memories by not putting USB ports at the front (or sides).  As for the back USB ports, the number of them is somewhat limited…
    I did actually later realise that there were USB ports on the side of the keyboard.  I guess that’s a reasonable way to do things, though I’d still be concerned about whether these ports supply enough power for a portable HDD.
  • Actually, make that nothing useful on the front or sides of the screen.  The power button is conveniently located at the back of the screen, so if you want to turn it on, you’re going to have to pull the screen forward, turn it around so you can reach the button (making sure you don’t pull out any cords), then do the reverse to return the screen to its original position.
  • The back doesn’t appear to have that many ports, though I didn’t check much (it’s not easy to), and it’s certainly a lot fewer than what my Gigabyte EP45-DS4P motherboard supplies.
  • I still haven’t managed to find where the optical drive is…

The Keyboard

  • Is small and flat – very much like a laptop keyboard.  Maybe some people prefer laptop keyboards, but I don’t.
  • Has very few extra keys.  Fair enough I guess, but overall it seems like a cheapish keyboard and hardly anything I’d pay a premium for.  Quite usable though.
  • Doesn’t have a Windows key, for all those planning to install Windows on it (the Uni library iMacs all run Windows).  Fair enough from an Apple standpoint I guess.

The Mouse

  • The trackball is quite small.  At first I didn’t like it, but after a while of using it, it seems okay.  In fact, it being a ball allows you to horizontally scroll quite nicely, despite many applications not supporting horizontal scrolling, but I guess that’s not the mouse’s fault.
  • One-button design.  Despite its looks, the mouse can actually distinguish left, centre (the ball) and right button clicks reasonably well, however, only if you push your fingers in the right place.  Unfortunately, as this is a single button design, there isn’t really any clear way to feel where the right place is without looking, apart from finding the ball with your fingers and distinguishing left and right portions from there.  If you push too close to the centre though, you can inadvertently get the mouse to press the wrong button.
  • Following from the above, you cannot click the left and right mouse buttons at the same time.  Not important for most applications perhaps, though I know some games require (or can be enhanced by) both buttons being pressed at the same time.
  • Like the keyboard, the mouse is fairly basic and has no extra side buttons and the like.  Hardly anything I’d pay a premium for.

So those are my thoughts on the iMac: seriously overpriced and badly designed.  Unless you absolutely must use OSX (and are unwilling to build a Hackintosh), or are just an avid Apple fanboi, I can’t see why anyone would rationally buy this hunk of junk.

New USB Stick

I’ve had a number of USB sticks in the past, and historically, they tend to last around 2 years for me.  My current (well, previous, now) stick is a Transcend 8GB, and I’ve already been using it for over 2.5 years, so I’ve been wondering when it’s going to die.  Maybe it’s better built, maybe it’s just luck, but I decided to eliminate that risk factor and get myself a new stick just in case.  (Yes, I do manually back up data, but backups are only so good.)

Anyway, one of the things bothering me about this Transcend stick is its horrible speed.  Running portable apps like Firefox Portable takes forever, and saving anything to the stick has a noticeable lag.  As USB sticks are really cheap these days, I decided to look for a faster stick rather than a larger one.  I’m only using around 300-500MB anyway, and rarely go above 700MB unless I’m in the rare situation of transferring some large files (in which case, I don’t mind bringing my USB HDD along), so I could easily live with a 2GB stick, perhaps 4GB for good measure.

Unfortunately, it seems all the faster USB drives are also large.  Looking around, the ones that most appealed to me were the 8GB Corsair Voyager and Patriot XT Xporter Boost from Umart (which now sell for around $25).  Drives like the OCZ Throttle and Corsair Voyager GT I could only find in 16GB sizes or above, which cost significantly more, and I seriously don’t need all that space.

Then I saw that MSY were selling the Patriot Xporter Rage 8GB for $25, so I decided to get one.  After some Googling though, I was a little worried about whether it delivered its advertised speed, having found a thread where users were complaining about the 16GB version’s write speeds, with hints that only the larger drives (64GB) may actually deliver the advertised speeds (and I’m getting the smaller 8GB one).  But anyway, I went ahead and bought it (after they managed to get one in stock) for $24 (yay, $1 saving!).

Bringing it home, it’s formatted as FAT32 with a 64KB cluster size by default.  I do seem to get around 25MB/sec on sequential writes (woot!).  A 64KB cluster size is a bit excessive, but as I don’t really care about space, I don’t mind it.
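For anyone wanting to reproduce a rough sequential-write figure like that 25MB/sec, here’s a minimal sketch of the kind of test I’d use (the file path is an assumption – point it at a file on the stick itself; the fsync is there so the number reflects the drive, not the OS write cache):

```python
import os
import time

def seq_write_speed(path, total_mb=64, chunk_mb=4):
    """Write total_mb of random data in chunk_mb chunks; return MB/sec."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # ensure the data actually hits the drive
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# e.g. seq_write_speed("E:/bench.tmp")  # "E:" being the stick (hypothetical)
print(seq_write_speed("bench.tmp"))
```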

As for the physical drive itself, it’s slightly smaller than the Transcend, and I actually like its capless design.  On my old stick, there’s a little slider at the side which you push forward to extend the USB connector.  On this one, you push the entire back part of the casing forward to reveal the connector.  One issue with capless designs is that applying pressure to the USB connector can cause it to retract (a pain if it gets loose and you don’t quite fit the connector in properly), but with the new Patriot drive, you’re naturally going to be applying pressure from the back of the stick, so it doesn’t really matter.  The outside is also slightly rubbery, though I don’t think the additional grip is of much importance.  The thing I don’t like is that it no longer has an activity indicator LED.

So, now that I have an 8GB stick, what to fill it up with?  As this is supposedly a fast drive, I decided to stick some bootable stuff on it, just in case I ever need it (unlikely, but oh well).  I’m too lazy to read up on making Linux boot drives, so I just used this and added some stuff that might come in handy – UBCD, System RescueCD and Ubuntu 10.10 (Knoppix and Bart’s PE might’ve been nice; it would also be nice to have a quick-booting, text-based Linux distro which runs a shell script at bootup – might be useful for quickly performing some offline actions on a PC).

Unfortunately, the formatting process also reverts the drive’s cluster size to 4KB, but it seems that Acronis Disk Director, which I happened to have installed, is able to convert cluster sizes, so I upped it to 64KB.  The first time I tried, it didn’t work (maybe because I didn’t reboot the PC as it asked me to).  Out of interest, I noticed that Disk Director allows creating multiple filesystems on a USB stick (Windows disk management doesn’t allow this); however, it seems that Windows just ignores the other filesystems on the drive…  Anyway, I reformatted and recreated the drive a second time, upping the cluster size to 64KB, and it worked – except that I got some warnings in the bootloader about the cluster size being > 32KB.  Despite everything working, I decided to just convert the thing down to 32KB for good measure anyway.

So that’s the wondrous story of my new USB stick, on which Firefox Portable doesn’t take forever to load.  It may mean I take up more space, since I used to pack everything into self-extracting EXEs on my old drive (which would extract to the C: drive and run from there, as sequential reads on the stick were reasonable, as opposed to random reads).

Oh, and I’m also running a git repo on there too, with SmartGit as my portable Git client.  (Tip: you don’t need the full MSYS Git for it to work; just git.exe and libiconv.dll seem to be enough.)

Horrible Excel save times on USB

I don’t have the fastest USB drive; in fact, it’s probably crappish, primarily because of the horrible latencies it has.

But when saving a ~450KB .xls file takes 2 minutes in Excel 2007, something can’t be right.  Copying the file to a local harddrive and saving there takes only a second.  Copying the file back, another second (more or less).  Evidently Excel is doing some crazy seeking whilst writing the file, or similar.  But why is it so bad?  Surely, these days, it could easily build the entire file in memory before physically writing it to disk?
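Timing the raw write on its own shows the drive isn’t the bottleneck – even a slow stick should push 450KB out in well under a second, nowhere near 2 minutes (the paths here are hypothetical placeholders):

```python
import os
import time

def time_write(path, size_kb=450):
    """Time a single sequential write of size_kb, fsync included."""
    data = os.urandom(size_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # count the time until data reaches the drive
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

print(time_write("local.tmp"))     # local disk: a fraction of a second
# print(time_write("E:/usb.tmp"))  # the USB stick (hypothetical drive letter)
```

If a single sequential pass over the whole file is this quick, the 2-minute save can only come from how Excel writes (lots of small, scattered writes), not from the amount of data.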

Switched internet plans

Stuck in a churn request to switch internet plans a few days ago.  I’m moving from TPG’s 30+30GB (on/off peak) 512kbps plan to Exetel’s 15+”120″GB (on/off peak) 1.5Mbps plan (it’s more like 100GB off peak, since it’s practically impossible to pull 120GB through a 1.5Mbps line in 6 hours per day over 31 days).  Both plans cost AU$40/mo.
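A quick back-of-envelope for that off-peak quota (1.5Mbps line, 6 off-peak hours a day, 31 days, decimal GB):

```python
mbps = 1.5
off_peak_seconds = 6 * 3600 * 31              # off-peak seconds in the month
max_gb = mbps / 8 * off_peak_seconds / 1000   # Mbit/s -> MB/s, MB -> GB
print(max_gb)   # 125.55
```

Even at a perfect, sustained line rate for every off-peak second, you only just clear 120GB, which is why ~100GB is the realistic ceiling.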

Transfer completed now, but unfortunately, I still seem to be getting slow speeds at night 🙁 so I guess the speeds are really a Telstra issue, rather than an ISP one, which sucks, cause Telstra never fixes anything.

It does seem to be mostly going fine during the off peak time though, peaking at around 156KB/sec, averaging around 135KB/sec (which I guess is kinda crap, but probably Telstra’s issue again).

Unfortunately, TPG seemed to want a 30 day notification period or something, so we get charged a bit for that 🙁