Rise of the Centaur (2015 Documentary)

I recently happened to come across a documentary/movie on CPU manufacturer VIA (or rather, Centaur Technology), Rise of the Centaur.  Although it’s been out for a few months, I can’t find much mention of it around the internet, and it doesn’t even have an entry on IMDb.

Having some interest in CPU micro-architectures, and being in the mood for playing the sacrificial ‘beta tester’, I decided that I’d give it a try.  The movie is available on Vimeo for US$3 (rent for a week) or US$15 (“own”).  As I don’t particularly like streaming video, I went with the latter option.  US$15 is a bit more than I’d comfortably pay for a movie, especially one I can’t find feedback for, but it’s ultimately quite a reasonable amount.

CPU design is very much a technical topic, so I expected the documentary to avoid most of the low level details to stay palatable for a non-technical audience.  Then again, VIA CPUs aren’t particularly well known, even among those who assemble PCs, so maybe it targets a slightly more technical crowd?

VIA x86 CPUs

VIA is perhaps the third (?) largest x86 CPU manufacturer, after the better known Intel and AMD.  They’ve actually been around for a while and have produced a number of x86 chips over the years.  They’ve tended to focus on their own little niche (at least in the past 10 years or so): small form factor, low power designs.  But now, with both Intel and AMD spending big R&D bucks on the same thing, one does wonder how relevant VIA can stay in the area.

A number of x86 manufacturers have existed in the past, most now defunct, but VIA is an interesting exception.  Being able to stay in the game for so long, competing with companies many times its size, is certainly no small feat.  In fact, they introduced the Eden X4 CPU only a few months ago, so they’re definitely still alive and kicking.

From a technical standpoint, VIA’s recent CPUs seem fine.  I’ve toyed around with a VIA Nano CPU (released in 2009) before, which easily outperformed the Intel Atom (1st generation) at the time (though using more power).  But both Intel and AMD have started pouring resources into low power designs, making significant improvements over the last few years.  Intel’s Silvermont µarch (aka “Bay Trail” Atom) is a significant improvement over the old Atom (Bonnell µarch).  I haven’t been able to find many benchmarks on the recently released Eden X4, but it seems to be comparable to Silvermont.

Whilst perhaps technically sound, pricing on the VIA CPUs seems unfortunate.  Their CPU+motherboard combo starts at US$355, whilst a comparable Intel J1900 solution can be had for US$60.  Understandably, there are issues with economies of scale, in particular, high fixed costs.  Most of the cost in CPU manufacturing probably goes into R&D, a large fixed cost that Intel/AMD can amortize across much larger volumes – something VIA simply cannot do.  Going up against this sort of natural monopoly highlights one of the major challenges for a small CPU manufacturer.  The Eden X4 may have some redeeming points in some niches, e.g. support for more RAM, or AVX2 instruction support (no AMD CPU supports AVX2 yet!), but ultimately, I’d imagine this is a small target market.

With all that in mind, it would be somewhat interesting to see what plans VIA has to deal with these challenges, and how they intend to stay relevant in the market.

The Documentary

Rise of the Centaur gives an overview of the history of Centaur Technology (CT), the company itself and its aims/goals, plus a look at the process of CPU design, mostly delivered via staff interviews.  It shows some of the struggles CT faces, how they solve issues and how they go about procedures such as testing, as well as an inevitable dose of corporate propaganda (from their HR recruitment policy to the notion of the little guy facing the multi-billion dollar Intel).  It’s probably what one would expect, more or less, from such a documentary.

Whilst it gives context to allow less technically knowledgeable viewers to understand the general idea, I think having some technical interest, particularly in CPUs, is definitely beneficial when watching the film.

Unfortunately, the film doesn’t explore much of CT’s future plans or even current plans, nor does it mention anything about its customers (i.e. who buys VIA chips?) or how successful their CPUs are (e.g. market share, performance, comparisons with current Intel/AMD chips) – things I’d think would be of interest.  As far as plans go, I suppose some of this may be trade secret, and perhaps a small company has a lot more scope for suddenly changing direction.  But the original goal of CT making low-cost x86 CPUs no longer seems to hold, considering their relative pricing, so I’d have liked a comment on that at least.

Overall it seems to be a decent documentary, providing a rare insight into what goes on inside a small CPU manufacturing firm competing with the likes of Intel.  It does lack some details that I would have liked addressed, though the selection of information is fair.  The documentary isn’t for everyone, but I think it does a fair job at targeting a reasonable chunk of the tech minded audience.  As a software developer, it’s interesting to peek at what goes on on the other side, albeit just a peek without much technical depth.

TorGuard Experience

Having never used a paid VPN provider before, I decided to take advantage of TorGuard’s current 50% off lifetime offer to see what one is like.  Apparently TorGuard is reasonably well respected and widely used, so I wasn’t really expecting to be disappointed, and plonked down 6 months’ payment (they give no further discounts for yearly payments – just a free email account, which is of no interest to me).

Pre-ordering

They seem to skimp a little on details – I couldn’t find a list of server locations beyond the flags on their order page.  The FAQ mentions that the list of servers can be found in your account, which isn’t terribly useful if you want to run ping tests to check latency to their servers before buying.
After ordering, I did find that there is a list of countries on the order page, at the bottom, after clicking the “COUNTRIES” link – since I was Ctrl+F’ing for “location”, I missed it.  Oops, but they could’ve made the list easier to find.  Still, there are no hostnames/IPs that you can look up and test.

They sell both VPN and proxy services, as well as a bundle (VPN+proxy), but trying to order a bundle just sends you to the same page as ordering a proxy (although the cart ID is different?).  However, a previous post on their blog seems to mention that the VPN service includes proxies…  Confusing, but after ordering, this turns out to be true, despite the proxy service not being advertised in the VPN features listing.

Also, their website is proxied through CloudFlare, meaning you get stuck in a redirect loop trying to access it until you enable cookies and JavaScript.  I would’ve expected a privacy-focused service to be less reliant on 3rd party services (which may or may not keep logs, or be complicit with US interests, since CF is US based), but oh well.

Ordering

Overall, a fairly straightforward and standard process.  There’s a checkbox for “TG Viscosity”, which, after a bit of searching, turns out to be a free license for a VPN client they provide, but it would help to at least put a few words explaining what it is.

Pricing on their dedicated IP option seems whack – they ask for US$8 recurring regardless of your selected billing period, which presumably means $8/month if you pay monthly, or $8 per 6 months if you pay semi-annually?  Interestingly, the price is different if you select annual payment (US$55/year).  Probably a bug/oversight somewhere.

I paid via credit card, and they ask for the usual name + billing address (not asked for other payment methods), as you’d expect.  However, they also seem to require a phone number, which I believe isn’t necessary for CC payments.  I entered a junk value here and it was accepted fine anyway.  It’s nice that they give you the option of whether or not to store the CC details for recurring payments.

The ToS seems fairly standard, as expected, including terms expressly forbidding the service from being used for various illegal activities that I expect many VPN users would explicitly use it for anyway.

After payment, you get redirected through to download their software, which should probably be the easiest way to get the VPN up and running.

Services

After registering, I found that neither the VPN nor the proxies worked (couldn’t authenticate, it seems).  I put in a support ticket, to which they responded in a matter of minutes.  Apparently my email address containing a ‘+’ causes issues, which they fixed by removing it.  (Gmail and Hotmail use ‘+’ for email aliases.)

VPN

In your account, you can see the list of locations (copy posted below) supported, although clicking on the “1250 servers” link definitely doesn’t show you a list of all the servers.

After installing the VPN client provided by TorGuard, you get a fairly simple and straightforward interface, which seems to address the concerns I’d expect most users to have.  The server list here is somewhat nicer than the one in your account, as it breaks things down by city instead of just country, but no hostnames/IPs are provided, so you have to test the servers yourself to figure them out.  Still nowhere near 1250 entries though (and if they have multiple servers per location, as one might expect, there’s no indication of how many there are, bandwidth availability, etc.).

They seem to support the standard VPN protocols; however, SSTP support is limited to only a few locations – something which I imagine shouldn’t be too difficult to support on all their servers.  OpenVPN does support connecting over an HTTP proxy though, which I suppose could be an alternative for cases where you’d need SSTP – still, they should support all protocols at all locations, IMO.
(as someone who has never set up a VPN server, I don’t know the limitations of needing to support this stuff, so there’s maybe reasons behind this)
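For what it’s worth, tunnelling OpenVPN through an HTTP proxy should only need a couple of lines in the client config – a sketch with placeholder hostnames/ports, which I haven’t verified against TorGuard’s servers:

    # OpenVPN over an HTTP proxy requires TCP rather than the usual UDP
    proto tcp-client
    remote vpn.example.com 443
    http-proxy proxy.example.com 8080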

From posts I’ve read, some nice things they support, that various other services don’t, include multiple simultaneous VPN connections and port forwarding.  I can’t seem to find the port forwarding option though, apart from an announcement which points to now-unavailable options.  Maybe I’ll ask support if I ever need it.

I’ve otherwise done minimal testing of the VPN and no speed/latency tests at this stage.

Proxy

Disappointingly, many of the VPN servers don’t support HTTP/SOCKS proxies (see list below).  It seems like something that should be easy to support; in fact, before ordering, I’d imagined they’d have the same locations for both.  Whilst it’s possible to set up a VM running an HTTP/SOCKS server over the VPN, it’s a somewhat convoluted setup which they should be providing instead (I haven’t looked into whether routing rules could bypass the need for a VM, but an HTTP/SOCKS server which can bind to a specific interface would still be necessary).
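For the record, here’s a rough sketch of what that setup might look like with Dante (danted.conf) – interface names and address ranges are made up, and option names vary a little between versions:

    # hypothetical danted.conf: accept SOCKS on the LAN, exit via the VPN interface
    internal: eth0 port = 1080
    external: tun0
    socksmethod: none
    client pass { from: 192.168.0.0/24 to: 0.0.0.0/0 }
    socks pass  { from: 192.168.0.0/24 to: 0.0.0.0/0 }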

It does seem to match the list of countries mentioned on their proxy info page, so the VPN service is actually bundling their complete proxy service as opposed to some watered down version.

Anyway, I decided to give a SOCKS server a go.  Presumably they need authentication, although no instructions are given on the listing page; attempting to use them without authentication indeed doesn’t work.  Unfortunately, Firefox doesn’t support SOCKS proxies with auth, so I tested with PuTTY instead, using the registered email address and password, which worked as expected (after I had my username changed due to it containing a ‘+’).
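Command-line clients handle authenticated SOCKS fine though; e.g. with curl (proxy host here is a placeholder):

    curl --socks5 socks.example.com:1080 --proxy-user 'user@example.com:password' http://example.com/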

TorGuard doesn’t support SSH tunneling unfortunately – it would be nice to have an encrypted non-VPN proxy (SSH tunneling may also offer stuff like reverse port forwards, though this may be problematic with a shared IP).

DNS

TorGuard provides its own DNS resolvers (91.121.113.58 and 91.121.113.7) although they don’t mention why one should use them.  As DNS isn’t authenticated, I suppose anyone could use the resolvers.

The resolvers don’t appear to be anycast routed, don’t support DNSSEC validation and have no IPv6 address, so there doesn’t really seem to be any reason to use them over public resolvers like Google DNS or OpenDNS (unless you don’t trust those companies).  I suppose one may question the logging performed by public resolvers, but supposedly this should be less of a concern if you’re using a VPN.  The only other thing I can think of is if you’re concerned about the initial connection to the VPN: since TorGuard gives you hostnames rather than IP addresses, it may be possible to correlate resolver logs with your IP connecting to a particular VPN server…
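(For reference, poking at the resolvers is trivial with dig, which is roughly how I checked the above:)

    dig @91.121.113.58 example.com           # basic resolution works for anyone
    dig @91.121.113.58 example.com +dnssec   # a validating resolver would return RRSIGs / set the ‘ad’ flag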

Certificate

There’s a “TG Certificate” link which, when clicked, attempts to install a root cert into your browser.  I don’t know what this is for (SSTP perhaps? I’ve personally never used it, so am clueless) – an explanation would be useful.  After a bit of searching, a KB article suggests it may be for OpenVPN.

Apps

They seem to have their own apps for all the major platforms (Windows/Mac/Linux on x86), including Android, which shows they’ve gone to a bit of effort to make things easy in that regard.  The lack of an iOS app is understandable (you can still just use the regular OpenVPN client there).

I haven’t really bothered testing the apps, so no comment there.  I also haven’t tried the Viscosity client yet either.

Conclusion

As I haven’t done much testing at this stage, I can’t make many conclusive remarks yet, but TorGuard’s basic VPN service seems to offer more or less what you’d expect.  It’s difficult to recommend them as a proxy service though, given the limited locations they provide.

Their listing of locations needs to be improved – it should be more accessible and provide more details (city and transit provider would be nice; I know you can get these details yourself, one by one), as well as a more complete listing for users who don’t wish to use the official client.  I haven’t tried taking the IPs of VPN endpoints from the official client and using them in other VPN clients, which I suppose could work, but you shouldn’t have to do this.

The standard monthly pricing (US$10/month) seems quite overpriced compared to other VPN providers, and even US$5/month, paid semi-annually, seems a bit of a stretch.  US$2.50/month, the price after the 50% discount, is somewhat more palatable to me.
Of course, a cheap US$10/year VPS can provide a lot more flexibility/functionality than most VPN services, provided you can set things up yourself and only need one location.  Heck, if you’re just someone who needs a US VPN for streaming video, you can do it for free.  VPNGate also seems to be a nice free VPN service, although I’ve not tested it for anything beyond web browsing.
For me personally, it was the niche locations that I wanted access to – places where servers are typically hard to get and/or expensive (like Australia).

Considering the effort gone into the apps, the somewhat extensive knowledge base of guides (which, sadly, lacks a search feature) and the quick support responses, TorGuard does seem to be quite friendly to beginners and those less technically inclined (who probably don’t care so much about knowing hostnames/IPs etc).

tl;dr: seems to be okay, a bit expensive (unless you get 50% off) and nothing really seems to stand out.  I haven’t tried other services for comparison though.

Update (19th Dec 2015)

I’m nearing the end of the 6-month subscription now, and I won’t be renewing the service.  I primarily use the Australian servers (TorGuard provides two – one in Sydney and one in Melbourne) for low ping and a minimal speed penalty from using a VPN.  The service has been fine and speeds are pretty good, but there’s one annoying problem:

TorGuard forces OpenDNS on the Australian servers.  By ‘force’, I mean they redirect all UDP port 53 traffic to OpenDNS, which makes it impossible to use your own DNS server.  Support does not provide any workaround (I suggested forcing Google’s DNS instead, but they wouldn’t do it) and claims that this is due to Australia implementing internet filtering (which, at the time of writing, is not true).  Ironically, OpenDNS does perform its own filtering (e.g. kat.cr is intercepted).  In my opinion, this forced redirect shouldn’t be necessary – DHCP can suggest DNS servers, and/or these servers could be configured by their VPN software, enabling more advanced users to override the choice of DNS server if necessary.  Furthermore, it is unknown whether Australia’s newly mandated ISP filtering policy will even be DNS based, since it has yet to be implemented, meaning that a DNS override may be unnecessary.
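If you want to verify the redirect yourself, OpenDNS’s debug TXT record makes it easy – ask a resolver that definitely isn’t OpenDNS (e.g. Google’s) and see who actually answers:

    dig @8.8.8.8 txt debug.opendns.com +short
    # normally returns nothing; if OpenDNS server info comes back, port 53 is being intercepted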

Support, whilst they respond fast, don’t always show much technical competence.  I get that tickets are handled mostly by lower-level support before being escalated, but it does mean that technically advanced users may need some back & forth to get what they need.

Overall, my main complaint is the forced DNS redirect on Australian servers, which, unfortunately, is a deal breaker for me.  If this isn’t an issue for you, TorGuard seems to be a fine service otherwise.


Updated WordPress

Once you figure out how to set it up correctly, it isn’t as much of a nightmare as I thought it could be.  The insistence on using FTP to perform updates is annoying, even if you make all the files writable, but it turns out a simple change to wp-config.php is all that’s required (it would be nice if the documentation mentioned it).
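For anyone else hunting for it, the change is (I believe) a single define in wp-config.php, telling WordPress to write files directly rather than via FTP:

    define('FS_METHOD', 'direct');  // skip the FTP credentials prompt during updates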

I also decided to update the theme, just to make all the ‘Updates available!!!’ prompts go away, and… I really don’t like the updated Redline theme.  I switched to a more standard one, though I miss the old theme; the one being used here now seems a little too basic for my liking :/  Oh well, it works.

I must say though, I’m quite impressed with the theme editor in WordPress now, and how easy it is to preview and make changes to themes.  The “live preview editor” is a bit basic at this stage, but I presume it will improve.

Oh, and there’s now a highly relevant image added to the header.  I’m glad you like it.

Anyway, that’s it for a rarely seen blog post from me.

TLS/SSL CPU Usage for Large Transfers

There have been many claims around the internet of SSL/TLS adding negligible CPU overhead (note, I’m only considering HTTPS here).  Most of these focus on many small transfers, typical of most websites, where performance is dominated by the handshake rather than by message encryption.  Although SSL may be less useful for typical large transfers, as HTTPS becomes more pervasive, we’re likely to see this happen more often.

However, after seeing a CPU core get maxed out during a large upload*, I became interested in the performance impact for single large transfers.  Presumably reasonable CPUs should be fast enough to serve content to typical internet clients, even on a 1Gbps line, but how close are they to this?
* As it turns out, the reason for this was actually SSL/TLS compression, so if you’re reading this after seeing high CPU usage during SSL transfer and the figures don’t match empirical speeds, check that it isn’t enabled!
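(An easy way to check is openssl’s s_client – hostname is a placeholder:)

    openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -i compression
    # “Compression: NONE” means TLS compression is disabled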

So I decided to run a few (relatively unscientific) tests on a few dedicated servers I happen to have access to at the moment.  The test is fairly simple – create a 1GB file and measure CPU usage whilst transferring it over SSL.  Note that I’ll be measuring the usage of the client rather than the server, since the latter is a little more difficult to measure – presumably the client should give a decent ballpark of the CPU usage of the server.

Test Setup

The 1GB file was created using dd if=/dev/zero of=1g bs=1M count=1024

This file was served by nginx 1.4/1.6 on Debian 7.  SSLv3 was disabled, as it seems to be out of favour these days, so the test is only over TLS.  I tested various cipher suites using the ssl_ciphers directive (a minimal sketch of the server block follows the list):

  • No SSL: just as a baseline (transfer over HTTP)
  • NULL-MD5: another baseline
  • ECDHE-RSA-AES256-GCM-SHA384: labelled “Default”, this seems to be the preferred cipher if you don’t give nginx an ssl_ciphers directive
  • RC4-MD5: clients may not accept this, but perhaps the fastest crypto/hashing combo that might be accepted (unless the CPU supports crypto h/w accel)
  • AES128-SHA: probably the fastest cipher likely accepted by clients
  • ECDHE-RSA-AES128-GCM-SHA256: labelled “AES128-GCM” (no-one has space to fit that in a table; oh, and why does this WordPress theme have a limited column width?!); this is likely just a faster version of Default
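For reference, the test server block looked roughly like the following – a sketch reconstructed from memory, with placeholder paths:

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/test.crt;
        ssl_certificate_key /etc/nginx/test.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  # SSLv3 disabled
        ssl_ciphers RC4-MD5;                  # swapped out for each test
        root /var/www;
    }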

The following commands were used for testing:

  • CPU benchmark:
    openssl speed [-decrypt] -evp [algorithm]
  • Wget download:
    time wget --no-check-certificate https://localhost/1g -O /dev/null
  • cURL download:
    time curl -k https://localhost/1g > /dev/null
  • cURL upload:
    time curl -kF file=@1g https://localhost/null.php > /dev/null

For checking CPU speed, the ‘user time’ measurement was taken from the time command.  I suspect wget uses GnuTLS whilst cURL uses OpenSSL for handling SSL.

I ran the test on 4 rather different CPUs:

  • VIA Nano U2250
    • Note that this is a single core CPU, so transfer speeds will be affected by the webserver performing encryption whilst the client does decryption on the same core
    • OpenSSL was patched to support VIA Padlock (hardware accelerated AES/SHA1/SHA256)
  • AMD Athlon II X2 240
  • Intel Xeon E3 1246v3
  • Marvell Armada 370/XP
    • A quad core ARMv7 CPU; quite a weak CPU, perhaps comparable to a Pentium III in terms of performance

CPU Benchmark

To get an idea of the speed of the CPUs, I ran some hashing/encryption benchmarks using OpenSSL’s speed test.  The following figures are in MB/s, taken from the 8192-byte block size column.  CPUs across the top, algorithms down the side.

Algorithm              Nano    Athlon  Xeon     Armada
RC4                    235.80  514.05   943.37   98.84
MD5                    289.68  551.29   755.16  141.54
AES-128-CBC            899.14  227.45   854.48   50.05
AES-128-CBC (decrypt)  899.56  218.91  4871.77   48.61
AES-256-CBC            693.24  159.82   615.08   37.95
AES-256-CBC (decrypt)  696.25  162.48  3655.11   38.14
AES-128-GCM             51.38   68.63  1881.33   24.37
AES-256-GCM             41.61   51.48  1642.22   21.22
SHA1                   459.06  413.71   881.87  105.54
SHA256                 396.90  178.01   296.98   52.73
SHA512                 100.98  277.43   464.86   24.42

(decryption for RC4 and AES-GCM is likely the same as encryption, them being stream(-like) ciphers and all)

Test Results

Notes:

  • wget doesn’t seem to like NULL-MD5 or “AES128-GCM”
  • Columns:
    • Transfer (MB/s): download/upload speed, probably not useful, but may be interesting
    • CPU Speed (MB/s): = 1024MB ÷ (user) CPU Time (s)
  • I’ve included pretty graphs for management type people who can’t read tables; the speed is log scale though, so stay awake!

VIA Nano U2250

Cipher      | Wget download      | cURL download      | cURL upload
            | Transfer CPU Speed | Transfer CPU Speed | Transfer CPU Speed
No SSL      | 495      4129.03   | 457      1580.25   | 55.9     1706.67
NULL-MD5    | –        –         | 57.1     144.88    | 40.4     155.06
Default     | 14.3     16.95     | 17.7     37.83     | 15.5     37.61
RC4-MD5     | 29.7     46.65     | 44       103.18    | 32       106.00
AES128-SHA  | 19.2     23.62     | 48.9     96.49     | 37.7     145.95
AES128-GCM  | –        –         | 21       45.55     | 18.1     45.41

Speed Graph

AMD Athlon II X2 240

Cipher      | Wget download      | cURL download      | cURL upload
            | Transfer CPU Speed | Transfer CPU Speed | Transfer CPU Speed
No SSL      | 1782     10240.00  | 1975     12800.00  | 404      13473.68
NULL-MD5    | –        –         | 308      438.36    | 211      416.94
Default     | 40.7     41.07     | 46.9     49.55     | 43.1     49.35
RC4-MD5     | 86.6     88.43     | 263      346.88    | 189      340.43
AES128-SHA  | 59.1     60.00     | 118      127.11    | 98       130.15
AES128-GCM  | –        –         | 55.8     65.56     | 56.7     64.65

Speed Graph

Intel Xeon E3 1246v3

Cipher      | Wget download      | cURL download      | cURL upload
            | Transfer CPU Speed | Transfer CPU Speed | Transfer CPU Speed
No SSL      | 4854     32000.00  | 5970     32000.00  | 1363     51200.00
NULL-MD5    | –        –         | 556      638.40    | 452      677.25
Default     | 88.6     88.55     | 997      1312.82   | 699      1422.22
RC4-MD5     | 182      185.91    | 514      587.16    | 420      587.16
AES128-SHA  | 128      128.00    | 556      643.22    | 449      664.94
AES128-GCM  | –        –         | 1102     1497.08   | 723      1641.03

Speed Graph

Marvell Armada 370/XP

Cipher      | Wget download      | cURL download      | cURL upload
            | Transfer CPU Speed | Transfer CPU Speed | Transfer CPU Speed
No SSL      | 223      882.76    | 182      544.68    | 44.4     403.15
NULL-MD5    | –        –         | 44.3     62.48     | 25.7     60.24
Default     | 7.01     7.23      | 16       18.52     | 13.7     18.56
RC4-MD5     | 20.5     22.14     | 32.6     41.90     | 23.1     41.80
AES128-SHA  | 9.16     9.62      | 21.5     24.11     | 16.2     23.63
AES128-GCM  | –        –         | 17.5     20.15     | 14.8     20.15

Speed Graph

Conclusion

On slower CPUs (okay, I’ll ignore the Armada here), SSL does appear to have a significant impact on CPU usage for single large transfers.  Even on an Athlon II, the effect can be noticeable if you’re transferring at 1Gbps – whilst the CPU can achieve it, if the CPU is also being used significantly for other purposes, you may find it to be a bottleneck.  On modern CPUs (especially those with AES-NI), the impact is relatively low, unless you’re looking at saturating a 10Gbps connection on a single connection (or CPU core).  It may drop further once Intel’s Skylake platform comes out – though I suspect GCM modes will still be faster, it may help clients that don’t support TLS 1.2.

Cipher selection can be quite important in making things fast on a slower CPU, although if you want client support, most of the time the choice is AES for encryption with SHA or GCM for integrity checking.

The crypto library likely has a noticeable effect, but this wasn’t particularly tested (it appears that OpenSSL is usually faster than GnuTLS, but this is only a very rough guess from the results).

Oh and AES-GCM is ridiculously fast on CPUs with AES-NI.

Stuff I left out

  • Triple DES ciphers: because they’re slow, and caring about IE6 isn’t popular any more
  • AES crypto without AES-NI on modern Intel CPUs: this is perhaps only useful for those running inside VMs that don’t pass on the capability to guests, but I cbf testing this
  • Ads throughout the post

South Park – Stick of Truth

Due to the amount of popularity this game attracted, I decided to give it a try, and finished it a while ago.

It’s probably been written about heaps of times by others, so I won’t dwell on much other than my own thoughts.

Overall, it’s surprisingly decent for a game based on a TV show.  In short, South Park fans should probably love it, otherwise it should still be a decent game.  I fall somewhat in the latter category, having only really seen a few episodes of the TV show.

The game’s highlight is perhaps that it captures the general South Park feel of humor without ruining much of the play experience.  These two aims (humor and game play) are somewhat at odds with each other at times, which means curbing each of them back to achieve a decent balance (for example, you can parody RPGs, but not so much if you want to be a somewhat serious RPG).  In my opinion, the creators have done a reasonable job at this, though it does mean that there is a bit of a compromise here.

Whilst the humor aspect is somewhat unique (at least to me), the actual game play side of things only seemed mediocre.  Nothing particularly stood out to me above a typical RPG – perhaps the ability to use an item without using up your turn was an interesting mechanic, but that’s about it.  Whilst this may actually be intentional (to make it easier to poke fun at RPGs), it certainly is a weak point.

In fact, the combat, in particular, felt rather gimmicky.

  • Do we really have to mash a button every time to use an ability?  I don’t particularly mind the button mashes elsewhere in the game, as they don’t occur often, but it can get annoying pretty quick in combat.
  • There doesn’t seem to be a whole lot of consistency in how you activate an ability.  Sometimes you left click, other times you need to right click, press a button, mash a key, mash a combination of keys or play some other mini-game.  Remembering them all seems like an unnecessary chore, though fortunately the game tells you what to do when you’re about to use an ability.  But seriously, is this even really necessary?
  • And the whole ‘click when you see a *’ thing seems… well… I don’t mind it so much, but it feels like a cheap trick to try to keep the player engaged and alert, rather than sitting back and issuing commands without much thought.  It kinda reminds me of Final Fantasy 8’s summon Boost mechanic
    • If you don’t know what it’s like: basically, summons had rather long animations – around 30 to 80 seconds in length (yes, an 80 second animation every time you activated an ability; well, I suppose it could be worse – imagine if something of that duration (duration-wise, not content-wise) played every time you went into battle…).  So to keep the player engaged whilst they watch the same animation for the 50th time, you have the option to boost the power of your attack by repeatedly pressing a key when indicated on screen.  SoT’s click timing (as well as some of its mini-games) feels somewhat similar

The balance/difficulty seems like something that could be improved too:

  • Stats seem to vary drastically between levels – by level 15, I had nearly 9000 health, compared with the ~100 or so (whatever the amount was) you start off with at level 1
  • I found the game generally got easier later on, possibly due to the above issue
  • …or maybe that you can remove 75% of an enemy’s armor, in just two turns using an ability you get halfway through the game…
  • There were times where all my attacks did 1 damage to the enemy, however I was still able to win just by relying on status ailments.  Actually, perhaps this isn’t so bad…


To re-iterate: a relatively unique experience (at least for me) in a game.  The comedy aspects entertained me, though they were a little subdued; game play was passable; and the combat system was perhaps the weak point, feeling a little gimmicky.  Overall, a decent game that’s probably worthwhile just for the experience.

Centre screen subtitles?

Subtitles for video content are pretty much always aligned to the bottom-centre of the video frame.  There, they don’t interfere much with the visual content, but I can’t think of any other particular reason why they should be there (perhaps there’s some history I’m missing out on).  Top-centre alignment is rare (although it seems to be just as feasible as bottom-centre) – often only used for translating signs or placing lines of a secondary speaker.

However, a problem I’ve noticed with all the anime I’ve watched is that this positioning really draws your focus towards the bottom of the screen, especially if you’re a slow reader like I am.  It means that I have to rely more on my peripheral vision to see the actual content, or, as I quite often do, frequently pause the video to give myself enough time to look up and see what’s going on.  This is perhaps one of the key reasons why I prefer dubs over subs.

And, if anything happens to be aligned top-centre, such as translation notes or secondary speaker lines, it’s much easier to miss if your attention is at the bottom of the video frame. Though this could be easily solved by placing all subtitles at the bottom of the screen and using styling to differentiate the lines.

So What?

A more radical idea came to me though: what if subtitles were centred on the screen?  This could make things easier to read by keeping you focused on the content.  Semi-transparency could be used to mitigate most of the downsides of the text obscuring content, and it’s not hard to do.  ASS, the standard subtitle format used these days for anime fansubs, already supports both of these features (and a lot more), unlike many other formats, such as DVD subtitles, which don’t provide this flexibility and would have made this idea less practical.
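For the curious, both features boil down to a single Style line in the ASS file – something like this (font/size/colours are just my guess at a sensible starting point; alignment 5 is middle-centre, and the leading byte of each &H colour value is the alpha):

    Style: Default,Arial,48,&H60FFFFFF,&H000000FF,&H80000000,&HE0000000,0,0,0,0,100,100,0,0,1,2,2,5,10,10,10,1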

Here’s a list of pros & cons of centre aligning subtitles that I could come up with:

Pros:

  • Keeps your focus towards the centre of the screen
  • Makes it easier for slow readers to read subtitles whilst watching what’s actually going on in the video
  • Generally an easy change to make (see below for a quick guide)
  • Subtitle authors have the option of embedding two subtitle tracks or supplying an alternatively styled subtitle file, which means that nothing changes for the viewer unless they want to experiment with centre aligned subtitles

Cons:

  • May be more distracting
  • May obscure content unless specifically authored to not do so, either through positional or transparency adjustments
  • Adjusting positioning to avoid the above could mean that the viewer has to look around for subtitles at times, though, since the idea is to draw attention towards the centre of the screen, this probably isn’t much of an issue
  • Semi-transparency could make text harder to read
  • People aren’t used to it, and hence it seems weird
  • May require subtitle format to support alignment and semi-transparency settings, though the user usually has the option to specify these for simpler formats like SubRip
  • This probably isn’t applicable everywhere – I’m only considering anime fansubs here

Test Run

Well, it’s easy to do, so why not test it?  After doing a quick test, I did find that I could actually read the subtitles without pausing the video like I usually do.  It did feel weird though, but I’d imagine that one could get used to it.  Here’s a before and after screenie:

Subtitle alignment comparison

Top: default styling, bottom: centre aligned with semi-transparency

The following would probably need some specific attention though, as aligning to the centre doesn’t seem the most appropriate here.  (Note that the subtitle isn’t a translation of the text on screen; rather, it’s a translation of what’s being spoken)

Subtitle alignment comparison 2

Conclusion

I don’t know whether anyone else has thought of and tried this before – a very quick web search doesn’t seem to turn up anything, and I’ve certainly never heard of anyone looking into this idea.

So if you read this and are interested, I certainly would love to hear your thoughts and experiences.  Personally, this idea seems worthy of consideration, and I’d like to try it out more.  Or perhaps not everyone has the same issues as I do…

How to Test it Yourself

As my interest here is anime, I’m only going to provide a rough guide on how to modify a fansubbed video for centre screen subtitles.  I’m also going to assume that it’s an MKV file with ASS subtitles (and that alignments aren’t forced on every line, etc.), which is what most fansubbers distribute.

  1. First, you need to extract the subtitle stream – you can use MKVExtractGUI for this (or just the mkvextract CLI utility – see the example commands after this list – if your computer is anti-Microsoft)
  2. Open the extracted ASS file with Aegisub
  3. Visit the Subtitles menu and the Styles Manager option
  4. On the right, there’s a big list box of all the defined styles, and you’ll need to identify the main one(s) (usually Default) and Edit them.  This could be difficult depending on how the styles are set up and may require you to edit multiple styles
  5. In the Style Editor dialog (that pops up after clicking the Edit button), add semi-transparency by specifying opacity in the Colors frame (it’s the textboxes under the colour buttons).  Values range from 0 (opaque) to 255 (transparent) – 96 is a good value to start with for Primary, and perhaps try 128 for Outline and 224 for Shadow
  6. In the same dialog, set the alignment to be the centre of the screen (5) and then click OK
  7. Close the Styles Manager dialog and save the file
  8. Open your video and use the “load subtitle file” option (for MPC, it’s File menu -> Load Subtitle) and select the edited subtitle file
  9. Hopefully it all works, if not, you may need to go back to Aegisub and try editing other styles
  10. Watch the video and submit your thoughts in the comment box below.  Remember that the last step is always the crucial one
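As mentioned in step 1, the CLI route for extraction is short (track ID and filenames are placeholders):

    mkvmerge -i episode.mkv                   # list tracks; note the subtitle track’s ID
    mkvextract tracks episode.mkv 2:subs.ass  # extract, assuming the subtitles are track 2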

Loud Seagate Drives

I don’t think Seagate has particularly had a good reputation for the noise of their hard drives, but I’m going to tell my story nonetheless.

A while ago, a 2TB WD Green drive of mine started developing bad sectors.  I performed a disk scan, which would’ve marked the sectors as bad, fixed the corrupted files and hoped that more wouldn’t develop.  However, it wasn’t long before more inevitably appeared, so I went out and bought a 4TB Seagate to replace it.  Things were fine.

More recently, my 1.5TB WD Green drive met a similar fate (it seems that all my newer drives have been failing within a few years of usage).  As it so happened, Dick Smith was running a Valentine’s Day special with a 15% discount on their hard drives, so I grabbed a 4TB Seagate Expansion drive for $170 to replace the old WD (it’s unlikely you’ll find the drive at any store here for under $195, so that was an excellent price).

Plug the drive in, and you’re greeted with quite an audible power-up, followed by whirring.  At first, I thought there was a fan inside the enclosure, considering the noise and the breather holes (more like a mesh) on the back, but I couldn’t feel any airflow, so I concluded that the whirring is actually the drive itself.  The sound is definitely quite noticeable – louder than the rest of my PC, and I can easily hear it from 3 metres away.  I have a 3TB WD Elements sitting right beside it which is almost silent – I can hear that drive when it spins up, but it’s still much quieter than this new Seagate.  Another interesting thing is that, despite my internal 4TB Seagate having the same model number as the drive in the external, the internal one seems pretty quiet; it’s possible that the case is blocking some noise, but even with it open, I can’t distinctly hear the drive above the other noises in the case.

Now, whilst I could just get used to the noise, I don’t really want to have to make that compromise.  On the other hand, I didn’t feel like going to the effort of returning the drive and then paying more for a replacement.  So I decided to try tweaking the drive’s AAM/APM settings to see what I could achieve.  Seagate conveniently doesn’t allow you to change the drive’s AAM (or they simply don’t support it, whatever); however, APM is changeable.

Most search results for ‘Seagate’ with ‘APM’ seem to be people complaining about Seagate drives making audible noises when spinning down, who are looking to disable APM.  I’m a bit surprised that I can’t find anyone complaining about the normal operating noise of these drives when they’re not even being accessed.  As I’m using this only as a backup drive, I don’t mind it spinning up only when it’s actually accessed, so turning down the APM value, if it stops the whirring, could work for me.

HDParm for Windows doesn’t seem to detect USB drives (side note: it’s interesting that it uses /dev/sd[a-z] to identify drives, despite being on Windows), but I did eventually find that CrystalDiskInfo would set the APM for the drive.  Changing the default value of 128 to 16 did the trick – the drive spins down soon after becoming idle, making it silent.  Success!

…except that the drive would reset its APM value whenever it lost power.  Urgh, what to do?

Turns out, the CrystalDiskInfo guys thought of this – the mysteriously worded “Auto AAM/APM Adaption” option basically makes CDI set the APM value of the drive every now and then (okay, it’s mentioned in the manual, but it’s not exactly easy to find).  This does mean that CDI has to stay running in the background, but as I have 16GB of RAM in this machine, I’m not too worried about that.

The drive does exhibit some “weird” behaviours (well, supposedly understandable, but still silly) – such as spinning up just before the PC goes into standby, then quickly spinning down.  Also, the auto APM setting sometimes takes a while to kick in after resuming from hibernation.  As my backup routine is basically a scheduled disk sync, the drive spins up for short periods when this occurs, but it’s a tradeoff I’m willing to take.  One thing to note is that the drive seems to spin up on any activity, even reading SMART metadata; CDI, by default, polls the drive’s SMART info to check for issues, but it’s easy to disable automatic refreshing to avoid the drive whirring up every 30 minutes.

tl;dr: if you’ve got a Seagate external, can’t stand the whirring, and don’t mind it being spun down when idle: install CrystalDiskInfo, turn down the APM, enable the auto APM setting, get CDI to load on startup and disable automatic polling of SMART info for the drive.
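(If you’re on Linux instead, hdparm should manage the same thing – device name is a placeholder, and not every USB-SATA bridge passes these commands through:)

    hdparm -B /dev/sdX      # show the current APM level
    hdparm -B 16 /dev/sdX   # low value = aggressive power management, spins down when idle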


Side note: CrystalDiskInfo provides a “Shizuku Edition” of the application.  As I couldn’t find information on how it differs from the Standard Edition, I ended up downloading both, curious about the ~65MB size difference.  Turns out, Shizuku is just an anthropomorphised mascot for the application, the size difference being mostly high resolution PNGs depicting her in the theme that comes packed in the Shizuku version (the ‘Ultimate’ version contains multiple copies at multiple resolutions – presumably ‘Simple’ and ‘Full’ don’t contain the higher resolution copies, although one wonders whether multiple copies were really necessary).  The devs even went to the effort of getting the character voiced, which means you get a cutesy voice warning you if your hard drive is about to die (assuming you know enough Japanese to take a stab at what’s being said).
Though despite my enjoyment of moe anime, I’m fine with the Standard Edition.  Hats off for the effort, nevertheless.

Side-side note: The voices from above were encoded using Opus, my favorite audio codec as someone interested in data compression.  Yay for not going MP3.  Now if only they could get those images down to a reasonable size…

Blog Revival!

For the zero or so people who like visiting zingaburga.com, you’ll have noticed that my old blog has been put back into action.

I stopped updating this for a while, and when the old server died and I realised that I was too stupid to have kept a backup copy of my server configurations, I didn’t bother trying to put this blog back up.

So now that I have, are more updates coming?  Maybe – depends on what I feel like really.  Since I’ve just done yet another server move, I decided to set this up again, as I did recently have some thoughts on writing articles to no-one in particular.  (on the topic of servers, cheap $15/year VPSes are really decent these days!)

It’s funny reading my old posts though – relives some old memories but makes you feel a little silly at times.

Moved Server

Helloo~ another rare post from me!

I recently shifted my websites from my old dedicated server to this VPS – a move which I’d been too lazy to make for about 2 years.

The dedicated server was rather overkill for the websites I’m running (I originally had other plans, but didn’t follow them through), so I’d been paying too much for hosting for quite a while.

This new VPS is a Xen instance with 1GB RAM, 30GB HDD and 1.5TB/month transfer from ChicagoVPS, using an awesome deal they were running.  I asked support to increase the space to 50GB, which they did for only $1.75/mo extra (awesomesauce).  They also agreed to supply a further yearly prepayment discount if I switch to an annual billing cycle, which I plan to do soon.  I’ve been happy with speeds and I/O performance; the CPU is a Xeon X3450 (the Xeon equivalent of an i7 920), so pretty cool too.

Now the fun part: setting the thing up.  I was previously using CentOS 5 64-bit, but after using Debian, I somewhat like its setup better, so I decided on Debian 6 32-bit for this server.  Server stack software:

Webserver
Running an nginx frontend proxying to an Apache backend with the PHP module.  I’ve historically had issues with CGI/FastCGI, which is why I decided to go with the more familiar Apache PHP module, although the last time I tried FastCGI was years ago.  nginx has been great, and it allows me to run a minimalist Apache, which works well for me.  I also get the advantage of accelerated proxy responses in XThreads, although I’ve removed all the big downloads I used to host, to fit in the 50GB of disk space.

Unfortunately, unlike my other installs of Apache with the PHP module, Apache seemed to be leaking memory on this setup.  After tweaking a few PHP configuration variables, the problem seems to have magically gone away, though I don’t know why.  Nevertheless, I decided on a higher *SpareServers configuration and a very low MaxRequestsPerChild to work around any possible memory leaks.  Apache itself only has 3 modules active (some configuration needed to be modified to accommodate this minimalist setup): mod_dir, mod_rewrite and mod_php5.
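For the curious, the relevant knobs are the prefork MPM ones – the values below are illustrative rather than what I’m actually running:

    # mpm_prefork: recycle children quickly to contain any leak
    StartServers           4
    MinSpareServers        4
    MaxSpareServers        8
    MaxClients            20
    MaxRequestsPerChild  100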

I’ve also set nginx to send HTTP Expires headers, so pages load faster (since Firefox won’t be sending conditional requests and waiting for HTTP 304 responses on static files).  Otherwise, configuring two servers is a bit more of a hassle, especially with rewrite rules, but manageable.
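The Expires setup itself is only a few lines in nginx – the extensions and duration here are just examples:

    # serve static files with a far-future Expires header
    location ~* \.(css|js|png|jpg|gif)$ {
        expires 30d;
    }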

Database Server
I’ve decided to go with MariaDB instead of MySQL here.  As with MySQL, the MariaDB defaults are a bit overkill for a 1GB server, so my.cnf needs tweaking.  Unfortunately, whilst there are many MySQL tweaking articles out there, I didn’t find any for MariaDB – MySQL advice largely translates over, but there are parts which don’t.  So configuration took a bit more time and effort to get right.

Whilst disabling InnoDB and tweaking buffers is probably enough for a standard MySQL setup which only runs MyISAM tables, MariaDB includes, and activates by default, a number of other plugins which probably need to be disabled (such as PBXT).  Aria, being the new internally used storage engine, cannot be disabled, and you need to remember to tweak down its default buffer size in addition to the MyISAM buffers.
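A sketch of the sort of my.cnf excerpt I mean – values are illustrative, and the exact option names may vary between MariaDB versions:

    [mysqld]
    skip-innodb
    skip-pbxt
    key_buffer_size            = 64M   # MyISAM index buffer
    aria_pagecache_buffer_size = 64M   # Aria’s equivalent, active by default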

Speaking of Aria, I decided to switch all my tables to the Aria format, as it’s essentially an improved version of MyISAM anyway.  Everything seems to be smooth sailing so far.
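(The conversion itself is just one statement per table, e.g.:)

    ALTER TABLE mybb_posts ENGINE=Aria;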

As for database backups, I’ve decided to move away from the mysqldump command I’ve been using for so long.  Although I’d disabled table locking when dumping, so that the website didn’t lock up for 15 minutes during the dump, I’m not sure how appropriate that really is, not to mention that it seems like a lot of unnecessary load.  Considering alternatives, there seem to be only two: mysqlhotcopy, or a replicated slave which I can run mysqldump on.  The latter requires more configuration, so I’m considering the former.  However, mysqlhotcopy locks all tables being dumped, which means the site locks up for about 30 seconds whilst the database gets copied.  I’m not really worried about the downtime, but the fact that requests queue up on the server and quickly chew through RAM is something I do have to take into consideration.  As the mybb_posts table will obviously be the one taking the longest, and locking it will only really affect new posts, it seems better to lock and copy individual tables one at a time, which will probably mean writing my own script (or calling mysqlhotcopy a few times).  There’s a slight possibility of data desynchronisation between tables, without referential integrity, but I’d presume this is somewhat rare.  Besides, if this really is an issue, it’s possible to group commonly used tables together.
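A rough sketch of what such a script might look like – database name and destination directory are placeholders:

    #!/bin/sh
    # back up one table at a time, so only one table is ever locked
    DB=mybb
    DEST=/backup/$DB
    for t in $(mysql -N -e 'SHOW TABLES' $DB); do
        mysqlhotcopy --addtodest "$DB./^$t\$/" "$DEST"
    done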

Other
Well, the webserver/PHP and database server are the most exciting parts to configure, since they’re the heart of the website server (trying not to say “webserver” again).  I went with postfix instead of sendmail, and the email configuration wasn’t as scary as I thought it would be.  Nothing else is particularly worth mentioning otherwise…

Moving the server
I had originally planned to stagger the move, so I first moved zingaburga.com over to identify any issues (such as the Apache memory leak).  After that, moving everything else over went pretty quickly, even the EP database (well, I did move all the attachments over before closing down the forums; coupled with setting the domain’s TTL to 60 seconds, there wasn’t that much downtime).

Unfortunately, the EP tables were defaulting to latin1 encoding.  This caused an issue, as UTF-8 data was stored in them, and the default encoding on this new server is UTF-8.  That meant hours of downtime, with me staying up into the wee hours of the night repairing the encoding.  And then, after I did that, I forgot to switch the users table back to textual formats (from the binary fields used during conversion), so no-one could actually log in.  Some other bugs which I didn’t have before needed some nginx proxy tweaking, but otherwise, everything seems to be well.
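(For anyone facing the same thing, the usual fix is a two-step conversion through a binary type, so MySQL doesn’t re-encode the data – table/column names here are illustrative:)

    -- relabel a latin1 column that actually contains UTF-8 bytes
    ALTER TABLE mybb_users MODIFY username VARBINARY(255);
    ALTER TABLE mybb_users MODIFY username VARCHAR(255) CHARACTER SET utf8;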

Overall, the server never seems to go over 500MB of RAM usage in normal situations, so I’m glad I got 1GB for plenty of headroom.  I’m also surprised at this relatively low memory usage, despite being rather generous with buffer sizes – I guess tweaking pays off.

Too Much Bandwidth (or maybe, just quota)

So, time for another pointless update on myself (well, I may as well post, otherwise this place would be entirely dead).

I’ve posted a number of times before about my internet connection, and you’ve probably figured that I’ll never shut up about it until something like the NBN comes (if it ever will).  But anyway, this might be a bit of a turn.

Right now, I’m on a 1.5Mbps connection with 25GB of peak quota and 120GB of off-peak (2am – 12pm) quota per month.  (If you’re wondering, the annoying slowdowns have since mysteriously vanished.)  Exetel (my ISP) has decided to raise prices by $10/month, so their lowest (non-rubbish) plan is now $50/month.  They have somewhat “compensated” by increasing quotas to 30GB + 180GB off-peak (which will become 2am – 2pm); however, I’m already finding it really difficult to use up my current quota.

I’ve looked around, but for 1.5Mbps connections, it seems there really isn’t much cheaper available (thanks to Telstra’s dominance in the area) – probably the most I could save would be $5/month, and that would also require bundling with a phone.  Oh well.

So, back to the issue of using up the quota.  I guess I don’t really have to, but I’ve developed this idea that I should, and despite telling myself it’s unnecessary, I’m always trying to find something to exhaust the bandwidth on.  So yeah… downloading useless stuff.  It’s especially difficult as I try to be conservative with bandwidth usage.  I’m really starting to run out of ideas for what to do with the quota – perhaps I should convince myself not to bother with it (and save some electricity by not having the computer on at 2am downloading stuff).