The joys of enterprise class systems

Enterprise class systems.  I prefer to call them unnecessarily ridiculously complicated, bloated, and just generally overly restrictive applications.

Well, I have to deal with Oracle for one of my courses.  It’s a wonderful database, I mean, the basic free edition only requires 2GB of disk space to install, and only consumes around 880MB of RAM whilst running, with no connections or databases (beyond what the installer puts there) or anything.  I can’t begin to imagine how much the “true enterprise edition” needs.  Contrast it with MySQL, which has far more modest requirements: a roughly 40MB install, and it uses under 30MB of RAM here.

I admit, I have not bothered trying to configure Oracle.  I don’t really want to, to be honest.  But one would imagine that the free non-commercial version is probably aimed at students or similar, and thus would be configured towards such setups (as opposed to a dedicated database server).

I haven’t used it much, but already, one of my beefs with it is auto-incrementing fields.  It’s a common thing to want, so crappy systems like MS Access include an easy-to-select AutoNumber type, and MySQL lets you specify auto_increment in the CREATE TABLE statement.  Being an enterprise class DBMS, Oracle chooses a far superior method – you need to create a sequence, then attach a trigger to the table which pulls the next value from that sequence whenever a new record is inserted.  The fact that it requires more work means that the DBA can charge more hours for configuring an Oracle database, so it’s obviously better (except to the firm (aka suckers) paying for all this).
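
Just to illustrate the difference (a rough sketch from memory – the table, sequence and trigger names are all made up, and I’m not pretending this is polished DBA-grade DDL):

  -- MySQL: one attribute on the column does it
  CREATE TABLE people (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50)
  );

  -- Oracle: a sequence, plus a trigger to stuff the next value into the column
  CREATE TABLE people (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(50)
  );

  CREATE SEQUENCE people_seq START WITH 1 INCREMENT BY 1;

  CREATE OR REPLACE TRIGGER people_bi
  BEFORE INSERT ON people FOR EACH ROW
  BEGIN
    SELECT people_seq.NEXTVAL INTO :NEW.id FROM dual;
  END;
  /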

Though I guess Oracle at least has triggers.  Apparently MySQL has them too these days (as of 5.0), though I’ve never actually used them there.

Oh, and if the above isn’t enough, the license agreements on their free products are wonderful too.  I’m currently writing this because I’m waiting for some 20MB component to download over this 6KB/sec internet connection, because other applications aren’t allowed to bundle this Oracle component.  They make it real easy to grab too – well, I mean, it’s free, so I guess the people getting it can suffer, right?  You need to register (or use BugMeNot), and the download won’t go through a download manager because it needs cookie based authentication.  Urgh, how lovely.

I had someone ask me a bit about a query in Teradata (another enterprise class DBMS).  For MySQL, a simple SELECT GROUP_CONCAT(name) FROM people query would’ve done the job.  Obviously this enterprise class system doesn’t have that cheap GROUP_CONCAT function.  It does have a POSITION function for finding the first instance of something in a string, but it doesn’t have some sort of REVERSE_POSITION (InStrRev?) type function for finding the last instance.  I guess if it had a string reverse function it wouldn’t be so much of an issue, but it doesn’t even have that.  Hooray for the huge selection of available string functions there…
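
For reference, here’s roughly what you get in MySQL (the department column is made up for the example, and SUBSTRING_INDEX is the sort of thing that covers the “last instance” case too):

  -- one comma-separated list of names per department
  SELECT department, GROUP_CONCAT(name ORDER BY name SEPARATOR ', ') AS names
  FROM people
  GROUP BY department;

  -- grab whatever comes after the last delimiter in a string
  SELECT SUBSTRING_INDEX('first.middle.last', '.', -1);   -- returns 'last'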

Doom 3

As I have been playing some Doom recently, I decided to give the nowhere-near-as-popular Doom 3 a go.

When I started playing it, it reminded me somewhat of Prey and Half-Life (though I probably haven’t played that many FPSes anyway), with the ability to interact with computer screens and such.  However, this only seems to be the initial part of the game – that is, the part where you just walk around doing what your superior tells you to do.  After that, the game reverts to the FPS it’s meant to be, full of people turning into zombies and attacking you.

With that, it does somewhat maintain its original theme of shooting lots of undead stuff, whilst modernising the gameplay (to that of the time it was made, a few years back).  Doom 3 adds a fair amount of gore, with blood all over the place, and plenty of dark places where zombies walk out unexpectedly to attack you.  There’s also a bit of having to get access to this and that, which does bear some resemblance to the old key system, or the bits where you need to visit some place to trigger a door.

Overall, whilst it does seem to be a natural progression from the old Doom games, I think the added gameplay complexity does a bit of harm to Doom 3.  I like the simplistic nature of the old Doom, and I think it would’ve been better if Doom 3 kept to this simplistic type of gameplay, rather than trying to fit in with the multitude of other FPSes around.

My revenge on a Cyberdemon [Doom]

Been playing a fair bit of Doom recently to kill random time (as I didn’t quite feel like doing anything else).  To be more specific, the Doomsday engine, playing The Plutonia Experiment.

Discovered something interesting with the Cyberdemon at the end of level 6?

Download Video (~12.2MB) – cbf making a stream

BTW, he’s actually meant to be hovering in mid air, but it seems that loading a save file causes him to drop into the wall itself.

Okay, I’m guessing it’s a glitch in the game when you manage to trigger the lift when the guy happens to be standing on it.

So much love for triple channel DDR?

It seems that everyone getting an Intel 1366 platform (i7 9xx CPUs) has to, and absolutely must, get triple channel DDR3 RAM.  People suggesting builds always get it, and people who decide to only go with dual channel seem to get bonked.

Admittedly, RAM isn’t too expensive (despite prices going up recently), especially compared with the cost of the base X58 platform, and, as the platform does support triple channel, one might say “why not”.  I guess this may, however, be a point against choosing the typical i7 930/X58 platform over a generally cheaper i7 860/P55 platform; the main advantage of the X58 being triple channel RAM (and perhaps greater PCIe throughput, if you plan to go dual GPUs, but I don’t see that many people doing that).

So why is triple channel RAM so much better than dual channel?  From what I see, people seem to have some idea that triple channel is “3x speed” whereas dual channel is “2x speed”, so triple channel makes your RAM 50% faster.  This is true in a sense; however, it only affects the bandwidth figure.

For transferring data, there are two primary speed metrics: bandwidth and latency.  It’s the latency figure that is usually ignored, yet, more often than not, it’s the one that really matters.  Bandwidth can easily be increased by multiplexing multiple channels (as is the case with triple channel memory); latency, on the other hand, is largely fixed and generally cannot be improved so easily.

High RAM latency has been a primary performance killer for a while (it takes in the order of 600 clock cycles for a CPU to retrieve a value from those DDR2/3 sticks) – that is, ignoring all the insane CPU optimisations (L1/L2/L3 cache, branch prediction etc) that have gone in to try to get around the issue of RAM being incredibly slow.  Bandwidth, on the other hand, has never been particularly much of an issue for most applications – I’d say latency is of primary importance for performance; various high performance applications, such as x264, explicitly optimise for this by trying to reduce the number of cache misses.
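
To put that 600-cycle figure in perspective (rough arithmetic on my part, assuming the ~2.8GHz clock of an i7 930):

  600 cycles ÷ 2,800,000,000 cycles/sec ≈ 214ns per trip out to RAM

versus only a handful of cycles (a couple of nanoseconds) if the value is already sitting in L1 cache.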

My 2x2GB DDR2-800 sticks in dual channel can read at about 4.5GB/s, if memory serves me correctly.  Very few applications even need anywhere near that speed – in other words, very few constantly use a large portion of memory (most of the time, applications work on a small portion of memory at any particular point in time).  And even with that read speed, it’s unlikely that your CPU could process so much information, let alone your HDD write it out.

So, going back to the issue at hand of whether triple channel is worth it or not.  There are two primary benefits of it over dual channel: one being increased bandwidth (discussed above), the other being greater SMP efficiency.  On that front, more channels mean the memory can serve multiple data requests simultaneously, rather than queueing them up.  This may be useful if you’re running a multi-threaded application, or several applications, which need to request memory that hasn’t been cached.  However, as alluded to above, most applications use a small portion of memory at any particular point in time, which means the working set is most likely cached by the time it’s used, so the benefit of more channels is limited, and only really shows up in certain applications.  As the number of CPU cores increases, the advantages of more memory channels will become apparent, but for the typical quad core i7 930, I don’t think there’s much of a benefit.

Benchmarks also seem to agree with my assessment here.
(Note: synthetic benchmarks may show a difference, as they will often deliberately stress the RAM; of course, very few practical applications do this.)

Conclusion: triple channel really does not provide as much benefit as many people seem to think.

Disclaimer: I don’t claim to be an expert in this field; rather, I’m basing my reasoning on knowledge I’ve picked up.

Small upgrade to my internet plan

Saw that TPG upgraded their $40/month 512Kbps plan to a 60GB monthly quota (30GB off-peak, 30GB on-peak) from the old 10+15GB/month, so I contacted my ISP (Soul) to upgrade my plan in line with the TPG one.  Sent it through and all that, so now I just need to wait for the upgrade.

Unfortunately, this doesn’t fix the speed issues I’m having (Telstra issue?), but hey, free extra quota, so meh.  Anyway, the off-peak allowance pretty much means off-peak traffic is unlimited, as the off-peak window is only 5 hours a day (4am – 9am).  That is, to deplete the 30GB, I’d have to download at 52.5KB/sec constantly for those 5 hours, every day of a 31-day month (assuming 30GB = 30 billion bytes).  I usually get around 51-53KB/sec anyway, so for all practical purposes, it’s impossible for me to deplete the off-peak quota.
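
Spelling out the arithmetic (my own back-of-the-envelope figures, nothing official):

  30,000,000,000 bytes ÷ (31 days × 5 hours × 3600 sec) ≈ 53,763 bytes/sec ≈ 52.5KB/sec (taking 1KB = 1024 bytes)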

Love for E8400 CPUs and Intel vPro?

Last year, our Uni upgraded all PCs in the IT labs to ones with a Core 2 Duo E8400 CPU and 4GB RAM.  A friend studying at the ANU said that his Uni upgraded all their PCs to E8400/4GB configurations, and noted the serious overkill with the CPU.  This year, my Uni upgraded some PCs in the business faculty’s computer labs to similar configurations.

Some (only a few actually) computers at my work got upgraded to E8400/2GB configurations – again, the same overkill CPU but half the RAM.  Recently, it seems that BCC libraries upgraded most of their public computers to an E8400/1GB configuration (they appear to have 80GB HDDs).  Now seriously, that is an overkill CPU relative to the RAM it’s paired with.  Note that these PCs are only being used for web browsing and maybe some document editing, and they have DeepFreeze installed to prevent viruses and the like slowing the computer down.  They also replaced their catalogue computers with these setups, even though the only task a catalogue computer needs to do is display one website.

With the exception of my Uni’s IT lab computers (where you have computer geeks and people doing 3D rendering), the E8400 is definitely overkill for the basic tasks that get done.  But it’s not a cheap CPU either.  In fact, it’s being sold for AU$219 at Umart (which is more than the E8500 going for $210).  At this price, it costs more than an i3 540 ($165), slightly less than an i5 650 ($224) or a C2Q Q9400 ($234), and more than a Phenom II X4 965 BE ($205; though these organisations tend to never go with AMD).  And it’s far more expensive than the cheap $55 Celeron E3300, which should be more than enough to run everything needed for these tasks.  Okay, these machines were probably assembled before the i3/i5 range came out, but even back then, the E8400 was fairly expensive compared to the rest.

So why so much love for the E8400, even when pairing it with 1GB of RAM?  My guess would be Intel’s vPro platform, which these machines have a lovely sticker advertising.  The platform requires a Core 2 branded processor with Intel-VT, despite few applications (okay, screw Win7’s XP Mode; businesses here still use XP) really using it.  The Core 2 branding requirement is obviously there to push businesses towards the more expensive CPUs, since some of Intel’s cheaper “Core 2 based” CPUs (eg the Pentium Dual Core E6xxx range and some Celerons) have Intel-VT anyway.

Out of the Core 2 Duos, only the E6xxx and E8xxx ranges support Intel-VT.  The E6xxx is phased out, which leaves the latter, and the cheapest E8xxx is the E8400 (can’t seem to get the E8200 or E8300 over here).  So perhaps that’s why everyone’s going with the E8400.

Now, considering that these machines come with 1GB of RAM (around $50 maybe?), an 80GB HDD (around $30 probably) and a cheap case+PSU, the Intel motherboard and CPU probably make up a huge proportion of the cost.  So is vPro really worth such a premium?

DDoS for n00bs

Okay, I admit I get a little annoyed when people talk like they know things (technically speaking, in the field of IT) when they clearly don’t, and this is just another one of those cases.

Seems like so many little script kiddies are throwing the word “DDoS” around, usually using it as a means to threaten webmasters or whatnot.  A quick read of the Wikipedia page should show that they’re using the term quite wrongly.

No, running some PHP script on a shared hosting provider which continually requests a certain page from a server is NOT a DDoS attack.  Gawd.  At best, it’s a DoS attack (there’s no distributed component here), unless you’re running it from two servers (then I guess you could consider it distributed, although for all practical purposes, it isn’t much better).  Of course, being on a shared host, they quickly find they hit the host’s limits, though there are plenty of shabby shared hosting providers which don’t properly limit people…

Anyway, for the n00bs out there who really do want to perform a “DDoS attack”, let me give you a simple example which is probably way more effective than your stupid direct attack.

  1. Look for an intensive page on your target website.  This may be difficult to identify, but use intuition here.  Pick a page which is generated by a script (eg PHP) that outputs a lot of data – that’s probably ideal.  If not possible, try a search results page (if it doesn’t require POST request methods), or whatever.  I don’t care, I’m not doing this attack.
  2. Register on some very popular forums that let you stick remote images in your signature (even URLs that don’t actually point at an image).
  3. Put the URL of the target page in your signature, something like this:
    [img]http://example.com/forums/attachment.php?aid=3[/img]
  4. Repeat for the various popular forums, and then make posts, mainly ones that are likely to be viewed by lots of people.
  5. You can also try various other sites which allow image embeds.

Any half-arsed script kiddie should be able to figure out how this one works, so I’m not going to explain the intuition behind it.

Oh, and if you’re an admin, there’s no particularly *easy* way to get around this, unfortunately.  You can try hotlink protection, but that will break most inbound links to your website, or you can try fiddling with URLs (which can do the same thing), but ultimately, this “attack” causes traffic to your server to rise quite a lot without actually following the patterns of a typical DoS attack.
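
If you do want to try the referer idea despite the downsides, here’s a minimal sketch in PHP (the filename and the example.com host are made up for illustration; note it also blocks visitors arriving via external links or with referers stripped, which is exactly the breakage I mentioned):

  <?php
  // expensive-page.php - crude referer check; example.com is a made-up stand-in for your own host
  $ref  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
  $host = ($ref !== '') ? parse_url($ref, PHP_URL_HOST) : null;
  if ($host !== null && $host !== false && strcasecmp($host, 'example.com') != 0) {
      // hotlinked from somewhere else - bail out cheaply instead of doing the expensive work
      header('HTTP/1.1 403 Forbidden');
      exit;
  }
  // ...otherwise carry on generating the expensive page as usual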

Why is it so difficult to get complex ListView/TreeView controls?

I’ve seen many applications using ListView or TreeView style controls which have far more functionality than those offered by the standard Windows API.

Perhaps the most notable (or just the one I have in mind right now) is a hierarchical ListView, or perhaps a multi-column TreeView (ie a combination of the two controls).  There are so many applications which could use such a control.  An example would be Thunderbird’s mail listing (which can be hierarchical if you enable threaded view, as well as accept custom formatting per item).

However, there is no such functionality provided by the Windows API – to implement this, you typically have to either re-invent the wheel, or use “dirty” techniques like owner-drawing (which itself isn’t exactly that simple).  Why, oh why, hasn’t this been implemented as standard Windows functionality?  These controls have hardly changed for like 13 years.

Now, one may say you could simply use controls other people have written (ActiveX OCXs for VB(A)), but of course, this isn’t always an acceptable solution.

XThreads v1.2 Released

As mentioned earlier, this plugin was due for an update after I did an update to my Soft Delete plugin.  I finally got around to pushing out an update last night, after pretty much spending a whole day bug testing & fixing, and finding even more bugs – I even managed to stumble on a security issue I hadn’t realised was there (not that anyone would probably have noticed, as I bet no-one looks through my mess of code), so luckily, I patched that.

Though, as I somewhat expected, I did stuff it up, and this morning I woke up to find that my clever little pre-parser didn’t go as well as I thought it did.  I fixed it up, though I made a bit of a mess of it, having to release 2 versions.  Well, now at v1.22, it seems stable enough, so I guess that’s over and done with.

As I haven’t written about this plugin here before, I’ll note some of my personal thoughts on it.  This is probably my most complex plugin ever (ignoring stuff like my Syntax Highlighter, which is quite algorithmically complex).  Currently at around 5,500 lines of code, it isn’t my biggest (MyPlaza has like >16,000), though I do code rather “compactly” and I’m a bit messy, chucking stuff onto fewer lines than most other scripters would.  What makes this so complex is the amount of integration with MyBB it does, and the amount of core changes it makes to the forum script.  It has to do some rather elaborate hacks in many places to get the desired behaviour.  A script like MyPlaza, on the other hand, is much simpler – for one, it mostly adds functionality, rather than changing functionality, which is what XThreads tries to do.  However, the result is something I really like.  You can implement a variety of things, from Thread Prefixes to Download Systems to Youtube Video Galleries (many thanks to RateU for the examples) – all without any PHP whatsoever.  Furthermore, as this integrates tightly with MyBB, and in fact uses many of its features, these systems will often inherit MyBB’s capabilities, such as permissions, ratings and comments, all without some coder having to explicitly implement them.  And that’s not to mention possible integration with other plugins…

Anyway, back to the update: it pretty much adds everything people have asked for, as well as some random ideas that have popped into my head.  XThreads should now have a solid base upon which customisations can be built, as well as handling stuff like very big downloads much better than MyBB does.

So what else is there to do?  I’ve got a long list of stuff that would be nice to implement, but I probably won’t implement much of it unless someone actually wants it.  I think the biggest weakness of this plugin is that it’s targeted at more advanced users, or perhaps that it has a bit of a learning curve.  The majority of the MyBB community aren’t terribly knowledgeable, or probably just can’t be bothered doing the things required to make this plugin work.  So maybe something interesting would be a bit of a plugin API to allow third parties (or just the people actually contributing) to make simple modules which add on functionality without the user having to do a lot of manual edits.  The main thing would be some admin interface – take a gallery, for example – the module would do the necessary edits as in the gallery example, but also provide an admin interface for adding gallery components.  Or maybe I’ll just fall back on a link to edit the gallery forum…  But anyway, something as simple as this would easily leverage the flexibility of XThreads and MyBB to make a powerful system, and perhaps add further specialisation, with a strong potential to surpass the capability of any other gallery system out there for MyBB.

I’m glad I managed to finalise and push this out this weekend, before I start my second job.  Managing two jobs might be a little taxing on the amount of free time I have, unfortunately.  So this may be the last update in a while.

PHP and Backslash Hell

Backslashes are a fun way to escape certain characters in languages like C (I don’t know if earlier languages used the idea, not that I care anyway); for example, "\"\\" represents a string containing a double-quote and a backslash character.

Well, it would make sense to use this for the regular expression (preg_*) functions too.  It’s just that… it gets a little awkward at times.  Once, I wanted to use a regular expression to match a double-backslash (\\) in a string.  For just that, you’d need to put 8 backslashes in the pattern – '\\\\\\\\'.  There are two levels of escaping here – first PHP string parsing (8 -> 4 backslashes), then regular expression parsing (4 -> 2 backslashes).
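
A quick runnable illustration of those two levels (the subject string is just something I made up for the example):

  <?php
  // the subject contains a literal double-backslash: C:\\share (made-up example string)
  $subject = 'C:\\\\share';   // single-quoted: each \\ becomes \, so this is C:\\share

  // the regex needs \\ per literal backslash, so \\\\ to match two of them;
  // writing that as a single-quoted PHP string doubles everything again, giving 8
  var_dump(preg_match('~\\\\\\\\~', $subject));   // int(1) - it matched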

Nothing much, but, well, dealing with backslashes in regular expressions gets crazy sometimes, especially as most regexes look really confusing anyway.  For example, a bit of code used in my Reverse MyCode Parser to match captured patterns in regexes:

while(preg_match('~.*(?:^|[^\\\\](?:\\\\\\\\)*)(\(([^?\\\\]|[^?].*?(?:[^\\\\](?:\\\\\\\\)*))\)([.*]\\??|\\?|\\{\d+(?:,\d*)?\\})?)~s', $pattern, $match, PREG_OFFSET_CAPTURE))