Migrated from Mephisto to WordPress

I’ve finally migrated this blog from Mephisto to WordPress. I’ve set up redirects, so ideally all the old URLs should still work fine. Apologies for the RSS feed spam, if you got hit with any. Once the dust has settled here, I’ll be migrating the Rubygame blog too.

Why the switch? Simply put, Mephisto wasn’t cutting it anymore. It’s not as usable or polished as WordPress, and it doesn’t feel like it’s going anywhere. Worse, the Rails-based setup was making it difficult to keep the site up and running, and it was prone to mysterious breakages that would come and go with no apparent cause. In the end, it just wasn’t worth it to me.

For the migration-curious, here’s how I did it.

Your Git Submodule and You

(Pssst. Check out my Git Submodules Cheat Sheet for a quick reference.)

This post is the result of my investigations into how Git submodules work and how to use them. My goal in investigating submodules was to decide if they would be an effective way to share specs among the various Ruby FFI implementations (Ruby-FFI for MatzRuby, JRuby’s FFI, Rubinius’ FFI, etc.). We wanted all the projects to be able to include the specs as a subdirectory of their main repository so that they could easily run them, yet we also needed an easy way to keep all the projects in sync.

99% Pure Functional Programming

In my recent adventures in Haskell and Scheme, I was immersed in the concept of functional programming. Haskell in particular has a strong relation with the notion of pure functions, i.e. functions without side effects. A pure function does nothing except calculate and return some value based on the input parameters.
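The idea is language-agnostic, so here’s a minimal sketch in Python (these toy functions are my own, not from the Haskell material):

```python
import math

# Pure: the result depends only on the arguments, and calling it
# changes nothing outside the function.
def hypotenuse(a, b):
    return math.sqrt(a * a + b * b)

# Impure: the same calculation, plus a side effect (printing)
# that is invisible in the return value.
def noisy_hypotenuse(a, b):
    result = math.sqrt(a * a + b * b)
    print("hypotenuse is", result)  # side effect!
    return result
```

Given the same inputs, the pure version always returns the same result and does nothing else; that’s the whole contract.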

I like pure functions, and I try to write them where I can. But Haskell is billed as a “purely functional programming language”. This perplexed me. How could there be an entire programming language that was purely functional? Or, rather, how could you use such a language to actually do anything useful?

Consider this: if you’re restricting yourself to absolutely no side effects, you’re not allowed to write to files, send data over a network, print text to the console, draw images on the screen, or anything like that. (So much for any ideas of writing a game in a purely functional language!) A program with no side effects is a sort of black hole: parameters can go in, but nothing comes out (except the program exit status code, which is the “return value” of the program). The user doesn’t see anything, no files are changed, no packets sent over the network. The program leaves behind no legacy: the system is in much the same state after the program has run as it was before the program ran.

And yet, at least one useful bit of software has been written in Haskell: the darcs revision control system. So what’s the secret? How did they do it?

Well, it’s simple: Haskell isn’t entirely pure. It’s “tainted” with some functions that do have side effects, such as writing to files and printing text.

Granted, it’s noteworthy that Haskell can get by without many other common side effects, like editing strings or lists in-place, modifying object state, or keeping global variables. It does force you to think in a functional way, and to break many bad habits. And striving for functional programming, regardless of the language you are using, often results in cleaner, less buggy, and easier to maintain code.

But at the end of the day, if you want to accomplish anything, you have to make some concessions to imperative programming. There has to be a side effect somewhere.

Fun (and not-so-fun) with Haskell and Scheme

Lately I’ve been poking around at some new-to-me programming languages, Haskell and Scheme. I don’t expect to use either of them in a serious, practical project (except maybe Scheme for scripting GIMP), but they are both “weird” enough that it’s fun to learn them and expand my horizons.

For Haskell, I’ve been following along with the brilliant and funny Learn You a Haskell for Great Good. It’s in a similar spirit to Why’s (Poignant) Guide to Ruby, but more instructive and less nonsensical, without being any less funny. In addition to being a highly readable and excellent learning resource, it’s also home to golden quotes like this:

You also can’t set a variable to something and then set it to something else later. If you say that a is 5, you can’t say it’s something else later because you just said it was 5. What are you, some kind of liar?

And totally irrelevant but highly amusing illustrations like this:

A cartoon octopus playing Guitar Hero.

I therefore assert that Learn You a Haskell is the most awesome guide to any programming language, ever.

Even with a great guide, though, Haskell is a lot to wrap your head around. Partly because of its functional nature, and partly because of its outlandish concepts like curried functions, partial application, and folding, Haskell busted my brain halfway through chapter 6. (Okay, in my defense, it was also 4 AM.)

But it seems like a really interesting language once you grasp it, so I plan to revisit it later.

I also dabbled in Scheme a little bit, following Teach Yourself Scheme in Fixnum Days. Unfortunately, Teach Yourself Scheme isn’t nearly as entertaining or thorough as Learn You a Haskell. It’s very dry reading, and unless analyzing macro expansions gets you off, you probably won’t find it too enjoyable.

But, I was already familiar with Lisp, so most of the concepts weren’t terribly foreign, and I didn’t need as much handholding. For the most part, I just needed an introduction to the terms and function syntaxes peculiar to Scheme. But therein lies a problem: there are lots of Scheme implementations, and they are all peculiar.

SchemeWiki.org lists some 70 different implementations of Scheme, and none of them are authoritative. Perhaps in an attempt to remain apolitical, very few of the lists I’ve found give any sort of indication of which ones are any good. Which are stable, efficient, easy to install and use? Which ones have the most users? What features do they support? Are they still actively developed? Which (if any) of the 6 or 7 Scheme standards are they compliant with? What quirks or extra behavior do they have?

Am I expected to download and try every implementation, devise tests and benchmarks to determine compliance, efficiency, and feature set, and then decide which one I want to use to learn the language? Not gonna happen. Even reading the web sites for every implementation is more effort than I’m willing to spend. If there were 2 or 3 to choose from, sure, but not 70.

In the end, I just said “screw it” and installed Gambit on the highly scientific basis that it has a cool name. But, I might use MzScheme for learning, since that’s what Teach Yourself Scheme uses, and I wouldn’t know the difference.

I’m still interested in learning more Scheme (at least enough to find out what’s so great about call/cc), but the learning experience so far has not been nearly as rewarding or entertaining as it has been with Haskell.

Maybe Scheme needs a tutorial with a sense of humor and an octopus playing Guitar Hero?

It couldn’t hurt.

Revision numbers considered harmful

When using Subversion for version control, I was always conscious of revision numbers when committing. It was as if there was a limited supply of revision numbers, and I didn’t want to waste them making tiny commits. So, I’d often bend over backwards to make sure the change was significant enough to be “commit-worthy”.

It’s an awful habit, to be sure, but I know I’m not the only one. I recently had one of the programmers I manage (a very bright kid, if still a bit green) apologize to me for committing 20 revisions in a single day, as if it was wasteful or inconsiderate to make frequent commits! I think you become less concerned about “wasting” commits as you get more experienced, but it was still always in the back of my mind.

Not so with Git. The reason is simple: Git doesn’t use incremental revision numbers. Instead, it uses a long string of digits and letters which is totally meaningless to a human being. (Getting technical, it’s based on a SHA1 hash of this commit and previous commit(s), or some such thing. But it’s non-obvious to a human what the connection between 2499051dca30def85f5433c08519adea56a12a14 and its parent aaaa83fc4e9737b41a5c52b16e946b34dab63ede is.)
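The gist of it can be sketched in a few lines of Python. (This is a deliberate simplification for illustration; real Git hashes a structured commit object that includes the tree, author, and timestamps, not just a message and parent id.)

```python
import hashlib

def commit_id(message, parent_id):
    # Hash the commit's content together with its parent's id, so the
    # id implicitly pins down the entire history behind this commit.
    data = ("parent %s\n%s" % (parent_id, message)).encode("utf-8")
    return hashlib.sha1(data).hexdigest()

root = commit_id("initial commit", "")
child = commit_id("fix typo", root)
print(root)
print(child)  # 40 hex digits each, with no visible relationship
```

Because the parent id is part of the input, changing any ancestor changes every descendant’s id — but to a human, the ids are just noise.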

Since the commit identifier is totally meaningless to me, I don’t pay attention to it (except in the rare cases where I need it, such as checking the diff for that commit). And because it’s not a number that gets larger the more I commit, I don’t feel like I’m “wasting” numbers by committing often – sometimes several times per minute.

Yet another way Git and other distributed version control systems assuage our obsessive-compulsive tendencies and promote good habits.

Migrating projects to Git

I’ve been using Git for a few weeks now, and it is friggin awesome. It is both the bee’s knees and the cat’s pajamas. I love the local commits, the branching, the stashing, the lightweightness of it. It has improved my workflow a lot. I’m not afraid to make lots of small commits anymore, because I’m doing everything in branches (so I don’t mess up ‘trunk’), and I’m doing it locally (so no one would even notice if I messed up ‘trunk’, anyway).

So now, I want to use it for all my personal projects! The trouble is, they’re all in a Subversion repository. That is, one Subversion repository, with a dozen or so projects in it. Eep! Fortunately, it’s a piece of cake to migrate specific directories from a Subversion repository, into their own Git repositories! You don’t have to migrate the whole thing and then trim it down, which is what I worried I might have to do.

I followed Jon Maddox’s invaluable guide, and it worked like a charm. You can even convert the Subversion user names to Git’s name & email style. Joy!

Now the only trick is to set up public access to the repositories that I want. I’m on shared hosting, so I fear that git-daemon is infeasible, and I don’t feel like putting every little project up on Github. I’ll probably have to fall back on HTTP access, which I hear is considerably slower. Ah well, such is life.

P.S. Yes, I’ll probably migrate + Githubbify Rubygame eventually, but there are a number of issues to sort out first.

It’s a brand new blog(s)!

You may notice that something seems different around here. Today I migrated my blog from that crusty old Typo to the shiny new Mephisto! Most things are working well, but excuse the dust.

You might also notice that the Rubygame posts have vanished mysteriously! Well, in fact, they have been moved to…the new Rubygame blog!

I’ve been sitting on the rubygame.org domain name for a while now, but I was never sure what to do with it. But then I figured, “Hey, let’s put all the Rubygame news on that domain, and use jacius.info for my other projects and personal stuff!” Eventually, I’m going to set up a copy of the documentation and downloads on rubygame.org as well.

And it gets cooler: both blogs are running from the same Mephisto instance, using its multi-site feature! Pretty slick.

But wait, there’s more! Within the next day, everything will be using Mongrel, so it’ll load nice and fast.

Good news all around! … well, except for the old RSS and Rubygame post links not working.

P.S. I’ve tried to keep all the article URLs the same, with moderate success. Unfortunately, the old RSS feed link doesn’t work anymore. But that’s the way the cookie crumbles, I guess. Maybe I can set up a redirect to fix that. I’m also trying to get monthly archives listed in the sidebar, etc. etc. Also, the domain change for the Rubygame posts means that links to them at this domain will fail. Blech. I’ll see if I can set up redirects for that, too.

Update: I found a redirect solution on BlogFish. Hurray! Sorry for flooding your RSS readers, though.

P.P.S. You might be wondering what happened to my plans to use Radiant, since I was hacking on the Radiant-Comments extension earlier this week. Radiant is a really cool platform, but the extensions just aren’t solid enough for me to set up a proper blog without a lot of work. Maybe some day, I’ll switch this blog over to Radiant (and break the RSS feed again ;-) ) but in the meantime I needed a working setup, and Mephisto provided that with a minimum of fuss.

The Magic of Interwebs 2.0

[Update, May 25 – I changed over to Mephisto, and the sidebar doesn’t currently have the Ta-da lists anymore.]

If you look down at the bottom right side of my blog (i.e. where you are now, unless you’re reading this somewhere else… hmmm), you’ll see “Rubygame 3 Checklist (Public Ta-da list)”. Underneath that, you’ll see a whole load of things I have to do before I’ll consider Rubygame 3 to be ready for release. And the list is updated as I check things off… or more likely, add new items.

That’s the magic of the Interwebs 2.0, my friends: integration of your pointless online to-do list with your pointless online blog. Pretty neat, huh?

Right now, the list has 16 to-do items on it. Any bets on how long before it has grown to over 30 items? (“To-do lists: you’re doing it wrong.”)

P.S. I should update my pointless blog software sometime, this version is getting cobwebs.

When RGB is Not Enough, Redux

Here’s a simple example of when the RGB color model fails to accurately model real life interaction of light and color.

Low-pressure sodium vapor lamp

If you drive through any of the tunnels through the Appalachian mountains on the U.S. East coast, you’ll likely be greeted by the ugly yellow-orange glow of a low-pressure sodium vapor lamp.

LPS lamps only emit light around the yellow-orange wavelength. As a result, in situations where they are the only light source, such as deep within a mountain tunnel, everything loses its own color, instead becoming a shade of yellow-orange. A car which had a lovely blue hue in the full-spectrum light from the sun will suddenly look near-pitch black once you enter the limited spectrum lighting of the tunnel. A white car, which reflects light on multiple wavelengths, will be much more visible, but still entirely yellow-orange. A red car or a green car would probably be just a little bit more visible than the blue car, but you’d be unable to tell that they were red or green if you hadn’t seen them in daylight.

If we wanted to make a 3D animation of a car going through a tunnel, it wouldn’t be enough just to make all the lights in the tunnel yellow-orange. In the RGB model, yellow-orange light is just a mix of red and green light; if we sample a pixel from the photograph above, we find that its color makeup is R: 98%, G: 67%, B: 0%.

Unlike in real life, where a red car would appear to be a very dark yellow-orange, a red car in our computer model would still look like a red car. A green car would be a darker shade of green under this faux yellow-orange light, but it would still be green.
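A quick sketch makes the failure concrete. This uses the usual per-channel multiplicative lighting model; the light color is the pixel sampled from the photo, and the car colors are made up:

```python
def lit_color(surface, light):
    # The standard RGB lighting model: multiply channel by channel.
    return tuple(s * l for s, l in zip(surface, light))

sodium_orange = (0.98, 0.67, 0.0)  # sampled from the photo above
red_car  = (0.9, 0.1, 0.1)
blue_car = (0.1, 0.2, 0.9)

print(lit_color(red_car, sodium_orange))   # red channel stays high: still a red car
print(lit_color(blue_car, sodium_orange))  # nearly black, which does match real life
```

The blue car comes out right almost by accident (its dominant channel gets zeroed), but the red car keeps its red hue because the model never asks which wavelengths the lamp actually emits.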

If we’re only concerned with the emotional impact that this unnerving yellow-orange scene will create in the viewer, then we’d probably end up just faking it in post-processing, by desaturating all the colors and throwing a solid yellow-orange layer with a multiplicative blending mode on top.
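That fake might look something like this (a minimal sketch; the luminance weights are one common choice for converting to grayscale, and all the colors are illustrative):

```python
def desaturate_and_tint(pixel, tint):
    # Desaturate: collapse the pixel to its grayscale luminance...
    lum = 0.299 * pixel[0] + 0.587 * pixel[1] + 0.114 * pixel[2]
    # ...then multiply by the tint layer, so every pixel ends up
    # sharing the tint's hue, varying only in brightness.
    return tuple(lum * t for t in tint)

sodium_orange = (0.98, 0.67, 0.0)
print(desaturate_and_tint((0.9, 0.1, 0.1), sodium_orange))  # red car, now a dark yellow-orange
```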

That would certainly succeed in turning everything the same hue, and would probably be satisfactory for most artistic purposes, but it wouldn’t be an accurate model of the real world. Even a more complex filter, which measured each pixel’s color-space distance from yellow-orange in order to derive its luminosity, wouldn’t be quite right.

The situation grows increasingly complex if we try to model situations with multiple hues of strongly-colored lighting in the same scene. For example, suppose we wanted to recreate the (incredibly disturbing) psychedelic scene on the paddleboat from the classic 1971 Willy Wonka & the Chocolate Factory. To properly model multiple light sources illuminating an object at different times, each source emitting a different mix of wavelengths of light, and each material reflecting certain wavelengths but absorbing others… yikes! Accomplishing such a task using the RGB model would be an astounding feat for even a seasoned professional!

When RGB is Not Enough

Over a year ago, my color studies class visited the Krannert Center for the Performing Arts at the University of Illinois, for a guest lecture/demo on theatre lighting, and the interaction of light with colored objects.

By the end of the demo, I had realized something: RGB just isn’t enough to describe the full range of color interaction. And it’s not just RGB that’s deficient; HSV, HSL, CMYK, etc. all suffer from the same limitation. In fact, any color model which tries to describe a color as a single point will fall short.

Why?

Color is conveyed through photons of various wavelengths. The full visible spectrum of wavelengths covers the familiar rainbow: the longest wavelengths we can see appear red; shorter wavelengths appear orange, yellow, green, and blue, with the shortest wavelengths we see appearing violet.

Generally speaking, any given light source will emit photons at multiple wavelengths (see e.g. the diagrams at Wikipedia’s article on “emission spectrum”). A light which appears blue, for instance, might be emitting some violet, green, and even red photons, although the majority of the photons emitted will be blue.

Similarly, any given object will absorb photons at multiple wavelengths, while reflecting photons at other wavelengths. An object which appears green (say, a leaf) reflects primarily green photons, while absorbing lots of red and other colors of photons.

Because color exists across this whole spectrum, no single-point model can capture the full interaction of colored lights and objects; the RGB model completely ignores the effect of photons at every wavelength except three.

Consider this scenario: there is a tight-focused spotlight shining through three filters. Each filter is made of a special material which fully absorbs photons in a specific, tiny range of wavelengths, while letting all the rest pass through; the first one absorbs reddish photons (say, in the range of 790 to 800 nm), the second absorbs greenish photons (540 to 550 nm), and the third absorbs blueish photons (470-480 nm). The spotlight uses a special bulb which emits light equally at all wavelengths in the visible spectrum.

When we look at the spotlight by itself, the light looks white. If we looked at the spotlight through the three filters, what color would it seem to be?

If we were modelling this scenario on a computer using the RGB color model, the answer would be: black; void; nothing. No light at all would make it through all three filters. The first filter would absorb all the red, letting through the green and blue (the light would appear cyan at this point). The second would absorb all the green, letting through the blue. The third would absorb all the blue, letting through nothing at all.

In the real world (or at least, a real world with these magical light bulbs and filters), you would see… white light. It would be only insignificantly dimmer than the light that went in – even if we ignore such things as intensity falloff with distance – but it would appear to be white light, even though all the photons at specific red, green, and blue wavelengths have been filtered out. There would still be plenty of photons at other wavelengths to trigger the light receptor cells in our eyes; it’s doubtful we would even be able to tell that it was missing photons with wavelengths between 790-800 nm, 540-550 nm, and 470-480 nm.
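The thought experiment is easy to simulate. Here’s a sketch that models the spectrum as one sample per nanometer and uses a toy “brightness” that just averages the surviving power (the helper names and ranges are mine, not from any rendering library):

```python
# One notch filter: absorbs photons in [lo, hi] nm, passes the rest.
def notch_filter(lo, hi):
    return lambda wl: 0.0 if lo <= wl <= hi else 1.0

filters = [notch_filter(790, 800),   # absorbs the "reddish" band
           notch_filter(540, 550),   # absorbs the "greenish" band
           notch_filter(470, 480)]   # absorbs the "blueish" band

# The magic bulb: equal power at every wavelength from 380 to 800 nm.
spectrum = {wl: 1.0 for wl in range(380, 801)}
for f in filters:
    spectrum = {wl: p * f(wl) for wl, p in spectrum.items()}

remaining = sum(spectrum.values()) / len(spectrum)
print(remaining)  # about 0.92: nearly all the light survives, i.e. still white

# The RGB model instead samples only three wavelengths, and each
# filter happens to zero out exactly one of them:
r, g, b = 1.0, 1.0, 1.0
r, g, b = 0.0, g, b   # first filter kills R
r, g, b = r, 0.0, b   # second kills G
r, g, b = r, g, 0.0   # third kills B
print((r, g, b))      # (0.0, 0.0, 0.0): black, contrary to real life
```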

I need to get back to work, so I’ll get to the point(s). (“It’s about time,” you say.)

  • RGB is Good Enough for most uses in CG.
  • For special cases where increased realism/control is desired, it would be possible to simulate color interaction using the full visible spectrum.
  • A new color format with values for each of ROYGCBVM (or some subset thereof) would be an excellent compromise between data size and designer control.
  • The colors would need to be converted to RGB to be displayed on a monitor (or CMYK for print), but that could be saved for the last step.
  • Such a color format would be more future-proof, allowing for alternative photon-emitting devices and inks without excessively distorting color.
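As a sketch of those last two points: an eight-band color could be carried through all the lighting math in its own space, and only collapsed to RGB at display time. The band-to-primary weights below are invented for illustration, not calibrated values:

```python
# Eight bands: Red, Orange, Yellow, Green, Cyan, Blue, Violet, Magenta.
# Rough, made-up contribution of each band to the R, G, B primaries.
BAND_TO_RGB = {
    "R": (1.0, 0.0, 0.0),
    "O": (0.8, 0.4, 0.0),
    "Y": (0.6, 0.6, 0.0),
    "G": (0.0, 1.0, 0.0),
    "C": (0.0, 0.6, 0.6),
    "B": (0.0, 0.0, 1.0),
    "V": (0.4, 0.0, 0.8),
    "M": (0.6, 0.0, 0.6),
}

def to_rgb(color):
    """Collapse an 8-band color (dict of band -> intensity) to RGB."""
    r = g = b = 0.0
    for band, intensity in color.items():
        br, bg, bb = BAND_TO_RGB[band]
        r += intensity * br
        g += intensity * bg
        b += intensity * bb
    # Clamp to the displayable range only at this very last step.
    return tuple(min(1.0, v) for v in (r, g, b))

# Lighting and filtering would happen band-by-band in the 8-band
# space; only this final conversion is display-specific.
print(to_rgb({"Y": 1.0}))  # a pure yellow band, rendered for an RGB monitor
```

Swapping this last step for a CMYK conversion (or weights tuned to some future display) would leave everything upstream untouched, which is the future-proofing argument in a nutshell.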