Tuesday, 3 September 2013

Undocumented Matlab

I'm not writing here nearly as often as I'd like, but as the theme of the last few posts has been Matlab, I thought it would be appropriate to continue in the same fashion.

I came across a very interesting blog about undocumented Matlab functions. It has all sorts of tips and tricks: for example, did you know you can have Matlab run on a more recent JVM? No? Me neither. (But there are very few reasons why anyone would want to do it.) Or, did you know you can force Matlab to use a new (still beta) version of the handle graphics (i.e., the component that draws the plots)? No? I had no idea either. This could be useful, as the new graphics look a lot better than the usual ones.

Well, I've given you the source of all that, so dive in and enjoy...

Tuesday, 29 January 2013

Functional programming in Matlab

I came across a very interesting blog post recently. I'll just link it here so you can read it at the source. But to give you a teaser: how would you write a min_and_max function in Matlab that does this?

[extrema, indices] = min_and_max(y);

You could write an m-file, but the alternative is the following mind-blowing and very clever one-liner:

min_and_max = @(x) cellfun(@(f) f(x), {@min, @max});

I love it. Check out the original blog post on Loren on the Art of Matlab for an explanation on how it works.
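For anyone more at home in Python, the same idea of applying a list of functions to one input can be sketched like this. This is my own rough analogue, not the original Matlab code; the indices part is a naive lookup, since Python's built-in min and max don't return positions:

```python
def min_and_max(y):
    """Apply each function in a list to the same input, collecting results."""
    extrema = [f(y) for f in (min, max)]          # [minimum, maximum]
    indices = [y.index(v) for v in extrema]       # position of each extremum
    return extrema, indices

extrema, indices = min_and_max([3, 1, 4, 1, 5])
print(extrema, indices)  # [1, 5] [1, 4]
```

The core of the trick is the same: the list of functions is the data, and a single comprehension (cellfun in Matlab) drives the applications.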

Thursday, 24 January 2013

Homebrew

I haven't used Linux for a long time now. I remember compiling programs with mixed feelings: partly the satisfaction of having a piece of software exactly tuned for your machine, and partly the pain of scattering files around the system without being quite sure how to get rid of them later on.

Fast forward 10 years (did I say 10? Great Scott!), I'm sitting here at my shiny MacBook Pro and thinking "Heck, I'd really need this or that program/library/whatnot".

Now, would I be happy to pile up files randomly in OS X? Not exactly. Would I be happy to create a ~/local folder in which to keep the binaries? Good luck with that if you need to link against a library. This is where tools like Fink or MacPorts come in handy. However, for one reason or another, neither of those ever quite hit the spot for me.

The one that did hit the spot was Homebrew. I'm not exactly sure why. Probably because it's quite easy to use (not that the others aren't) yet lets you customise your installation quite a lot (brew install --interactive package). It also makes it very easy to compile a package that isn't included in the distribution while still registering it with Homebrew, so that it's easy to remove or update later on. Once you do that, you can also contribute the recipe back to Homebrew if you're so inclined, saving others from the pain you've suffered. (I haven't gone that far yet, possibly because what I've compiled so far required a few tricks that would take me too much work to recreate in an automated script.)

So, here are a couple of tricks I learnt that weren't obvious from the documentation. Say you want to compile a package that comes with Homebrew, but for some reason you desperately want it tuned for your CPU, or you want to compile it with or without a certain feature that Homebrew's default build doesn't or does include. The way to go is:

brew install --interactive package

At this point you're on your own to go through the usual configure; make; make install routine. One thing that helps here is brew diy, which tells configure where to put your compiled binaries (it generates the --prefix option for configure). The command would be something like this:

./configure $(brew diy)

although of course you have to add your own options to that.

The other very useful trick is compiling packages that aren't included in Homebrew and registering them with Homebrew, so that you can easily delete them later on. Say you want to install painfulcompilation 1.0. You've decompressed the .tar.gz and your terminal is now in the painfulcompilation-1.0 folder. This is what you need to do:

./configure $(brew diy)
make
make install
brew link painfulcompilation

and magically, painfulcompilation is installed in what is known as the Cellar, and all the binaries are linked into /usr/local. How cool is that?

Naturally, not everything is perfect. One crucial thing is to make sure your user has permission to write to /usr/local, otherwise you're going to have a bad time (although fortunately this kind of error is quite easy to track down).

The good thing about all this is that it allows for all the dirty tricks needed to get something compiled. For example, I was trying to install pfstools on my laptop (a set of tools for working with HDR images, if you're interested). Funnily enough, clang would compile most of them but then fail on one of the tone-mapping operators (Fattal's, if you're curious) because it doesn't support OpenMP. On the other hand, llvm-gcc supports OpenMP but would give an error on another tone-mapping operator. So I compiled the whole package with two compilers, by calling ./configure a second time after I got the first error. That is the kind of thing you can get away with using Homebrew.

Tuesday, 18 December 2012

Have you seen the pictures from Mars?

No? Ah, that's a shame. Here you go. First link: the NASA Photojournal. The photos there have already been processed and stitched, and I find the panoramas astonishing. It's just unbelievable that those pictures come from another world.

Second link: the raw images. You probably want to select pictures from Mastcam or MAHLI if you want to see anything of interest. Among the Mastcam photos you'll also find a few images taken in the near-infrared spectrum, if you're into multispectral imaging at all (here is a detailed description of the Mastcam, including the spectral sensitivity of the sensor and the spectral transmission of each filter).

For me, the drawback of the raw images is that for most of them only the visible-light version is available at high resolution, while there's only a tiny thumbnail of the near-infrared one. The reason for this is bandwidth. True, it doesn't take a lot of bandwidth to download a photo, but Curiosity is on Mars, and the interplanetary bandwidth is very limited. On top of that, the same antenna that communicates with Curiosity is used for other probes as well (or at least some of them; I'm not sure now and can't be bothered to google it). So: limited bandwidth and a limited time slot too. The result is that NASA engineers download only thumbnails of the images, and then fetch the higher-resolution versions of those they deem worth the effort.

It's a shame, because I'd be very interested in getting my hands on some misty image of the horizon whose near-infrared version is crisp and clear...

Monday, 3 December 2012

More about exponents

My previous post raised some interest (i.e., one person asked me about it), so I decided to follow up a little on it. In a nutshell, I suggested using logarithms and exponentials to raise large matrices to a non-integer power, because it's faster: exp(log(A)*x) is faster than A.^x, subject to the constraint that A>0 everywhere. On that point, I can confirm it works in Octave as well as in Matlab. I wouldn't know about other programming languages though; if you feel like trying and letting me know in the comments, I'll write a new post about it.
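As a quick sanity check of the identity itself (in plain Python rather than Matlab, so it's easy to run anywhere), the two expressions agree up to floating-point rounding whenever the base is strictly positive:

```python
import math

# For a > 0, a**x equals exp(log(a)*x) up to rounding error.
# This is the identity behind the Matlab speed trick above.
for a in (0.5, 1.0, 2.7, 100.0):
    direct = a ** 2.4
    via_logs = math.exp(math.log(a) * 2.4)
    assert math.isclose(direct, via_logs, rel_tol=1e-12)
```

Whether the log/exp form is actually faster depends on the language and library, so this only checks correctness, not speed.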

To that, I wanted to add something else. If we want to raise a large matrix to an integer power, it's worth using exponentiation by squaring. The following Matlab code is significantly faster than the .^ operator already when the exponent is as small as 4.

function y = pow(x,p)
% Raise every element of x to the non-negative integer power p,
% using exponentiation by squaring.
y = ones(size(x));

while (p > 0)
  if (mod(p,2) == 1)  % the lowest bit of the exponent is set
    y = y.*x;
  end

  p = floor(p/2);     % shift the exponent right by one bit
  x = x.*x;           % square the base
end

Interestingly, this does not pay off in Octave, which probably switches to a similar algorithm when it realises that the exponent is an integer. So, if you use Octave, keep going with your .^ operator.

In Matlab the code doesn't look great, probably because most of its operations are designed around double-precision floating-point arithmetic, while here we're halving an exponent that is really an integer. Somehow, the same function written in C looks a little neater.

float powN(float x, unsigned int p) {
  float y = 1.0f;

  while (p > 0) {
    if (p & 1)   /* the lowest bit of the exponent is set */
      y *= x;

    x *= x;      /* square the base */
    p >>= 1;     /* shift the exponent right by one bit */
  }

  return y;
}

Here we can use bitwise operators directly, both in the test and in place of the division by 2. That said, this is a very low-level operation which is likely implemented more efficiently in whichever API you're using (except Matlab's, it seems). So I'd recommend double-checking first, because it might well be that a vectorised version outperforms this one.
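If you want to play with the algorithm outside Matlab or C, here is my own transcription of the same loop in Python, checked against the built-in ** operator:

```python
def pow_n(x, p):
    """Raise x to the non-negative integer power p by squaring."""
    y = 1.0
    while p > 0:
        if p & 1:     # the lowest bit of the exponent is set
            y *= x
        x *= x        # square the base
        p >>= 1       # shift the exponent right by one bit
    return y

assert pow_n(3.0, 4) == 3.0 ** 4       # 81.0
assert pow_n(2.0, 10) == 1024.0
assert pow_n(5.0, 0) == 1.0            # empty product
```

The loop runs O(log p) multiplications instead of the naive p-1, which is where the speed-up for larger exponents comes from.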

Wednesday, 28 November 2012

.^ operator vs. logs in Matlab

Quick comparison test. Say you have a large matrix A: what's faster, A.^2.4 or exp(log(A)*2.4)?

Obviously, the fact that I'm asking should give the answer away: the latter is faster. There's a catch in all this, however, so it really depends on what you're doing, because you can't take log(0). For what I want to do, though, this isn't a problem.

Let's say that our big matrix is actually an image in the sRGB colour space, and I want to convert it to linear RGB, that is, remove the gamma correction. From the Wikipedia page I find the formula, and the straightforward implementation is:

small = srgb < 0.04045;
linear = srgb;
linear(small) = srgb(small) / 12.92;
linear(~small) = ((srgb(~small) + 0.055)/1.055) .^ 2.4;

But if the image is big (or your computer is slow), we can save seconds using logs.

small = srgb < 0.04045;
linear = exp(log((max(srgb,1e-3)+0.055)/1.055) * 2.4);
linear(small) = srgb(small) / 12.92;

This code, when placed in a function, is about three times as fast as the previous one.

In this case, clipping the image at 0.001 works perfectly fine whether you use 8- or 16-bit images, because all those small values are only transformed linearly by this gamma function. Obviously, if you're not doing image processing this may not be good enough for you, and you might have to stick to the .^ operator.

Anyhow, this also works for applying the gamma to a linear RGB image. The code is the following:

small = linear < 0.0031308;
srgb = exp(log(1.055) + log(max(linear,1e-3)) / 2.4) - 0.055;
srgb(small) = linear(small) * 12.92;

Monday, 26 November 2012

City of Norwich Half Marathon, the aftermath

I made it. I ran my first half marathon: 21 km (or 13 miles). I ran the whole distance at a very steady pace, just over 5'30" per km on average. Just under two hours (1:58'23") from start to finish, which I'm very happy with given my poor training. My time on the clock was a couple of minutes longer because I started pretty far back.

My legs ache, though. They're better now, but yesterday, shortly after the race, I was limping around like an 80-year-old man (in fact, I've seen 80-year-old men in better shape than that!).

What's next? Who knows. I think I'll keep running, I'm not sure when I'm going to race next time. I'm really tempted to give the City of Norwich Half Marathon another go next year though!