Tuesday, 18 December 2012

Have you seen the pictures from Mars?

No? Ah, that's a shame. Here you go. First link, NASA photojournal. The photos there have already been processed and stitched, and I find the panoramas astonishing. It's just unbelievable that those pictures come from another world.

Second link, raw images. You probably want to select pictures from Mastcam or MAHLI if you want to see anything of interest. In the Mastcam photos you'll also find a few images taken in the near-infrared spectrum, if you're into multispectral imaging at all (here is a detailed description of the Mastcam, including the spectral sensitivity of the sensor and the spectral transmission of each filter).

For me, the drawback of the raw images is that for most of them only the visible-light high-resolution version is available, while for the near-infrared there's only a tiny thumbnail. The reason for this is bandwidth. Yes, it doesn't take a lot of bandwidth to download a photo. But Curiosity is on Mars, and the interplanetary bandwidth is very limited. On top of this, the same antenna that communicates with Curiosity is used for all the other probes as well (or at least some of them, I'm not sure now and can't be bothered to google it). So, limited bandwidth and a limited time slot too. The result is that engineers at NASA download only thumbnails of the images, and then download the higher-resolution versions of those they deem worth the effort.

It's a shame, because I'd be very interested in getting my hands on some misty image of the horizon where the near-infrared version is crisp and clear...

Monday, 3 December 2012

More about exponents

My previous post raised some interest (i.e., one person asked me about it), so I decided to follow up a little bit on that. In a nutshell, I suggested using logarithms and exponents to raise large matrices to some non-integer power because it's faster. That is, exp(log(A)*x) is faster than A.^x, subject to the constraint that A>0 everywhere. Referring to this, I can confirm that it works in Octave as well as in Matlab. I wouldn't know about other programming languages though; if you feel like trying and letting me know in the comments, I'll create a new post about it.

To that, I wanted to add something else. If we want to raise a large matrix to some integer power, it's worth using exponentiation by squaring. The following Matlab code is already significantly faster than the .^ operator when the exponent is as small as 4.

function y = pow(x,p)
  % Element-wise exponentiation by squaring
  y = ones(size(x));
  while (p > 0)
    if (mod(p,2) == 1)
      y = y.*x;       % multiply in when the current bit of p is set
    end
    p = floor(p/2);   % move to the next bit of the exponent
    x = x.*x;         % square the base
  end
end

Interestingly, this gives no speed-up in Octave, which probably already implements a similar algorithm internally when it realises that the exponent is an integer. So, if you use Octave, keep going with your .^ operator.

In Matlab the code doesn't look great, probably because most of its functions are designed for double-precision floating-point arithmetic, while here we're halving an integer exponent. Somehow, the same function written in C looks a little bit neater.

float powN(float x, unsigned int p) {
  float y = 1.0f;
  while (p > 0) {
    if (p & 1)   /* current bit of p is set: multiply in */
      y *= x;
    x *= x;      /* square the base */
    p >>= 1;     /* move to the next bit of the exponent */
  }
  return y;
}

Here we can use bitwise operators directly, both in the condition and in place of the division by 2. That said, this is a very low-level operation which is likely to be implemented more efficiently in whichever API you're using (except for Matlab, it seems). So I'd recommend double-checking first, because it might very well be that a vectorised version outperforms this one.