Continuing on the fringes of photography, here are some initial results from an experiment involving long-term time-lapse photos.
To summarize: I've taken (with a phone) a picture of the Aalto University main building, from two different angles, (almost) every morning when arriving at work. So far there are roughly 28 pictures from each angle, spanning 2014-02-11 to 2014-03-25. I'm still collecting more, and still thinking about how to post-process the images, but here are some quick blends.
Unweighted, Unedited Averages
First, here are plain unweighted averages of the unmodified RGB channels, for both views. The images have been aligned using automatically generated SIFT-based keypoints from autopano-sift-C, and the Hugin optimizer. There's some parallax movement in the foreground elements, since the camera position wasn't very fixed from day to day.
First view, raw average over 29 frames.
Second view, raw average over 27 frames.
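The unweighted average itself is simple enough to sketch. Here's a minimal NumPy version, assuming the aligned frames have already been loaded as equally sized (H, W, 3) uint8 arrays; how they're loaded and named is up to the reader.

```python
import numpy as np

def average_frames(frames):
    """Unweighted per-channel mean of equally sized RGB frames.

    frames: list of (H, W, 3) uint8 arrays, already aligned.
    Accumulate in float64 to avoid uint8 overflow, then round back.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for frame in frames:
        acc += frame
    return np.uint8(np.round(acc / len(frames)))
```

In practice the frames would come from e.g. Pillow's `Image.open(...).convert("RGB")` on the Hugin-remapped output files, and the result could be saved with `Image.fromarray`.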
Vertical Column Blend
Here's view #1 in a form quite often used for time-lapse photos: blending consecutive vertical columns out of each frame.
The blending here was done by weighting each frame with a Hamming window 200 samples wide, centered at 29 uniformly spaced locations from the left edge to the right. As chronologically consecutive images aren't usually especially similar (except for the snow, and even that went away and came back once), the images have been ordered by computing the average distances of corresponding pixels of each image pair, then using that as a distance matrix for a symmetric TSP instance, solved (exactly) with the Concorde TSP Solver.
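The column weighting described above can be sketched as follows. This is a minimal NumPy version under my own assumptions: frames come in as (H, W, 3) uint8 arrays, the Hamming window is the usual 0.54 + 0.46·cos shape truncated to its width, and the per-column weights are renormalized so every window position contributes (the sketch assumes the windows together cover every column).

```python
import numpy as np

def column_blend(frames, window_width=200):
    """Blend N frames left to right: frame i is weighted by a Hamming
    window of the given width, centered at the i-th of N uniformly
    spaced column positions from the left edge to the right."""
    n = len(frames)
    h, w, c = frames[0].shape
    centers = np.linspace(0, w - 1, n)
    cols = np.arange(w)
    weights = np.zeros((n, w))
    for i, center in enumerate(centers):
        x = cols - center
        inside = np.abs(x) <= window_width / 2
        # Hamming window centered at `center`, zero outside its support.
        weights[i, inside] = 0.54 + 0.46 * np.cos(2 * np.pi * x[inside] / window_width)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per column
    out = np.zeros((h, w, c))
    for i, frame in enumerate(frames):
        out += weights[i][None, :, None] * frame
    return np.uint8(np.round(out))
```

With 29 frames and 200-pixel windows this reproduces the soft column-to-column transitions; narrower windows give harder banding.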
The result... well, it has a little less banding than the chronologically ordered image, at least. Both are shown below.
Column blend view of location #1, chronological order.
Column blend with TSP-based ordering.
The TSP-based image sorting for the columnar approach is slightly bogus, in that it considers overall similarity between images, and not (directly) the smoothness of the transition between adjacent columns. My original idea was to have the distances also depend on the index of the edge on the path, making the problem an instance of the (harder) time-dependent traveling salesman problem (TDTSP), but implementing that would take some doing.
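For reference, the "overall similarity" distance matrix fed to the solver can be sketched like this. The exact per-pixel metric is an assumption on my part (Euclidean distance in RGB space, averaged over all pixel positions); the post only specifies "average distances of corresponding pixels".

```python
import numpy as np

def frame_distance_matrix(frames):
    """Symmetric matrix of mean per-pixel RGB distances between every
    pair of equally sized frames, usable as edge weights for a
    symmetric TSP instance (e.g. for the Concorde solver)."""
    n = len(frames)
    stack = np.stack([f.astype(np.float64) for f in frames])  # (n, h, w, 3)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Euclidean distance in RGB per pixel, averaged over pixels.
            d = np.sqrt(((stack[i] - stack[j]) ** 2).sum(axis=-1)).mean()
            dist[i, j] = dist[j, i] = d
    return dist
```

Concorde would then take these (suitably scaled to integers, as TSPLIB requires) and return an exact shortest tour through the frames.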
Instead of only trying to find existing smooth transitions, it would certainly be possible to (attempt to) normalize the exposure between images. The lighting conditions are occasionally quite challenging, though, as the sun rises right behind the building.
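One crude way such an attempt could look: scale each frame by a single global gain so its mean brightness matches a common target. This is only a sketch of the idea, not something the pipeline above does; a global gain can't model lighting that varies across the scene, which is exactly the problem when the sun is behind the building.

```python
import numpy as np

def normalize_exposure(frames):
    """Crude exposure normalization: scale each frame so its mean value
    matches the median of the per-frame means. A single global gain per
    frame -- spatially varying lighting is deliberately ignored here."""
    means = [f.astype(np.float64).mean() for f in frames]
    target = np.median(means)
    out = []
    for frame, mean in zip(frames, means):
        scaled = frame.astype(np.float64) * (target / mean)
        out.append(np.uint8(np.clip(np.round(scaled), 0, 255)))
    return out
```

A more serious attempt would probably match histograms, or fit the gain only over a region known to be lit consistently.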
Making a reasonable video would probably need quite a few more frames than I currently have.