A new kind of music video: Radiohead's "House of Cards"

Radiohead’s latest music video, “House of Cards”, was shot using LIDAR (Light Detection and Ranging) rather than cameras:

Learn more about how it was made at the Google Code Radiohead page. The coolest part is that Radiohead is making the data available to anyone who wants to play with it.

They provide some sample Processing code to view the data, but for whatever reason I couldn’t get it to work, so I wrote a simple little iPhone OpenGL ES application:

I’m unable to test it on a real iPhone since I don’t have a paid developer account, but presumably it should work.

View other people’s visualizations at the YouTube House of Cards group.

Multitouch JavaScript "Virtual Light Table" on iPhone v2.0

Now that iPhone 2.0 is out I started playing around with some of the new web features, and soon found that I had created the prototypical virtual light table that’s an essential demo for any new multitouch technology.

It’s about 100 lines of JavaScript. It grabs the 10 latest photos from Flickr’s “interesting photos” API and randomly places them on the screen for you to play with:

This is great if you have an iPhone with the 2.0 software, but desktop browsers should get some multitouch love too. So I started writing a little bridge that fakes multitouch events in desktop browsers. It’s far from complete, but it’s just good enough to get the virtual light table demo working.

So go ahead and load it up in the new iPhone MobileSafari or Safari 3.1+ / WebKit nightly (requires CSS transforms):

http://tlrobinson.net/iphone/lighttable/

In desktop browsers it uses the previously clicked location as a second “touch”, so you can click a photo, then click and drag another spot on the photo to resize and rotate it (notice the yellow dot).

For a good overview of touch events and gestures, check out this SitePen blog post and Apple’s documentation.
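
To give a flavor of the gesture events, here’s a minimal sketch (not the light table’s actual code) that scales and rotates a hypothetical photo element using the scale and rotation properties of the gesture event:

// Minimal sketch: scale and rotate an element with MobileSafari's gesture events.
// "photo" is a hypothetical absolutely positioned <img> already on the page.
var photo = document.getElementById("photo");
var scale = 1, rotation = 0;

photo.addEventListener("gesturechange", function(e) {
    e.preventDefault(); // keep MobileSafari from zooming the page
    photo.style.webkitTransform =
        "scale(" + (scale * e.scale) + ") rotate(" + (rotation + e.rotation) + "deg)";
}, false);

photo.addEventListener("gestureend", function(e) {
    scale *= e.scale;        // remember the accumulated scale...
    rotation += e.rotation;  // ...and rotation for the next gesture
}, false);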

Here’s the source for the fake multitouch bridge:

http://tlrobinson.net/iphone/lighttable/multitouch-fake.js

Clearly the reverse of this bridge would be even more useful, since iPhone only sends mouse events under specific conditions. The mousedown, mouseup, and mousemove events could be emulated using the touch equivalents to make certain web apps work on the iPhone without much additional work. Of course you would need to either cancel the default actions (i.e. panning and zooming) on touch events, or have some way to manage the interactions between them.
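
Something along these lines could serve as a starting point for such a bridge (an untested sketch that only handles a single finger; the event wiring is illustrative, not taken from any existing library):

// Untested sketch: forward single-finger touch events as synthetic mouse events.
function forwardTouch(mouseType, e) {
    var touch = e.changedTouches[0];
    var mouseEvent = document.createEvent("MouseEvents");
    mouseEvent.initMouseEvent(mouseType, true, true, window, 1,
        touch.screenX, touch.screenY, touch.clientX, touch.clientY,
        false, false, false, false, 0, null);
    touch.target.dispatchEvent(mouseEvent);
}

document.addEventListener("touchstart", function(e) {
    e.preventDefault(); // cancel the default actions (panning and zooming)
    forwardTouch("mousedown", e);
}, false);
document.addEventListener("touchmove", function(e) {
    e.preventDefault();
    forwardTouch("mousemove", e);
}, false);
document.addEventListener("touchend", function(e) {
    forwardTouch("mouseup", e);
}, false);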

Mac OS X, Web Sharing / Apache, and Symlinks

Mac OS X comes with an Apache installation, which is very handy, but by default it’s configured not to follow symlinks. I often have projects in other directories that I want to share via the web server, but end up getting errors such as the following:

Forbidden

You don’t have permission to access /~tlrobinson/Editor/ on this server.

And in the error log file:

[Wed Jun 25 16:17:14 2008] [error] [client ::1] Symbolic link not allowed or link target not accessible: /Users/tlrobinson/Sites/Editor

To enable following of symlinks, edit your account’s configuration file, located at /private/etc/apache2/users/username.conf.

Here’s the default:


<Directory "/Users/username/Sites/">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

You can either add “FollowSymLinks” to the Options directive (“Options Indexes MultiViews FollowSymLinks”), or change the AllowOverride directive to “All” (“AllowOverride All”) and place a .htaccess file with its own Options directive (“Options FollowSymLinks”) in your Sites directory.
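
With the first approach, the block ends up looking something like this:

<Directory "/Users/username/Sites/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>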

Then just restart Apache (“sudo apachectl graceful”) and symlinks should work.

Amazon S3 PHP helpers

The Amazon documentation for using S3 with PHP refers to an elusive function called “setAuthorizationHeader”. It’s apparently supposed to magically set the correct value for the Authorization header on a Pear HTTP_Request object. As far as I could tell, it didn’t actually exist — but I wanted it, so I wrote it:

Source

<?php

require_once 'Crypt/HMAC.php';
require_once 'HTTP/Request.php';

define("S3URL", 'http://s3.amazonaws.com');
define("AWSACCESSKEYID", 'XXXXXXXXXXXXXXXXXXXX');
define("AWSSECRETKEYID", 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX');

$s3_hasher = new Crypt_HMAC(AWSSECRETKEYID, "sha1");

// Sign a string with the AWS secret key (HMAC-SHA1, base64 encoded)
function s3_sign($StringToSign)
{
    global $s3_hasher;
    return hex2b64($s3_hasher->hash($StringToSign));
}

// Convert Crypt_HMAC's hex digest to the base64 encoding S3 expects
function hex2b64($str)
{
    $raw = '';
    for ($i = 0; $i < strlen($str); $i += 2) {
        $raw .= chr(hexdec(substr($str, $i, 2)));
    }
    return base64_encode($raw);
}

function setAuthorizationHeader($request)
{
    $headers = $request->_requestHeaders;

    $HTTP_Verb      = $request->_method;
    $Content_MD5    = isset($headers['content-md5'])  ? $headers['content-md5']  : '';
    $Content_Type   = isset($headers['content-type']) ? $headers['content-type'] : '';

    // Get the date, or set it if not already:
    if (!isset($headers['date'])) {
        $Date = gmdate("D, d M Y H:i:s T");
        $request->addHeader("date", $Date);
    }
    else {
        $Date = $headers['date'];
    }

    // Canonicalize the Amazon headers:
    $CanonicalizedAmzHeaders = '';
    $amz_headers = array();
    foreach ($headers as $key => $value) {
        if (substr($key, 0, 6) == 'x-amz-') {
            if (isset($amz_headers[$key]))
                $amz_headers[$key] .= ',' . $value;
            else
                $amz_headers[$key] = $value;
        }
    }
    ksort($amz_headers);
    foreach ($amz_headers as $key => $value)
        $CanonicalizedAmzHeaders .= $key . ':' . $value . "\n";

    // Canonicalize the resource string
    $CanonicalizedResource = '';
    $host = $request->_generateHostHeader();
    if ($host != 's3.amazonaws.com') {
        // Virtual hosted-style request: prepend "/" and the bucket name
        $pos = strpos($host, '.s3.amazonaws.com');
        $CanonicalizedResource .= '/' . (($pos === false) ? $host : substr($host, 0, $pos));
    }
    $CanonicalizedResource .= $request->_url->path;
    // TODO: sub-resources "?acl", "?location", "?logging", or "?torrent"

    // Build the string to sign:
    $StringToSign = $HTTP_Verb . "\n" .
                    $Content_MD5 . "\n" .
                    $Content_Type . "\n" .
                    $Date . "\n" .
                    $CanonicalizedAmzHeaders .
                    $CanonicalizedResource;

    $Signature = s3_sign($StringToSign);

    $Authorization = "AWS" . " " . AWSACCESSKEYID . ":" . $Signature;

    // Set the Authorization header:
    $request->addHeader("Authorization", $Authorization);
}

// Build a query-string-authenticated URL that expires in $seconds
function s3AuthURL($Resource, $HTTP_Verb = 'GET', $seconds = 120)
{
    // Calculate expiration time:
    $Expires = time() + $seconds;

    // Build the string to sign:
    $StringToSign = $HTTP_Verb . "\n" .
                    "\n" .
                    "\n" .
                    $Expires . "\n" .
                    $Resource;

    $Signature = s3_sign($StringToSign);

    // Build the authorized URL:
    return  S3URL . $Resource .
            '?AWSAccessKeyId='  . AWSACCESSKEYID .
            '&Expires='         . $Expires .
            '&Signature='       . urlencode($Signature);
}

?>

Note: this hasn’t been tested extensively, so use it at your own risk. Post a comment or contact me if you find any bugs. Also, IANAPHPE (I am not a PHP expert).

Just replace the XX’s with your keys and it should work with this sample code.
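
For example, here’s roughly how the helpers might be used with PEAR’s HTTP_Request to make a signed GET request (the bucket and object names are made up for illustration):

<?php
require_once 'HTTP/Request.php';
// Assumes the helpers above are included and the keys filled in.
$request = new HTTP_Request('http://s3.amazonaws.com/mybucket/somefile.txt');
$request->setMethod(HTTP_REQUEST_METHOD_GET);
setAuthorizationHeader($request);   // adds the Date and Authorization headers
$request->sendRequest();
echo $request->getResponseCode() . "\n";
echo $request->getResponseBody();
?>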

There’s also a function for creating query string authorized URLs.
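
For example, to generate a link to a (hypothetical) private object that expires in five minutes:

<?php
// Assumes the helpers above are included and the keys filled in.
echo s3AuthURL('/mybucket/somefile.txt', 'GET', 300);
?>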

Using user stylesheets to highlight links to PDFs or other media, rel="nofollow", etc

Browsers allow you to define your own stylesheet that’s applied to every page you visit. For the longest time I’ve wondered why anyone would ever want this feature. I figured it would be useful for people with poor vision or other disabilities and that was about it.

But combined with some neat features of CSS, one can come up with interesting uses of user stylesheets. Consider the following:

a[rel~="nofollow"] {
    text-shadow: rgba(255,0,0,0.25) 1px 1px 1px;
}

This rule uses the partial attribute value selector to give all hyperlinks with the rel=”nofollow” attribute a slight red shadow (like the preceding link, if your browser supports text-shadow).

Why would you want this? Well, for me, pure curiosity, but SEOs or spammers may find it enlightening.

For example, the first thing I noticed was that on the Hacker News homepage, links to external sites newer than 3 or 4 hours old have the nofollow attribute, but older ones do not – clearly a spam deterrent.

There are many other useful and interesting scenarios. Say you want to highlight all PDF or MP3 links (“$=” matches the end of an attribute):

a[href$=".pdf"], a[href$=".mp3"] {
    text-shadow: rgba(0,255,0,0.25) 1px 1px 1px;
}

Or email links (“^=” matches the beginning of an attribute):

a[href^="mailto"] {
    text-shadow: rgba(0,0,255,0.25) 1px 1px 1px;
}

Note that I used text shadows, but anything styleable by CSS is fair game.

With April Fools’ Day approaching, one could imagine other creative uses. I’ll leave that as an exercise for the reader.

Using command line tools to detect the most frequent words in a file

Antonio Cangiano wrote a post about “[Using Python to detect the most frequent words in a file](http://antoniocangiano.com/2008/03/18/use-python-to-detect-the-most-frequent-words-in-a-file/)”. It’s a nice summary of how to do it in Python, but (nearly) the same thing can be accomplished by stringing together a few standard command line tools.

I’m no command line ninja, but I’d like to think I have basic command of most of the standard filters. Here’s my solution:

cat test.txt | tr -s '[:space:]' '\n' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -n | tail -10

I’ll explain it blow-by-blow:

cat test.txt

If you don’t know what this does you’ve got a lot to learn. “cat” simply reads files and prints them to standard output (concatenates), for use by subsequent filters.

tr -s '[:space:]' '\n'

“tr” is a handy tool that simply translates matching characters from the first set to the corresponding character of the second set. The first instance turns all whitespace characters (spaces, tabs, newlines) into newlines (“\n”) so that each word is on a separate line (the -s option “squeezes” multiple runs of newlines into a single newline).

tr '[:upper:]' '[:lower:]'

The second instance translates all uppercase characters into lowercase (note: the two “tr”s are separate for clarity, but they could be combined into a single one).

sort | uniq -c

“sort” and “uniq” do exactly as their names imply, but “uniq” only removes adjacent duplicates, so you often want to sort the input first. The “-c” option for “uniq” prepends each line with the number of occurrences.

sort -n

We sort the result of “uniq”, this time by numerical order (“-n”) to get the list of words in order of the number of occurrences.

tail -10

Finally, we get the 10 most frequently occurring words by using “tail” to take only the last 10 lines (since “sort -n” puts the list in ascending order).

It’s not perfect, especially since punctuation is included in the words, but the “tr” commands can be tweaked as needed.
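
For example, a third “tr” can delete punctuation before the counting (a quick tweak rather than a complete fix, since lines that were only punctuation become empty “words”):

cat test.txt | tr -s '[:space:]' '\n' | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:]' | sort | uniq -c | sort -n | tail -10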

Chipmunk Physics engine running on the iPhone

A couple months ago I hacked together a demo of the [Chipmunk Physics](http://wiki.slembcke.net/main/published/Chipmunk) engine running on the iPhone using the unofficial SDK. It shows the standard Chipmunk demos, but it also reads in accelerometer data using the method described on the [Medallia blog](http://blog.medallia.com/2007/08/iphone_accelerometer_source_co.html).

For the most part it was a fairly simple translation from standard OpenGL to the OpenGL ES the iPhone uses. If anyone is interested in the source code, let me know and I’ll see if I can find it… *edit: source is posted. see below*

[iPhone Chipmunk Physics demo application](http://tlrobinson.net/misc/iphone_chipmunk_physics/chipmunk_demos.tar.gz) – run from the command line (either using MobileTerminal.app or over ssh), only tested on iPhone software 1.0.2.

*Update: [the source](http://tlrobinson.net/projects/iphone/Chipmunk-iPhone.tar.gz) – this is just modified from the original Chipmunk Physics demo, so the original license applies. I can’t support this at all, so you’re on your own. I believe I used the hacked iPhone toolchain version 2 (or 3?), so you may have to edit the Makefile if you have a different version or installed it somewhere other than /usr/local/bin/.*

What's wrong with Yahoo's OpenID implementation

Today Yahoo [launched support](http://open.login.yahoo.com/) for [OpenID](http://openid.net/). On the surface this seems great for OpenID. Unfortunately there are a number of problems with it.

For those unfamiliar with OpenID, it is a [single sign-on](http://en.wikipedia.org/wiki/Single_sign_on) system, which allows users to remember a single username and password for signing in to any site which supports OpenID. There are two basic parts to the OpenID system: sites which wish to allow users to sign in using an OpenID (the “relying party”), and sites which host your OpenID (the “OpenID provider”). Yahoo has chosen to be the latter, an OpenID provider.

Most OpenID providers give their users a simple, easy to remember OpenID like “username.livejournal.com” or “username.wordpress.com”. However, by default Yahoo provides their users with an obscure OpenID like “me.yahoo.com/a/1bjkvd893414lka09i23”, impossible for any normal person to remember. Why not use “me.yahoo.com/username” like most other OpenID providers, you ask? Simple: so Yahoo can force other sites (the “relying parties”) into placing “Sign in using Yahoo” buttons on their login pages. If a site wants to allow millions of Yahoo users to easily sign in, they must include this button. Free advertising for Yahoo.

If other OpenID providers follow this trend we’ll soon end up with login pages covered with dozens of “Sign in using ________” buttons. This is definitely *not* the intention of OpenID. Any user with any OpenID provider should be able to type their OpenID into any site which supports OpenID, and it should just work.

Additionally, Yahoo has chosen not to be a relying party themselves. This means that users who have OpenIDs from any number of other providers can’t sign into Yahoo using their existing OpenID. They’re basically saying “Yeah we support OpenID… as long as WE’RE in control”.

To become an acceptable OpenID provider, Yahoo should:

* give users https://me.yahoo.com/username by DEFAULT, not as an option buried somewhere in the settings.
* educate users to type either me.yahoo.com/username or yahoo.com into OpenID login pages, NOT have Yahoo-specific buttons.
* become an OpenID relying party, i.e. allow other people to log into Yahoo using their OWN OpenIDs.

In the meantime, I suggest getting an OpenID from [another provider](http://openid.net/get/) such as [myopenid.com](https://www.myopenid.com/). If you have a personal website or blog, you can easily use its URL as your OpenID via delegation. Sam Ruby has an [excellent overview of various OpenID options](http://www.intertwingly.net/blog/2007/01/03/OpenID-for-non-SuperUsers).