Zap

Laptop’s back from the shop. It has been a cascade of technology failures for me lately. Last week, with my iPod just back from refurbishment (the HD was flaky), I excitedly plugged it into the USB port on my laptop. Zzztt. Immediate shutdown. Wisp of electrical smoke snaking out of the side vent. Lovely.

The laptop booted, but without working Bluetooth or USB ports. Funny thing was, I missed the Bluetooth way more than the USB. That’s got to be some kind of milestone for me personally. I realized, outside of my system-frying iPod, that I never plug anything into the USB. Mouse is Bluetooth; printer at work is; phone connection is; headset (for Skype) is. Long live the golden age of wireless.

And yes, if you’re counting, this is motherboard death #2 in calendar year 2005. Somethin’ ain’t right.

Import > Life

A corrupt iPhoto preferences file forced me to re-import nearly 10,000 digital images recently. Watching them all get sucked in and displayed for a fraction of a second each might be what having one’s life flash before one’s eyes is like, if that in fact happens. (Reminds me of that scene from Flash Gordon when Dr. Hans Zarkov is having his brain probed and displayed on a screen.) Weird which images burn into the brain as the rest flicker by. I’d call those the Important Moments if they were not so completely random, mundane, or titillating. Wait, maybe those are the Important Moments.

The Look-At-Me Cellphone Axiom

The amount that a person wants to look like he or she is using a cellphone in a public place — that is, how overt the person is about being on a call — is directly proportional to how advanced the receiver/speaker technology is. For example, people using cellphones in a normal fashion (handset-to-ear) are mostly unconcerned about letting people know that they are using a cellphone. (Though people using cellphones in this way can often be rude, they are usually not deliberately so.) In contrast, people who use lavalier microphones are usually loud and demonstrative about the fact that there is no phone at their ear, waving the phone around like a prop to alert passersby to their hands-free-edness. And those with a Bluetooth headset? More theatrical still. Following the slope of wirelessness/overtness, it is fair to assume that when cellphone conversations can be beamed directly to the brain, callers will be indistinguishable from raving lunatics, gesticulating vigorously to let others know that, in fact, they have voices in their heads.

Humanities supercomputing

Some of the readership of this blog are people who work in the humanities — literature, criticism, art, museology — and some work in technology. Some work at the intersection of both, like me. So I figure this is a great place to pose a question that hit me like a hammer today.

Are there problems in the humanities that can only be solved by a supercomputer or some sort of distributed massive computing platform?

Anything that requires heavy doses of processor-crunching? Large corpus text analysis or image analysis? Help me here.
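
To make the kind of job I’m imagining a little more concrete, here’s a toy sketch in Python of an embarrassingly parallel word count (the corpus/ directory of plain-text files is entirely hypothetical), the sort of thing that only gets interesting when the corpus runs to millions of documents:

    # Toy sketch: count word frequencies across a large corpus in parallel.
    # "corpus/" is a hypothetical directory of plain-text files.
    from collections import Counter
    from multiprocessing import Pool
    from pathlib import Path
    import re

    def count_words(path):
        # One worker, one document: trivially parallel across many machines.
        text = Path(path).read_text(errors="ignore").lower()
        return Counter(re.findall(r"[a-z']+", text))

    if __name__ == "__main__":
        files = list(Path("corpus").glob("**/*.txt"))
        totals = Counter()
        with Pool() as pool:
            for partial in pool.imap_unordered(count_words, files):
                totals.update(partial)
        print(totals.most_common(50))

Swap the word count for something heavier and multiply the corpus by a few orders of magnitude, and that’s the scale I’m wondering about.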

Protein folding, deep space radio astronomy, thermonuclear explosion modelling, meteorological forecasting and brute-force decryption cannot possibly be the only uses for supercomputing.

Do tell, do tell!

Google’s Ride Finder

My oh my, how I am loving this arms race between Google, Yahoo, and Amazon. Google Labs is playing with an enhancement to Maps that plots the real-time positions of a city’s cabs on the street grid. Here’s an example from around my building in Chicago. Plenty of other cities are available too. Just more proof that flexible, open design almost always foments innovation.

Now if you could only flag one of the cabs via the Maps interface it’d be perfect. Hear that, fleet operators?

[Via Gapers Block]

Arcade symphony

Call it recombinant audio archaeology. Andy Hofle has recorded the noises of classic arcade games of the 1980s from the available ROM emulations and then mixed and layered them into a stunning simulacrum of the experience of being in an arcade. He’s got background noise, coin changers, and even people talking. A current-day casino might come close, but you’d hear so much more in a casino: slots cha-chinging, recorded voices entreating you to play, and more realistic noises. An arcade in 1983, on the other hand, was all about synthetic bleeps, bloops, and blow-ups. And this is why I love it. The background radiation of my youth.

Thomas breaks through

The Henry Ford cultural complex in Dearborn, Michigan, hosts a children’s day where a life-sized version of Thomas the Tank Engine comes to visit. So do many places. But they also host a day for children suffering from autism and Asperger’s Syndrome. Turns out, Thomas the Tank Engine is a source of special fascination for these special kids. A report from 2000 explains why this might be. Some highlights:

  • Children with autism are often attracted to objects arranged in lines (like cars on a train), as well as spinning objects and wheels.
  • The unique stop-action photography of the videos allows the background and scenery to remain still, allowing for greater focus on the “big picture” with less distraction.
  • Thomas and the other characters have friendly faces, often with exaggerated expressions. In the videos, the expressions are set for some time and are often accompanied by simple narration explaining the emotion (“Thomas was sad.”), allowing children to identify the feelings and expressions.

I’d wager that this is what makes Thomas appealing to all children, but the particular ways that Thomas “breaks through” to kids with ASD would be a fascinating subject for deeper study. For instance, what about linear (and cyclical) arrangements is so attractive? And are there implications outside of the ASD world? Does this tell us something about human cognition with regard to drama, storytelling, and visual composition?

Smell me a story

Febreze, makers of perfumey aerosols that I associate with covering up the stench of cat urine, have a pretty interesting product on the market. ScentStories is their attempt at creating narrative through smell alone. The ScentStories gizmo lets you pop in discs that contain five odor zones, each of which is wafted to you in sequence every half hour. As far as riveting narrative goes, you’ll probably want to stick to other media, as the ScentStories are basically meant to calm you and/or put you to sleep. “Wandering barefoot on the shore” (well, of course having your shoes off creates a different smell), “relaxing in the hammock” (I’m visualizing Homer Simpson), “shades of vanilla” (sounds like a painting — talk about synaesthesia!), and so on. I wonder if there’ll be third-party discs to explore the full spectrum of stink? “Trip to the farm,” “trying not to touch the sleeping guy next to me on the subway,” and “discovering you used the last diaper two hours ago” — these would be fascinating explorations of stinktales.

Still, I think this is an interesting idea. Certainly narrative can be embedded in anything: the arc of a musical composition, the flow of a building facade. But visiting the Febreze website, you get the sense that they don’t believe they can actually pull it off. The site is drenched in ethereal, blissed-out visuals. Don’t know what a walk on the beach smells like? Well, look here. This is what it smells like. And their model is clearly musical. The wafter mechanism looks like a CD player, and they’ve recruited Shania Twain to “compose” a disc of scents. I think if anything ScentStories are to traditional beginning-middle-end narrative what ambient music is to, say, sonata form. I’d have called them ScentScenes, I suppose. More like an odor tableau than a linear experience.

But then, I haven’t tried this. And I’m really tempted. I wonder if I could purchase and play a disc without knowing which story I had. To smell the next chapter and declare “why, yes, I am exploring a mountain trail!” without having previously encountered a visual or tagline to set the scene for me psychologically would be the true test.

See also: Sensory deprivation

Blog readers be afraid

I now own a camera phone.

Mainly I bought it for the relatively high-res cam (1.3 megapixel) and the EDGE network access. I like poaching WiFi nodes as much as the next guy, but too often I find myself away from a jack and not in a cloud. This should solve that.

More at Engadget.

Virtual wiring

There’s a nifty little program called GraphEdit that I have been using to, um, “work with” the rights management in TiVo-To-Go files. The interface allows a virtual re-wiring of the audio and video inputs and outputs for a video file. For example, if you want to compress a video you would grab the output “lead” from the video and wire it to the input of a box in the chart representing whatever compression you liked. This interaction modality achieves in a single view the ideal of being both intuitive (out connects to in, and on and on) and completely explicit (the flow that you manipulate represents exactly what the program is doing). As a bonus it is also kinda fun. The bastard child of Storyspace and Media Cleaner.
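
GraphEdit itself is wired with the mouse, not code, but the model underneath is easy to caricature: each box transforms whatever it receives and hands the result to whatever its output is wired to. A minimal sketch of that out-connects-to-in idea, in Python and with nothing to do with the real DirectShow API:

    # Toy model of wiring an output "lead" to the next box's input (not the real DirectShow API).
    class Box:
        def __init__(self, name, transform):
            self.name = name
            self.transform = transform
            self.downstream = None

        def connect(self, other):
            # Wire this box's output to another box's input; return it so chains read left to right.
            self.downstream = other
            return other

        def push(self, data):
            data = self.transform(data)
            return self.downstream.push(data) if self.downstream else data

    # source -> compressor -> writer, exactly as you would draw it on screen
    source     = Box("source",   lambda d: d)
    compressor = Box("compress", lambda d: d[: len(d) // 2])   # stand-in for a real codec
    writer     = Box("write",    lambda d: f"wrote {len(d)} bytes")

    source.connect(compressor).connect(writer)
    print(source.push(b"x" * 1024))    # -> wrote 512 bytes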

I employed an interface like this for a piece of software I wrote in graduate school, but the links I allowed between video files implied sequence in time, not transformation of one node by another. This distinction highlights a unique opportunity. What if you could link two video files to a single output file, creating a merger of the two? The links themselves could function as the transform filters — overlay, embed, distort, etc. Now that would be interesting!
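
Here’s roughly what that could look like, again as a toy sketch: the transform lives on the link itself, and two clips wired into one output each arrive through their own filter (the filenames and filter names are made up):

    # Toy sketch: the links carry the transforms, and two inputs can merge into one output.
    class Node:
        def __init__(self, name):
            self.name = name
            self.links = []               # (upstream node, transform) pairs

        def link_from(self, upstream, transform):
            self.links.append((upstream, transform))

        def render(self):
            if not self.links:
                return [self.name]        # a source "clip" just yields itself here
            frames = []
            for upstream, transform in self.links:
                frames.extend(transform(upstream.render()))
            return frames

    clip_a, clip_b, merged = Node("vacation.avi"), Node("titles.avi"), Node("merged")
    merged.link_from(clip_a, lambda frames: [f"overlay({f})" for f in frames])
    merged.link_from(clip_b, lambda frames: [f"distort({f})" for f in frames])
    print(merged.render())    # -> ['overlay(vacation.avi)', 'distort(titles.avi)']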

I once played with a program that did visual transforms of images using a spreadsheet interface. You placed files into cells and then put together formulas — essentially filters, as in Photoshop — between the cells. The resultant file in a new cell was your output, just like spreadsheets work. (Anybody know what this program was called?)
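
I never did track down the name, but the model itself is simple to mock up: some cells hold images, others hold formulas over cells, and a cell’s value is computed on demand, spreadsheet-style. A toy sketch, with invented filenames and filter names standing in for real images and filters:

    # Toy sketch of the spreadsheet-of-images idea: a cell holds either a value
    # or a formula (a function of the sheet), evaluated on demand.
    class Sheet:
        def __init__(self):
            self.cells = {}

        def set(self, ref, value):
            self.cells[ref] = value

        def get(self, ref):
            value = self.cells[ref]
            return value(self) if callable(value) else value

    sheet = Sheet()
    sheet.set("A1", "photo1.jpg")
    sheet.set("A2", "photo2.jpg")
    # B1 = blur(A1); C1 = composite(B1, A2) -- formulas standing in for Photoshop-style filters
    sheet.set("B1", lambda s: f"blur({s.get('A1')})")
    sheet.set("C1", lambda s: f"composite({s.get('B1')}, {s.get('A2')})")
    print(sheet.get("C1"))    # -> composite(blur(photo1.jpg), photo2.jpg)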