Fascinating! One of the questions it suggests to me is: if it’s useful to make sound visible in order to learn more about sound, why does it make any less sense to make light audible in order to learn more about light? The only such experiments that come to mind are the ones in recent years involving turning electromagnetic radiation from outer space into “sound.” But those efforts are too often characterized in the press as if EMR were sound, which of course it isn’t.

I’m always grappling with the ways we perceive light and sound differently, and trying to understand why sound so often gets subordinated. There might be a clue here, in our inclination to assume we can learn about sound via light, but not light via sound.

  1. Interesting point!
    I would say the main problem is that what the visible means to us is not in the sequence of pixels/data, but in the shapes, colours, lines and relations they show. Converting that to sound would require not just a conversion from pixel to sample or grain, but an understanding – or interpretation – of the image, and then a transposition of that description into sound, which evidently requires a choice, a set of rules.
    If we want sound to reveal something about light that our eyes can’t see, we first need a map/rules of what sound means, which is always personal, often social, but still hard to grasp and standardize.
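To make that concrete: even the simplest image-to-sound conversion forces exactly the kind of arbitrary choices described above. The sketch below is purely illustrative (the image data, frequency range, and column duration are all made-up assumptions, not anyone's established method): it scans a tiny grayscale "image" left to right, letting each row drive one sine oscillator, with row position mapped to pitch and brightness mapped to loudness.

```python
import math

# A tiny made-up grayscale "image": each value is pixel brightness, 0.0-1.0.
image = [
    [0.1, 0.8, 0.3, 0.0],
    [0.0, 0.9, 0.5, 0.2],
    [0.4, 0.2, 0.7, 1.0],
]

SAMPLE_RATE = 8000       # samples per second (arbitrary, low for brevity)
COLUMN_SECONDS = 0.25    # each image column becomes a short slice of sound

def sonify(image, sample_rate=SAMPLE_RATE, column_seconds=COLUMN_SECONDS):
    """Scan the image column by column; each row drives one sine oscillator.

    Row index -> pitch (spread across one octave above 220 Hz),
    brightness -> loudness.  Every one of these mappings is an
    arbitrary choice -- which is exactly the point of the example.
    """
    n_rows = len(image)
    n_cols = len(image[0])
    # One frequency per row, spanning one octave starting at 220 Hz.
    freqs = [220.0 * (2 ** (r / n_rows)) for r in range(n_rows)]
    samples = []
    for col in range(n_cols):
        for i in range(int(sample_rate * column_seconds)):
            t = i / sample_rate
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            samples.append(s / n_rows)  # keep amplitude within [-1, 1]
    return samples

audio = sonify(image)
```

The resulting float samples could be written to a playable file with the standard-library `wave` module. But a different set of choices (brightness as pitch instead of loudness, scanning top to bottom, logarithmic instead of linear brightness) would produce an entirely different sound from the same image, which is the standardization problem raised above.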
