I had a good chat with Euan Semple yesterday about, amongst other things, how to design social web tools for visually-orientated people. Euan's been helping me figure out how to use blogs, wikis, forums and tagging to engage people in the film and TV industries, and it really struck me how text-based most social web tools are. In many ways, web 2.0 is simply the web taken back to basics. At last we've stopped building websites using the rules of print and publishing, and we're extracting more value from simple hyperlinks again. But because of that, the semantic web requires us to be very textual in our thought patterns. There are some things that (visual impairments aside) can be communicated much more elegantly in colours, diagrams, sequences, videos or animations. And besides, doesn't all that text just look a bit, um, boring?
At Skillset we created storyboard guides to the media industries that worked pretty well as a visual portal into the deeper site content. But they're still embedded as Flash pop-ups in text-based pages, and extracting content relationships from Flash movies is a bit like putting a comic through text-recognition software. Hyperlinked text and tag clouds are easily mapped, and navigation systems can use those relationships easily enough. But what about physical proximity on the screen? Or relative position in a narrative sequence? Or just things that look similar?
Microsoft's Photosynth and other similar projects (and possibly the OU's Compendium) are beginning to offer some answers, but it's still early days. So, how long before we can create navigation systems that are as flexible and granular as hypertext, but as visually appealing as a style magazine? How long before visual storytelling takes its place alongside text linking in the paradigm of the social web?