Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.
This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.
One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side is another, equally powerful idea: to bring the digital into our environment - to make it tangible and real, to have it take up space. If you’ve lived through the web’s emergence over the last 20 years - browsers, smartphones and tablets - that might seem like a step backwards. Digital technology, and the web specifically, has broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But after 20 years, the web still lives in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and that confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can create a kind of subtle tension.
As I write this, the display in front of me shows content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…
The flexibility this arrangement brings is truly incredible. But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.
Assigning a single function to a thing - when the capability exists for it to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking, between the specialized and the general. A synthesizer and an electric piano share origins and overlap in function, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smartphones, wrist watches still provide a function, and project status and values. There’s a pragmatism and an attractive simplicity to dedicating an object we use to a single task. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid onto the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.
The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.
Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I’m in the kitchen. But when I’m downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I’m in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I’m at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It’s true that the smartphone started as a huge convenience by un-tethering us from the desk, letting us carry our access to information - and its access to us - with us. But in doing so, we lost some of the ability to walk away; to step out of a conversation or leave work behind.
Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their homes and in their lives, we saw again and again how poor a fit even the best of today’s solutions were. What began as empowering and liberating has started to infringe on people’s freedom to choose how to spend their time.
When I’m spending time on my computer, it’s just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.
This is a quote from one of our early interviews. It was a refreshing perspective, and sentiments like this - along with the moments of joy and connectedness we saw were possible - helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.