Lessons from art school

The process of making art is all about looking at things from different angles and seeing what other people might not see. To bring that into the world you need to capture a thought or insight quickly and accurately, and make it real - be it drawing/sculpting/writing or whatever. It’s easy to get stuck in a rut, struggle to make something really new, or just wind up staring into space second-guessing yourself. So at least at the schools I went to, instruction was all about learning practical techniques to observe the world, to explore, refine and above all keep creating and learning. Probably there were other things I was supposed to learn, but 30 years on these are the lessons that have stuck with me.

In some contexts these are not to be taken literally. Nowadays I mostly work on software. Looking at code in a mirror won’t help you. But hopefully the essence of the idea is there and you’ll find ways to accomplish the same thing in whatever form your work takes.

Seeing with fresh eyes

Step back from your work, try looking at it in the mirror to see it again for the “first time”.

Work with a long stick to force a distance between you and your work.

Step outside for a while, sneak a look over your shoulder as you walk away.

Work on something else. Sleep on it. Approach it as a new problem the next day. Lots of problems solve themselves as soon as you stop thinking about them.

Ask someone to explain your idea/solution/code to you, or describe the problem and ask them to repeat the problem back to you.

Seeing what is really there

Draw the negative space: don’t look at the outline of the thing, focus on the shape of the space around the thing.

Seeing the actual shape of things is surprisingly difficult as our minds offer up shortcuts and preconceptions instead.

Re-frame a problem in terms of what shouldn’t happen: rather than thinking about what a thing should do, think about what needs to not change.

Try writing the documentation for your code before you write the code, or write failing tests that each expect the behavior you want to end up with.
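
As a concrete (if tiny) sketch of that test-first idea - the slugify module and its expected behavior here are hypothetical, invented just to show the shape of a test written before the code exists:

```ts
// A failing test, written first. The slugify module doesn't exist yet -
// that's the point: the test pins down the behavior we want to end up with.
import { test } from "node:test";
import assert from "node:assert/strict";
import { slugify } from "./slugify"; // hypothetical module, not written yet

test("slugify lowercases and hyphenates a title", () => {
  assert.equal(slugify("Lessons From Art School"), "lessons-from-art-school");
});
```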

Make sure you can tell a story about what success would be. “I’ll know I was successful when…” You define the hole your solution needs to fit into. Seeing where you need to end up casts a new light on what problem you actually need to solve, and maybe which parts aren’t really as important as you first thought.

And acknowledge that there may be more than one truth. Which truth are you trying to find? Which question do you need to answer?

Everyone move one space to your left

Rotate tasks: after some time, hand off your work/patch/task to a peer, and take work from a peer - picking up where they left off.

There’s so much going on here. It’s a lot easier to fix something that is wrong or incomplete than to create something from scratch. And seeing others’ process is instructive: even if we end up with the same solution, we each might take a different path to get there. That path is the paydirt here; once this one problem is solved, how you got there is the thing with lasting value.

It’s also useful to learn to let go of ownership of a thing - which often gets in the way of the larger goal. Perhaps it’s the sunk cost fallacy, or just a sense of investment in seeing it finished. It shouldn’t matter who finishes it if the goal was just to bring a thing into the world.

Our creations are a shared, collaborative effort. Passing incomplete work between peers - warts and all - is a great way to learn from each other.

Commit to destroying your work as soon as it is done.

Code is cheap, ideas are cheap. But time is valuable, so budget some of it for exploring early on when the correct solution isn’t locked in. If you’ve ever accidentally lost some work in progress, you’ve probably discovered that it didn’t take as long as you thought to rewrite it. That’s because the discovery process was not lost even if the final implementation was.

You can’t really see what is needed until you’ve tried to implement it. Your first pass is always throwaway, so unblock yourself by writing code/making a sketch/etc. you plan to delete.

It’s freeing to work on something you know is temporary. It shifts the focus onto understanding the shape of the problem and exploring solutions, rather than concerns about what people might think, or how this solution might work for other future problems.

And there’s a thing in here about being precious about your output. “It is special because I worked on it.” It isn’t. If it gets taken away you will make another one and it will probably be better.

Plans change; it’s normal to sometimes have to throw away things we worked hard to create. We bring value to each new task. We can’t only measure ourselves against the work that ended up shipping - especially when those decisions are outside our control.

Always be a beginner

Dive into something you know nothing about - frequently.

The start of the learning curve is always the steepest. That’s where we learn most. So unless you are learning new things often, you slowly stop learning. And that’s not fun at all.

There’s a thing in here about humility. We can be simultaneously confident in our skills and experience and complete n00bs at some other thing. No-one can possibly know everything. It’s good to regularly experience the vulnerability of starting out as a beginner.

The flip-side is discovering how much of your experience does end up being applicable. And that learning and problem-solving are skills by themselves. You can get better at each over time. Knowing you can tackle whatever comes up gives you the confidence to move forward even when you initially have no insight into how to fix a particular problem.

And beginners have fresh eyes so they are likely to spot problems others no longer see.

In conclusion

That’s it. At least in the kind of software I work on, formal training in software engineering and computer science isn’t strictly necessary. Yes, you’ll learn some useful things, but so too will other backgrounds provide lessons and habits that can inform and support your success working on software.

I’ve told these art school stories many times and wanted to give them form and a URL I can refer back to. Hopefully there was something useful here for you too. I’ve had these notes on file for almost 5 years and this is what survived the pruning and editing over that time. Some is common sense and advice I’ve seen in different forms from lots of different disciplines. Some is maybe challenging - in the context of doing work within a human society there are other constraints on how we work and what we work on. Through the lens of working with real people on real projects, probably some of this sounds hopelessly naive, insensitive, or just irrelevant or wrong. I’m trying to focus on making things and solving tangible problems here - and getting out of our own way to do the best we can with what is put in front of us.

Ideas on a lower-carbon internet through scheduled downloads and Quality of Service requests

Other titles:

  • The impact of internet use and what we might do about it?
  • Opportunities for powering more internet use with renewables
  • I want this thing, but not until later
  • A story of demand-side prioritization, scheduling and negotiation to take advantage of a fluctuating energy supply.

I recently got interested in how renewable power generation plays into the carbon footprint of internet usage. We need power to run and charge the devices we use to consume internet content, to run the networks that deliver that content to us, and to power the servers and the data centers that house them.

Powering the internet consumes energy on an enormous scale: serving up the files, doing the computation, encoding and packaging it all up to send down the wire to each of the billions of devices making those requests. The process of hosting and delivering content is so power-hungry that the industry is driven to a large extent by the cost and availability of electricity. Data centers are even described in terms of the power they consume - a reasonable proxy for the capacity they can supply.

One of the problems we hear about constantly is that the intermittent and relatively unpredictable nature of wind and solar energy means it can only ever make up a portion of a region’s electricity generation capacity. There’s an expectation of always-on power availability; regardless of the weather or time of day, a factory must run, a building must be lit, and if a device requests some internet resource the request must be met immediately. So we need reliable base load generation to meet most energy demands. Today, that means coal, natural gas, nuclear and hydro generation plants - which can be depended on to supply energy day and night, all year round. Nuclear and hydro are low-carbon, but they can also be expensive and problematic to develop. Wind and solar are much less so, but as long as their output is intermittent they can only form part of the solution for de-carbonizing electricity grids across the world - as long as demand, not supply, is king.

There are lots of approaches to tackling this. Better storage options (PDF) smooth out the intermittency of wind and solar - day to day if not seasonally. Carbon capture and sequestration lower the carbon footprint of fossil fuel power generation - but raise the cost. What if that on-demand, constant availability of data center capacity was itself a variable? Suppose the client device issuing the request had a way to indicate priority and expected delivery time: would that change the dynamic?

Wind power tends to peak early in the morning, solar in the afternoon. Internet traffic is at its highest during the day and evening, and some - most - is necessarily real-time. But if I’m watching a series on Netflix, the next episode could be downloaded at any time, as long as it’s available by the next evening when I sit down to watch it. And for computational tasks - like compiling some code, running an automated test suite, or encoding video - sometimes you need the result as soon as possible, other times it’s less urgent. Communicating priority and scheduling requirements (a.k.a. Quality of Service) from the client through to the infrastructure used to fulfill a request would allow smarter balancing of demand and resources. It would open the door to better use of less constant (non-baseload) energy sources. The server could defer some tasks when power is least available or most expensive, and process them later when, for example, the sun comes up or the wind blows. Smoothing out spikes in demand would also reduce the need for so-called “peaker” plants - typically natural gas power plants that are spun up to meet excess energy demand.
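
To make that more concrete, here is a rough sketch of what such a signal from the client could look like. The header names below are invented for illustration - nothing like this is standardized today:

```ts
// Sketch only: hypothetical headers letting a client say "I want this, but not
// until later". A cooperating server or CDN could queue deferred requests and
// serve them when low-carbon power is plentiful, as long as the deadline holds.
const response = await fetch("https://example.com/episodes/42/video.mp4", {
  headers: {
    "X-Request-Priority": "deferred",       // hypothetical: "immediate" | "deferred"
    "X-Deliver-By": "2021-06-02T18:00:00Z", // hypothetical: latest acceptable delivery time
  },
});
```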

“Kestler: While intermittent power is a challenge for data center operations, the development of sensors, software tools and network capabilities will be at the forefront of advancing the deployment of renewables across the globe. The modernization of the grid will be dependent on large power consumers being capable of operating in a less stable flow of electrons.”

What’s Ahead for Data Centers in 2021

Google already experimented with some of this, and it’s a fascinating and encouraging read.

“Results from our pilot suggest that by shifting compute jobs we can increase the amount of lower-carbon energy we consume”

Our data centers now work harder when the sun shines and wind blows

There are clearly going to be hurdles for wide-scale adoption of this kind of strategy, and it’s never going to work for all cases. But with a problem at this scale, a solution that shaves off 1%, or a fraction of 1%, can still translate into huge monetary and carbon savings. So, what would it take? Are there practical steps that we non-data-center operators can take to facilitate this kind of negotiation between the client and the massive and indifferent upstream infrastructure that supports it?

The low-hanging fruit in this scenario is video streaming. It represents an outsized percentage of all internet traffic - and data center load. Netflix alone generates 15% of all global internet traffic. What if even 1% of that could be shifted to be powered entirely by renewable energy, by virtue of deferred processing at the supply side, or scheduled downloads at the client side? Often it’s the case that when I click to watch video, I need it right there and then - perhaps it is a live event, or I didn’t know I needed it until that minute. Sometimes not, though. If it was possible to schedule the download, ensuring it was there on my device when I did need it, the benefits would ripple through the whole system - content delivery providers would save money and maybe the grid itself would be able to absorb more intermittent renewable generation.
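
Here is a sketch of what the client-side half of that could look like - every function and signal here is hypothetical, made up just to illustrate the decision being made:

```ts
// Client-side sketch, assuming a hypothetical carbon-intensity signal and a
// naive retry loop. No such API exists in browsers today.
const ONE_HOUR = 60 * 60 * 1000;

// Hypothetical: ask some service how clean the local grid is right now.
async function gridCarbonIsLow(): Promise<boolean> {
  return false; // stub for illustration
}

async function scheduleDownload(url: string, neededBy: Date): Promise<void> {
  const outOfTime = Date.now() >= neededBy.getTime() - ONE_HOUR;
  if (outOfTime || (await gridCarbonIsLow())) {
    // Fetch now: either we can't wait any longer, or power is clean right now.
    await fetch(url);
  } else {
    // Otherwise check again later; a real version might live in a service
    // worker and lean on the deferred-request idea sketched above.
    setTimeout(() => scheduleDownload(url, neededBy), ONE_HOUR);
  }
}

// e.g. have the next episode cached by tomorrow evening:
scheduleDownload("https://example.com/episodes/43/video.mp4",
                 new Date("2021-06-02T18:00:00Z"));
```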

There are other opportunities and I don’t want to get too hung up on specifics. But the notion of attaching Quality of Service in some way to some requests, to facilitate smarter utilization of seasonal, regional and weather-dependent fluctuations in energy generation, seems promising to me. Fundamentally, power demand from worldwide internet traffic is extremely dynamic. We can better meet that demand with equally dynamic low- and zero-carbon sources if we can introduce patterns and signals at all levels of the system to allow it to plan and adapt.

When I get to the end of a piece like this I’m always left wondering “what is the point?”. Is this just a rant into the void, hoping someone listens? It’s certainly not an actionable plan for change. Writing it down helps me process some of these ideas, and I hope it starts conversations and prompts you to spot these kinds of indirect opportunities to tackle climate change. And if you are in a position to nudge any of this towards really existing in the world, that would be great. I work at Mozilla; we make a web browser and have our own substantial data-center and compute-time bill. I’ll be looking into what change I can help create there.

On finding productivity

Recently, I joined a new-to-me team at Mozilla and started working on Firefox. It’s not been an easy transition - from the stuff I was doing in the Connected Devices group to getting back to fixing bugs and writing code every day. And not just any code: the Firefox codebase is large and spread across a couple of decades. Any change is an exercise in code-sleuthing: understanding what the code does today, why it was implemented that way, and how to write a patch that fixes one thing without breaking a dozen others.

My intuition on how long a task should take has been proven so wildly wrong so many times in the last few months that I’ve had to step back and do some hard thinking. Do I just suck at this? Or am I pushing hard but in the wrong direction? Sometimes I think I’m just getting worse as a developer/software engineer over time, not better.

Haiku Reflections: Experiences in Reality

Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.

This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.

The problem with navigating virtual worlds

One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years, web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.

As I write this, the display in front of me shows me content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…

Haiku Reflections: Web Clients and Web Resources

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. For an overview, see my earlier post.

My understanding of the overarching goal for the Connected Devices group within Mozilla is to have a tangible impact on the evolution of the Internet of Things to maintain the primacy of the user: their right to own their own data and experience, and to choose between products and organizations. We want Mozilla to be a guiding light, an example others can follow when developing technology in this new space that respects user privacy, implements good security and promotes open, common standards. In that context, the plan is to develop an IoT platform alongside a few carefully selected consumer products that will exercise and validate that platform and start building the exposure and experience for Mozilla in this space. Over the last few months, the vision for this platform has aligned with the emerging Web of Things, which builds on patterns for attaching “Things” to the web.

From one perspective, the web is just a network of interconnected content nodes. It follows that the scope for standardizing the evolution of the Internet of Things is to define a sensible architecture and build frameworks for incorporating these new devices and their capabilities, to maintain interoperability, promote discoverability, etc. This maps well onto connected sensors, smart appliances and other physical objects whose attributes we want to query and set over the network. Give these things URLs and a RESTful interface and you get all the rich semantics of the web, addressability, tooling, the developer talent pool - the list goes on and on and it’s all for “free”. In one stroke you remove the need for a lot of wheel re-invention and proprietary-ness and nudge this whole movement in the direction of the interoperable, standardized web. It’s a no-brainer.
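
As a rough illustration of what “give these things URLs and a RESTful interface” buys you - the device, paths and payload below are made up, and the real Web of Things work defines its own schemas:

```ts
// Illustration only: a made-up "Thing" exposed over plain HTTP.

// Read the current state of a lamp:
const state = await fetch("https://home.example/things/lamp/properties/on")
  .then((r) => r.json()); // e.g. { "on": false }

// Turn it on by writing to the same resource:
await fetch("https://home.example/things/lamp/properties/on", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ on: true }),
});
```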

In this context however, the communication device envisaged by Project Haiku is orthogonal. While you can model it to give URLs to the people/devices and the private communication channel they share, the surface area of the resulting “API” is tiny and has limited value. It is conceptually powerful as it brings along all the normal web best practices for RESTful API design, access control, caching and offline strategies and so on. Still, the Haiku device would be more web client than web resource and doesn’t fit neatly into this story.

Reflections on Project Haiku: Accounts and Ownership

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. By focusing on the user problem and not the business model, we quickly determined that we wanted as little data from our users as we could get away with. For context and an overview of the project, please see my earlier post.

When I was a kid, my brothers and I had wired walkie-talkies. Intercoms, really. Each unit was attached to about 100’ of copper wire. One of us could be downstairs and, with the wire trailed dangerously under doors and up stairs, we could communicate between kitchen and bedroom. Later, in order to talk with a friend in the apartment block opposite us, we got a string pulled taut between our two balconies. With tin cans on each end of the string, you could just about hear what the other was saying.

RF-based wireless communication had existed for a long time already, but I bring these specific communication examples up because the connection we made was exclusive and private.

We didn’t need to agree on a frequency and hope no-one else was listening in. The devices didn’t just enable the connection, they were the connection. We didn’t sign up for a service, didn’t pay any subscription, and when we tired of it and it was given away, no contracts needed to be amended; the new owners simply picked up each end and started their own direct and private conversation. In Project Haiku, when we thought about IoT and connecting people, this was the analogy we adopted.

Reflections on Project Haiku: WebRTC

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. We landed on a WebRTC-based implementation of a 1:1 communication device. For an overview of the project as a whole, see my earlier post.

This was one of those insights that seems obvious with hindsight. If you want to allow two people to communicate privately and securely, using non-proprietary protocols, and have no need or interest in storing or mediating this communication - you want WebRTC.
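
A minimal sketch of why it fits: the offer/answer exchange (signalling) that introduces the two peers still needs some small service and is omitted here, but the conversation itself flows peer-to-peer over an encrypted connection.

```ts
// Minimal sketch of a 1:1 WebRTC data channel between two peers.
const peer = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }], // example STUN server
});
const channel = peer.createDataChannel("haiku");

channel.onopen = () => channel.send("hello");
channel.onmessage = (event) => console.log("peer says:", event.data);

const offer = await peer.createOffer();
await peer.setLocalDescription(offer);
// ...send the offer to the other peer via your signalling channel, then apply
// their answer with peer.setRemoteDescription(...) and exchange ICE candidates.
```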

Reflections on Project Haiku

I’ve written before on this blog about my current project with Mozilla’s Connected Devices group: Project Haiku. Last week, after close to 9 months of exploration, prototyping and refinement this project was put on hold indefinitely.

So I wanted to take this opportunity - a brief lull before I get caught up in my next project - to reflect on the work and many ideas that Project Haiku produced. There are several angles to look at it from, so I’ll break it down into separate blog posts. In this post I’ll provide a background of the what, when and why as a simple chronological story of the project from start to finish.

Phase 0: Are we solving the right problem?

Back in March 2016, with Firefox OS winding down and most of that team off exploring the field of IoT and the smart home, Liz proposed a vision for a project that would tackle smart home problems in a way that was more grounded in human experience and recognized the diversity of our requirements from technology and our need to have it reflect our values - both aesthetically and practically. I had been experimenting with ideas like the smart mirror and this human-centric direction resonated with me. A team gathered around her proposal and we started digging.

Emoji + Voice Prototype

Project Haiku Update

At Mozilla, I’m still working with a team on Project Haiku. Over the summer we had closed in on a wearable device used for setting and seeing a friend’s status. It took a while for that to crystallize though, and as we started the process of building an initial bluetooth-ed wearable prototype, our team was handed an ultimatum: go faster or stop.

We combined efforts and ideas with another Mozilla team that had arrived at some very similar positions on how connected devices should meet human needs. As I write we are concluding a user study in which 10 pairs of grandparents and school-age grandchildren have been using a simple, dedicated communication device.

48 Hours of Hacking in Chattanooga

I spent this past weekend in Chattanooga, Tennessee, in a whirlwind of planning, prototyping and generally collaborating on a pitch for the 48 Hour Launch event. I was invited to attend as one of several mentors from Mozilla, to help develop product and company ideas from the local community into something clear and compelling in just two days. For more info on the event, go read the wrap-up on Mozilla’s blog. I’m just going to detail some of my personal highlights.

About seven teams were at the kick-off Friday night, each giving an introduction to their concept and what they wanted to achieve over the weekend. After drifting around a bit and listening in to the conversations that emerged afterwards, I gravitated towards the “Inclusive Makerspace” project. Cristol Kapp is a librarian at a local elementary school, and one of the first in the region to set up a functioning makerspace in her library for the kids. But there’s a problem: some of the students have conditions and disabilities which prevent them from getting involved in the makerspace activities. The need for a steady hand and the fine motor control skills to manipulate tools are just two of the barriers that effectively exclude some of these kids from what should be fun, collaborative activities in the space. Cristol clearly felt this deeply, and was accompanied by a colleague - a special education teacher - who was also committed to fixing this. That stood out for me: a clear need expressed again and again at the school, and no doubt echoed at home. And people with the opportunity and drive to find, test, improve and promote a solution. (On the Sunday, this was reinforced again when the school principal visited the hackathon to support Cristol, listen to her plans and give feedback.)

I think I’ll keep this short and devote a separate post to the Inclusive I/O project itself (a renaming and branding that emerged from the weekend) and confine myself to the event here. Friday evening was spent narrowing down both the problem and the set of solutions into something properly joined up and actionable. With a million ideas buzzing around all the participants’ heads, we needed to focus on telling a story with well-defined characters, a clearly defined problem and a solution that demonstrably addresses that problem. Of course, reality is never so simple, but for the purposes of this pitch - and to get this project into gear and actually moving down the road - we had to temporarily remove variables. We wound up Friday evening with a plan - sketched out on the back of a cupcake box (which I didn’t have the presence of mind to photograph) - and a consensus to make it so first thing in the morning.

I was pretty blown away by the level of energy, the collective good will and the breadth of expertise that descended on the venue over the weekend. Although each team was ultimately competing for prizes, there was no hesitation in sharing tips or resources, getting each other unstuck or even devoting large chunks of time to contribute skills where they were needed. Over the Saturday and Sunday we divided and conquered - with Tamara and me hacking up a prototype, with the help of some great talent from the community. Meanwhile Cristol was moving efficiently through business planning, with cost and market estimates, branding and strategy, all the while tightening up the story we had started that first evening. By Sunday she had a great slide deck and a clear, concise telling of that story, practiced again and again.

It worked. Inclusive I/O was well received by the panel and awarded 2nd place. This is huge - not only for the cash and other resources it grants, but for the validation of the idea and its originator. And for the problem Cristol saw and its real need of a solution. Thanks to everyone who helped out along the way whose names I either didn’t list, forgot or never learnt. I hope to stay involved in this project in some capacity; watch this space.