<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://www.sam-i-am.com/blog</id>
    <title>Occasional notes / Sam Foster • Posts by &#34;mozilla&#34; tag</title>
    <link href="https://www.sam-i-am.com/blog" />
    <updated>2021-05-29T00:32:33.000Z</updated>
    <category term="dev" />
    <category term="family" />
    <category term="mozilla" />
    <category term="making" />
    <category term="sustainable" />
    <category term="project haiku" />
    <category term="dev, making" />
    <entry>
        <id>https://www.sam-i-am.com/blog/2021/05/lower-carbon-internet-qos.html</id>
        <title>Ideas on a lower-carbon internet through scheduled downloads and Quality of Service requests</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2021/05/lower-carbon-internet-qos.html"/>
        <content type="html">&lt;p&gt;Other titles: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The impact of internet use and what we might do about it?&lt;/li&gt;
&lt;li&gt;Opportunities for powering more internet use with renewables&lt;/li&gt;
&lt;li&gt;I want this thing, but not until later&lt;/li&gt;
&lt;li&gt;A story of demand-side prioritization, scheduling and negotiation to take advantage of a fluctuating energy supply.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I recently got interested in how renewable power generation plays into the carbon footprint of internet usage. We need power to run and charge the devices we use to consume internet content, to run the networks that deliver that content to us, and to power the servers and the data centers that house them. &lt;/p&gt;
&lt;p&gt;Powering the internet eats up energy. Serving the files, doing the computation, and encoding and packaging it all up to send down the wire to each of the billions of requesting devices consumes energy &lt;a href=&#34;https://www.forbes.com/sites/christopherhelman/2016/06/28/how-much-electricity-does-it-take-to-run-the-internet/?sh=52d174c51fff&#34;&gt;on an enormous scale&lt;/a&gt;. The process of hosting and delivering content is so power hungry that the industry is driven to a large extent by the cost and availability of electricity. Data centers are even &lt;a href=&#34;https://www.cbre.us/research-and-reports/North-America-Data-Center-Report--H1-2020&#34;&gt;described in terms of the power they consume&lt;/a&gt; - a reasonable proxy for the capacity they can supply.&lt;/p&gt;
&lt;p&gt;One of the problems we hear about constantly is that the intermittent and relatively unpredictable nature of wind and solar energy means they can only ever make up a portion of a region’s electricity generation capacity. There’s an expectation of always-on power availability; regardless of the weather or time of day, a factory must run, a building must be lit, and if a device requests some internet resource the request must be met immediately. So, we need reliable &lt;a href=&#34;https://energyeducation.ca/encyclopedia/Baseload_power&#34;&gt;base load&lt;/a&gt; generation to meet most energy demands. Today, that means coal, natural gas, nuclear and hydro generation plants - which can be depended on to supply energy day and night, all year round. Nuclear and hydro are low-carbon, but they can also be expensive and problematic to develop. Wind and solar are much cheaper and easier to build, but as long as their output is intermittent they can only form part of the solution for de-carbonizing electricity grids across the world - as long as demand, not supply, is king.&lt;/p&gt;
&lt;p&gt;There are lots of approaches to tackling this. &lt;a href=&#34;https://www.nrel.gov/docs/fy19osti/74426.pdf&#34;&gt;Better storage options&lt;/a&gt; (PDF) smooth out the intermittency of wind and solar - day to day if not seasonally. &lt;a href=&#34;https://19january2017snapshot.epa.gov/climatechange/carbon-dioxide-capture-and-sequestration-overview_.html&#34;&gt;Carbon capture and sequestration&lt;/a&gt; lowers the carbon footprint of fossil fuel power generation - but raises the cost. What if the on-demand, constant availability of those data centers’ capacity were itself a variable? If the client device issuing a request had a way to indicate priority and expected delivery time, would that change the dynamic? &lt;/p&gt;
&lt;p&gt;Wind power tends to &lt;a href=&#34;https://crsreports.congress.gov/product/pdf/IF/IF11257&#34;&gt;peak early in the morning, solar in the afternoon&lt;/a&gt;. Internet traffic is at its highest during the day and evening, and some - most - is necessarily real-time. But if I’m watching a series on Netflix, the next episode could be downloaded at any time, as long as it’s available by the next evening when I sit down to watch it. And for computational tasks - like compiling some code, running an automated test suite, or encoding video - sometimes you need the result as soon as possible, other times it’s less urgent. Communicating priority and scheduling requirements (a.k.a. Quality of Service) from the client through to the infrastructure used to fulfill a request would allow smarter balancing of demand and resources. It would open the door to better use of less constant (non-baseload) energy sources. The server could defer some tasks when power is least available or most expensive, and process them later when, for example, the sun comes up or the wind blows. Smoothing out spikes in demand would also reduce the need for so-called “peaker” plants - typically natural gas power plants that are spun up to meet excess energy demand.&lt;/p&gt;
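&lt;p&gt;If requests carried a priority and a deadline, the server-side decision might look something like this sketch (Python; the field names, thresholds and carbon-intensity numbers are all invented for illustration - none of this is a real standard or API):&lt;/p&gt;

```python
# A sketch of how a server might act on client-supplied QoS hints.
# "priority" and "deadline" are hypothetical request fields, and the
# carbon-intensity figures (gCO2/kWh) are made up for illustration.
from datetime import datetime, timedelta

def should_defer(priority, deadline, now, grid_carbon, carbon_threshold=300):
    """Defer a deferrable job while grid carbon intensity is high,
    as long as there is still slack before the client's deadline."""
    if priority == "immediate":
        return False                       # real-time requests are never held
    slack = deadline - now
    if not slack > timedelta(hours=1):
        return False                       # too close to the deadline to wait
    return grid_carbon > carbon_threshold  # hold out for a greener window

now = datetime(2021, 5, 28, 22, 0)
tomorrow_evening = now + timedelta(hours=20)
# A pre-fetched episode can wait out a dirty evening peak on the grid:
print(should_defer("deferrable", tomorrow_evening, now, grid_carbon=450))  # prints True
```

&lt;p&gt;The point isn’t the code - it’s that a couple of extra fields on a request are enough for the infrastructure to start trading latency for carbon.&lt;/p&gt;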
&lt;blockquote&gt;
&lt;p&gt;“Kestler: While intermittent power is a challenge for data center operations, the development of sensors, software tools and network capabilities will be at the forefront of advancing the deployment of renewables across the globe. The modernization of the grid will be dependent on large power consumers being capable of operating in a less stable flow of electrons.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://www.cpexecutive.com/post/whats-ahead-for-data-centers-in-2021/&#34;&gt;What’s Ahead for Data Centers in 2021&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google has already experimented with some of this, and it’s a fascinating and encouraging read. &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Results from our pilot suggest that by shifting compute jobs we can increase the amount of lower-carbon energy we consume”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.google/inside-google/infrastructure/data-centers-work-harder-sun-shines-wind-blows&#34;&gt;Our data centers now work harder when the sun shines and wind blows&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There are clearly going to be hurdles to wide-scale adoption of this kind of strategy, and it’s never going to work for all cases. But with a problem at this scale, a solution that shaves off 1%, or even a fraction of 1%, can still translate into huge monetary and carbon savings. So, what would it take? Are there practical steps that we non-data-center operators can take to facilitate this kind of negotiation between the client and the massive and indifferent upstream infrastructure that supports it? &lt;/p&gt;
&lt;p&gt;The low hanging fruit in this scenario is video streaming. It represents an outsized percentage of all internet traffic - and data center load. Netflix alone generates &lt;a href=&#34;https://www.sandvine.com/hubfs/downloads/phenomena/2018-phenomena-report.pdf&#34;&gt;15% of all global internet traffic&lt;/a&gt;. What if even 1% of that could be shifted to be powered entirely by renewable energy, by virtue of deferred processing at the supply side, or scheduled downloads at the client side? Often it’s the case that when I click to watch video, I need it right there and then - perhaps it is a live event, or I didn’t know I needed it until that minute. Sometimes not, though. If it were possible to schedule the download, ensuring it was there on my device when I &lt;em&gt;did&lt;/em&gt; need it, the benefits would ripple through the whole system - content delivery providers would save money and maybe the grid itself would be able to absorb more intermittent renewable generation. &lt;/p&gt;
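&lt;p&gt;The client-side half of that idea can be sketched too. Assuming a (hypothetical) hour-by-hour grid carbon-intensity forecast, picking the cleanest window to pre-fetch the next episode before tomorrow evening is only a few lines:&lt;/p&gt;

```python
# Client-side sketch: pick the greenest hour to fetch tomorrow's episode.
# The forecast shape and numbers are invented for illustration; some
# utilities and services do publish real grid-intensity forecasts.
def pick_download_hour(forecast, deadline_hour):
    """forecast maps hour-of-day to forecast grid carbon intensity.
    Return the cleanest hour that still lands before the deadline."""
    candidates = {h: c for h, c in forecast.items() if deadline_hour > h}
    return min(candidates, key=candidates.get)

forecast = {2: 380, 6: 320, 13: 140, 18: 410}  # solar peak mid-afternoon
print(pick_download_hour(forecast, deadline_hour=19))  # prints 13
```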
&lt;p&gt;There are other opportunities, and I don’t want to get too hung up on specifics. But the notion of attaching Quality of Service in some way to some requests - to facilitate smarter use of seasonal, regional and weather-dependent fluctuations in energy generation - seems promising to me. Fundamentally, power demand from worldwide internet traffic is extremely dynamic. We can better meet that demand with equally dynamic low- and zero-carbon sources if we can introduce patterns and signals at all levels of the system to allow it to plan and adapt. &lt;/p&gt;
&lt;p&gt;…&lt;/p&gt;
&lt;p&gt;When I get to the end of a piece like this I’m always left wondering “what is the point?”. Is this just a rant into the void, hoping someone listens? It’s certainly not an actionable plan for change. Writing it down helps me process some of these ideas, and I hope it starts conversations and prompts you to spot these kinds of indirect opportunities to tackle climate change. And if you are in a position to nudge any of this towards really existing in the world, that would be great. I work at Mozilla, where we make a web browser and have our own substantial data-center and compute-time bill. I’ll be looking into what change I can help create there. &lt;/p&gt;
&lt;h2 id=&#34;Some-References&#34;&gt;&lt;a href=&#34;#Some-References&#34; class=&#34;headerlink&#34; title=&#34;Some References&#34;&gt;&lt;/a&gt;Some References&lt;/h2&gt;&lt;p&gt;I collected a large list of papers and articles as I looked into this. Here’s a smaller list:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://s22.q4cdn.com/959853165/files/doc_downloads/2020/02/0220_Netflix_EnvironmentalSocialGovernanceReport_FINAL.pdf&#34;&gt;Netflix Environmental Social Governance Report (2019) (PDF)&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;“In 2019, Netflix’s direct energy use was about 94,000 megawatt hours” (Direct energy usage, not including cloud services)&lt;/li&gt;
&lt;li&gt;On its content delivery network: “We partner with over a thousand ISPs to localize substantial amounts of traffic” - so it’s partly local&lt;/li&gt;
&lt;li&gt;“indirect energy use was about 357,000 megawatt hours in 2019”&lt;/li&gt;
&lt;li&gt;167 million subscribers in 2019. Hours downloaded? (we have that 2011 study which claims 3.2 billion hours)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.insight.com/content/dam/insight/en_US/pdfs/apc/apc-estimating-data-centers-carbon-footprint.pdf&#34;&gt;Estimating a Data Center’s Electrical Carbon Footprint&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;“Avoided emissions reflect the average activity of peaker plants in the local utility’s network”&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.sandvine.com/hubfs/downloads/phenomena/2018-phenomena-report.pdf&#34;&gt;Sandvine: Global Internet Phenomena Report 2018&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;Video is 58% of internet downstream traffic volume. &lt;/li&gt;
&lt;li&gt;Netflix is 15% of all internet downstream traffic&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.carbonbrief.org/factcheck-what-is-the-carbon-footprint-of-streaming-video-on-netflix&#34;&gt;Factcheck: What is the carbon footprint of streaming video on Netflix?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://henryhxu.github.io/share/mascots13.pdf&#34;&gt;Carbon-aware Load Balancing for Geo-distributed Cloud Services&lt;/a&gt; (PDF)&lt;/li&gt;
&lt;/ul&gt;
</content>
        <category term="mozilla" />
        <category term="sustainable" />
        <updated>2021-05-29T00:32:33.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/10/on-finding-productivity.html</id>
        <title>On finding productivity</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/10/on-finding-productivity.html"/>
        <content type="html">&lt;p&gt;Recently, I joined a new-to-me team at Mozilla and started working on Firefox. Its not been an easy transition - from &lt;a href=&#34;http://www.sam-i-am.com/blog/tags/project-haiku/&#34;&gt;the stuff I was doing in the Connected Devices group&lt;/a&gt; to getting back to fixing bugs and writing code every day. And not just any code: the Firefox codebase is large and spread across a couple of decades. Any change is an exercise in code-sleuthing, to understand what it does today, why it was implemented that way and how to implement a patch that doesnt fix one thing while breaking a dozen others.&lt;/p&gt;
&lt;p&gt;My intuition on how long a task should take has been proven so wildly wrong so many times in the last few months that I’ve had to step back and do some hard thinking. Do I just suck at this? Or am I pushing hard but in the wrong direction? Sometimes I think I’m just getting worse as a developer/software engineer over time, not better.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;In truth, I have good days and bad days. On the bad days, the slightest snag, obfuscation of the problem, or ambiguity around how to proceed can freeze me up. I stare at it, futz with it. Procrastinate. Every possible action seems too complicated for my small brain, or highlights something I haven’t learned well enough to proceed with. On these days, I count any movement forward at all as a success. Some trivial bug fixed, some observation noted down - it’s better than nothing.&lt;/p&gt;
&lt;p&gt;Then there are the good days. By their nature they are not as note-worthy or memorable. I work through the tasks in front of me, fixing bugs and getting stuff done. I follow the trail to the end, note the solution and implement it. Maybe I see opportunities for future improvements or help out a colleague. The day ends and I go home feeling satisfied and ready to go at it again the next day.&lt;/p&gt;
&lt;h2 id=&#34;Checklists-and-self-hacks&#34;&gt;&lt;a href=&#34;#Checklists-and-self-hacks&#34; class=&#34;headerlink&#34; title=&#34;Checklists and self-hacks&#34;&gt;&lt;/a&gt;Checklists and self-hacks&lt;/h2&gt;&lt;p&gt;I’ve tried out lots of ways of turning bad days into good days. I have a list of check lists that I sometimes have the presence of mind to consult. One example goes like this:&lt;/p&gt;
&lt;h3 id=&#34;For-extrication-from-the-weeds&#34;&gt;&lt;a href=&#34;#For-extrication-from-the-weeds&#34; class=&#34;headerlink&#34; title=&#34;For extrication from the weeds:&#34;&gt;&lt;/a&gt;For extrication from the weeds:&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Q: What needs to be accomplished? Is there a logical set of steps to get from here to there?&lt;/li&gt;
&lt;li&gt;Q: Has this been done before? What patterns already exist for solving this kind of problem?&lt;/li&gt;
&lt;li&gt;Q: How many problems are you trying to solve? (Hint, the answer should be one)&lt;/li&gt;
&lt;li&gt;Q: Could the next step be simplified and still be useful?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And there are others - for starting a new feature, for code reviews, for wrapping up and landing a patch. Check-lists are great - they are a concise way of distilling hard-won experience into something actionable and repeatable.&lt;/p&gt;
&lt;p&gt;I keep notes on each task or bug I’m working on. I find a good first step is to write down all the questions that pertain to the problem, however obvious or simple. This list of questions then forms a task list and I can start filling in answers. Finding an answer to a question like “Q: wtf is this function supposed to do?” is a discrete, achievable task that removes an unknown and builds momentum. (A search of the code repository and bug database can tell me when it was introduced, by whom, and what problem it solved at the time.) Further questions start to close up the gaps in my knowledge and point to a path forward.&lt;/p&gt;
&lt;p&gt;Sometimes, just re-writing the problem as I understand it is enough to nudge me out of paralysis. It’s a kind of &lt;a href=&#34;https://en.wikipedia.org/wiki/Rubber_duck_debugging&#34;&gt;rubber duck debugging&lt;/a&gt;. Re-reading my earlier notes might jog something. Other times, the best thing I can do is stand up and walk away for a bit - breathe some outside air and observe other humans going about their business.&lt;/p&gt;
&lt;h2 id=&#34;Pomodoro&#34;&gt;&lt;a href=&#34;#Pomodoro&#34; class=&#34;headerlink&#34; title=&#34;Pomodoro&#34;&gt;&lt;/a&gt;Pomodoro&lt;/h2&gt;&lt;p&gt;I’ve had stints of success with the &lt;a href=&#34;https://en.wikipedia.org/wiki/Pomodoro_Technique&#34;&gt;Pomodoro technique&lt;/a&gt;. I find that breaking the day up into chunks, with that focus and rhythm, does sometimes help drive me forward. Again, it’s about building momentum. But my experience is that sometimes it’s just not a good fit. I no longer attempt to do this every day, but treat it as a useful tool to be employed when the time is right.&lt;/p&gt;
&lt;h2 id=&#34;Riding-in-others’-slipstream&#34;&gt;&lt;a href=&#34;#Riding-in-others’-slipstream&#34; class=&#34;headerlink&#34; title=&#34;Riding in others’ slipstream&#34;&gt;&lt;/a&gt;Riding in others’ slipstream&lt;/h2&gt;&lt;p&gt;I’m a “remotee”. I work as part of a distributed team, spread across the globe and separated by distance and time-zones. I work alone most of the time. That has advantages and disadvantages. One of the things you miss is the collective energy of co-workers and office neighbours that boosts you and helps you ride out the bumps and troughs. When all the above has failed to light a spark, I sometimes go looking for that energy. It turns out watching someone else tackle problems engages those parts of the brain that have thus far failed to engage. It takes time out of the day, but if the day was otherwise shot, it’s time well spent. &lt;a href=&#34;https://hero.handmade.network/&#34;&gt;Handmade Hero&lt;/a&gt; and &lt;a href=&#34;https://mikeconley.github.io/joy-of-coding-episode-guide/&#34;&gt;Mike Conley’s Joy of Coding&lt;/a&gt; are two “channels” I turn to at these times. Both hosts have a knack for taking objectively difficult problems and dismantling them into smaller, easier problems in a way that seems obvious with hindsight. And simply sharing this journey for a while is usually enough to clear the fog in my own brain and allow me to get back into the groove with my own work.&lt;/p&gt;
&lt;h2 id=&#34;The-swan-effect&#34;&gt;&lt;a href=&#34;#The-swan-effect&#34; class=&#34;headerlink&#34; title=&#34;The swan effect&#34;&gt;&lt;/a&gt;The swan effect&lt;/h2&gt;&lt;p&gt;Of course, history tends to record only successes. When you see a project launch, or a patch land - fully formed and functional - it represents the end-state of a process. There might have been many dead-ends, hours of head-scratching and frustration before finally finding success. This phenomenon is a variation of &lt;a href=&#34;https://www.goodreads.com/book/show/2272880.The_Drunkard_s_Walk&#34;&gt;the drunkard’s walk&lt;/a&gt;: why does he always end up in the ditch rather than just bouncing off the wall on the other side? He doesn’t. But once in the ditch he’s not getting out, and the ditch is the only place we ever notice him. Similarly with our efforts: to the observer we appear to glide gracefully on the surface, the commit history showing neatly interlocking solutions stacking together until the goal is met, while the thrashing below the surface goes largely unrecorded.&lt;/p&gt;
&lt;p&gt;These are the things I remind myself. It’s not supposed to be easy. I’ve done it before and I can do it again. I &lt;em&gt;do&lt;/em&gt; know how to do this and I’m privileged to work on a project where the outcome &lt;a href=&#34;https://www.mozilla.org/en-US/mission/&#34;&gt;really matters&lt;/a&gt;.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2017-10-18T17:40:12.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/04/reflections-on-project-haiku-experiences-in-reality.html</id>
        <title>Haiku Reflections: Experiences in Reality</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/04/reflections-on-project-haiku-experiences-in-reality.html"/>
        <content type="html">&lt;p&gt;Over the several months we worked on &lt;a href=&#34;https://wiki.mozilla.org/Connected_Devices/Projects/Project_Haiku&#34;&gt;Project Haiku&lt;/a&gt;, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.&lt;/p&gt;
&lt;p&gt;This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see &lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html&#34;&gt;my earlier post&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;The-problem-with-navigating-virtual-worlds&#34;&gt;&lt;a href=&#34;#The-problem-with-navigating-virtual-worlds&#34; class=&#34;headerlink&#34; title=&#34;The problem with navigating virtual worlds&#34;&gt;&lt;/a&gt;The problem with navigating virtual worlds&lt;/h2&gt;&lt;p&gt;One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years - web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2017/04/reflections-on-project-haiku-experiences-in-reality/virtualworlds.jpg&#34; class=&#34;&#34; title=&#34;Imagined space, Hackers-style&#34;&gt;&lt;/p&gt;

&lt;p&gt;As I write this, the display in front of me shows me content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;The flexibility this arrangement brings is truly incredible.  But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.&lt;/p&gt;
&lt;h2 id=&#34;Permission-to-uni-task&#34;&gt;&lt;a href=&#34;#Permission-to-uni-task&#34; class=&#34;headerlink&#34; title=&#34;Permission to uni-task&#34;&gt;&lt;/a&gt;Permission to uni-task&lt;/h2&gt;&lt;p&gt;Assigning a single function to a thing -  when the capability exists to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking; specialized and general. A synthesizer and an electric piano share origins and overlap in functions, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smart phones, wrist watches still provide a function, and project status and values. There’s a pragmatism and attractive simplicity to dedicating a single task to an object we use. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid into the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.&lt;/p&gt;
&lt;p&gt;The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.&lt;/p&gt;
&lt;p&gt;Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I’m in the kitchen. But when I’m downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I’m in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I’m at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It’s true that the smart phone started as a huge convenience by un-tethering us from the desk to carry our access to information - and its access to us - with us. But, by doing so, we lost some of the ability to walk away; to step out from a conversation or leave work behind.&lt;/p&gt;
&lt;img src=&#34;/blog/2017/04/reflections-on-project-haiku-experiences-in-reality/haiku-in-place.png&#34; class=&#34;img-block&#34; title=&#34;A concept rendering using one of the proposed form-factors for the Haiku device&#34;&gt;

&lt;p&gt;Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their home and in their lives, we saw again and again how poor a fit the best of today’s solutions were. What began as empowering and liberating has started to infringe on people’s freedom to choose how to spend their time.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;When I’m spending time on my computer, it’s just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a quote from one of our early interviews. It was a refreshing perspective and sentiments like this - as well as the moments of joy and connectedness that we saw were possible - that helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="project haiku" />
        <updated>2017-04-04T18:18:08.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/02/reflections-on-haiku-web-clients-and-web-resources.html</id>
        <title>Haiku Reflections: Web Clients and Web Resources</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/02/reflections-on-haiku-web-clients-and-web-resources.html"/>
        <content type="html">&lt;p&gt;This is part of a series of posts I’m writing to put down my thoughts on the recently retired &lt;a href=&#34;https://wiki.mozilla.org/Connected_Devices/Projects/Project_Haiku&#34;&gt;Mozilla Connected Devices Haiku project&lt;/a&gt;. For an overview, see &lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html&#34;&gt;my earlier post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;My understanding of the overarching goal for the Connected Devices group within Mozilla is to have a tangible impact on the evolution of the Internet of Things to maintain the primacy of the user; their right to own their own data and experience, to choose between products and organizations. We want Mozilla to be a guiding light, an example others can follow when developing technology in this new space that respects user privacy, implements good security and promotes open, common standards. In that context, the plan is to develop an IoT platform alongside a few carefully selected consumer products that will exercise and validate that platform and start building the exposure and experience for Mozilla in this space. Over the last few months, the vision for this platform has aligned with the emerging &lt;a href=&#34;https://www.w3.org/WoT/&#34;&gt;Web of Things&lt;/a&gt; which builds on patterns for attaching “Things” to the web.&lt;/p&gt;
&lt;p&gt;From one perspective, the web is just a network of interconnected content nodes. It follows that the scope for standardizing the evolution of the Internet of Things is to define a sensible architecture and build frameworks for incorporating these new devices and their capabilities to maintain interoperability, promote discoverability, etc. This maps well onto connected sensors, smart appliances and other physical objects whose attributes we want to query and set over the network. Give these things URLs and a RESTful interface and you get all the rich semantics of the web, addressability, tools, developer talent pool - the list goes on and on, and it’s all for “free”. In one stroke you remove the need for a lot of wheel re-inventions and proprietary-ness and nudge this whole movement in the direction of the interoperable, standardized web. It’s a no-brainer.&lt;/p&gt;
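&lt;p&gt;As a toy illustration of what “give these things URLs” can look like, here’s a simplified description of an imaginary sensor (Python; the device and its URL are invented, and the schema only loosely follows the shape of the W3C WoT Thing Description model), plus the URL a client would GET to read one of its properties:&lt;/p&gt;

```python
# A sketch of a Web of Things style "Thing Description" for an imaginary
# sensor. The device, hostname and exact schema are invented; the shape
# loosely follows the W3C WoT Thing Description idea of properties
# exposed behind plain, addressable URLs.
import json

thing_description = json.loads("""
{
  "title": "Porch Thermometer",
  "properties": {
    "temperature": {
      "type": "number",
      "unit": "celsius",
      "forms": [{"href": "https://thermometer.example.local/properties/temperature"}]
    }
  }
}
""")

def property_url(td, name):
    """Resolve the URL a client would GET to read a named property."""
    return td["properties"][name]["forms"][0]["href"]

print(property_url(thing_description, "temperature"))
```

&lt;p&gt;Once a Thing is described this way, any plain HTTP client - with no proprietary SDK - can discover and read it, which is the whole appeal.&lt;/p&gt;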
&lt;p&gt;In this context however, the communication device envisaged by Project Haiku is orthogonal. While you can model it to give URLs to the people/devices and the private communication channel they share, the surface area of the resulting “API” is tiny and has limited value. It &lt;em&gt;is&lt;/em&gt; conceptually powerful as it brings along all the normal web best practices for RESTful API design, access control, caching and offline strategies and so-on. Still, the Haiku device would be more web client than web resource and doesn’t fit neatly into this story.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;This and the relatively skinny overlap in shared functionality with the proposed Mozilla IoT platform was one of the rocks on which Project Haiku foundered. And I agree that it would make little sense to have teams pulling in different directions and plotting courses that by design would not benefit each other much. But I’m also sad about a missed opportunity here. It’s like we enlarged the stage but stopped short of taking the performance outside the theater.&lt;/p&gt;
&lt;p&gt;I didn’t set out with a personal ambition to create products from headless, embedded web clients. We asked questions and followed the answers and wound up in a place that seemed to make sense. With the emergence of cheap networking hardware, we can imagine using the web from devices other than the highly horizontal, multi-functional browsers on our desktop and mobile computers. When we could afford only a single computing device, it made sense to make highly flexible software capable of bringing us any experience the web could manage. Our web browser was a viewer for web content - any and all web content. It was left to users to figure out how to partition the different ways in which they used the web - the different hats they wore. Now, we can dedicate a device to a single task - and in doing so remove layer upon layer of complexity in the user experience. Instead of toolbars and menus and scrolling and clicks, typing or even speaking to request some part of the web, we can have a single button. Or maybe not even that - it’s just &lt;em&gt;there&lt;/em&gt; as long as the power and network permit. We can give some piece of the web a tangible, physical space in our lives. It could be a screen on the office wall that displays my bug list. I don’t need to context-switch as I switch tabs; instead, context changes naturally as I move from one room to another, and the information is in its proper &lt;em&gt;place&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The product Project Haiku proposed follows the same philosophy. Yes, you could use a phone, or a tablet or one of the many new and lovely VoIP devices to facilitate communication between two people. But they are behind an icon, tucked away under some menu - the device sits between you and them and allows you to speak through it. Contrast that to a device whose sole function is to keep a channel open to that one person. You can send a voice or emoji message at any time, and - if they are available and nearby - talk in real time. The device is a proxy for the person, and they are represented by it in real space in your home. In this scenario the internet is just magic geography-defying tubes between one house and another.&lt;/p&gt;
&lt;p&gt;I know Mozilla is not walking away from this entirely and I hope we’ll get to circle back and explore this some more. The same ideas have spontaneously emerged in one form or another too many times to not stick at some point. We already saw mobile apps packaging up content where the app is essentially a single-task browser without all the noise. In the app store duopoly, these apps represent gated communities, taking chunks of the web and building walls around them. In IoT we have another opportunity to fix this - to keep the benefits &lt;em&gt;and&lt;/em&gt; maintain choice, freedom, privacy and security for the users of the technology rather than its keepers. We should attack it from both ends: the publishing and the requesting of content; both resource and client.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="project haiku" />
        <updated>2017-02-02T17:32:00.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-accounts.html</id>
        <title>Reflections on Project Haiku: Accounts and Ownership</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-accounts.html"/>
        <content type="html">&lt;p&gt;This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. By focusing on the user problem and not the business model, we quickly determined that we wanted as little data from our users as we could get away with. For context and an overview of the project, please see &lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html&#34;&gt;my earlier post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When I was a kid, my brothers and I had wired walkie-talkies. Intercoms, really. The units were connected by about 100’ of copper wire. With one unit downstairs and the wire trailing dangerously under doors and up the stairs, we could communicate between kitchen and bedroom. Later, in order to talk with a friend in the apartment block opposite us, we got a string pulled taut between our two balconies. With tin cans on each end of the string, you could just about hear what the other was saying.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2017/01/reflections-on-project-haiku-accounts/tradtelefon-illustration.png&#34; class=&#34;&#34; title=&#34;Direct, one-to-one communication&#34;&gt;&lt;/p&gt;

&lt;p&gt;RF-based wireless communication had existed for a long time already, but I bring these specific communication examples up because the connection we made was exclusive and private.&lt;/p&gt;
&lt;p&gt;We didn’t need to agree on a frequency and hope no-one else was listening in. The devices didn’t just enable the connection, they &lt;em&gt;were&lt;/em&gt; the connection. We didn’t sign up for a service, didn’t pay any subscription, and when we tired of it and it was given away, no contracts needed to be amended; the new owners simply picked up each end and started their own direct and private conversation. In Project Haiku, when we thought about IoT and connecting people, this was the analogy we adopted.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;That doesn’t sound like a very radical position to take. But look around at some of the ways you communicate with friends and loved ones today when you are physically apart. Cellular/SMS, Facebook, Skype, Facetime, Twitter, WhatsApp, Telegram… In each and every case you have accounts and arrangements with companies to make communication possible. And in each case that company can log at least the metadata if not the content of your conversations, change terms of agreement and policies, raise prices, or terminate the agreement entirely and disconnect you. They can be subpoenaed for their records and may even be obliged by law to retain and hand over data about their customers’ transactions. In our tin can and string analogy, there’s a large, locked black junction box sitting between you and your friend. You may own the equipment you use on your end, but it can be rendered effectively useless or even intrusive and hostile at any time, and there’s probably not a thing you can do about it. Everything about this situation is wrong for the direct, personal and private channel we wanted to establish to let kids and grandparents share moments and be a part of each other’s lives from a distance.&lt;/p&gt;
&lt;p&gt;Clearly, some parts of this problem are more tractable than others. We’re not in the ISP business for example; how you get to the internet is outside our control. But, keeping this analogy in mind provided a north star for our project. Whenever we were faced with a decision to make, it helped steer us. So when we thought about the unboxing and setup experience, we asked ourselves, “Do we actually need user accounts for these people?”&lt;/p&gt;
&lt;p&gt;Here’s the typical scenario when you install an app or take delivery of some shiny new tech. You plug it in or fire it up for the first time and you are asked to log in or sign up. You create a new account with company X, providing your name, address, email, maybe gender or age bracket, perhaps they want categories of interest, and an agreement to be spammed. If it’s a paid service, they’ll want your credit card info too. Furthermore, the app then requests a set of permissions giving it access to your address book. Remember, the goal here is to allow a kid and their grandparent to exchange messages and chat from time to time. Which of these do we as Mozilla - the service provider - actually need?&lt;/p&gt;
&lt;p&gt;Name? These two people are already in touch, so they don’t need to find each other in a directory. Their invitations to connect/pair could take the form of a URL sent via text, or a QR code printed and sent via snail mail. These devices only connect these two people, so we don’t have to identify who a message is from. And even if we did want that, they could configure it and send it from the device itself. We don’t really need to store their names.&lt;/p&gt;
&lt;p&gt;Address? Why would we care? We don’t need to send them anything. If they do need to replace a device, they can provide a shipping address at that time.&lt;/p&gt;
&lt;p&gt;Email? Most companies want to maintain a relationship with their customers. They’ll email news of other services, offers of upgrades from time to time. It’s called customer engagement, and that database of email addresses is one of a company’s key assets. What if we didn’t do that?&lt;/p&gt;
&lt;p&gt;What if we treated this product just like the tin cans, or the intercom, or any other thing you might purchase from a store? There’s a single transaction to acquire the thing, and that’s it. In this scenario, we only care about the device itself; it’s owned and used by whoever has it, and they can transfer it or sell it on and we don’t need to know or care. In our grandparent/grandchild scenario, as one child grows up, maybe they pass it on to a younger sibling. Or gift it to another family. All the users need is a way to break the “connection” that ties their devices together, and a way to start over with the invitation-to-connect process. The device itself needs to be uniquely identified to facilitate this, but not the user.&lt;/p&gt;
&lt;p&gt;How this would shake out is one of the things we’ll have to wait on, now that Project Haiku is on hold. Would it really have been practical to run a service like this with no visibility into who was using it? Would we be able to run the service at a low enough cost to allow us to support those devices indefinitely? Would this proposition have been understood and embraced by the market? The anonymity and opacity works both ways: we can’t retrieve message histories for users, and we can’t restore lost connections from the server side. If a device was stolen or even just picked up by a sibling, we can’t filter or block connections and nuisance messages. Each connected/paired device can sever that pairing, but as long as they are connected, any message between the two is legitimate by definition.&lt;/p&gt;
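&lt;p&gt;The server-side state this implies is tiny. A minimal sketch, assuming the only thing tracked is which device is paired to which (all names here are invented for illustration, not our actual implementation):&lt;/p&gt;

```javascript
// Sketch of device-only pairing state: no user accounts, just device ids.
// createPairingTable() is a hypothetical name for illustration.
function createPairingTable() {
  const pairedWith = {}; // deviceId -> deviceId

  return {
    pair: function (a, b) {
      pairedWith[a] = b;
      pairedWith[b] = a;
    },
    // Break the connection from either end; both sides are cleared.
    sever: function (a) {
      const b = pairedWith[a];
      delete pairedWith[a];
      if (b) delete pairedWith[b];
    },
    // "Any message between the two is legitimate by definition."
    isLegitimate: function (from, to) {
      return pairedWith[from] === to;
    }
  };
}
```

&lt;p&gt;Severing and re-pairing is everything a new owner needs; there is simply no user record to transfer, recover or subpoena.&lt;/p&gt;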
&lt;p&gt;We’ve grown accustomed to the need for user accounts, and for some part of our relationships to be owned and gated by third parties we maintain agreements with. If Project Haiku and its aspirations can serve to question these assumptions and provide some food for thought, it was time well spent.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="project haiku" />
        <updated>2017-01-30T22:30:47.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-webrtc.html</id>
        <title>Reflections on Project Haiku: WebRTC</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-webrtc.html"/>
        <content type="html">&lt;p&gt;This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. We landed on a WebRTC-based implementation of a 1:1 communication device. For an overview of the project as a whole, see &lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html&#34;&gt;my earlier post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2017/01/reflections-on-project-haiku-webrtc/webrtc-triangle.png&#34; class=&#34;&#34; title=&#34;WebRTC triangle diagram&#34;&gt;&lt;/p&gt;

&lt;p&gt;This was one of those insights that seems obvious with hindsight. If you want to allow two people to communicate privately and securely, using non-proprietary protocols, and have no need or interest in storing or mediating this communication - you want WebRTC.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;For Project Haiku we had a list of reasons for not wanting to write a lot of server software. Mozilla takes user privacy seriously, and the best way to protect user data is to not collect it in the first place. We also wanted to minimize lock-in and make the product easily portable to other providers. The convenience of cloud services comes at a price: it can be hard to move once you start investing time into using a service. Our product aimed to facilitate communication between a grandparent and grandchild. We didn’t want to intrude into that by up-selling some premium service. There really wasn’t much the server would need to do if we did this right.&lt;/p&gt;
&lt;h2 id=&#34;Connect-us-and-go-away&#34;&gt;&lt;a href=&#34;#Connect-us-and-go-away&#34; class=&#34;headerlink&#34; title=&#34;Connect us and go away&#34;&gt;&lt;/a&gt;Connect us and go away&lt;/h2&gt;&lt;p&gt;Here’s how it worked. Grandparent and grandchild want to talk more, so the grandparent (or parent) installs Device A in the child’s home and connects it to WiFi. The grandparent either installs the app or sets up their own Device B. An invitation to connect/pair can be generated from either side and sent to the other party. Once “paired” in this way, when both devices (peers) connect to the server, a secure channel is negotiated directly between the peers and the server’s work is done. Actual messages and data are sent directly between the peers.&lt;/p&gt;
&lt;h2 id=&#34;STUN-TURN-and-making-it-work&#34;&gt;&lt;a href=&#34;#STUN-TURN-and-making-it-work&#34; class=&#34;headerlink&#34; title=&#34;STUN, TURN and making it work&#34;&gt;&lt;/a&gt;STUN, TURN and making it work&lt;/h2&gt;&lt;p&gt;On the server side, we need to authenticate each incoming connection and shuttle the negotiation of offers and capabilities between two clients. In WebRTC terminology, this broker is called a signaling server. We built a simple &lt;a href=&#34;https://github.com/mozilla/project_haiku_webrtc_signaling.iot&#34;&gt;proof of concept using node.js and WebSocket&lt;/a&gt;. The other necessary components of this system are a &lt;a href=&#34;https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/&#34;&gt;STUN and TURN server&lt;/a&gt; - both well defined and with existing open source implementations. The complexities associated with WebRTC kick in with multi-party conferencing, where different data streams might need to be composited together on the server or client or both. Then there’s the need for real-time transcoding and re-sampling of audio and video streams to fit the capabilities of the different clients wanting to connect, and the networks they are connecting over. And interfacing with traditional telephony stacks and networks. In this landscape, the very limited set of parameters needed for Haiku’s WebRTC use-case makes the solution relatively simple - we just don’t need most of the things that bring along all that complexity.&lt;/p&gt;
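&lt;p&gt;The heart of such a signaling server is just message forwarding between two paired peers. A minimal sketch of that routing logic, with the WebSocket transport abstracted behind a send() callback (the names are hypothetical, not from our actual proof of concept):&lt;/p&gt;

```javascript
// Sketch: a signaling broker's core job is relaying offer/answer/candidate
// messages along an established pairing. Transport is injected via send().
function createSignalingRelay() {
  const peers = {}; // deviceId -> { pairedWith, send }

  return {
    // Called once a connection is authenticated for a given device id.
    register: function (deviceId, pairedWith, send) {
      peers[deviceId] = { pairedWith: pairedWith, send: send };
    },
    // Forward a message only if both ends of the pairing agree on it.
    relay: function (from, message) {
      const sender = peers[from];
      if (!sender) return false;
      const target = peers[sender.pairedWith];
      if (!target || target.pairedWith !== from) return false;
      target.send(message);
      return true;
    }
  };
}
```

&lt;p&gt;Offers, answers and ICE candidates all pass through relay() opaquely; once the peers have negotiated their channel, the server has nothing left to do.&lt;/p&gt;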
&lt;h2 id=&#34;The-client-and-the-catch&#34;&gt;&lt;a href=&#34;#The-client-and-the-catch&#34; class=&#34;headerlink&#34; title=&#34;The client and the catch&#34;&gt;&lt;/a&gt;The client and the catch&lt;/h2&gt;&lt;p&gt;There is always a catch, isn’t there? Almost all WebRTC client implementation effort to date has come from desktop browser vendors. Search the web and most of what you find about WebRTC assumes you are using a conventional browser like Firefox, Chrome/Chromium etc. That’s no use for the Haiku device, where we are running in an embedded Linux environment, without the expected display or input methods, and with limited system capabilities. Existing standalone headless web clients (such as curl, or node.js’ built-in HTTP modules) do not yet speak WebRTC. There is some useful work in the &lt;a href=&#34;https://github.com/js-platform/node-webrtc&#34;&gt;wrtc&lt;/a&gt; module, which provides native bindings for WebRTC, provided you can compile for your architecture. We were able to use this to put together a simple proof of concept, running on Debian on a BeagleBone Black. wrtc gives you the PeerConnection and DataChannel but no audio/video media streams. It was enough for us to taste sweet prototype success: a headless single-board computer securely contacting and authenticating at our signaling server, and conducting a P2P, fully opaque exchange of messages with a remote client.&lt;/p&gt;
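&lt;p&gt;With only a DataChannel to work with, discrete messages can be framed as small JSON strings. A sketch of what that framing might look like (the field names and helpers are illustrative, not from the actual prototype):&lt;/p&gt;

```javascript
// Sketch: framing Haiku-style emoji/status messages for a DataChannel.
// encodeMessage/decodeMessage and the field names are invented here.
function encodeMessage(kind, body) {
  return JSON.stringify({ v: 1, kind: kind, body: body, ts: Date.now() });
}

function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.v !== 1) throw new Error('unsupported message version');
  return msg;
}

// A heart emoji message, round-tripped as it would be over the wire.
const raw = encodeMessage('emoji', '\u2764');
const msg = decodeMessage(raw);
```

&lt;p&gt;In a real client, the encoded string would go to the DataChannel’s send() on one side and be decoded in the onmessage handler on the other - opaque to the signaling server either way.&lt;/p&gt;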
&lt;p&gt;Going from this smoke test of the concept to a complete and robust implementation is definitely doable, but it’s not a trivial piece of work. Our user studies concluded that the asynchronous exchange of discrete messages was good for some scenarios, but the kids and grandparents also wanted to talk in real time. So picking this back up means solving enough of the headless WebRTC client problem to enable audio streaming between devices. And, with the added need to support a mobile app as a client, likely transcoding audio too. &lt;a href=&#34;https://github.com/js-platform/node-webrtc/issues/156&#34;&gt;Bug 156&lt;/a&gt; on the wrtc module’s repo discusses some options.&lt;/p&gt;
&lt;h2 id=&#34;What-next&#34;&gt;&lt;a href=&#34;#What-next&#34; class=&#34;headerlink&#34; title=&#34;What next?&#34;&gt;&lt;/a&gt;What next?&lt;/h2&gt;&lt;p&gt;Putting the Haiku project on hold has meant walking away from this. I hope others will arrive at the same conclusions and we’ll see WebRTC adoption expand beyond the browser. There are so many possibilities. Just stop for a moment and count the number of ways in which one device needs to talk securely to another using common protocols. Yet for reasons that suddenly seem unclear, this conversation is gated and channeled (and observed and logged) through a server in the cloud.&lt;/p&gt;
&lt;p&gt;The desktop and mobile browser each represent just one way to connect users to the Web. There are others, and we should be looking into them. Although Mozilla exists to promote and protect the open web, it is historically a browser company. I can’t tell you the number of long conversations I’ve had with colleagues which end with “wait, you mean this isn’t happening in the browser?” Moving into IoT and Connected Devices means challenging this. We’ve set aside that challenge for now; I sincerely hope we’ll come back to it.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="project haiku" />
        <updated>2017-01-30T22:06:11.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html</id>
        <title>Reflections on Project Haiku</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku.html"/>
        <content type="html">&lt;p&gt;I’ve &lt;a href=&#34;http://sam-i-am.com/blog/2016/10/emoji-plus-voice-prototype.html&#34;&gt;written before on this blog&lt;/a&gt; about my current project with Mozilla’s Connected Devices group: &lt;a href=&#34;https://wiki.mozilla.org/Connected_Devices/Projects/Project_Haiku&#34;&gt;Project Haiku&lt;/a&gt;. Last week, after close to 9 months of exploration, prototyping and refinement this project was put on hold indefinitely.&lt;/p&gt;
&lt;p&gt;So I wanted to take this opportunity - a brief lull before I get caught up in my next project - to reflect on the work and many ideas that Project Haiku produced. There are several angles to look at it from, so I’ll break it down into separate blog posts. In this post I’ll provide a background of the what, when and why as a simple chronological story of the project from start to finish.&lt;/p&gt;
&lt;p&gt;Other posts in this series:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-webrtc.html&#34;&gt;Reflections on Project Haiku: WebRTC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2017/01/reflections-on-project-haiku-accounts.html&#34;&gt;Reflections on Project Haiku: Accounts and Ownership&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2017/02/reflections-on-haiku-web-clients-and-web-resources.html&#34;&gt;Haiku Reflections: Web Clients and Web Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2017/04/reflections-on-project-haiku-experiences-in-reality.html&#34;&gt;Haiku Reflections: Experiences in Reality&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;Phase-0-Are-we-solving-the-right-problem&#34;&gt;&lt;a href=&#34;#Phase-0-Are-we-solving-the-right-problem&#34; class=&#34;headerlink&#34; title=&#34;Phase 0: Are we solving the right problem?&#34;&gt;&lt;/a&gt;Phase 0: Are we solving the right problem?&lt;/h2&gt;&lt;p&gt;Back in March 2016, with Firefox OS winding down and most of that team off exploring the field of IoT and the smart home, &lt;a href=&#34;http://ezoehunt.com/&#34;&gt;Liz&lt;/a&gt; proposed a vision for a project that would tackle smart home problems in a way that was more grounded in human experience and recognized the diversity of our requirements from technology and our need to have it reflect our values - both aesthetically and practically. I had been experimenting with ideas like the smart mirror and this human-centric direction resonated with me. A team gathered around her proposal and we started digging.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;It quickly became clear that the “smart home” box wasn’t a useful constraint. Connecting things around the home in a way that felt valuable and reflective of the principles we’d identified for this project was proving elusive. So we stepped back and did some design thinking: are we asking the right question? What do people really want from technology in the context of the home? And which people are we talking about? This led us to a study in which we interviewed a set of teens and retirement-age folks on themes of freedom and independence in the home. You can find &lt;a href=&#34;https://docs.google.com/presentation/d/1i0ocmaQBrk4G4VfcGoYP7hJDql8QVrfSd7_l6-hqvd8&#34;&gt;more details on the study here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;Connecting-people&#34;&gt;&lt;a href=&#34;#Connecting-people&#34; class=&#34;headerlink&#34; title=&#34;Connecting people&#34;&gt;&lt;/a&gt;Connecting people&lt;/h2&gt;&lt;p&gt;Of the themes that emerged from this study, we chose to focus on that of connecting people. We saw the same needs repeated over and over: people wanted to share moments, to maintain a presence in each other’s lives. At the same time there was a sense of loss of control and growing obligation from smart phones and social media; of being spread too thin. Over the next few months, we built test devices to better understand this problem, and conducted further studies, eventually arriving at a simple wearable device that would show real-time status for a small group of friends and family.&lt;/p&gt;
&lt;img src=&#34;/blog/2017/01/reflections-on-project-haiku/illustration-hands.png&#34; class=&#34;img-block&#34; title=&#34;Wearable mockup&#34;&gt;

&lt;p&gt;We were happy to see that a few other companies had arrived at similar conclusions - taking their own journeys to get to this point. Products like &lt;a href=&#34;http://ringly.com/&#34;&gt;Ringly&lt;/a&gt; and the &lt;a href=&#34;http://goodnightlamp.com/&#34;&gt;Goodnight Lamp&lt;/a&gt; embodied some of the same thinking. Our idea for a wearable product was very much informed by Mozilla’s ethos and mission. In this simple device we were going to implement what amounted to a simple wearable web client, capable of monitoring a handful of URLs and “displaying” the changing values supplied by those endpoints as visual light patterns and haptic feedback. We would bring the &lt;a href=&#34;https://www.mozilla.org/about/manifesto/&#34;&gt;values of the web&lt;/a&gt; to the world of connected wearables, and bring both peace of mind and small moments of joy to young people at a time when many in the industry seem intent on exploiting their Fear of Missing Out, and are sometimes cavalier in their handling of privacy and data ownership.&lt;/p&gt;
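&lt;p&gt;To make that concrete, here is one shape the client logic might have taken - a plain mapping from the status value an endpoint reports to a light pattern (the statuses and patterns are invented for illustration, not from the project):&lt;/p&gt;

```javascript
// Sketch: mapping a friend's status value (fetched from their URL) to a
// light pattern on the wearable. All status and pattern names are invented.
const PATTERNS = {
  available:       { color: 'green', pulse: 'slow' },
  busy:            { color: 'amber', pulse: 'none' },
  thinking_of_you: { color: 'pink',  pulse: 'heartbeat' }
};

function patternFor(status) {
  // Unknown or unreachable endpoints fall back to lights-off.
  return PATTERNS[status] || { color: 'off', pulse: 'none' };
}

// A real client would poll (or hold open a connection to) each monitored
// URL and call patternFor() whenever the returned value changes.
```

&lt;p&gt;The point of keeping the mapping this dumb is that the device stays a pure web client: all the meaning lives in the endpoints, which anyone could host.&lt;/p&gt;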
&lt;h2 id=&#34;Stumbling-and-a-change-in-direction&#34;&gt;&lt;a href=&#34;#Stumbling-and-a-change-in-direction&#34; class=&#34;headerlink&#34; title=&#34;Stumbling and a change in direction&#34;&gt;&lt;/a&gt;Stumbling and a change in direction&lt;/h2&gt;&lt;p&gt;Getting to grips with what it would take to produce this device and re-building momentum lost over the summer break had cost us though. Just as this picture came into focus and we started to take the next steps in the plan, the team was called to account. Our enthusiasm and confidence in the product was not shared by the innovation board. There was some skepticism of our premise - that our audience of teenage girls would want such a thing - despite the research we had done. And there were concerns about our ability to contain the cost and complexity implicit in the small, wearable form-factor.  Given the finite resources available to the Connected Devices group and the ambitions of our project relative to the experience and expertise available to us, from the outside it looked like we were heading off into the weeds.&lt;/p&gt;
&lt;p&gt;At the same time, another team had concluded an exploratory project with an outside agency and had produced a report echoing many of the needs and values Haiku had identified. They had proposed a (non-mobile) device for the home which would facilitate communication and sharing between friends and family. We decided to put aside the wearable and pick up where this report left off. I’ve written already about some of this work. &lt;a href=&#34;http://www.sam-i-am.com/blog/2016/10/emoji-plus-voice-prototype.html&#34;&gt;We produced a “learning prototype”&lt;/a&gt; to home in some more on what people wanted from a device like this, and where we could have the most impact. We adopted a new target audience and use case - communication between kids and grandparents - and assessed priorities and features. We did some technical exploration and landed on what was essentially a WebRTC application, running on an embedded Linux device. The WebRTC architecture was a great fit: private and secure by default, with no need to store or pass personal communications through Mozilla’s servers. Each connection is point to point, and the very personal and private content implicit in the use cases would always be encrypted. With little to no data to store, an open-source codebase for client and services and a minimum of setup, we could minimize the risk/threat of lock-in for the device owner.&lt;/p&gt;
&lt;p&gt;Meanwhile, we had questions. How might this device be used? What kinds of messages would these people want to send? Should we store missed messages? Is the device portable or not? We knocked together another prototype, this time using left-over phones from FxOS days to gather data and feedback from a set of grandparents and grandchildren over a couple of week-long studies.&lt;/p&gt;
&lt;h2 id=&#34;Fleshing-out-the-idea&#34;&gt;&lt;a href=&#34;#Fleshing-out-the-idea&#34; class=&#34;headerlink&#34; title=&#34;Fleshing out the idea&#34;&gt;&lt;/a&gt;Fleshing out the idea&lt;/h2&gt;&lt;p&gt;The culmination of this work was a product definition that included the user market and use cases, the features and principles, as well as details on what we would need to implement and how. We had landed on a concept for a device and service that would give grandparents and their grandchildren an easy, one-touch experience to share moments using audio or emoji. The child would have a dedicated connected device, explicitly and exclusively paired to an app installed on the grandparent’s phone. We observed a magical thing emerging from the simplicity and directness of the experience: kids were able to carry out “conversations” without any assistance from their parents; they could own their relationship with distant loved ones. This was the real value proposition. Project Haiku wasn’t presenting a technical breakthrough as such, but taking existing technology and fostering joy, confidence and agency using open standards and the infrastructure of the web.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2017/01/reflections-on-project-haiku/bird-id.jpg&#34; class=&#34;&#34; title=&#34;Early device design concept by our industrial design contractor&#34;&gt;&lt;/p&gt;

&lt;p&gt;The process we follow in Connected Devices has a “Gate 1” milestone in which for a project to move forward, it should present a clear picture of what the product will be, demonstrate viability and a market fit, and detail what it will take to get there. It is evaluated against these and other criteria including alignment with the Mozilla mission, and alignment with the collective vision for Connected Devices. In December we presented to the board and found out later that week that we had met the criteria and passed Gate 1. However…&lt;/p&gt;
&lt;h2 id=&#34;Back-burnered&#34;&gt;&lt;a href=&#34;#Back-burnered&#34; class=&#34;headerlink&#34; title=&#34;Back-burnered&#34;&gt;&lt;/a&gt;Back-burnered&lt;/h2&gt;&lt;p&gt;The “however” was about resources and priorities: people, money and time. We simply couldn’t pursue all of the products at this time. In the context of the emerging game plan for Connected Devices, Haiku was not a high priority, and other projects that were, were hurting for lack of people to work on them. So Project Haiku is on the back-burner. It’s possible, though unlikely, that we’ll be able to revisit it and pick development back up later this year. In the meantime, the best we can do is to ensure that the work and findings from this project are well documented so the organization and the community have the opportunity to learn what we learned.&lt;/p&gt;
&lt;p&gt;To that end, I’ll be putting my thoughts to paper on this blog on a series of topics which Project Haiku touched. As usual with Mozilla, all our code and documents are publicly available. Please find me on IRC in the #haiku channel (irc.mozilla.org) as sfoster, or through my Mozilla or personal email (sfoster at mozilla, sam at sam-i-am.com) if you have any questions. I’m also on Twitter etc. as samfosteriam.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="project haiku" />
        <updated>2017-01-30T18:01:48.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/10/emoji-plus-voice-prototype.html</id>
        <title>Emoji + Voice Prototype</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/10/emoji-plus-voice-prototype.html"/>
        <content type="html">&lt;h2 id=&#34;Project-Haiku-Update&#34;&gt;&lt;a href=&#34;#Project-Haiku-Update&#34; class=&#34;headerlink&#34; title=&#34;Project Haiku Update&#34;&gt;&lt;/a&gt;Project Haiku Update&lt;/h2&gt;&lt;p&gt;At Mozilla, I’m still working with a team on &lt;a href=&#34;https://wiki.mozilla.org/Connected_Devices/Projects/Project_Haiku&#34;&gt;Project Haiku&lt;/a&gt;. Over the summer we had closed in on a wearable device used for setting and seeing friends’ status. It took a while for that to crystallize though, and as we started the process of building an initial Bluetooth wearable prototype, our team was handed an ultimatum: Go faster or stop.&lt;/p&gt;
&lt;p&gt;We combined efforts and ideas with another Mozilla team that had arrived at some very similar positions on how connected devices should meet human needs. As I write we are concluding a user study in which 10 pairs of grandparents and school-age grandchildren have been using a simple, dedicated communication device.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2016/10/emoji-plus-voice-prototype/jcheng-emoji-phone-enclosure-800w.jpg&#34; class=&#34;&#34; title=&#34;The finished unit&#34;&gt;&lt;/p&gt;

&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;The premise was that these two groups want to interact more often - to be a small, more constant part of each other’s lives - and that this is impeded by needing a parent to schedule and facilitate voice/Skype/Facetime calls. What if they had a single-button, direct connection they could use without the parent’s help?&lt;/p&gt;
&lt;h2 id=&#34;Prototyping-for-the-User-Study&#34;&gt;&lt;a href=&#34;#Prototyping-for-the-User-Study&#34; class=&#34;headerlink&#34; title=&#34;Prototyping for the User Study&#34;&gt;&lt;/a&gt;Prototyping for the User Study&lt;/h2&gt;&lt;p&gt;We wanted to prove/disprove this need, and to explore the relative merits of synchronous (real-time) communication in the form of a voice call, alongside asynchronous messages in the form of simple emoji messages. We built a prototype using Firefox OS on Sony phones with a simple user interface: a button for each of the emoji we’d selected, and a call/pickup/hangup button. The devices were explicitly and exclusively connected: each device would only accept incoming calls and SMS from the other in the pair, and could only dial and send SMS to that one phone.&lt;/p&gt;
&lt;h2 id=&#34;The-User-Experience&#34;&gt;&lt;a href=&#34;#The-User-Experience&#34; class=&#34;headerlink&#34; title=&#34;The User Experience&#34;&gt;&lt;/a&gt;The User Experience&lt;/h2&gt;&lt;p&gt;For the interface, we didn’t want to get hung up on custom hardware, so we gambled that we could implement a software UI on a touch-screen phone without carrying too many smartphone feature expectations and too much baggage into the user study. We also wanted a fixed/stationary object rather than a portable one - to give the grandparent/child a physical space and representation in the home. As sound output from the phone’s built-in speaker was lacking, we attached a USB-powered external speaker. This and the phone were housed in a 3d printed enclosure/stand, and the unit was supplied with plug-in USB power.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2016/10/emoji-plus-voice-prototype/ui-screenshot.png&#34; class=&#34;&#34; title=&#34;Screenshot of the UI&#34;&gt;&lt;/p&gt;

&lt;h2 id=&#34;Pulling-it-Together&#34;&gt;&lt;a href=&#34;#Pulling-it-Together&#34; class=&#34;headerlink&#34; title=&#34;Pulling it Together&#34;&gt;&lt;/a&gt;Pulling it Together&lt;/h2&gt;&lt;p&gt;Each phone was fitted with a prepaid SIM, and configured with our custom software (a replacement system app) and the telephone number of the SIM in the paired device. This allowed us to side-step WiFi setup and troubleshooting and get voice calls and SMS (to carry our emoji messages) with very little development time. We were also able to use the usage (billing) reports from the carrier as a great source of data - indicating when SMS were sent, and both outgoing and incoming call time and duration. We wrote a python script to extract this data from the PDFs the carrier provided for download, and a selection of HTML/JS charts and reports to visualize the data. That &lt;a href=&#34;https://github.com/mozilla/project_haiku_dataviz.iot/&#34;&gt;code is on github&lt;/a&gt;, and we deployed to a Heroku node.js app to share the results with the team.&lt;/p&gt;
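&lt;p&gt;A minimal sketch of that extraction step - the statement line format below is invented for illustration (real carrier PDFs differ), but the approach of running a regex over the text pulled out of the PDF is the same:&lt;/p&gt;

```python
import re

# Hypothetical carrier-statement line, e.g. "10/02 14:05 CALL 5550001 3m"
# or "10/02 15:10 SMS 5550001". Groups: date, time, kind, number, minutes.
LINE_RE = re.compile(
    r"(\d{2}/\d{2})\s+(\d{2}:\d{2})\s+(CALL|SMS)\s+(\S+)(?:\s+(\d+)m)?")

def parse_usage(text):
    """Turn text extracted from a billing PDF into a list of records."""
    records = []
    for match in LINE_RE.finditer(text):
        date, time, kind, number, mins = match.groups()
        records.append({
            "date": date, "time": time, "kind": kind, "number": number,
            # SMS lines carry no duration
            "mins": int(mins) if mins else None,
        })
    return records
```

&lt;p&gt;From there the records can be grouped by day or by device pair and fed straight into the charting code.&lt;/p&gt;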
&lt;p&gt;&lt;img src=&#34;/blog/2016/10/emoji-plus-voice-prototype/timeline-dataviz.png&#34; class=&#34;&#34; title=&#34;Timeline data visualization&#34;&gt;&lt;/p&gt;

&lt;h2 id=&#34;Did-it-work&#34;&gt;&lt;a href=&#34;#Did-it-work&#34; class=&#34;headerlink&#34; title=&#34;Did it work?&#34;&gt;&lt;/a&gt;Did it work?&lt;/h2&gt;&lt;p&gt;This study and the prototype were not without problems. We had only a couple of weeks to go from nothing to working devices in study participants’ hands. Audio quality and volume were an ongoing problem. Having decided to attach an external speaker, we then had to house it as one self-contained unit. I designed &lt;a href=&#34;https://github.com/sfoster/project_haiku_3d.iot/blob/master/experiment-9/z3c-speaker-enclosure.stl&#34;&gt;this enclosure&lt;/a&gt; and truthfully it was a scramble. We initially wanted an enclosure just for the phone - to hold it securely at an optimum angle. Adding the speaker - which was an irregular shape - was a bit of a challenge. I didn’t hit on the best/simplest way to bring all the parts together until afterwards. Also, I was not planning on being on-site for assembly and packing, so I was trying to make something with a minimum of parts and assembly steps. And, 20 of anything is quite a lot of 3d printing - it would have taken days to complete on my single printer, so we ended up sending files over to a company in the Bay Area who would get it all done and delivered to the office in time. Long story short, I flew down to the Mountain View office and with a little hacking and much teamwork we got all the units assembled, flashed, configured and shipped out in time.&lt;/p&gt;
&lt;p&gt;I had the novel experience of being on-call for tech support as we kicked off the study. A couple of the units arrived with the speaker jack loose or damaged somehow, but we got most of them figured out (one unit sadly never really recovered and was a source of some frustration for that pair.) On the plus side it meant I got to talk to the participants - with whom we otherwise had no direct contact. These conversations alone convinced me that we were onto something.&lt;/p&gt;
&lt;p&gt;As the data came in, patterns started to emerge. We saw lots (and lots) of short calls (less than 45s), which we deduced must have been missed calls - but also a steady stream of emoji messages. Clearly, some participants were more engaged than others - we expected that, and it was the reason we stretched to build 20 prototypes. I was intrigued to see apparent “emoji conversations”, as well as isolated messages that would get “answered” some hours later.&lt;/p&gt;
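&lt;p&gt;The missed-call heuristic amounts to a simple threshold split - the 45 second cutoff is the one deduced above, while the function name is illustrative:&lt;/p&gt;

```python
# Calls shorter than the threshold are assumed to be rings that were
# never answered; everything else counts as a connected call.
MISSED_THRESHOLD_S = 45

def classify_calls(durations_s):
    """Split call durations (in seconds) into (missed, connected)."""
    missed, connected = [], []
    for duration in durations_s:
        (connected if duration >= MISSED_THRESHOLD_S else missed).append(duration)
    return missed, connected
```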
&lt;p&gt;The feedback from the participant surveys confirmed a few hunches: more choice of emojis! Missed calls suck, and the device wasn’t capable of making itself heard from another room, which compounded this problem. Video/“Facetime” calls, please! But underneath this, a consistent message: it was &lt;em&gt;fun&lt;/em&gt; to have another way to communicate directly with grandchildren/grandparents. It won’t change the world, but yes, it could work.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-10-19T20:52:46.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/09/forty-eight-hours-of-hacking-in-chattanooga.html</id>
        <title>48 Hours of Hacking in Chattanooga</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/09/forty-eight-hours-of-hacking-in-chattanooga.html"/>
        <content type="html">&lt;p&gt;I spent this past weekend in Chattanooga, Tennesse, in a whirlwind of planning, prototyping and generally collaborating on a pitch for the 48 Hour Launch event. I was invited to attend as one of several mentors from Mozilla, to help develop product and company ideas from the local community into something clear and compelling in just two days. For more info on the event, go read the &lt;a href=&#34;https://blog.mozilla.org/gigabit/mozilla-brings-the-iot-to-chattanooga-tn-in-a-48-hour-launch/&#34;&gt;wrap-up on Mozilla’s blog&lt;/a&gt;. I’m just going to detail some of my personal highlights.&lt;/p&gt;
&lt;p&gt;About &lt;a href=&#34;http://www.timesfreepress.com/news/business/aroundregion/story/2016/sep/01/seven-finalists-picked-compete-48hour-launch-hackathon/384538/&#34;&gt;seven teams&lt;/a&gt; were at the kick-off Friday night, each giving an introduction to their concept and what they wanted to achieve over the weekend. After drifting around a bit and listening in to the conversations that emerged afterwards, I gravitated towards the “Inclusive Makerspace” project. Cristol Kapp is a librarian at a local elementary school, and one of the first in the region to set up a functioning makerspace in her library for the kids. But, there’s a problem: some of the students have conditions and disabilities which prevent them getting involved in the makerspace activities. The need for a steady hand, or the fine motor skills to manipulate tools, are just two of the barriers that effectively exclude some of these kids from what should be fun, collaborative activities in the space. Cristol clearly felt this deeply, and was accompanied by a colleague - a special education teacher - who was also committed to fixing this. That stood out for me: a clear need expressed again and again at the school, and no doubt echoed at home. And people with the opportunity and drive to find, test, improve and promote a solution. (On the Sunday, this was reinforced again when the school principal visited the hackathon to support Cristol, listen to her plans and give feedback.)&lt;/p&gt;
&lt;p&gt;I think I’ll keep this short and devote a separate post to the Inclusive I/O project itself (a renaming and branding that emerged from the weekend) and confine myself to the event here. Friday evening was spent narrowing down both the problem and set of solutions into something properly joined up and actionable. With a million ideas buzzing around all the participants’ heads, we needed to focus on telling a story with well defined characters, with a clearly defined problem and a solution that demonstrably addresses that problem. Of course, reality is never so simple, but for the purposes of this pitch - and to get this project into gear and actually moving down the road - we had to temporarily remove variables. We wound up Friday evening with a plan - sketched out on the back of a cupcake box (which I didn’t have the presence of mind to photograph) - and a consensus to make it so first thing in the morning.&lt;/p&gt;
&lt;p&gt;I was pretty blown away by the level of energy, the collective good will and breadth of expertise that descended on the venue over the weekend. Although each team was ultimately competing for prizes, there was no hesitation in sharing tips or resources, getting each other unstuck or even devoting large chunks of time to contribute skills where they were needed. Over the Saturday and Sunday we divided and conquered - with &lt;a href=&#34;https://thehackermom.wordpress.com/2016/09/13/the-48-hour-launch-can-you-really-launch-a-company-in-48-hours/&#34;&gt;Tamara&lt;/a&gt; and I hacking up a prototype, with the help of some great talent from the community. Meanwhile Cristol was moving efficiently through business planning, with cost and market estimates, branding and strategy, all the while tightening up the story we had started that first evening. By Sunday she had a great slide deck and a clear, concise telling of that story, practiced again and again.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/blog/2016/09/forty-eight-hours-of-hacking-in-chattanooga/inclusive-io-chattanooga-team.jpg&#34; class=&#34;&#34; title=&#34;The Inclusive I&amp;#x2F;O team&#34;&gt;&lt;/p&gt;

&lt;p&gt;It worked. Inclusive I/O was well received by the panel and awarded 2nd place. This is huge - not only for the cash and other resources it grants - but for the validation of the idea and its originator. And for the problem Cristol saw and its real need of a solution. Thanks to all whose names I either didn’t list, forgot or never learnt who helped out along the way. I hope to stay involved in this project in some capacity; watch this space.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-09-16T00:12:18.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/05/making-a-research-prototype.html</id>
        <title>Making a Research Prototype</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/05/making-a-research-prototype.html"/>
        <content type="html">&lt;p&gt;&lt;a data-flickr-embed=&#34;true&#34;  href=&#34;https://www.flickr.com/photos/89425047@N00/28116305815/in/album-72157664929839371/&#34; title=&#34;Haiku UR#2 prototypes w. lanyards&#34;&gt;&lt;img src=&#34;https://c8.staticflickr.com/8/7312/28116305815_07748a6f82_z.jpg&#34; width=&#34;620&#34; height=&#34;640&#34; alt=&#34;Haiku UR#2 prototypes w. lanyards&#34;&gt;&lt;/a&gt;&lt;script async src=&#34;//embedr.flickr.com/assets/client-code.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;http://sam-i-am.com/blog/2016/04/smarthome-user-research.html&#34;&gt;last round of user research&lt;/a&gt; for my project with Mozilla’s Connected Devices team threw up a ton of useful ideas and insights. We shuffled them around on a gazillion post-its and narrowed in eventually on a theme of communication - specifically simple, non-intrusive/non-interrupting ambient messages. We saw a recurring need for ways to say “I’m still here”, “I’m OK”, “I’m thinking of you”. I was reminded of the &lt;a href=&#34;http://goodnightlamp.com/&#34;&gt;Goodnight Lamp&lt;/a&gt; - one of the first really nice IoT products I remember seeing.&lt;/p&gt;
&lt;p&gt;We wanted to validate our thoughts, and dig a little deeper into this area, so we came up with another study, this time using a simple functional prototype. Not so much a product prototype, more a prop and a way to move away from the abstract and focus in on actual reactions when interacting with a thing. In this post I’ll go into some detail on what we built, how we went about it and what we learnt.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;h3 id=&#34;Defining-the-Prototype&#34;&gt;&lt;a href=&#34;#Defining-the-Prototype&#34; class=&#34;headerlink&#34; title=&#34;Defining the Prototype&#34;&gt;&lt;/a&gt;Defining the Prototype&lt;/h3&gt;&lt;p&gt;As the team discussed building some simple connected device that lets you click to blink some LEDs on a corresponding remote device, I remembered &lt;a href=&#34;https://www.particle.io/button&#34;&gt;Particle’s Internet Button&lt;/a&gt;. This was a lot of what we wanted, maybe even enough to conduct the study with as-is? Turns out not quite. There are options for hooking up a power source, but nothing in the package for maintaining a battery. I toyed briefly with the idea of just taping a USB power pack to it and calling it good. But it would have been unwieldy and bulky, and still left a need for some kind of enclosure to isolate one of the 4 buttons we’d hooked up, and keep the bare PCB out of harm’s way. It did help us validate some ideas (a prototype prototype?) and the device we ended up building remained hardware-compatible, so we could work on the code and interactions while the prototype itself was in-progress. We ended up combining a &lt;a href=&#34;https://store.particle.io/collections/photon&#34;&gt;Particle Photon board&lt;/a&gt;, with a &lt;a href=&#34;https://www.adafruit.com/products/259&#34;&gt;LiPo charger module from Adafruit&lt;/a&gt; and some “neopixel”-style addressable LED strip in a 3d printed enclosure, powered by a 2000mAh LiPo battery.&lt;/p&gt;
&lt;p&gt;The study we proposed was to give 5 pairs of 12-15yr old girls a device each, configured so that a click on one resulted in an LED animation on the other. They would live with these for 5 days - with minimal instruction from us - and we would interview before and after, and gather telemetry data for the study period to see how they ended up using the devices. The devices needed a WiFi connection to function, and providing a low-friction way to configure that was a top priority. We used the soft access-point method to get SSID and password onto the device. WiFi is also pretty power-hungry, and it was clear early on that the battery would need charging each night for the device to last the duration of the study.&lt;/p&gt;
&lt;h3 id=&#34;Reflections-on-the-Process&#34;&gt;&lt;a href=&#34;#Reflections-on-the-Process&#34; class=&#34;headerlink&#34; title=&#34;Reflections on the Process&#34;&gt;&lt;/a&gt;Reflections on the Process&lt;/h3&gt;&lt;p&gt;So, how did it go? I won’t go into details here, but designing the enclosure in &lt;a href=&#34;http://openscad.org/&#34;&gt;OpenSCAD&lt;/a&gt; and printing on the Lulzbot Mini purchased for this purpose was a very manageable learning curve and ultimately left me feeling confident in churning out the 14+ devices (10 for the study participants, and some for the team) in time for the study. After we were done, I did order a print of the enclosure model from &lt;a href=&#34;http://shapeways.com/&#34;&gt;Shapeways&lt;/a&gt; to try out that process, and the result was great, but at 1 week from submitting the order to receiving the thing, pretty much a non-starter for the design/iteration phase.&lt;/p&gt;
&lt;p&gt;&lt;a data-flickr-embed=&#34;true&#34; href=&#34;https://www.flickr.com/photos/89425047@N00/28013753892/in/album-72157664929839371/&#34; title=&#34;Haiku UR#2 prototype guts&#34;&gt;&lt;img src=&#34;https://c5.staticflickr.com/8/7291/28013753892_c51d0f697c_z.jpg&#34; width=&#34;640&#34; height=&#34;360&#34; alt=&#34;Haiku UR#2 prototype guts&#34;&gt;&lt;/a&gt;&lt;script async src=&#34;//embedr.flickr.com/assets/client-code.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;For the electronics, I didn’t end up with a lot of time to really explore options and the result was perhaps a bit clunkier than it needed to be. The footprint of the battery charging module was needlessly big, but the power requirements for an always-on WiFi connection were really the driver for the size and shape of the device; we needed at least 2000mAh, and there’s no getting around the size of battery needed for that capacity. For this project I elected not to try and produce a custom PCB for mounting the button and other connections - it worked out pretty cleanly on a small piece of stripboard. After some shuffling and soldering and de-soldering, I got it packed down into as small an envelope as I could manage, and designed the enclosure around that.&lt;/p&gt;
&lt;h3 id=&#34;Documentation&#34;&gt;&lt;a href=&#34;#Documentation&#34; class=&#34;headerlink&#34; title=&#34;Documentation&#34;&gt;&lt;/a&gt;Documentation&lt;/h3&gt;&lt;p&gt;One of the secondary goals of the study was to have a go at documenting and sharing the device design - so that anyone could build their own and adapt it to their needs. We wanted to make some headway on understanding what it meant to “deliver” an open hardware project. We ended up with a high-level summary in the &lt;a href=&#34;https://github.com/mozilla/project_haiku.iot/blob/master/Prototype/README.md&#34;&gt;README&lt;/a&gt;, and more detailed instructions in a document - detailing each step in the device building, software flashing and configuration process. I produced a 3-part series of &lt;a href=&#34;https://www.youtube.com/watch?v=C2MHg81-BwQ&#34;&gt;step-by-step build videos&lt;/a&gt;, captured while I was making device #8 or so. Going the last mile to produce this documentation was - as always - instructive in itself. Some hand-drawn wiring diagrams and my videos were used to build a device and produce the detailed instructions. Then another team member used our docs to build out the prototype on a breadboard - finding a couple of bugs along the way but ultimately with success.&lt;/p&gt;
&lt;h3 id=&#34;A-Few-Takeaways&#34;&gt;&lt;a href=&#34;#A-Few-Takeaways&#34; class=&#34;headerlink&#34; title=&#34;A Few Takeaways&#34;&gt;&lt;/a&gt;A Few Takeaways&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The stripboard was a stumbling block - I wasn’t able to find a pre-made stripboard/veroboard small enough that would meet requirements, and that stuff can be hard to cut neatly (I used a modelmaker’s miniature table saw.) A PCB layout might have been an improvement, but that too is a pretty high bar unless you have done a lot of this kind of thing before.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Having a 3D Printer next to me on my desk was pretty awesome for this kind of project. The end result was good enough quality-wise, and the turnaround time to try the result of a change to the model made it possible to go through lots of iterations and make lots of mistakes along the way.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Even though OpenSCAD is kind of doing 3D design the hard way - you must write code to describe the 3D volumes - it results in version-control-friendly files and was a much more familiar process for someone already comfortable with code. It was easy to get repeatable precision, and to use and adapt others’ libraries of shapes. Having said that, getting any kind of organic, flowing design would be very difficult with this method, so it may be the wrong tool for a final design.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While relying on WiFi for this prototype was just a matter of convenience and expediency, it’s still clear that for an always-on, portable/wearable device, WiFi is probably a non-starter. We could have experimented with sleeping the device and only waking up after &lt;em&gt;n&lt;/em&gt; minutes to update, but for this study we really wanted a real-time, always-connected experience. If we continue down this path, we’ll look at alternatives like Bluetooth LE to allow us to both shrink the device and require less frequent charging.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;Final-Thoughts-and-What’s-Next&#34;&gt;&lt;a href=&#34;#Final-Thoughts-and-What’s-Next&#34; class=&#34;headerlink&#34; title=&#34;Final Thoughts and What’s Next&#34;&gt;&lt;/a&gt;Final Thoughts and What’s Next&lt;/h3&gt;&lt;p&gt;In wanting to understand a new (to us at least) user experience, it feels really important to be able to prototype each aspect of that experience ourselves. Producing a physical thing with real functionality and putting that into the hands of kids gives us a deeply informed, joined-up perspective that I think would be hard to get otherwise. It comes at a cost though - this study took weeks to complete from when we first defined what we wanted to when we had the data from it. That is too slow a cadence to undertake a similar process for each idea we want to validate, but I think on balance was the right thing to do at this time, for this project. It gave us some valuable hands-on experience, an opportunity to dry-run data collection and troubleshooting of devices in the field, to run headlong into important IoT and wearable challenges and fight our way through them.&lt;/p&gt;
&lt;p&gt;For next steps, we’ve not yet defined our next study - but there will be one. In the meantime I’m starting to play with BLE and what will be involved in making a functional prototype using it. We’ll need to think hard about which ideas we can explore and validate with low-fi techniques like paper prototyping, and which need the effort and fidelity of another functional prototype. For the delivery and distribution end, I’ve since made contact with the &lt;a href=&#34;http://wevolver.com/&#34;&gt;Wevolver&lt;/a&gt; folks and others involved with open hardware and we might try some of the conventions they have established for collaboration and delivery of open hardware projects.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="making" />
        <updated>2016-05-20T08:20:36.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/04/iot-useless-box.html</id>
        <title>IoT Useless Box</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/04/iot-useless-box.html"/>
        <content type="html">&lt;p&gt;A couple of weeks ago I wanted to look into Amazon’s IoT service and conceived a slightly less-dry “hello world” project I could build as a vehicle for this research. You have probably seen the &lt;a href=&#34;https://www.youtube.com/results?search_query=useless+box&#34;&gt;“Useless Box”&lt;/a&gt; concept before - its simply a box with a switch on it. When you flip the switch, a flap opens in the box and some kind of finger comes out and switches it back. I wanted to build that, but IoT-enable it, using the MQTT broker to listen for the state change in the switch, and notify the servo-listener to kick into action and put the world back to rights.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;

&lt;p&gt;Right now, I’m building this on a Raspberry Pi. I’m using the same Jessie/Raspbian image I used for the &lt;a href=&#34;/blog/2016/02/smart-mirror-build-log-intro.html&#34;&gt;Smart Mirror&lt;/a&gt; and I was able to quickly get a proof of concept working with a little python code to rotate the servo in response to an MQTT message. Ideally I would have two physically separate devices managing the switch and the servo respectively. But in the interest of both money and KISS (irony notwithstanding) I’m just using two processes: a switch script to respond to switch state changes and (debounce and) publish to the broker, and a servo script to listen for “on-ness” and rotate the servo to flip the switch back off and return the arm (and close the box lid.)&lt;/p&gt;
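&lt;p&gt;The logic of those two scripts can be sketched like this - the topic name, debounce threshold and servo angles are all assumptions, and the actual MQTT and servo wiring is left out so the state logic stands on its own:&lt;/p&gt;

```python
# Sketch of the useless-box logic. Topic name and angles are made up;
# in the real scripts these would feed the MQTT publish/subscribe calls
# and the Pi's PWM output respectively.
SWITCH_TOPIC = "useless/switch"

def debounce(samples, threshold=3):
    """Treat the switch as flipped only after `threshold` consecutive
    high samples, filtering out contact bounce."""
    run = 0
    for sample in samples:
        run = run + 1 if sample else 0
        if run >= threshold:
            return True
    return False

def servo_plan(payload):
    """Given a broker message, return the servo angle sequence:
    extend the arm, flip the switch back, retract, close the lid."""
    if payload == "on":
        return [0, 90, 180, 90, 0]
    return []
```

&lt;p&gt;The switch process samples the GPIO pin, debounces, and publishes “on”; the servo process subscribes and plays back the angle plan.&lt;/p&gt;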
&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/4F5_a7Dry58&#34; frameborder=&#34;0&#34; allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;So far so good, but &lt;em&gt;something&lt;/em&gt; wasn’t working when I tried to hook this up to AWS. My published messages appeared not to reach the broker. As AWS offers a more industrial-strength IoT service, I decided rather than troubleshoot the many potential points of failure (keys, certificates, policies, etc.) I would switch to Adafruit’s similar but much simpler offering.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://flic.kr/p/FXRgiX&#34;&gt;&lt;img src=&#34;https://farm2.staticflickr.com/1681/26229933221_c6dbf58c92_b.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That’s the first bit working. I can flip the switch and the resulting message shows up in the widget bound to that feed. The second bit though - where my subscriber receives the message and triggers the servo - isn’t working yet. I’ve tried using the mosquitto_sub client and so far, although adafruit.io is clearly getting the message, it does not seem to be relaying it. I’m not sure if I’m confusing adafruit.io somehow by re-using the same feed id, or what. I guess I’ll fiddle with it.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <category term="making" />
        <updated>2016-04-07T20:19:00.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/04/smarthome-user-research.html</id>
        <title>SmartHome User Research</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/04/smarthome-user-research.html"/>
        <content type="html">&lt;p&gt;For the &lt;a href=&#34;https://wiki.mozilla.org/Connected_Devices/Projects/Smart_Home&#34;&gt;SmartHome project&lt;/a&gt;, we’ve taken a step back to better understand the problem space and come at solutions based on research and evidence. As a team we spent last week interviewing people in their homes, with questions on a theme of freedom and independence. We choose early teens (12-15yrs) and post-retirement folks as a demographic that might have useful insights into this topic.&lt;/p&gt;
&lt;p&gt;As a software engineer this has been an interesting process. There’s a temptation to jump into any new project with both feet and start hacking on code; starting with simple successes and working iteratively to add features and meet requirements. And we’ve done some of that in order to get familiar with what we expect will be our prototyping platforms - the raspberry pi and ESP8266. But the more we looked at the problem the more it became clear that we weren’t yet sure what technical questions the project would ask, let alone how to solve them. In the meantime, our team was trying to figure out a better vision for the smart home that would align with Mozilla’s values and potential solutions in a broad and confusing product space. None of us were in our comfort zone, so we decided to put roles aside, roll up our sleeves and muck in. We’ve posted craigslist ads, we’ve had Skype calls, house visits, coffee shop rendezvous; interviewed, transcribed and now begun to process input from 15 different people.&lt;/p&gt;
&lt;p&gt;It turns out that a curious mind, a knack for spotting patterns, analysing outcomes for the motivations and circumstances that produced them - these are the stock-in-trade of any software engineer - and they are skills that work just as well in user research and exploratory product definition as they do in software development. And perhaps more important than that, before we are engineers, we are people. We have families, jobs, aspirations, frustrations and concerns. We were young and hope to grow old. Talking to people this last week has been a great reminder that it is people that solve problems, not code.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-04-07T16:18:10.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/03/smart-mirror-starting-up.html</id>
        <title>Smart Mirror: Starting Up</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/03/smart-mirror-starting-up.html"/>
        <content type="html">&lt;p&gt;With no keyboard or pointer inputs, ensuring the Smart Mirror can be restarted and booted up entirely automatically was high on my priority list. Once installed, I can’t &lt;code&gt;startx&lt;/code&gt; or click on any icons; it needs to bring up all the backend services and the dashboard to leave it in a working state without any user intervention. That lead me down a merry path and was (for me) the trickiest part of this project.&lt;/p&gt;
&lt;p&gt;Here are the moving parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The kernel and OS itself, with networking and other key systems&lt;/li&gt;
&lt;li&gt;The display and window manager - the subsystems that allow me to put my dashboard up on the screen&lt;/li&gt;
&lt;li&gt;The mosquitto message broker&lt;/li&gt;
&lt;li&gt;The gpio listeners&lt;/li&gt;
&lt;li&gt;The web server&lt;/li&gt;
&lt;li&gt;The browser, which should load up my dashboard URL&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I went through lots of helpful posts and projects on creating linux kiosks to figure out potential approaches. While the mirror isn’t really a kiosk - a kiosk usually has keyboard/pointer/touch user input - it’s a reasonable match-up. After a few false starts trying Firefox/Iceweasel and Chromium kiosk options, I settled on the approach outlined in this &lt;a href=&#34;https://github.com/mivok/squirrelpouch/wiki/dashing-pi/&#34;&gt;Dashing-Pi page&lt;/a&gt;. This eschews the LXDE desktop environment entirely and uses nodm and the matchbox window manager to boot into the browser with the minimum of unnecessary fluff in between.&lt;/p&gt;
&lt;p&gt;Orchestrating startup is a bit fiddly even so. First, &lt;a href=&#34;https://wiki.archlinux.org/index.php/Nodm&#34;&gt;nodm&lt;/a&gt; is configured to start up with/as the ‘pi’ user. The rest of the graphics/display related startup is then in a script copied to &lt;code&gt;/home/pi/.xsession&lt;/code&gt;, which starts up the &lt;a href=&#34;https://en.wikipedia.org/wiki/Matchbox_%28window_manager%29&#34;&gt;matchbox window manager&lt;/a&gt;, and the &lt;a href=&#34;http://uzbl.org/&#34;&gt;Uzbl browser&lt;/a&gt; to load the dashboard. For the backend pieces, Raspbian uses the init.d system, so we install scripts in &lt;code&gt;/etc/init.d/&lt;/code&gt; to start up mosquitto, pm2 (which manages the node.js server(s)) and scripts to relay GPIO events as MQTT messages for the rest of the system.&lt;/p&gt;
&lt;p&gt;That done, I can plug the thing in and in just a minute or so it brings up the dashboard on screen and responds to sensor events. The Uzbl browser is a wrapper around WebKit. It supports commands via a socket, which means that once up, I can ssh to the rPi and remotely refresh the page, navigate to other URLs and so on, which has proved valuable during development as I have none of the traditional inputs (e.g. ctrl+r on the keyboard) to accomplish this otherwise.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-03-04T19:25:41.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/02/smart-mirror-talking-to-hardware.html</id>
        <title>Smart Mirror: Data channels</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/02/smart-mirror-talking-to-hardware.html"/>
        <content type="html">&lt;p&gt;One of the goals of this project is to get my hands dirty with integrating data and events coming from hardware (sensors attached to the raspberry pi or functions of the pi itself) and content in the browser. The raspberry pi ecosystem leans heavily on python, so while I was already using node.js for the web server, I didn’t want to get tied into a system which limited my implementation choices for publishing or consuming data. I wanted lightweight, language-agnostic messaging.&lt;/p&gt;
&lt;p&gt;My first thought was to simply use the filesystem: writing to some files and watching for changes. That seemed fraught with potential problems and reinventing too much of the wheel. So I went shopping for messaging solutions.&lt;/p&gt;
&lt;p&gt;After looking into ZeroMQ and ZeroRPC, I settled on Mosquitto and MQTT. The &lt;a href=&#34;http://mosquitto.org/&#34;&gt;Mosquitto project&lt;/a&gt; is a message broker (think central hub for dispatching messages to subscribers) that implements the MQTT protocol. As well as the broker (available as a raspbian-ready package) it also has command-line clients for publishing and subscribing which proved useful. There are MQTT implementations for both python and node.js (as well as a bunch of other languages.) The pub-sub paradigm turns out to be a great fit here. My “hello world” was to use the RPi.GPIO python module to watch for rising-edge button events on the GPIO pin I had jumpered a momentary button to, and publish a message over MQTT when the button is pressed. Logged into the raspberry pi, I run:&lt;/p&gt;
&lt;figure class=&#34;highlight plain&#34;&gt;&lt;table&gt;&lt;tr&gt;&lt;td class=&#34;gutter&#34;&gt;&lt;pre&gt;&lt;span class=&#34;line&#34;&gt;1&lt;/span&gt;&lt;br&gt;&lt;/pre&gt;&lt;/td&gt;&lt;td class=&#34;code&#34;&gt;&lt;pre&gt;&lt;span class=&#34;line&#34;&gt;$ mosquitto_sub -t sensors&amp;#x2F;buttonup&lt;/span&gt;&lt;br&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/figure&gt;

&lt;p&gt;.. to see those messages from the broker. In node.js land, the &lt;a href=&#34;https://www.npmjs.com/package/mqtt&#34;&gt;mqtt module&lt;/a&gt; provides the same functionality. We connect to the host/port the broker is on, subscribe to some topics and register callbacks for when messages arrive.&lt;/p&gt;
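&lt;p&gt;For illustration, here’s a minimal sketch of how MQTT topic filters match concrete topics - “+” matches exactly one level, “#” matches everything below. Real clients implement this per the MQTT spec; the function here is made up for the sake of the example:&lt;/p&gt;

```javascript
// Illustrative MQTT topic-filter matching, following the MQTT wildcard
// rules: "+" matches exactly one topic level, "#" matches all remaining
// levels. A sketch, not the mqtt module's own implementation.
function topicMatches(filter, topic) {
  const f = filter.split('/');
  const t = topic.split('/');
  for (let i = 0; i !== f.length; i++) {
    if (f[i] === '#') return true;      // multi-level wildcard: match the rest
    if (i >= t.length) return false;    // topic has fewer levels than the filter
    if (f[i] !== '+') {                 // "+" matches any single level
      if (f[i] !== t[i]) return false;  // literal level must match exactly
    }
  }
  return f.length === t.length;         // no trailing topic levels left over
}

// A subscription to sensors/# would therefore receive sensors/buttonup.
```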
&lt;p&gt;Finally, to bring the browser into the equation, I ended up making an adapter to subscribe to MQTT topics, and use socket.io to relay to the dashboard page. There is some (new) websocket support in mosquitto, and probably other more direct ways to accomplish this, but this works for now. As a proof of concept, I handle ‘gpio/button’ events from socket.io, and dispatch a ‘hardbuttonup’ event on the window object. A listener for this event toggles a class to flash the screen blue. This opens up a lot of opportunities as I have flexibility at every stage: the button is a stand-in for any sensor or input device that can toggle a GPIO pin high/low. The MQTT message produced can be caught by any program running on this device, or potentially by anything else on the network. Turning this message into a DOM event enables a seamless tie-in so you can use existing frameworks to respond to the event.&lt;/p&gt;
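&lt;p&gt;At its core, the relay step is just a name translation. A sketch of that piece - only the ‘gpio/button’ to ‘hardbuttonup’ mapping comes from the actual setup, the function and table shape are illustrative:&lt;/p&gt;

```javascript
// Illustrative relay step: translate an incoming socket.io event name
// into the DOM event the dashboard listens for. In the browser, dispatch
// would wrap this in a CustomEvent fired on window.
const eventMap = {
  'gpio/button': 'hardbuttonup'
};

function relay(ioEvent, payload, dispatch) {
  const name = eventMap[ioEvent];
  if (!name) return false;                    // unknown event: drop it
  dispatch({ type: name, detail: payload });  // hand off to the page
  return true;
}
```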
&lt;iframe src=&#34;https://player.vimeo.com/video/156763300&#34; width=&#34;500&#34; height=&#34;889&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;It’s a bit subtle, but in the video I’m clicking the button on the breadboard, and the display is flashing blue.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-02-25T18:51:42.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/02/smart-mirror-getting-started-on-the-front-end.html</id>
        <title>Smart Mirror: Getting Started on the Front-End</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/02/smart-mirror-getting-started-on-the-front-end.html"/>
        <content type="html">&lt;p&gt;There are a few key requirements and moving parts to this project that I wanted to explore early on. I began at the end - the UI I wanted to show on the mirror. I initially tried out &lt;a href=&#34;https://bitbucket.org/atlassian/atlasboard&#34;&gt;Atlasboard&lt;/a&gt; but quickly found it did more than I needed in some areas, and fought me to do what I wanted in others. To KISS, a ground-up implementation was going to be easier in the long run.&lt;/p&gt;
&lt;p&gt;To get started I created a static page with a simple dashboard/widget system. It had a fixed pre-defined number of slots - a 3x3 grid of rows and columns - making layout and addressing really easy with CSS and getElementById calls. This gave me a straightforward way to get stuff on the screen. The widgets shared a rudimentary base class implementing an init/update/render (and, optionally, poll to update and re-render) lifecycle.&lt;/p&gt;
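&lt;p&gt;That lifecycle might look roughly like this - the names and shapes are illustrative, the real base class lives in the scry-pi repo:&lt;/p&gt;

```javascript
// Illustrative widget base class: init sets up optional polling, update
// fetches fresh data, render paints it into the widget's grid slot.
class Widget {
  constructor(id, pollInterval) {
    this.id = id;                       // maps to a grid-slot element id
    this.pollInterval = pollInterval;   // ms between polls; omit to disable
    this.data = null;
  }
  init() {
    if (this.pollInterval) {
      setInterval(() => this.update(), this.pollInterval);
    }
    this.update();
  }
  update() {
    this.data = this.getData();
    this.render();
  }
  getData() { return null; }            // subclasses supply their data
  render() {
    // in the browser: document.getElementById(this.id).textContent = this.data;
  }
}

// A clock widget only needs to override getData():
class ClockWidget extends Widget {
  getData() { return new Date().toLocaleTimeString(); }
}
```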
&lt;p&gt;On the server-side I made a &lt;a href=&#34;https://github.com/sfoster/scry-pi/tree/master/socket-board&#34;&gt;“socket-board” express app&lt;/a&gt; to do reverse proxying and keep things like API keys out of the client-side code. The name alludes to my eventual goal to hook it up to websockets for sending data between the client and the data and events on the device. All my feed/api requests from the browser go through the express app, and it forwards their responses. To that end I created a basic &lt;a href=&#34;https://github.com/sfoster/scry-pi/blob/master/socket-board/lib/config.js&#34;&gt;config library&lt;/a&gt; and a &lt;a href=&#34;https://github.com/sfoster/scry-pi/blob/master/socket-board/lib/forward.js&#34;&gt;request-forwarding middleware&lt;/a&gt;.&lt;/p&gt;
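&lt;p&gt;The forwarding idea can be sketched as a plain (req, res) handler in the express style: the browser asks for a local path, and the server resolves it to the upstream URL and attaches the API key, so secrets stay server-side. The route table, names and the elided proxying are all made up for illustration - this isn’t the repo’s actual code:&lt;/p&gt;

```javascript
// Illustrative request forwarder. A real middleware would stream the
// upstream response back to the client; here we just resolve the target
// so the shape is clear. The route table is hypothetical.
const upstreams = {
  weather: { url: 'https://api.example.com/forecast', key: 'SECRET-KEY' }
};

function forward(req, res) {
  const name = req.path.replace('/api/', '');
  const target = upstreams[name];
  if (!target) {
    res.statusCode = 404;
    return res.end('unknown feed');
  }
  res.statusCode = 200;
  // real version: fetch the upstream with target.key attached, pipe to res
  res.end(target.url);
}
```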
&lt;p&gt;At this point, the dashboard runs anywhere with node.js and a modern browser.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://flic.kr/p/DwFF9v&#34;&gt;&lt;img alt=&#34;scry-pi dashboard screenshot&#34; src=&#34;https://farm2.staticflickr.com/1691/24632436353_017615aa4f_z.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Between the initial implementation and this screenshot, I’ve added a widget for the local IP address, which is particularly useful during development.&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-02-25T18:11:00.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2016/02/smart-mirror-build-log-intro.html</id>
        <title>Smart Mirror Build Log: Intro</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2016/02/smart-mirror-build-log-intro.html"/>
        <content type="html">&lt;p&gt;With the winding down of the Firefox OS / smartphone project, and a &lt;a href=&#34;https://blog.mozilla.org/blog/2015/12/09/firefox-os-pivot-to-connected-devices/&#34;&gt;new interest in IoT and connected devices&lt;/a&gt;, I thought it would be fun and instructive to go out and build something connected. I settled on the “smart mirror” concept as a nifty thing that would be actually useful at home, and fun to make. I first saw this via &lt;a href=&#34;http://hackaday.com/2014/05/01/mirror-mirror-on-the-wall/&#34;&gt;hackaday&lt;/a&gt; featuring a &lt;a href=&#34;http://michaelteeuw.nl/post/84026273526/and-there-it-is-the-end-result-of-the-magic&#34;&gt;Magic Mirror&lt;/a&gt; project. Its a doable project and should drop me in the middle of some more or less unfamiliar territory: getting hardware talking to software, lots of linux hackery, sensor inputs and gpio, and orchestrating moving parts up and down the stack. It should also be a good test platform to build on, to explore different technologies and interactions.&lt;/p&gt;
&lt;p&gt;So, I have in mind a dashboard of sorts, displayed on a monitor/tv behind a two-way mirror. It should collect together some local and remote data of a sort that might be useful to know before you head out into the world: date/time, weather, appointments, actual (vs forecast) conditions and so on. As there won’t be any keyboard/mouse/touch inputs, any direct user interactions (not sure yet what they would be, if anything) would need to be hands-off - maybe voice input? some kind of presence or simply proximity detection to activate it?&lt;/p&gt;
&lt;p&gt;I’m keeping notes as I go about this and I’ll post a series of updates with progress, dead-ends and observations. Some decisions I’ve already made - I’ll start out running this on a Raspberry Pi, the dashboard will be an HTML page in a fullscreen browser - others I’ll figure out along the way. In the past I’ve found that there is no substitute for actually making something to understand what the nature of the challenges will be, and it’s usually not what you expect. So, I expect to fall into all the traps, trip on all the hurdles and generally fumble my way through. Sounds like fun :)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2016/02/smart-mirror-talking-to-hardware.html&#34;&gt;Smart Mirror: Data channels&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2016/02/smart-mirror-getting-started-on-the-front-end.html&#34;&gt;Getting Started on the Front-End&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sam-i-am.com/blog/2016/03/smart-mirror-starting-up.html&#34;&gt;Starting Up&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2016-02-18T01:46:40.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2014/03/alas-metro.html</id>
        <title>Alas Metro</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2014/03/alas-metro.html"/>
        <content type="html">&lt;p&gt;I found this article stub in my drafts folder, dated back to 2014. I had apparently intended to write about the cancelling of the Firefox Metro (Firefox for Windows 8) project, but never put pen to paper. 18 months later, I cant think anything better to say than “Alas, Metro.”&lt;/p&gt;
</content>
        <category term="mozilla" />
        <updated>2014-03-17T20:32:12.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2014/01/story-so-far.html</id>
        <title>The Story So Far</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2014/01/story-so-far.html"/>
        <content type="html">&lt;p&gt;I’ve been enjoying responses to &lt;a href=&#34;http://skinnywhitegirl.com/blog/my-nerd-story/1101/&#34;&gt;Crystal Beasley’s post&lt;/a&gt; about how she got into tech/development. It got me thinking about the diversity of backgrounds and paths people take to progamming and tech work; I know it never ceases to amaze me. Here’s my story.&lt;/p&gt;
&lt;a id=&#34;more&#34;&gt;&lt;/a&gt;
&lt;p&gt;I am the 2nd son of academics - my dad was in particle physics, my mother in the sociology of health. As a teenager and contrarian I built my identity around being an artist. My dad was one of the original telecommuters, dialing in from home through a maze of networks to work on projects at CERN in the 80s. In a novel-worthy foreshadowing, he was working at CERN while Tim Berners-Lee was cooking up an idea he called the World Wide Web. We had the ZX8* series Sinclair computers in the house at that time to play with. I was curious but it was more my younger brothers who took to typing in arcane incantations from magazines. I mostly played the games. At high school there were apparently some computers but I don’t think I ever saw them - it was a geek-clique thing that held no interest for me at the time. I studied art at Leeds College of Art (Jacob Kramer) and got my BA in Fine Art (Sculpture) at Wimbledon School of Art. I worked various jobs, including a stint doing fabrication and installs for furniture designer Tom Dixon. It was there that I first learnt an appreciation for processes, jigs, fixtures and making tools to improve accuracy and productivity. It was working 7 hour shifts, 7 days a week (while finishing my BA) as a dishwasher in a busy restaurant in London that set my bar for the meaning of hard work.&lt;/p&gt;
&lt;p&gt;At some point the work ran dry and I found myself unemployed and living in my studio in a railway arch in Brixton, London. I ran into an &lt;a href=&#34;http://uk.linkedin.com/pub/noel-hayden/5/147/262&#34;&gt;old friend&lt;/a&gt; who was starting an internet services company. He asked if I could draw up a history of media for their website to provide some background for his sales pitches. This was early 1996 in the UK and most people hadn’t heard of the web or the internet so it was a tough sell. I did, and became fascinated with hyperlinks and non-linear documents, learnt enough HTML to put it together (there wasn’t much to learn back then) and fell into helping out with some of their client work.&lt;/p&gt;
&lt;p&gt;Capital “P” programming still didn’t interest me much. Shaping content and building what I later came to know as user experiences did. A few years later I moved to the US and took a job doing front-end development with design company &lt;a href=&#34;http://frogdesign.com/&#34;&gt;Frogdesign&lt;/a&gt;. I’d done a little perl and JavaScript but mostly I did HTML and CSS. With the browser wars in full swing, being able to crank out good-looking pages to design specs that worked in contemporary browsers was a specialty much in demand. The challenges of cross-browser development and the divisions of labour meant that’s largely what I did for the next 6 or 7 years.&lt;/p&gt;
&lt;p&gt;When I finally moved on I took the opportunity to re-frame myself as a slightly less niche developer and embraced JavaScript and the emerging world of AJAX. I crossed the fine line between web pages and web applications and wound up involved in the &lt;a href=&#34;http://dojotoolkit.org/&#34;&gt;Dojo Toolkit project&lt;/a&gt;. It’s that work which probably ultimately walked me in the door at Mozilla as a web engineer, where I now work on Firefox Touch for Windows 8. Dishwasher, morning cleaner, welder/fabricator did not appear on my résumé. If pressed, today I will call myself a software developer. I have other names if you are interested :)&lt;/p&gt;
</content>
        <category term="dev" />
        <category term="mozilla" />
        <updated>2014-01-10T16:54:00.000Z</updated>
    </entry>
    <entry>
        <id>https://www.sam-i-am.com/blog/2013/10/thoughts-on-mozilla-summit-2013.html</id>
        <title>Thoughts from Mozilla Summit 2013: It’s about the Goal</title>
        <link rel="alternate" href="https://www.sam-i-am.com/blog/2013/10/thoughts-on-mozilla-summit-2013.html"/>
        <content type="html">&lt;p&gt;This past weekend I was in Toronto for the Mozilla summit. It was one of three venues where mozillians - staff and volunteers - gathered to talk about the Mozilla project, what we’re doing, where we are going and most importantly, to meet members across the breadth of the community.&lt;/p&gt;
&lt;p&gt;I started at Mozilla almost 2 years ago now, when I joined to work on the now-shelved Pancake project. Now I work on Firefox for Metro. I had at least a dim awareness of all the projects and initiatives discussed over the last few days, but a few things were brought home to me. One is that success - which I’ll define as an open, universally accessible web - is far from assured. In many ways the open web is a solution to a problem lots of people don’t know they have, and as such it is vulnerable to erosion through public apathy as much as commercial and governmental influence. The right to a free and open platform for publishing, sharing, doing business and all the other ways the web has become critical infrastructure for our society - is not written or enshrined in any law. It is a thing we must all work at preserving and building.&lt;/p&gt;
&lt;p&gt;Furthermore, Mozilla is just one of many communities that share these values. While we must compete with software giants, we shouldn’t resemble one. We must guard against sacred cows and not-invented-here syndrome. We must proactively reach out and connect up with groups that share our values and vision, and frame our work as a collective effort, not a “Mozilla project.” This is especially true when many people only understand Mozilla to be a software company that produces a browser; nuances like it being a non-profit, mission-driven open source project can get lost in the mix.&lt;/p&gt;
&lt;p&gt;Seeing the open web vision being driven forward by so many people, across so many countries and in so many ways was invigorating though. I’m proud to be playing a part in this and look forward to the future we can build.&lt;/p&gt;
</content>
        <category term="mozilla" />
        <updated>2013-10-07T15:38:22.000Z</updated>
    </entry>
</feed>
