Essentially Metaphor

The intellectual core of managing the development of software is the editing of metaphor and analogy. If you strip away the business and team management components, the job of defining and iterating on a piece of software for people to use is essentially about creating, selecting and enforcing a bundle of metaphors.

This is because software, by its nature, is abstract. Crash Course’s excellent series on Computer Science even has a little recurring animation they play each time they move up a level of abstraction in the computing stack. The fact that this happens so often as to warrant a special bit shows how core this process of intellectual leaping “up” the chain of metaphors is to computing.

From my perspective, as someone trained in cognitive science and linguistics, Lakoff and Johnson’s observation about the necessity of metaphors in abstract thinking and communication is exemplified in the way computers and humans interface. In this medium, there is almost no physicality (and as voice agents grow, the physical aspect of computing is further reduced into the purely audial). All understanding must map, through metaphors and mental models, to some experience of the real world.

Nowhere is this clearer than in the concept of skeuomorphism. This is the application of the visual look of real objects to their software analogs, on the theory that people will recognize and bring their intuitions from the real world into their new experience. This style fell out of fashion, the reasons for which are a whole other essay about dialectics and technology as fashion. But it is the easiest entry point for those looking to understand the importance of analogies to interfaces.

Another pervasive cognitive metaphor used in software is the “___ is a journey” analogy. A user story is a “journey” for the user. This makes some sense, but like all structuring metaphors it has a lot of implications and entailments that people wielding this metaphor are often not aware of. Users may wander on and off the path. They may never make it to a destination and still be satisfied. Not all who wander are lost.

As new interface paradigms become common, or as your UX designer pushes for some new pattern or another flow change, always watch the metaphors. The more consistent and intuitive the metaphor, the more productive the implications, the better. One thing that sets apart “product people” is not necessarily that they are inventive, but that they can crystallize an emergent abstraction. What does that mean? It means they can see the metaphors and analogies others are unconsciously relying on. They can identify the contradictions and tensions between competing analogies. They can empathize with the cognitive effort understanding a metaphor takes, and feel if it serves users well, or just adds confusion. They can look for commonalities, and take stories, features and technologies into a new level of abstraction.

Any Interface Can Be Free

There’s a meme floating around that some interfaces are, by their nature, less open, and will march us further into an easier, but less “free”, technological future. There, the human is cocooned in a machine experience they don’t understand, and have no control over, but which satisfies their every need.

I would counter that this is a choice. Just as GUIs abstract and limit what users can do, compared to the raw command line and programming, it is true that higher levels of abstraction, and simpler interfaces, do move the user away from total control over their computing experience.

And, it may be true that voice interfaces, and smart user agents, are inherently more difficult for a normal user to understand, and control the inner workings of, than GUIs are.

But I don’t see the evidence that this is inherent in voice technology. I see these technologies, like smart speakers, being built in places, and with motivations, that push them to be closed. It is a logical progression that the user increasingly finds themselves at the mercy of a few companies’ optimization math. But it is a result of the business imperatives, not the technological ones.

Speaking to a computer is no less inherently free, or open, than typing at one. It’s easy for me to imagine recommendation systems, voice agents and other new forms of machine-human interface just as open and free as the web, or as collaboratively sourced as the Wikipedias.

There’s no reason I shouldn’t be able to ask a voice agent to cite its sources, explain its recommendations, or even take correction. There’s no inherent reason I couldn’t really have a free smart speaker, where I could provide my own voice, or specialized vocabulary, or reprogram and remix as I like. There’s no reason ads can’t be separate from information, in the things that are spoken to me by my smart assistant.

When Echo gives me an answer I want to say back “says who?”, and get a source.

When Siri gives me the wrong answer I want to say “that’s wrong. Correct it.”

When Google Now gives me a fact I want to ask “how up to date is that?” and get a last edited date and author.

All this is as technically feasible as anything these systems do now. These systems’ closed nature is capitalism’s limitation, not technology’s. It is only our culture and mindset that see self-determination, openness and freedom as an impossible or evaporating dream in cyberspace.

PM’ing in public

I work at a beautifully weird organization. The Wikimedia Foundation is transparent to the outside world in ways most people, even people who know our work (not the projects, but the Foundation) or support it through donations, don’t realize.

For a PM, the idea of doing our job in public can be a bit of a challenge. Many PMs operate best in 1:1s and by manipulating the flow of information. If things are transparent, not just internally, but, literally, public (as in both free to use/copy and viewable in the internet “commons”), how do you differentiate, prototype or otherwise get the element of surprise, so skillfully deployed by Apple and others? Well, you don’t. You accept that your goal is to be seen and copied: just like our contributors’ photos on Commons, our research, designs and products are to be shared, in the very hope that they will not just be seen and used, but copied and remixed.

But it also means all your tickets, even your team’s RETRO NOTES, are there for anyone to see. Dad wants to see your team’s quarterly update for the executives? A future employer wants to see a product proposal you wrote? It’s possible.

At first it weirded me out. I had expected to be reporting regularly to the public (it is a charity, and a place known for advocating transparency on the internet). I had not expected to have my meeting notes made public by default, or the staff meetings to be streamed live to the world. Now it seems normal. But I had to gain confidence in what I was doing and remember every fucking day that nobody is perfect. My unfinished specs, tasks open and assigned to me way too long, flimsy rationales, sometimes aggressive politicking, they are all there for you to see, but they are not that unique. At least that’s what I tell myself. Plus, what you find is that, like your personality flaws, most people don’t give a shit, or even see the same shortcomings you see in yourself. We’ve had fewer than 50 people ever come comment or get involved with any of the public work my team has produced over 2 years (not counting code contributors). No one has time to read your specs.

And on the other hand, I get to do a job that’s usually among the most “proprietary” in the world, and do it in public. I wish more orgs would let their teams do their process in public. When there is no competitive advantage to secrecy, I think there could be great value to our profession, and the products we create, if more people who make great software could be truly open about their process.

To illustrate the level of transparency I’m talking about, you can see the “dark mode” feature’s development on wiki, Phabricator (our task management system) and public Google Docs. Not because we set out to do so, but because it is the default. From the planning meeting when the team first prioritized the feature, to the task tree, including the initial design thinking, the interactive prototype, the design research presentation, the tech scoping, the implementation and iterations and bugs, the user testing and evaluative research, the QA test plan, the regression testing results, the public mailing list announcement of the beta test, my draft of the App Store text, the release checklist for actually pushing out that version, the clean-up tasks in the following bug fix release, a post-launch review and analysis I did for my peers, the quarterly report where I took a victory lap for the positive response, and the draft and final blog post where my team leads wrote about how we approached all this. And that’s not even all the fingerprints of this thing on the public internet. All this for a simple (but important) feature. All teams have these kinds of digital artifacts; ours just happen to be public.

If you’re new to product management, curious how it works, or just nosy about Wikipedia and Wikimedia, I recommend checking out some of our easier to digest stuff: our monthly metrics meeting (always streamed live on YouTube), our quarterly meetings, decks and notes, or check out the many links I posted about dark mode. It’s far from a perfect, or exemplary, organization, to be transparent about it. But at least its flaws, like my own as a PM, are all there to be seen, discussed and hopefully improved.

Make the easy stuff easy and the hard stuff possible.

Open source software sux at UX. Why?

Lots of reasons, of course, but one is that these interfaces and their creators are often focused on what users could do, and on the full expressiveness of the software’s functionality, not on what users actually need or are capable of.

For example, often, the free software drive to make everything controllable by the user leads to an approach to interfaces that makes all that is possible visible. These interfaces treat all functions equally, since the goal is just to provide all the tools and let the liberated user decide. This equality of features often also leads to interfaces that lack clear hierarchy, or are overloaded with menus, options and configuration preferences. Lastly, in these projects, design, and particularly visual design, is “extra”, and often not in the skill set of participants. The reasons for that are, really, I think, a separate discussion, touching on the gendering of aesthetics and traditional roles in software.

But, all that said, full user control is a value worth preserving as software evolves. It’s something people who focus on licenses and patents as the core of free technology lose sight of: if software is truly libre, everything a user wants to do, or change, should be possible, even if it is hard. Software may be freely licensed, but if it is impossible to use, or very difficult, is it really free for all? If it is to be a real, practical freedom, free software must also be usable for all.

Finally, for software to be broadly successful, it need not be fully featured, but it must be useful. It need not be fully customizable, but it must be intuitive. So how do these tensions get resolved in a way that pushes free software forward?

Larry Wall has a phrase that he applied to Perl, but that could also be applied to open software interfaces: make the easy stuff easy and the hard stuff possible.

This is the way to bridge these tensions: make the things people most need from the software easy and intuitive. Don’t make the user invest in configuration, or understanding complex metaphors, or unique interface paradigms. But also, make the hard things possible. Don’t enclose everything in black boxes and “smart settings”. If the user wants to turn something off, let them. If they know better than your recommendation model what they want to see next, let them manage their own queue. For everything you make easy, also make it controllable, and understandable. Then it will be truly free.
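To make that concrete, here is a minimal, hypothetical sketch in Python (the Recommendations class and its method names are my own invention, not any real product’s API): the default path needs no configuration at all, but nothing is a black box, and the user can always inspect, override, or switch off the automation.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Recommendations:
        """A queue of items to show next: smart by default, controllable on demand."""

        auto_rank: bool = True                          # easy: works with zero configuration
        queue: list[str] = field(default_factory=list)

        def next_item(self) -> str | None:
            # Easy path: let the model decide. Hard-but-possible path: honor the
            # user's own ordering once they have switched auto-ranking off.
            if self.auto_rank:
                return self._model_pick()
            return self.queue[0] if self.queue else None

        def explain(self, item: str) -> str:
            # Understandable: every automatic choice can say why it was made.
            return f"'{item}' was chosen based on your recent activity."

        def set_queue(self, items: list[str]) -> None:
            # Controllable: a user who knows better than the model can simply say so.
            self.auto_rank = False
            self.queue = list(items)

        def _model_pick(self) -> str | None:
            # Placeholder for whatever ranking logic the real system would use.
            return self.queue[0] if self.queue else None

The point is not the specifics, but the shape: the zero-configuration default and the full override live on the same small surface, and neither hides the other.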

 

Contending with contemporary methods in design

If you’ve worked with designers, researchers or product managers who’ve been educated in, or advocate for “user centered design”, you’ll know it is well worth researching and learning about this philosophy and the processes it advocates. If you haven’t, see this standardized summary and this video by way of introduction. The original, complete, user centered design process is quite full and nuanced, and like many development and design systems it is rarely (never?) fully and completely implemented as envisioned. However, many of its methods and ways of thinking are useful, even without full adoption of the entire program.

Below are some thoughts on some of these methods (not the program as a whole), the good and bad of them, and how to deploy them effectively.

One of the main related methods of user centered design is user testing. This is the most important thing you can take from this. Watch users use your product. There are services and firms that can help you, but even you (yes you!) can record and observe a user while they use your product. Don’t intervene, don’t let them see you. Then ask them questions when they’re done. It’s never not informative.

I love user testing, and have used it, and advocated for it, since I learned about it. But it is focus grouping. It will reduce the risk of confusing or failing the user, but it won’t find a compelling use case. A focus-grouped movie is less likely to fail, but focus grouping is unlikely to turn a shitty movie into a hit. You still need to do need discovery. In user centered design they advocate for a generative research and prototyping process to achieve this. Personally, I’ve never seen such a process uncover a need that cognitive empathy with a well developed persona could not. But obviously these methods have been quite successfully applied by some. It does provide a lot of evidence for needs, though.

Which brings us to personas. Personas are archetypal users that represent a defined audience or role in the system. Often these read like the product of a marketer’s imagination, and that is indeed their origin. Usually they come with some cheesy name (“The Megainfluencer”, “Soccer Mom”, and so on) and some demographics. Even when developed specifically for your audience, I’ve found these to be of less value than user testing. Often they are good for organizing and explaining the product or its user stories, especially in documentation, but less valuable for decision making, or actual feature development. There, they can help develop empathy, and also help give focus and reality to stories and features. But I’ve been involved in very few systems or contexts where rich personas were more useful than just simple names for roles and user types, and associated needs or stories. I’d also strongly urge caution if using personas without real training in user centered design or marketing, especially if the personas involve people unlike yourself. It is very easy for a well-intentioned amateur persona to become indistinguishable from an offensive bundle of stereotypes that reinforces bias rather than develops empathy.

Camp Software

Wikipedia explains that camp is:

“an aesthetic style and sensibility that regards something as appealing because of its bad taste and ironic value. Camp aesthetics disrupt many of modernism’s notions of what art is and what can be classified as high art by inverting aesthetic attributes such as beauty, value, and taste through an invitation of a different kind of apprehension and consumption… Camp aesthetics delights in impertinence. Camp opposes satisfaction and seeks to challenge.”

Camp is often applied to visual, performing and conceptual art, along with lots of movies and TV, restaurants and experiences. RuPaul is campy, John Waters is campy, Benihana is campy, an art car is campy. But can it apply to software? Can you make meaningful or even “popular” software that “opposes satisfaction”? This website’s own home page was my attempt to work in an aesthetic I thought was so ugly it might actually be cool and to oppose satisfaction by providing only the shallowest of explanations of what Jiko Kanri is.

But there has to be much more out there. Apps with an ironic look and feel? Does that count? Or tools made intentionally hard, to discourage their use. We’ve all seen websites we could easily describe as “over the top”. Every Chinese social app I look at seems to be inverting the normal values of software design to stuff the screen with every function and text it can find a few pixels of space for. Is that camp? Or Snapchat’s “old”, unfriendly, gesture-heavy UI, with its attempt to be impertinent and to challenge users to learn new things through word of mouth, excluding the conservative-of-habit and the un-networked user. Is that camp?

One effect of the usefulness of software, and its categorization as a medium for tools and games, is that media theory, which is rich and insightful about movies, TV, news, music, you name it, seems to treat software as outside its view. Websites and trends might be analyzed and deconstructed. But non-game software is not subject to the same level of critical and analytical thinking. I enjoy watching YouTube videos by movie and cultural critics. These lengthy reviews, analyses and essays are interesting, insightful and show how media like TV and movies work. How much of that is possible for software? Most software videos are tutorials or shallow reviews. I understand these apps are tools.

But I also know: Software is a medium.

I want a more popular, useful, fun, interesting media study of it. And I want to see weirdos like me celebrating software so ludicrous it’s tragic, or so tragic it’s ludicrous.

Software I’ve Loved

Someday we’ll really understand as a society, and maybe people younger than me already do, the central role software and its interfaces play in our culture now. UIs are now era-defining, and can be mass critiqued like any product of a collective creative craft, as we do with movies, television, news and other media. Somewhere there are kids that love software as young Spielberg and Lucas loved movies. Somewhere there’s a little girl who dreams of software, just as there are girls dreaming of becoming the next Shonda Rhimes. I hope we give these kids the spaces and ladders that film, publishing and so many other media offer their most adamant fans. We need to show them there is more to software than programming, just as there is more to movies than acting, and make being a “software nerd” something more than learning to code, just as being a movie nerd is more than learning to operate a camera.

One potential way to do this is to encourage a “culture of software”, where newbies can hear not only about process and technology, but where there is a robust critical discussion of software itself, its language, trends and tropes.

And maybe this exists, but certainly not as much as the cultures that surround and support older media.

As my meager contribution, this is one of two essays which focus on 3 classic pieces of software. In this one, I have 3 I’ve loved, and why I loved them:

  1. WriteNow – Back when GUIs were coming to mass market maturity with Windows 95 and Mac OS 6, word processing was still new enough to be cool. Our little Mac SE’s beautiful fonts and ability to do things like multi-column documents were at the forefront of what computers could do for home computer users like my family. It was the age of the “desktop publishing revolution”. An early, simple, disruption that would, in many ways, presage the broad democratization of media creation that would follow. At this time, there were multiple word processors, and Word was just beginning to bloat. But above them all, from my POV, for years, sat a little Mac word processing program called WriteNow. It fit on a single 3.5-inch floppy, loaded fast, was responsive, and was easy to understand and use, with a nice clean UI. Best of all, it wasn’t bloatware. It was full featured (it did footnotes!) but wasn’t hyper customizable and it didn’t have a paper clip to help you. And that was the triumph. It knew what the core of a word processor was, and it did it well. Sadly, it didn’t survive the era, and eventually I succumbed to using the beast that is Word (for a while). But its spirit lives on in macOS’s TextEdit. TextEdit is fine, but is slightly too bare bones (though it too has grown some extraneous features over 15 years). WriteNow hit the sweet spot in its category at a time when this category was the killer app for the home PC. I don’t think it was a mass cultural phenomenon like Word, but for this user, it was software worth loving.
  2. HyperCard – HyperCard is an historically important piece of software, and the first really broadly available system for hypermedia. It expanded who could build a GUI, was the vehicle for the first wiki, influenced the World Wide Web and gave us the ancestor of JavaScript. It was my first introduction to event driven programming and first chance to play with GUI design. It was so powerful, yet easy to understand. It was a stack of metaphorical cards you could program to do anything. I made animations, games, simulated spaceships, anything I could think of that fit into the 512×342 black and white pixels of our Mac SE. Like the best creative sandboxes, it was constrained but almost infinitely expressive. If you’re not familiar with HyperCard and its role in software history, I recommend reading about it, or even giving it a go on an emulator. If you’re young, it may feel like getting into an antique car: you can see that all the bits that make up much of today’s software were there, just a bit lower res, much less safe, not on the internet, and way slower.
  3. IntelliJ – When I started building Android apps, in my Developer Evangelist role at Greystripe, the only viable way to do that was to download Eclipse. Eclipse is a great gift to the world, a free open development environment. But it is not good software. Confusing, bloated and overly customizable, it was my least favorite part of building Android apps. At my next startup, the CTO immediately asked me why I wasn’t using IntelliJ. It was a free, beautiful tool that made doing hard things much easier and was light years better than Eclipse. I tried it and was hooked. I soon noticed this tool was everywhere. Our developer partners were making the change quickly, and en masse. It was fast market disruption by a superior product in action. I had to rewrite our SDK instructions a couple months later, due to the volume of devs asking for IntelliJ-based instructions. The JetBrains people continued to make great editors and software tools. Though I don’t do much Android programming these days, I was not surprised to hear Google’s Android Studio was being developed as an adaptation of IntelliJ.

So what do these have in common? Performance, knowing their purpose and serving it well, expressiveness, elegance. They all enable me, make me more creative, and make making things easier and less annoying. Great software is an admirable cultural object, but it is also a tool, and these 3 tools have not only been impressive in their own moment and category, but they also served as the spark for untold amounts of second order creation.

Software I’ve Hated

Although I prefer my list of software I’ve loved, like most media, most software is bad or mediocre. And some of it is painful, annoying or even deadly. Unlike movies, software robs and blackmails. That’s hate-worthy. But, for me, the software I’ve hated most is software I’ve used a lot, but that share(d) some key characteristics. Not just annoying to use, but really obviously flawed in a fundamental way, yet ultimately popular anyway. Like a shitty movie that makes millions, these are products that may touch many people’s lives, and may be “successful” by most metrics, but which fundamentally fail as software. That makes me hate them.

  1. Eclipse – The inverse of IntelliJ. The first couple times I tried to use Eclipse in grad school, I literally gave up on the intro screen. It had four options, none of which were straightforward actions like creating a new code file. I clicked one. It was some kind of package manager, but for plugins? I was on a Mac, but it looked like Windows (I guess it was/is Swing?). My fan whirred on. I hadn’t written a line of code and my top of the line laptop was having to give its all. A few years later, when you had to use Eclipse for Android development, it had improved. I made it past the launch screen. But it remained ugly, slow, inconveniently and excessively customizable, resource heavy, cluttered with too many features I didn’t need and lacking some basics I expected. Its integration with the Android toolchain was brittle at best and more often wall-punchingly inane. I kind of liked building Android apps, but I very much hated Eclipse.
  2. iTunes (some versions) – A clichéd choice but a necessary one. I’ve used iTunes since its primary purpose was to rip and burn CDs, and manage your music library. It also had an equalizer that was fun to fiddle with, and a visualizer that was a beautiful demonstration of the Mac’s graphics power. Then came the iPod, and the Store, and sharing, and Podcasts, and the iPhone, and videos and TV, and iCloud, and iPad, and TV again, and Apple Music and Ping and OMFG this interface makes no sense. Each view is different… what is wrong with a sortable table? It’s fluctuated in usability. The version in Sierra is usable, if confused and inconsistent. But boy, have there been updates where I thought, “the designer of this literally hates people who want to use this workflow. They did this to vex me.” And, yet, it’s running even as I type.
  3. Explorer – For a period Explorer was the best browser available for OS X and I used it voluntarily. It was resource heavy, but fast and its rendering was spiffy looking. At other points in my life, I’ve used Explorer, but always found it to be ugly, slow or just not worth the download. But there were also points where Explorer was the worst and I worked in companies that mandated its use. I actually liked Windows NT okay. But I remember cursing the name of Bill Gates sitting in a shipping office in Seattle, as a temp, trying to load a large page of information into my little desktop and not enjoying my experience. Also as someone who’s done web work, fuck Explorer for their non-compliant weird old browsers still causing problems for web devs and users worldwide.

What do these three POSes have in common? Complexity, slowness, monopoly. They are tools that do too many things (or try to), do them slowly or unstably, but that I had to use. Even though Eclipse is FOSS and run by a non-profit, at that time, on Android, I was not free. There was no real market or competitor, and that led to shitty bloatware.

Everybody Wants to Rule the World

I happen to like modern and conceptual art, when it’s good. But I’ve never been at a busy modern art museum and not heard the clichéd comment “my kid could paint that”. (My answer usually being: I’d like to see that.)

The medium of software is, similarly, one users tend to criticize from a place of “knowing better”. This tendency extends even to those who make software. “If we just…”

Everyone wants your decision making rights, even if they don’t know it. In the next few essays I’ll talk about various typical stakeholders and how to interact with them in a productive and empowering way, while also protecting your prerogatives and role as decision maker.

Before we get into specific relationships, I wanted to call out some commonalities that hopefully illustrate what I mean, and help new PMs or interested outsiders understand how this kind of leadership can be done.

One thing that separates a great PM from an okay one is that they understand that their job is not just setting the agenda but setting the tone, and the team narrative. Is the team a successful release machine, delighting users and hitting KPIs? Or is it an abject failure that builds buggy shitware and is about to be re-orged into oblivion? Between these poles, there is a lot of complicated reality. A great PM will shape the team’s reality. Not through lies, withholding and manipulation, but through emphasis, tone and self-certainty. In discussions and meetings be real, but also determined: cynically optimistic.

The most obvious way to be the team’s leader is to be the person who decides, or decides who the decider is. If there are disagreements between functions or teammates, or you don’t need to make the call, always state clearly who has say. Do the same when you’re out of office, or unintentionally neglecting a situation. Don’t let it hang: delegate. Weirdly, the best way to protect your prerogative is by delegating it frequently and explicitly. If you don’t make the decision, decide who does.

I personally value decisiveness and determination very highly, and although they can shade too easily into arrogance and stubbornness, such is the case with any virtue taken to extreme. One way to avoid this extreme, but maintain decisiveness, is to embrace negotiated decisiveness. Let the negotiations run free, and let discussions happen. Then be the one that ends them. Then, once they are ended, record that decision and maintain your determination to see it carried out, until new information is available or risks manifest. Again, this kind of leadership is not, at core, about seeing your meme win (“getting your way”), but about having say over which memes are allowed, and how they win (“being the decider”).

Finally, remember, when making decisions and considering the sides of an argument or political situation, that surety doesn’t equal correctness. The people who are the most determined are not always the most correct or well intentioned. If you want a career in “authority-less leadership” you have to be able to not give in to bullies and blowhards. Let them blow. Remember they will get mad when they don’t get their way. Remain calm, and practice benevolent determination.

 

Showrunners

Software product management, by and large, descends from consumer goods product management. The idea of a holistic product owner, market analyst and in-house success engineer comes from Procter and Gamble and other consumer brands. Except in games. In games they are called Producers, in an analogy to movies. Essentially the same kind of job, both in the same medium, just different genres. Weird, right?

I believe this is a result of software being a relatively adolescent medium, which borrowed pre-existing metaphors, jobs and work patterns and gradually ground them into the right shape for software. So, yeah, being a software PM is like making a consumer product, or even a B2B product, not that different from any other business. But it is also like being a movie producer or director, leading a collective commercial creative endeavor to deliver a media experience. I want to throw a third out there: the show runner. http://www.showrunnersthemovie.com

Show runners are the driving force and final editorial voice of modern TV. The prestige TV renaissance, with its serial dramas and long form complex narratives, is all made possible by show runners. Like PMs they combine leadership, creative strategy, and ultimate ownership of success. Sound familiar?

So what? Why does it matter? Because, as a similar, but different, job, show runners have things to teach us: tools and patterns we could import, heroes to emulate or failures to avoid. But not just show runners. Cool as they are, my larger hope is that as software development evolves and continues to mature, software Product Management will become its own thing, even if it borrows its names, and its tools, from many sources.