What a DAM Mess

“Hey, Paul, you and your team know information architecture. Ever work with Adobe Assets?”

That’s how it started, humbly and simply, with a question from a business development lead at my agency. He had been working on a few deals that included Adobe Assets, one of the popular Digital Asset Managers – or DAMs – out in the market. I answered him, simply, “Not yet, but I’ll look into it.”

Digital Asset Management – also DAM – holds the promise of centralizing all of an organization’s assets… things such as images, content, PDFs, working files, and so forth. It was something that I had a basic familiarity with at the time, but once I learned more, I recognized how powerful a centralized set of assets would be for just about any organization.

While standalone DAMs exist, integration with a broader Content Management System (CMS) or Digital Experience Platform (DXP) is where the real power comes into play. An example of that promise: I can have an asset that’s on a content maintenance cycle, well tagged, in a folder and location that makes sense in a DAM… which can then be associated with a particular component or template in a CMS… which can be measured for performance in a test-and-learn personalization program, and adjusted accordingly. It’s the end-to-end “Hey, how is this doing?” question that any marketer worth their salt has.
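
To make that chain a little more concrete, here’s a minimal sketch of what a single asset record might carry so the DAM, the CMS, and the measurement program can all point at the same thing. This isn’t any vendor’s actual data model; every class and field name below is hypothetical, for illustration only.

```python
# Hypothetical asset record linking DAM organization, CMS usage, and performance.
# Not a real product's API – just an illustration of the end-to-end idea.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class AssetRecord:
    """One DAM asset, plus the CMS and measurement context attached to it."""
    asset_id: str
    dam_path: str                               # where it lives in the DAM folder structure
    tags: List[str] = field(default_factory=list)
    next_review: Optional[date] = None          # the content maintenance cycle
    cms_component: Optional[str] = None         # the component/template using this asset
    click_through_rate: Optional[float] = None  # fed back from the test-and-learn program


hero = AssetRecord(
    asset_id="IMG-2041",
    dam_path="/brand/spring-campaign/hero-images/",
    tags=["spring", "outdoor", "hero"],
    next_review=date(2021, 3, 1),
    cms_component="homepage-hero",
    click_through_rate=0.042,
)

# The end-to-end "Hey, how is this doing?" question, answered from one place:
print(f"{hero.asset_id} in {hero.cms_component}: CTR {hero.click_through_rate:.1%}")
```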

Many DAMs offer significant integration points beyond that (extending into broader organizational workflow management and existing creative tools) but that’s the gist of it. Organize your stuff, maintain it, and you’re setting yourself up for a bright future of serving up totally personalized experiences to customers on their phones, tablets, computers, watches… you name it.

Well. Maybe.

Where do we put our DAM stuff?

I started working with my team to understand how information architecture (IA) could or should play a role in the DAM work we were selling.

The start of untangling any mess – as information architecture expert Abby Covert might suggest – is understanding what you’ve got. An inventory and audit, addressing both qualitative and quantitative aspects of content, is the start of so, so much work. And in working with DAMs, by delightful coincidence, starting with an inventory and audit makes a ton of sense.

This is where organizations start to get wide-eyed and realize how much of a challenge moving to a new DAM (or simply organizing their assets) can be. When I worked with a large-scale retailer several years ago, we opened a discovery by talking about where they stored their assets today. Great news: they had a shared creative server, internally!

But the creative team also regularly dropped things on their local computers – desktops were full of icons. They also had another shared server for some production assets. And a SharePoint instance where some things lived. And they also used Box for a few things. One marketing team had a totally different process. Yet another department had designed a full lifecycle process built around catalog production that worked well for them, but isolated all of their work from everyone else.

Inefficient? Don’t be so sure. These teams all still produced displays, catalogs, digital ads, 4 websites in 2 languages, and more on a seasonal basis. That’s not trivial work and they got it done, every time.

But they recognized that having a centralized library of all of their work could lead to efficiencies, both in organization and processes around content and asset production. Did they need to have an asset from a photo shoot in 24 different places? No. Did they need to manually create contact sheets in PDFs for approval by creative directors? Also no. Did they need to manually create variations of an asset for display ads, the website, and the mobile website? No!

This is all to say that the way most organizations handle assets today is likely very, very inefficient from a broader enterprise perspective, even when everything happens on deadline and on time.

Enter the IA

That brings me back to the question posed by my colleague. Organizing a DAM is all about figuring out where to put things, how to name things, how to not name things, how to account for modalities, how to support various users’ needs, and almost everything else in a space I typically define as information architecture.

What I’d seen anecdotally at that agency and elsewhere was that developers would handle everything about assets because it was seen as a development task. And while developers absolutely incorporated a degree of IA in their work, it was usually the “best practices” that had been used elsewhere. Anything else around deeply investigating relationships between these pieces of information was left to the client. It felt like a real opportunity.

This isn’t where I’ll say an IA came in and was a superhero and resolved it all. But it’s important to note that with a dedicated information architect involved in the process early on, the shape of the work got a little different.

We started to apply information architecture principles and heuristics to DAMs. In the work my team does now, once we complete the audit, we review it fully with the client in a workshop setting. We talk through how things are organized, any peculiarities, and definitely add in a lot of “nice job on this!” too.

From there we start to analyze their organizational structure and workflows to inform the folder structure. In my experience, a DAM is one place where an organization’s internal structure (teams, departments) can make sense as a basis for where to put things. But even so, talking through and reviewing how work gets done in a collaborative workshop setting is essential.

And while workflows lead into the richer, more governance-related bent of this work, I’ve found that many organizations haven’t taken the time to simply write down, step-by-step, how they do things. These aren’t small companies, either. These are monsters. Big ones. And seeing things like, “Oh, wow, this gets approvals from 8 people over email” and “Huh, this department puts their stuff here but it really should be there, maybe” is revelatory for them.

And yes, the development team is naturally involved in this work. They're getting a front row seat to the blueprint for a migration and setup of a DAM.

Ultimately we deliver a full set of recommendations on folder structure, taxonomy, tags, metadata, and naming conventions. We typically include workflow analysis and recommendations too. This culminates in a framework for these teams to take and run with as they look to reorganize, migrate, and implement a new organizational system. Not small work, but important work.
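
To give a flavor of what a naming-convention recommendation can look like in practice, here’s a small sketch that checks asset filenames against a pattern. The convention itself is made up for illustration – brand, campaign, asset type, size, and a version number – not one we delivered to any client.

```python
# A made-up naming convention, purely for illustration:
#   <brand>_<campaign>_<asset-type>_<size>_v<NN>.<ext>
#   e.g. acme_spring21_banner_300x250_v02.png
import re

NAME_PATTERN = re.compile(
    r"^(?P<brand>[a-z0-9]+)_"
    r"(?P<campaign>[a-z0-9-]+)_"
    r"(?P<asset_type>[a-z]+)_"
    r"(?P<size>\d+x\d+)_"
    r"v(?P<version>\d{2})"
    r"\.(?P<ext>png|jpg|tif|pdf)$"
)


def check_asset_name(filename: str) -> dict:
    """Return the parsed parts of a conforming filename, or raise ValueError."""
    match = NAME_PATTERN.match(filename)
    if not match:
        raise ValueError(f"{filename!r} doesn't follow the naming convention")
    return match.groupdict()


# One conforming name, and one an asset librarian would bounce right back.
print(check_asset_name("acme_spring21_banner_300x250_v02.png"))
# check_asset_name("FINAL_final_v3 copy.png")  # -> ValueError
```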

The DAM of the DAM

There’s a slight cautionary tone in my words here, because one other risk I typically see is assigning ownership of the DAM to an already-busy marketing team, an IT team, or a creative team.

I’m not here to tell you if you should reorganize your team, but I will say: you absolutely, positively, 100% need to make it someone’s job to be the Digital Asset Manager (DAM) of the DAM. They may be an Asset Librarian, a DesignOps expert, an Information Architect, whatever. Critically, that person needs knowledge of information architecture. They may “live” in an IT team or a marketing team, but having knowledge – at least from a business perspective – of the particular DAM being used is immensely helpful.

The cover of Murmur. DAM kudzu.

When I’ve worked with clients who have internal DAM owners, things are different. The discussion becomes more about how they can scale and grow a system for a team, and how they can enforce standards. The discussion includes not just the immediate work but what’s around the corner: governance, ownership, and maintenance, critical parts of digital work.

Without DAM owners? Well, you know the cover of R.E.M.’s 1983 Murmur, right? The kudzu? That’s what happens to the DAM. One department organizes things one way, another doesn’t tag anything, and soon you’re back in the same place you started. Not having a person in charge of DAM governance is a pretty bad idea.

Change is DAM Hard

Centralizing your assets is all about change. The way things worked may stay the same, but how things are set up is going to be new. And change is hard. I’m starting to say this more and more: all of this will lead to people being pissed off, upset, or discouraged.

Some people get very attached to their way of working and don’t want to change it. This isn’t something that shows up on a DAM feature scorecard, but you have to honestly and openly evaluate how ready your team is to change. It is absolutely a factor in moving to a different DAM, or any DAM, and absolutely should be considered. I’ve joked that workshops are like therapy but, sometimes, they end up leaning hard in that direction for just this reason. Don’t forget the emotional work, either.

IA is Essential

That simple question that came my way years ago led to a lot of good work with people and teams to figure out how they should better organize their stuff. So when you’re evaluating your DAM – either a current one or shopping for a new one – do not overlook the importance of information architecture. It’s DAM important.

Affordances are dead

I miss affordances.

Affordances in computing UIs… are nothing new! When we put words on a screen, we collectively needed to visually convey what one could do with those words. Menus? Buttons? Dropdowns? All of the fundamental UI elements we’ve had for 30+ years started with deliberate design decisions – good or bad – that became standards, or de facto standards.

And as the web and touch interfaces have matured, designers have collectively thrown them out the window. I blame removing underlines on links.

Let me explain, without getting nostalgic.

In the old days of the web, links were underlined and in blue (versus black and no underline for text). It was the standard. Over time, the ability to remove underlines was introduced – so links could be any color and not have an underline. But… how does one convey it’s a link? Some links sprouted icons. Some remained in a different color. Some grew on hover, or showed a background on hover, or did something only when tinkered with.

Even there, the ability to scan a digital thing and convey intent was lost.

But, as with changes, people adapted. We started to click and poke at more things on pages, even things that looked as ordinary as anything else, in the hopes – the hopes – that maybe this thing would make a page or an app do the thing we wanted it to do.

Touch interfaces escalated this change. When using a touch interface, more than ever, the only way to know what one can interact with is through experience. It is less and less conveyed by the UI itself. Thus, it’s shifting the mental burden of figuring out “how do I work this” to the user – fully. Now, it may not be a significant burden! But, a burden nonetheless. The UI no longer says, “This is clickable” or “This is a thing you can interact with” consistently. Sometimes yes, sometimes no. I’ll note: we’ve collectively gotten used to this.

Big Unsure

Like any UI snob, I had this come home to roost when I upgraded my family Mac to Big Sur, the latest macOS. This version has a UI shift that, from rifling through reviews, seems to be regarded as significant but manageable. I disagree. Any change will be tough for people, myself included, but the other side of this change appears to be… kinda crappy.

Probably the biggest shift is that everything in the OS is button-esque now. Menus are button-y things that open up boxes below them, without a strong visual attachment. Buttons? Are just icons now with no clear indication that they can be clicked, other than the fact that they’re icons. Look at this screenshot from Safari.

Icons? Or buttons? Who the hell knows? (These are… toolbar buttons in Safari.) [Image description: five icons in a row, with no background or border or indicator, from Safari]

I talk a big game about context. The above is out of context but this area in Safari is accurate. Five icons, in a row. No borders. No indicators of what they do. The colors? I believe they’re from third-party extensions, but who knows? And they’re blue because… Reasons, I guess.

There is a ton of guessing one has to do to understand this. The only thing that helps is prior experience with Safari. There is almost nothing here, plainly, that indicates these are buttons. Who’s to stop a developer from just… putting icons there? That aren’t clickable? That just provide information? Right.

This is just one example, but it’s emblematic of Apple’s continued decision – one that kicked off with iOS 7 and Jony Ive’s takeover of the OS, which I also liked at the time – to prioritize visual cleanliness over usability.

A menu in Big Sur. Everything is just rounded rectangle buttons now. [Image description: a screen shot of the Finder menu in Big Sur, showing ‘About Finder’ selected with a blue rounded rectangle around it.]

Everything is Button, Button is Everything

Here’s a menu from Big Sur, from Finder specifically. Aesthetically? Not terribly different from prior macOS versions. But the button-itis of macOS extends here too: all of these text items are just buttons. The panel itself? Also looks like a big button now. Even the hover in the menu bar… makes it a button. Seriously, Apple, you get rid of buttons on devices and put them all on screen? Is that the deal?

Anyway. Why would you need to buttonize a menu? Basically, only if you wanted to transplant an iOS interface into a menu – and that’s exactly what Control Center is. A UI train wreck.

Control Center suffers from a crappy visual hierarchy and a design for touch interface that doesn’t translate cleanly to a mouse-based interface. Here, have a look.

Control Center in macOS Big Sur. Someone approved this. [Image description: a screenshot of Control Center in Big Sur, showing multiple rounded rectangle items and controls.]

Since day one, Control Center – even on iOS – has been problematic. The great news is that now those problems are on macOS too. On iOS this makes some sense: with limited space to convey information, it becomes a real design challenge. But take a moment and look at this. Not everything can be interacted with in the same way. All of these controls, when hovered or clicked on, act differently. So they look somewhat consistent but don’t act that way. How do they work? What do they do? No one knows until they’re clicked. Some morph into menus. Some are visual panels.

Again, here, the lack of clear affordances means this is a hodgepodge of controls that – honestly? – would be better served by a menu! Nice one, Apple, nice one.

Beyond that? The redesigned dock icons and app icons… are also buttons. No joke. Those shapes you memorized and used to differentiate apps are gone now, with everything surrounded by a rounded rectangle.

The future is dimmer

This is a rant, no doubt. And OSes change – they have forever. But the push that Apple has made with Big Sur is wildly unimaginative and short-sighted. How we use computers and computing devices has changed, of course, over the past 40 years! It’s a big difference. But with Big Sur, Apple has thrown out so much of the UI standardization it helped usher into computing. Our interfaces today make us do more work and more figuring things out, while throwing away consistency and adaptability.

Parental Controls in 2020 are still junk

I can’t believe I have to write this, but parental controls like Apple’s Screen Time are still incomplete, ill-informed, and immature solutions that don’t do everything they need to do. I’m going to pick on Screen Time since it’s what I use, and my family is mostly in the Apple ecosystem (my son has an Android phone he uses just for playing around – he is too young to have a phone of his own – but he does use it for little things).

Bad Assumptions

The core of it, in my estimation, is this: computing is essential to our lives in 2020 and will be well beyond it. It’s important to properly educate and support both kids and parents as we navigate this. My son needs a different set of access at age 10 than he will at 13, 16, and 21. I am under no expectation or illusion that a tool like Screen Time should just magically do everything; I’m not abdicating parental responsibility. What I am saying is this.

The internet has gotten incredibly complex in the past 10 years, and none of the tools out there reflect this reality.

Example 1: Apple Music

When I joined Apple Music, part of the rationale was convenience – it was, at the time, the only music streaming service that integrated with Siri. Later, we upgraded to a family account. My son does love to listen to music, but some of the music he’s heard elsewhere isn’t… great. This happens. This is normal! And so when we discuss a particular song, I may want to block that individual song altogether. I cannot. There is no way to do it. Apple Music allows for blocking explicit content, which is a start, but songs can still be about explicit topics even when they’re not using explicit words.

The simple Clean/Explicit block is a start, but Apple – like Spotify – should empower parents and guardians to make their own decisions for their own families. Apple does not. As a result, we have to severely restrict how the kiddo listens to music. It’s not ideal.

Also, the Android version of Apple Music? Doesn’t support Screen Time even if you’re signed in to a managed account. Restrictions must be set locally, on the device. Guh?

Example 2: Safari “Allowed Sites”

Screen Time allows one to set a level of permission for websites. You can choose to block everything and permit individual sites by hand, you can allow everything, or you can use something in the middle that blocks “adult” websites (not defined in the UI) and then block/allow specific ones on top of it. Great.

Allowing only permitted sites is primitive. It broke when I wanted to allow the kiddo to browse our library’s site for audiobooks to download and listen to on our family Mac: every link led to a different site, and as a consequence, I had to allow each site. This is an example where macOS and Screen Time assume we’re on the web of 2000, not 2020. This is more complex now, and the tool makes dumb assumptions. In the end, I just let him use Chrome for audiobooks – which, by the way, has no restrictions in Screen Time because it’s not Safari! – and obviously monitor what he’s doing.

Is it the job of the OS?

A natural question on this might be, well, should the OS handle this at all? It’s valid. But it’s also an implementation detail. We do have Circle installed on our Netgear Orbi network, and it’s helpful for completely blocking or allowing device access, but its filtering isn’t reliable.

Worse, though: as a parent, why would I ever need to delve into freaking network settings for this? Again, it’s 2020 – the barrier to entry for the internet is not as high as it was 20 years ago, and yet these tools are straight outta the past.

A better path forward

Here’s the thing. There’s an opportunity here for software and hardware to create a system that supports parents’ goals. My wife and I want our son to be safe and smart online, and we want him to have excellent digital hygiene. None of the tools out there actually support that, which means screen time is always a push-pull. All of these problems are byproducts of the way the internet has grown and changed – into a few major companies’ control of streaming entertainment – along with the de-geekifying of computing and the way our culture has embraced all of this. And what do we have? Ways to block that kinda work, mostly don’t, and don’t support a strategy of good growth for our kids.

On Reviews

Recently, Apple pulled customer reviews from their online store. To me it isn't a huge deal. The internet is full of opinions on things, particularly Apple’s products, and it’s fairly simple to be able to find other reviews. (The trustworthiness of those reviews is a whole other can of worms.)

But Boing Boing spun it completely disingenuously with the headline, "Apple doesn’t want to hear what you think about their stuff anymore.” The argument there is:

  1. Apple pulled reviews.

  2. Thus, Apple doesn't want to be transparent with people.

  3. This is probably somehow a free speech thing or something.

But it’s trash, because here’s the thing: reviews in a lot of places – including hashtag-ads on social media – are bought and paid for. Companies provide products, people write about them and toss in a tiny disclaimer in a review, and there you go. While I stated it’s easier to find other reviews on the internet, good luck finding a bunch that aren’t spammy as hell. It’s a mess. In any event, every company moderates their on-site reviews; it’s just a cost of doing business, and it’s foolish to think that Apple is providing some kind of free forum. Apple is a company, not a charitable organization.

Apple’s move is only hostile or opaque if you believe that reviews are the only way people can share an opinion or experience with their stuff.

Canary in the Coal Mine

On Sunday, I deactivated my Twitter account.

A year earlier I had kicked the idea around in my head. I had been using Twitter since 2007, which is a long time to use anything, I guess, but the overall abdication of responsibility by Twitter as a company toward its product just didn’t sit well with me. And here I was, writing a lot of things and sharing photos and spending a lot of time on it. That felt wrong.

So why the delay? Well, it was hard to quit. Even after deleting Tweetbot, even after keeping content blockers on the web version, it was pretty simple for me to dip into Twitter absentmindedly during the day. And as the US government was given away, it was harder not to look – to see what fresh horror there was. Twitter was way less about funny jokes and neat ideas than it was in the beginning. For years, people have been getting verbally abused and targeted on Twitter, and the company has done nothing. They could do so much more. They choose not to.

So I found myself sitting at my kitchen table on Sunday, in-between chores, reading Twitter and seeing things that got me angry. I checked out the account settings, downloaded all my data, and deleted my account.

Twitter is an excellent idea, and a very interesting platform. It is a bad product with poor leadership.

Naughtocorrect

In March I became so very frustrated with my iPhone's autocorrect feature that I thought, "DAMMIT I AM GOING TO TURN THIS BLASTED THING OFF." And I did.

And I am here to tell you that everything is fine.

Here's what's changed.

First, I make typos. I always have and always will (until Google somehow just types things for me, which is scary as fuck) and now I need to correct them manually. I still miss a physical keyboard for this, but it's really not as bad as one would think. My overall typing quality is pretty good, turns out.

The other thing I've done more is dictation. It's something I had used of course, but it's been a lot more convenient to just talk to the device and let it type for me (but not in that creepy Google way, which is scary as fuck).

The only things I do miss are the shortcuts I made for longer phrases, but I can deal.

So, if autocorrect really bothers you, go ahead and turn it off. It's fine. It's just a feature.

iPhone 6 Plus Review

I've owned my iPhone 6 Plus for three years, and I thought it was time to do a review.

For reference, my phone prior to this was an iPhone 5; for a brief time before that I owned a generic Android phone, which was my first foray into smartphones. The 6 Plus has been my main sidekick through a lot, and it's well-deserving of an examination of how it fits into my life.

Form Factor

Back when I was deciding which phone to get, I really was on the fence between the iPhone 6 (which my wife ended up getting) and the 6 Plus. I chose the latter in part because it really obviated the need for an iPad. My iPad mini was thus sold, and the 6 Plus was going to be my travel companion and general tablet-phone thing.

I somewhat regret this decision.

Listen, I love the big screen. I do. It's fantastic for reading (as fantastic as this type of screen can be) and watching movies. But for anything else, it's a giant pain in the ass. It's big and clunky and awkward. I always need two hands to operate it.

Without a case on it, it's slippery as fuck. I always need to use a case, because the texture of the phone is that of thin, aluminum ice. I've tried operating it without a case, but the likelihood of my dropping this already too-big phone goes way up. And that's not great.

Of note, the phone has been dropped at least a dozen times in the past three years. There are charming little pockmarks and dents on the bottom. The worst, though, was that last year in Seattle I dropped it face-down on a floor and the screen cracked. There's a hairline crack still there, and I simply haven't fixed it yet. Mind you, that was with an Apple case on. This phone is nice but fragile.

Battery Life

It's bad.

I replaced my battery last November with one from iFixit and, I'm sad to say, that one is starting to decline in life as well. But the original battery started getting to a point where I was getting maybe two hours of use – normal use, nothing silly – before I had to plug in. Random shutdowns were also happening; it was especially charming to witness during a phone call.

Most days, if I'm browsing the web or reading (primary activities), I'll get maybe four solid hours from the iFixit replacement battery. That's an improvement. But it's clear something is amiss with this and the prior battery as well.

Notably, when the shutdowns started happening I took my phone to the Apple Store fully willing to pay for a new battery. At the time I was told no – the battery wasn't broken *enough* for them to fix it. I said, "But I am a consumer and I have money." They said, "No." I was stunned. And pissed, because I had to do it myself.

Anyway. Battery life is not good.

Screen Quality

The screen is pleasant. It's a beautiful, rich color screen. I like it.

Camera

The rear-facing camera is nice. It's not perfect, and I wish it could handle more macro shots, but it does a pretty good job most of the time. The photos are good enough quality to blow up to 8x10 if one chooses to do so; anything more than that is pushing it. You can tell it's a smartphone camera.

The front-facing camera is garbage and might as well not be there. It can't handle low-light, it's fuzzy and low resolution, and it's only good for a pinch.

Software

Over the past three years, my iPhone 6 Plus's performance has significantly degraded. Switching apps, launching apps, even Touch ID responsiveness have all moved from "Damn, this is way faster than my 5!" to "Let me count to 3." Most interactions with the iPhone 6 Plus are punctuated by pauses. Open an app? Pause. Wait. Wait. Wait. There it is. Type in Messages. Hit the text bubble. Wait. Wait. Wait. Wait. Keyboard.

It, in other words, exhibits most behaviors one expects of a 3-year-old PC. Notably, my 2011 (!!) MacBook Air still feels more up-to-date and reasonable than this phone.

In addition, this phone occasionally refuses to recognize my touch. That was true before the screen crack: sometimes, and I don't know what causes this, the whole screen becomes non-responsive. Swipe, tap, zip, pinch, push, nothing. Only turning it off and turning it back on again works. This happens in any app, and at seemingly random times.

As a bonus, the Touch ID sometimes exhibits this as well. No response from the sensor so I need to press the button and enter my passcode.

But back to software: it's been depressing to see something so high-performing and top-of-the-line become bottom rung in such a short amount of time. iOS 10 and iOS 11 definitely sucked the life out of this thing, and I'm still truly sorry I upgraded to iOS 11.

One of my favorite bugs: if I play audio with an app, any app, sometimes that app's controls will be "stuck" on the home screen. Like right now, my home screen shows controls for a podcast I last listened to two days ago. I can't get rid of it. It may go away on its own. It may not.

Control Center remains a joke and is horrible. I hate it. It's bad. It makes me mad to use it – mad that people made this and then thought it was good. It is not. It is one of the least usable things Apple has made, and I used the round iMac mouse.

Safari will crash if I zoom in on a page with a lot of graphics on it. Safari also can't render Uniqlo's mobile site without crashing. There are some things Safari just can't do well, and it's sad to report that modern websites are in that mix.

Other Apple apps are fine, but the ginormous headers in Apple's apps are useless, speaking to a visual hierarchy that isn't really there. (INBOX! ARCHIVE! SETTINGS!) It's a design language that I can appreciate for small screens, maybe, but not large ones. It's unfortunate.

On the plus side, some of my favorite apps are on iOS and that's mostly what keeps me here. Nike+ Running has been my companion for 8 years (and running with this phone is horrible!); Overcast is the best podcast player; Things is the best to-do organizer. Without those things I'd be lost.

Summary

The iPhone 6 Plus is big, not very fast, has a decent camera, and runs things I like. The "This is an amazing piece of technology" phase is gone. The "This has improved my life" phase is long gone. In the end, while my phone does what I need it to do, I doubt I'd get a phone this large again. I'm also not certain about my future phone being on iOS, because of how purely terrible iOS has gotten in significant areas.

One big plus: it has a headphone jack. Thank goodness for that small piece of sanity.

 

We Lost Native Apps

This is a work in progress. More to come, including some organization.

Somewhere between the time the web really took off and today, we lost truly native apps for our computers.

I know. I hear you. “I have Spotify on my iPhone! And I have it on my Mac too!” And yes, you are right. Some would rightly point out that native apps are undergoing a renaissance. While this is technically correct (especially thanks to programming languages & frameworks that let people "skin" an app to look like its native OS), there is something lost here. Take Spotify and Netflix for instance. The Spotify interface is practically the same from device to device. Netflix used to support native controls and native interfaces, but instead chose to unify under one interface across all devices as much as possible.

This isn't wholly good or bad, although I have an opinion on it.

So what?

A few decades ago, home computers were partially defined by their varying operating systems and programming languages. AppleBASIC was similar to BASIC 2.0, for instance, but not completely the same. Still, one could get the gist of BASIC on a TI-99/4A and mostly take it to a C64.

Once GUIs really took hold, we saw a lot of differentiation come to market. Mac OS was different than OS/2 which was different than BeOS which was different than Windows which was different than Amiga OS. Computers were bought and sold on the operating system and on the PC side, MS-DOS was mostly running the show. Over time, choices diminished and it was down to Mac OS and Windows.

The web grew in popularity. With it came an emerging common interface for accessing information – something new and different. The web, and a browser around it, started to take hold as the definition of what it meant to interact with a computer. That's notable: it wasn't Mac OS or Windows that instigated this change.

Once web apps came around and started to gain acceptance – around the time of the early AJAX apps and JavaScript libraries – the days of native apps became numbered. People started to get used to the idea of accessing things the same way on multiple devices without a local OS getting in the way. This is significant and followed through on the idea of the web being an interface for information. More so than a local OS, more so than anything else. You see this in the lineage: much later, we've got apps like Slack and Spotify that are web apps running in a local OS window, and only show their cracks through bugs or quirks.

Once the need for a truly locally-influenced app is obviated, so is the need to store information locally. Why would you want your files to be on your home Mac when your work PC is where you need the information? We collectively shifted over to storing things online, in the cloud, in places we have no real control over – all for convenience.

Write Once, Run Anywhere

Some application frameworks promise the idea of writing a single codebase and then having that app work on different OSes naturally – iOS, Android, et al. These distill the operating system down to a series of UI widgets at best, and an aesthetic exercise at worst. These apps end up being native technically (that is, to fulfill a requirement on a checklist), but not holistically. Slack and Spotify, for instance, do not operate quite like any other Windows or Mac OS apps. There are conventions that are similar enough to get the gist, but one can tell they're not exactly like the local OS's built-in apps.
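
To illustrate the "series of UI widgets" point, here's a toy sketch of the shape these frameworks tend to take: the app describes a control once, and a per-OS backend decides how to present it. This is purely hypothetical – it's not how any real cross-platform framework is actually implemented.

```python
# Toy "write once, run anywhere" shape: the app talks to an abstract widget,
# and each OS backend renders it its own way. Entirely hypothetical.
from abc import ABC, abstractmethod


class Button(ABC):
    def __init__(self, label: str):
        self.label = label

    @abstractmethod
    def render(self) -> str:
        ...


class MacButton(Button):
    def render(self) -> str:
        return f"( {self.label} )  <- rounded, vaguely Mac-styled"


class WindowsButton(Button):
    def render(self) -> str:
        return f"[ {self.label} ]  <- square, vaguely Windows-styled"


def make_button(platform: str, label: str) -> Button:
    """The framework picks the backend; the app's code never changes."""
    backends = {"mac": MacButton, "windows": WindowsButton}
    return backends[platform](label)


# The single shared "codebase":
for platform in ("mac", "windows"):
    print(make_button(platform, "Save").render())
```

The gist: the OS shows up only as a styling detail at the very edge, which is exactly why these apps feel similar everywhere and native nowhere.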

What We Lost

Here's where we're at.

  1. OSes used to have solid standards for interaction models. These were enforced by the manufacturers.
  2. The web has never had solid standards for interaction models, but many have emerged and become de facto standards.
  3. Because the web hasn't had these standards and the web usurped the popularity of local OSes, it influenced the direction of those OSes.
  4. And because of that, apps really don't need to adhere to OS standards anymore. There's no real risk or consequence.

I am not certain if this is ultimately something good or bad. When I first started writing this I really felt that it was bad – it's something we lost, something we don't have anymore.

But on the flip side, we have the promise (if not the actual implementation) of universal interfaces that work from device to device, which is pretty awesome. The unfortunate part is that those interfaces don't need to be consistent from app to app, which is the real pain here: every app is potentially a new learning curve, a new way of interacting, a new way of processing, a new set of shortcuts, a new combination of gestures. That is what we've lost.

It is a different way of locking one in to an interaction model: if one learns to use Spotify, it isn't necessarily portable to iTunes (eeeesssshhhh) nor is it portable to Finder or Windows Explorer. It's an adjustment and a learning curve, or at least a situational change, every single time.

It would be ideal to say that we interact with our devices in a certain way, and they will work in that certain way no matter what. That yes, you can access the same app on different devices – but it will reflect the place you're using it more than anything else. That would be ideal. I'm not sure how, or if, we should get back there.

Welcome to Mac

There have been a few things swirling around the web this week about Apple's designs and how maybe they're not the pinnacle of everything anymore; in those posts and thoughts are strands suggesting that Apple's stuff has lost its humanity.

I agree wholeheartedly. But I wanted to share a reason why.

I switched from the PC to the Mac in 2000. I've written endlessly about my reasons for switching but most importantly, I put them on phonezilla.net (the earlier version of this website). Over the course of the day I found my lil' "I bought a Mac, yay!" post was picked up by major Mac news sites like Macintouch, The Mac Observer, et cetera. Along with that I started getting emails. There were a few, "YOU BOUGHT A TOY!!!11!11!" types but the overwhelming majority were strangers congratulating me and welcoming me to the Mac community.

It was something I hadn't experienced since my C64 & C128 days.

However, one letter stuck with me. It was a long one. It opened with a hearty welcome, brief introduction, and so on. And then the author went into detail about how, with a Mac, I had an amazing amount of power: the ability to more directly tell stories. Not just my stories, but the stories of other people – people who needed to be heard. He equated the video editing/recording revolution of the early 2000s with the desktop publishing revolution of the 1980s (and, I think, was mostly right). The letter closed by asking, "With a Mac and a digital video camera, you've got this awesome power. What stories will you tell?"

I am sure I replied to him and thanked him. Pretty sure I got no reply. And worse, I no longer have the email (happens!). I also can't quite picture this happening today. Maybe. But it feels unlikely.

Computers and how we use them have changed. They'll continue to do so. But in the early-to-mid-2000s, I think we really closed the chapter on large-scale computer clubs, hobbyists, SIGs, and all of those other community markers from the prior couple of decades.

 

Clunk

I've had a few thoughts on computers stewing in my noggin over the past few weeks.

Smartphones

I've come to the conclusion that, as a general form factor, I still prefer laptop computers over smartphones. I know! This surprised me too, given I once thought the iPad would obviate my need for a laptop. But I realized this: there are tasks that are just a giant pain in the ass on my phone. Consuming and reviewing a lot of data, and analyzing that data. Handling multiple tasks. Typing. iOS, especially, is trying to push from a specialized computing platform to a generalized one (see iPad Pro). The Mac, and laptops generally speaking, are just better at some things. I'm deeply concerned about the no-holds-barred push to mobile, where there is less of an opportunity for openness (hi! what browser runs on your iPhone? who runs the App Store?) and creative ideas.

That's not to say there won't be any. But, shit, animoji? And scanning your face? This is what Apple is applying technology to when we still have gaping holes elsewhere in the broader UI? Yeah. Well.

Keyboards v. WIMP

The great Sarah Emerson linked up this rather good rant from @gravislizard about GUIs versus keyboard-driven interfaces. My first instinct was negative because of the lead tweet – I was using computers in 1983, too, and no one can tell me that loading programs from a tape drive on a VIC-20 was faster than opening a damn tab in a browser. I stand by this.

However, my initial "SOMEONE IS WRONG ON THE INTERNET" stance faded as I read through it. It's not something I perfectly agree with, but I largely do. There are salient points and contextual items I want to call out.

  • GUIs were positioned as a way forward specifically because keyboard interfaces were deemed too complex for the broader public. And they were! Commands had to be memorized or looked up in a reference manual. Knowing how to program – even a bit – was a requirement, even if all someone wanted to do was load up a CB simulator. This was and is hard for a lot of people, and popularizing an abstraction layer based on visuals instead of words was one broad design solution created to address that. Commands were made obvious by menus and by putting things on screen. Remember: we had keyboard overlays to help us.

  • That all said, the keyboard is a highly adaptable and flexible interface. No interface is perfect but, for many tasks, it can absolutely be faster (especially POS systems, as called out in the thread). Repetitive data entry, rote tasks, et al.

  • As GUIs matured – and thinking about consumer window/icon/mouse/pointer (WIMP) interfaces specifically – we got to a point where a large amount of design debt, both conceptual and functional, piled up and was never fixed. In lieu of fixing it, large companies (Apple, Microsoft, Google) moved to touch-based interfaces. This lack of attention is why windows can still steal focus whilst you're typing on a modern desktop OS in 2017. It could be fixed; these companies just don't care.

  • People should always strive to make computer interfaces easier for other people. But education fell out of this picture a long time ago, and that put all of the burden on the interface to explain itself. Instead of interfaces becoming simpler and more clear, they became more complex – this is general-purpose computing, so it stands to reason. Our technology nowadays implies it's simple due to a lack of a manual; this is a lie. Manuals are not bad. Interfaces, sometimes, do need to be explained. And "no interface is the best interface" is bullshit.

  • I absolutely agree that interfaces, generally speaking, aren't working as hard as they could for users. Google Maps is a great example called out in the thread. Mapping software on phones and computers, generally, stagnated. I suspect this is because mapping in those contexts is seen as a byproduct of navigation, which is something that can be automated (and is, and will be). Ostensibly, these products support other use cases for maps (exploration, education), but it's obvious that's not what they're designed for.

  • General-purpose computing is missing and has missed a lot of opportunities. It's shifted to watching television, reading the news, communicating with people, and photo/video. There's a lot of untapped potential that is constrained by interfaces and the state of the industry.

I'm not completely pessimistic here. I think there's a lot of places computing can go, but the current setup of Apple and Google running the show, effectively, has severely constrained new ideas and cast aside the spirit of tinkering and exploring that existed in the industry 30-40 years ago. It feels more and more like there's just one path forward for computing products from a consumer perspective – which is false and terribly unimaginative.

Things that give me hope in interface design and general-purpose computing? Pfft, us finally collectively talking about ethics and realizing that a lot of design decisions made over the past few decades were really bad ones. That helps me imagine a more just world, a more equitable world where technology is used for good and serves genuine needs.

I'm also holding out a tiny sliver of hope for the World Wide Web because the non-commercial web still exists at all, and that's a good thing. Despite the increasing complexity of front-end code (trying to put a size 12 foot into a size 5 flat), it's still relatively easy to put things on the web. That remains powerful and democratic. That remains exciting. And that's a case where, when used well, technology can bring us together.

Editor's note: This is a new format I'm trying for posts – much more drafty, less polished. Although it may not appear this way, it has been edited since first published.