Posts Tagged: GUI

Apple’s Knowledge Navigator, Voice Agents and Adaptive Feedback

Apple’s 1987 concept video of the Knowledge Navigator (a conceptual idea at the time)

“In 1989 [sic; the video dates to 1987], Apple released a celebrated video entitled The Knowledge Navigator, which deposited a genteel, tuxedoed actor in the upper-right-hand corner of a Powerbook.”

What is even more interesting:

“The ‘professor’ video was set in September 2011. In October 2011, Apple re-launched Siri, a voice-activated personal assistant software vaguely similar to that aspect of the Knowledge Navigator.”


In doing this, Apple were pushing the boundaries of how we access and interact with computers and information. They brought forward ideas of how technology like this could fit seamlessly into our day-to-day lives, embodying a variety of tasks, from calling a friend or colleague to finding in-depth scientific research journals. It’s not as though Apple were the first to think up the digital assistant; science fiction had been there already (think of Star Trek’s Data). But that was exactly that: science fiction. Apple’s video took these ideas out of science fiction and into our households, in the same way they did with the personal computer. I think this is a testament to the importance of pushing innovation and not being afraid to step beyond what we currently know to be possible. Even if the technology available at the time cannot physically realise an idea yet, we should continue to dream it. If Apple had not had these thoughts in 1987, would we have something as sophisticated as Siri today?

“Much of the GUI’s celebrated ease of use derives from its aptitude for direct manipulation […] But agents don’t play by those rules. They work instead under the more elusive regime of indirect manipulation”

“The original graphic-interface revolution was about empowering the user—making “the rest of us” smarter, and not our machines”

“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another”

—Steven Johnson, Interface Culture

(Did they predict the ubiquity of ‘the cloud’?)

“The ultimate goal of the more ambitious agent enthusiasts, however, goes well beyond software that dutifully does what it’s told to do—book airline tickets, sell stock. The real breakthrough, we’re told, will come when our agents start anticipating our needs”

—Steven Johnson, Interface Culture

Siri and other voice-activated agents might not be there just yet, but the fact that they exist at the sophisticated level they do prompts the question “why is this not perfect?”, which in turn inspires continued innovation until it is just that: a perfect response to our needs. It’s likely that in the future, Siri (and others) will integrate even more seamlessly into our daily lives. Instead of just responding to our commands (find me coffee, wake me up in 8 hours, call Steve, etc.), they will anticipate our needs: “Wake me up in 8 hours.” “Ok, I’ll wake you up in 8 hours, but you haven’t locked your front door yet or turned off the downstairs lights. Would you like me to?” (integration with HomeKit). Or perhaps: “Siri, find me coffee.” “Ok, the nearest is Starbucks, head north and turn left.” But what if Siri knows you don’t like Starbucks, and you prefer checking out local independents rather than a chain? Maybe the response will be followed by “…but there’s an independent an extra mile away. Head west and turn right”.

This links to Firefly, a music recommendation service founded in 1995. Johnson states that “What makes the [Firefly] system truly powerful is the feedback mechanism built into the agent”. The fact that the agent responded to your ratings of various records to further tailor its subsequent recommendations is what set it apart and gave it an edge. In other words, it was the ability to adapt. Feedback, in its many forms, is a recurring principle of powerful interaction design. I would call this kind of feedback Adaptive Feedback.
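The feedback loop Johnson describes can be reduced to a tiny sketch: each rating nudges a learned preference weight, which then reorders future suggestions. This is only an illustration of the principle, not Firefly’s actual algorithm, and all the names here are made up:

```python
from collections import defaultdict

class AdaptiveRecommender:
    """Toy sketch of Firefly-style Adaptive Feedback: each rating of a
    record nudges a per-genre preference weight, and future
    recommendations are reordered by those weights."""

    def __init__(self, catalogue):
        # catalogue: list of (title, genre) pairs
        self.catalogue = catalogue
        self.weights = defaultdict(float)  # genre -> learned preference
        self.seen = set()

    def rate(self, title, genre, rating):
        # rating on a 1..5 scale; centring on 3 lets low ratings
        # push a genre's weight down as well as up
        self.weights[genre] += rating - 3
        self.seen.add(title)

    def recommend(self, n=3):
        unseen = [r for r in self.catalogue if r[0] not in self.seen]
        # surface the highest-weighted genres first
        unseen.sort(key=lambda r: self.weights[r[1]], reverse=True)
        return [title for title, _ in unseen[:n]]
```

The point is not the arithmetic but the loop: the user’s feedback flows straight back into the next recommendation, which is what made the system feel adaptive.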

A small link to this is a minor, but very useful, aspect of Apple’s expandable ‘Save’ dialog. The dialog uses Progressive Disclosure to show the user more or fewer customisation options when saving a file to their hard disk.

Giles Colborne sums up the merits of this design in his book, Simple and Usable:

“The Save dialog box is a classic example of this. The basic feature is nothing more than two core questions:

  • what would you like to call this file?
  • where, from a list of options, would you like to save it?

But experts want something richer: extended options to create a new folder for the document, to search your hard disk for places to save the document, to browse your hard disk in other ways, and to save the file in a special format.

Rather than show everything, the Save dialog box opens with the mainstream version but lets users expand it to see the expert version.

The box remembers which version you prefer and uses that in the future. This is better than automatic customization because it’s the user who chooses how the interface should look. This is also better than regular customizing because the user makes the choices as she goes, rather than having a separate task of creating the menu. This means mainstreamers aren’t forced to customize. That model, of core features and extended features, is a classic way to provide simplicity as well as power.”

This is only a very minor example of an interface simply showing characteristics of Adaptive Feedback. The true potential of this type of feedback and anticipation of user needs is even greater, but it’s important to consider if details like this could help on a smaller scale too.

But how does this link to video game interfaces? A quick example could be this: imagine a player has just finished a tough wave of combat and has taken cover nearby to protect themselves, as their health is very low. The player quickly opens up their inventory. Perhaps the interface of the game can interpret this, and anticipate that the player’s priority is probably to use a Medical Kit or Health Potion and heal themselves. The interface could then put this option in the forefront or highlight it somehow—similar to how Google and Apple gave me my most recent documents first—to save the player time in this crucial, tense moment of running low on health.

That is, of course, if the game wants to help the player. Some gameplay may benefit from making healing during combat more difficult, rather than easy, in order to more accurately convey a feeling of desperation, tension or realism. As well as this, what if the constantly changing position of items in the inventory actually hindered the player? Once they had learnt where things were, it wouldn’t work too well if the game went and changed this each time (not dissimilar to how supermarkets move things around to encourage shoppers to look around more). These are all questions whose answers depend on the particular design in question.
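The trade-off described above can be made concrete in a few lines: surface healing items only when the context calls for it, and otherwise leave the learnt order alone. This is a hypothetical sketch, with invented item fields and an arbitrary threshold:

```python
def order_inventory(items, health_fraction, low_health_threshold=0.25):
    """Toy sketch of a context-aware inventory. When the player's
    health fraction drops below the threshold, healing items are
    surfaced first; otherwise the stable, learnt order is preserved
    so positions don't shuffle underneath the player."""
    if health_fraction < low_health_threshold:
        healing = [i for i in items if i.get("heals")]
        rest = [i for i in items if not i.get("heals")]
        return healing + rest
    # stable path: don't move what the player has already memorised
    return list(items)
```

Keeping the reorder conditional is one answer to the supermarket problem: the interface only breaks the learnt layout when the anticipated need is strong enough to justify it.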

Dynamic interfaces and innovation in how we view information

While reading Interface Culture, I came across Johnson’s discussion of an early (1996) Apple concept known as V-Twin.

“In early 1996, Apple began showing a functional demo of its new Finder software, in which all file directories include a category for “most representative words”. As you change the content of the document, the list of high-information words adjusts to reflect the new language. At first glance, this may seem like a superficial advance—a nifty feature but certainly nothing to write home about. And yet it contains the seeds of a significant interface innovation”

“Apple’s list of high information words raises the stakes dramatically: for the first time, the computer surveys the content, the meaning of the documents”

“Apple’s new Finder was the first to peer beyond the outer surface, to the kernel of meaning that lies within. And it was only the beginning.”

“It is here that the real revolution of text-driven interfaces should become apparent. Apple’s V-Twin implementation lets you define the results of a search as a permanent element of the Mac desktop—as durable and accessible as your disk icons and the subfolders beneath them.”

With OS X Tiger in 2005, the original idea behind V-Twin and Views shipped as Apple’s new ‘Smart Folders’. I find it interesting to note that it took nearly a decade for this innovation to reach and be received by the masses.
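The “most representative words” idea can be sketched crudely as a frequency count over content words, recomputed whenever the text changes. Apple’s real index was far more sophisticated than this; the sketch just shows how the list naturally adjusts as a document’s language adjusts:

```python
import re
from collections import Counter

# a tiny illustrative stopword list; a real system would use a proper one
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

def representative_words(text, n=5):
    """Crude sketch of a V-Twin-style 'most representative words'
    list: the highest-frequency content words in a document. Re-run
    it after an edit and the list reflects the new language."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]
```

Even this naive version captures the shift Johnson points to: the computer is summarising what a document is *about*, not just recording its name, size and date.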

With Views came a few breaks in consistency, as the function of these folders differed from regular folders, and therefore so did their various interactive behaviours. Certain actions may not happen the way the user is accustomed to, based on their existing knowledge of Apple’s set-in-stone interface conventions. “Wasn’t the user experience supposed to be all about consistency?” As Johnson puts it: “The fact that the View window departs so dramatically from the Mac conventions indicates how radical the shift really is, even if it seems innocuous at first glance.”

I think this is particularly relevant when it comes to my questioning of ‘Innovation vs. Convention’, which I plan to discuss in detail towards the end of my Research Report. Here, in Apple’s example, breaking convention was a necessary result of innovation. Certain conventions could not physically exist within this innovation, as they directly contradicted the function of the innovation itself.

Further reading on Views can be found on Johnson’s website.

“The contents of the view window, in other words, are dynamic; they adapt themselves automatically to any changes you make to the pool of data on your hard drive.”

“If there is a genuine paradigm shift lurking somewhere in this mix—and I believe there is—it has to do with the idea of windows governed by semantics and not by space. Ever since Doug Engelbart’s revolutionary demo back in 1968, graphic interfaces have relied on spatial logic as a fundamental organisational principle.”
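The essential property Johnson describes is that a view is a saved query, not a container: its contents are recomputed from the live pool of files each time it is opened. A minimal sketch of that idea, with invented file fields:

```python
class SmartFolder:
    """Sketch of a view / Smart Folder: a persistent query rather
    than a container. Contents are computed from the shared pool of
    files on demand, so changes elsewhere appear automatically."""

    def __init__(self, pool, predicate):
        self.pool = pool          # shared, mutable list of file dicts
        self.predicate = predicate

    def contents(self):
        return [f for f in self.pool if self.predicate(f)]

pool = [{"name": "notes.txt", "tag": "report"},
        {"name": "draft.txt", "tag": "misc"}]
report = SmartFolder(pool, lambda f: f["tag"] == "report")

# a new matching file appears in the view without any user action
pool.append({"name": "refs.txt", "tag": "report"})
```

This is also why some regular-folder conventions cannot survive inside a view: you can’t meaningfully “drag a file out” of a window whose membership is defined by a predicate rather than by location.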

I find a link to modern interface design here. For example, I have a multitude of documents in my Google Drive. When I open the Google Docs web app, I am faced first with the option to create a new document, followed by a list of all of my documents. However, Google doesn’t automatically order these by Name or Type—as some apps may default to—but by Date.

A screenshot from my Google Docs home-page, showing the hierarchy of where I will go when searching for the document I want.

The above screenshot illustrates this. When I arrive here, once I have decided that creating a new document is not my goal, I focus my attention on the next ‘chunk’ of information (highlighted in red). Google has decided that it’s very likely I’ll want to resume working on something I have opened recently. Even if I haven’t opened the document I want in the very recent past, chances are it’s one of those I opened in the past month (orange). Failing that, the search bar is only a few pixels away at the top of the screen, so I can search precisely for what I want.

The majority of the time, though, Google’s first instinct is exactly right, and the document I came in search of has pride of place in the hierarchy of the information. This is an example of the application organising information (or files) by a meaning it has perceived through interpreting more static, regimented data (when a user last opened a file). The application associates ‘recently used’ with ‘higher priority’. As I said, the majority of the time this is very accurate: it is probably my research report files (for this exact report) that I want to access, as that is almost solely what I am working on currently. However, sometimes that may not be the case. Perhaps I am working on my report but want to dig out an older document that I find relevant to what I’m working on now, in order to reference it. This organisation of information does not interfere with that; as I explained above, everything else is still close at hand.

Apple also does this in their modern interfaces. Let’s take a look at my (suitably and conveniently disorganised) desktop at this present moment.

A few moments before taking the screenshot above, I’d taken another screenshot. At the time, it’s likely I was planning to use it for something very soon after, whether that was to share it, move it to a different location, or open it in an image-editing app to annotate it. I know that screenshots save to my desktop, so my first step would be to open my desktop (in this case, I’ve opened it as a folder, rather than going to the actual desktop itself; I could also have opened ‘All My Files’ and the result would be the same). As you can see, Apple have kindly organised my desktop files by Date Last Opened. This places my recently taken screenshot, again, in a prominent position set aside from the rest: it’s right at the top, under a section specifically for files I have created today. It is the first file I see; this makes the workflow of Save Screenshot → Find Screenshot → Share Screenshot (something I do often) about as streamlined as it has ever been.

The same principle would apply in a range of different scenarios, for example if I had saved an image from the web or from another application. It is also worth mentioning that I can further organise the screen above. Apple gives you the option to organise the files here first by Date Last Opened (sectioning them into Today, 7 Days, 30 Days, as above), but within those sections you can further organise them by Name, Kind, etc. So, you might know that the file you are looking for was opened in the last week, and that it begins with a particular letter; combining those details with Apple’s sorting lets you find it in a fraction of the time it would take if you were faced with your entire desktop listed A → Z.
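The two-level sort described above (recency sections, then name within each section) is simple to express. A sketch with illustrative field names, not Finder’s actual implementation:

```python
from datetime import datetime, timedelta

def group_by_recency(files, now=None):
    """Sketch of Finder-style sectioning: bucket files into Today /
    Last 7 Days / Last 30 Days / Older by last-opened time, then
    sort by name within each section."""
    now = now or datetime.now()
    sections = {"Today": [], "Last 7 Days": [], "Last 30 Days": [], "Older": []}
    for f in files:
        age = now - f["last_opened"]
        if age < timedelta(days=1):
            sections["Today"].append(f)
        elif age < timedelta(days=7):
            sections["Last 7 Days"].append(f)
        elif age < timedelta(days=30):
            sections["Last 30 Days"].append(f)
        else:
            sections["Older"].append(f)
    # secondary sort: alphabetical within each recency section
    for bucket in sections.values():
        bucket.sort(key=lambda f: f["name"].lower())
    return sections
```

The design point is that the coarse key (recency) carries the perceived meaning, while the fine key (name) keeps each section scannable.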

This is just a small example of how modern interface designers are streamlining our workflows by interpreting and extracting meaning from data. This particular example isn’t even as complex as interpreting the content of documents, merely the time they were created or last opened. It also stands as an example of how very early interface design (going back to Apple’s 1996 Views) paved the path of innovation with their own breakthroughs along the way — not just in the form of new metaphors or visuals, but by questioning the way we think about and utilise data and information. The notion of a semantic—rather than solely spatial—file system is one of these.

“What the view window does, in effect, is say this: why not organise the desktop according to another illusion? Instead of space, why not organise around meaning?”