Posts Tagged: Agents

Apple’s Knowledge Navigator, Voice Agents and Adaptive Feedback

Apple’s 1987 video of the Knowledge Navigator (a conceptual idea at the time)

“In 1989 [or 1987?], Apple released a celebrated video entitled The Knowledge Navigator, which deposited a genteel, tuxedoed actor in the upper-right-hand corner of a Powerbook.”

What is even more interesting:

“The “professor” video was set in September 2011. In October 2011, Apple re-launched Siri, a voice activated personal assistant software vaguely similar to that aspect of the Knowledge Navigator.”

Wikipedia

In doing this, Apple were pushing the boundaries of how we access and interact with computers and information. They brought forward ideas of how technology like this could fit seamlessly within our day to day lives, handling a variety of tasks, from calling a friend or colleague to finding in-depth scientific research journals. It’s not as though Apple were the first to think up the digital assistant; think of science fiction like Star Trek’s Data character. But that was exactly that: science fiction. Apple’s video took these ideas out of science fiction and into our households, in the same way they did with the Personal Computer. I think this is a testament to the importance of pushing innovation and not being afraid to step beyond what we currently know to be possible. Even if the technology available at the time cannot physically realise an idea yet, we should continue to dream it. If Apple had not had these thoughts in 1987, would we have something as sophisticated as Siri today?

“Much of the GUI’s celebrated ease of use derives from its aptitude for direct manipulation […] But agents don’t play by those rules. They work instead under the more elusive regime of indirect manipulation”

“The original graphic-interface revolution was about empowering the user—making “the rest of us” smarter, and not our machines”

“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another”

—Steven Johnson, Interface Culture

(Did they predict the ubiquity of ‘the cloud’?)

“The ultimate goal of the more ambitious agent enthusiasts, however, goes well beyond software that dutifully does what it’s told to do—book airline tickets, sell stock. The real breakthrough, we’re told, will come when our agents start anticipating our needs”

—Steven Johnson, Interface Culture

Siri and other voice activated agents might not be there just yet, but the fact they exist at the level of sophistication they do raises the question “why is this not perfect?”, which in turn inspires continued innovation until it is exactly that: a perfect response to our needs. It’s likely that in the future, Siri (and others) will integrate even more seamlessly into our daily lives. Instead of just responding to our commands (find me coffee, wake me up in 8 hours, call Steve, and so on), they will anticipate our needs: “Wake me up in 8 hours” “Ok, I’ll wake you up in 8 hours, but you haven’t locked your front door yet or turned off the downstairs lights. Would you like me to?” (integration with HomeKit). Or perhaps “Siri, find me coffee” “Ok, the nearest is Starbucks, head north and turn left”. But what if Siri knows you don’t like Starbucks, and you prefer checking out local independents rather than a chain? Maybe the response will be followed by “…but there’s an independent an extra mile away. Head west and turn right”.
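As a very rough sketch of that kind of anticipation (entirely hypothetical; the shop data, the preference flag and the suggest_coffee helper are my own illustration, not anything Siri actually does), the underlying logic is little more than a stored preference applied as a filter before the assistant answers:

    # Hypothetical sketch: filter suggestions through a stored user preference
    # before responding, rather than simply returning the nearest match.

    from dataclasses import dataclass

    @dataclass
    class CoffeeShop:
        name: str
        is_chain: bool
        distance_miles: float
        directions: str

    def suggest_coffee(shops, prefers_independents):
        """Return the nearest shop, plus an alternative if the user's
        stored preference (independents over chains) suggests one."""
        nearest = min(shops, key=lambda s: s.distance_miles)
        response = f"Ok, the nearest is {nearest.name}. {nearest.directions}"

        if prefers_independents and nearest.is_chain:
            independents = [s for s in shops if not s.is_chain]
            if independents:
                alt = min(independents, key=lambda s: s.distance_miles)
                extra = alt.distance_miles - nearest.distance_miles
                response += (f" ...but there's an independent {extra:.0f} mile(s) "
                             f"further away: {alt.name}. {alt.directions}")
        return response

    shops = [
        CoffeeShop("Starbucks", True, 0.5, "Head north and turn left."),
        CoffeeShop("Local Roasters", False, 1.5, "Head west and turn right."),
    ]
    print(suggest_coffee(shops, prefers_independents=True))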

This links to Firefly, a music recommendation service founded in 1995. Johnson states that “What makes the [Firefly] system truly powerful is the feedback mechanism built into the agent”. The fact that the agent responded to your ratings of various records to further tailor the following recommendations is what set it apart and gave it an edge. In other words, it was the ability to adapt. Feedback, in its many forms, is a recurring principle of powerful interaction design. I would call this kind of feedback Adaptive Feedback.
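Firefly’s real system used collaborative filtering across many users’ ratings, but the core loop of Adaptive Feedback can be sketched far more simply: every rating nudges the agent’s model of your taste, and the next recommendation reflects it. The weighting scheme below is my own hypothetical illustration, not Firefly’s actual algorithm:

    # Hypothetical sketch of an adaptive-feedback loop: each rating updates a
    # per-genre taste score, and recommendations are re-ranked accordingly.

    from collections import defaultdict

    class AdaptiveRecommender:
        def __init__(self, catalogue):
            self.catalogue = catalogue            # list of (title, genre) pairs
            self.taste = defaultdict(float)       # genre -> learned preference
            self.seen = set()

        def rate(self, title, genre, rating):
            """Feed a rating (1-5) back into the agent; 3 is neutral."""
            self.taste[genre] += rating - 3
            self.seen.add(title)

        def recommend(self):
            """Suggest the unseen record whose genre currently scores highest."""
            unseen = [r for r in self.catalogue if r[0] not in self.seen]
            return max(unseen, key=lambda r: self.taste[r[1]], default=None)

    agent = AdaptiveRecommender([
        ("Blue Train", "jazz"), ("Kind of Blue", "jazz"), ("Nevermind", "grunge"),
    ])
    agent.rate("Blue Train", "jazz", 5)      # positive feedback...
    print(agent.recommend())                 # ...tilts the next pick towards jazz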

A small link to this is a minor, but very useful, aspect of Apple’s ‘Save’ dialog. The dialog uses Progressive Disclosure to show the user more or fewer customisation options when saving a file to their hard disk.

Giles Colborne sums up the merits of this design in his book, Simple and Usable:

“The Save dialog box is a classic example of this. The basic feature is nothing more than two core questions:

  • what would you like to call this file?
  • where, from a list of options, would you like to save it?

But experts want something richer: extended options to create a new folder for the document, to search your hard disk for places to save the document, to browse your hard disk in other ways, and to save the file in a special format.

Rather than show everything, the Save dialog box opens with the mainstream version but lets users expand it to see the expert version.

The box remembers which version you prefer and uses that in the future. This is better than automatic customization because it’s the user who chooses how the interface should look. This is also better than regular customizing because the user makes the choices as she goes, rather than having a separate task of creating the menu. This means mainstreamers aren’t forced to customize. That model, of core features and extended features, is a classic way to provide simplicity as well as power.”

This is only a minor example of an interface showing characteristics of Adaptive Feedback. The true potential of this type of feedback and anticipation of user needs is even greater, but it’s important to consider whether details like this could help on a smaller scale too.
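A minimal sketch of the “remembers which version you prefer” behaviour Colborne describes might look something like this (the dialog model and the JSON preference file are hypothetical stand-ins; macOS implements this very differently):

    # Hypothetical sketch of progressive disclosure with a remembered preference:
    # the dialog opens in whichever state (basic or expanded) the user last chose.

    import json, os

    PREFS_PATH = "save_dialog_prefs.json"

    def load_prefs():
        if os.path.exists(PREFS_PATH):
            with open(PREFS_PATH) as f:
                return json.load(f)
        return {"expanded": False}               # mainstream version by default

    def save_prefs(prefs):
        with open(PREFS_PATH, "w") as f:
            json.dump(prefs, f)

    class SaveDialog:
        def __init__(self):
            self.prefs = load_prefs()
            self.expanded = self.prefs["expanded"]   # restore last-used state

        def toggle_details(self):
            """User expands or collapses the dialog; remember the choice."""
            self.expanded = not self.expanded
            self.prefs["expanded"] = self.expanded
            save_prefs(self.prefs)

        def fields(self):
            basic = ["file name", "location"]
            expert = ["new folder", "search", "browse", "file format"]
            return basic + (expert if self.expanded else [])

    dialog = SaveDialog()
    print(dialog.fields())       # opens however the user left it last time

The point of the sketch is the same one Colborne makes: the user, not the software, decides which version appears, and that decision is made in passing rather than as a separate customisation task.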

But how does this link to video game interfaces? A quick example could be this: imagine a player has just finished a tough wave of combat and has taken cover nearby to protect themselves because their health is very low. The player quickly opens up their inventory. Perhaps the interface of the game can interpret this and anticipate that the player’s priority is probably to use a Medical Kit or Health Potion and heal themselves. The interface could then bring this option to the forefront or highlight it somehow (similar to how Google and Apple gave me my most recent documents first) to save the player time in this crucial, tense moment of running low on health. That is, of course, if the game wants to help the player. Some gameplay may benefit from making healing during combat more difficult, rather than easier, in order to more accurately convey a feeling of desperation, tension or realism. As well as this, what if the constant changing of where something sits in the inventory actually hindered the player? Once they had learnt where things were, it wouldn’t work too well if the game went and changed this each time (not dissimilar from how supermarkets move things around to encourage shoppers to look around more). These are all questions whose answers depend on the particular design.
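As a rough sketch of the helpful version (hypothetical; the item names, health threshold and scoring rule are placeholders for whatever a particular design calls for), the interface only needs to re-rank the inventory against the current player state, and a stable sort keeps the familiar order whenever the context doesn’t demand a change:

    # Hypothetical sketch: re-rank inventory items against the player's current
    # state so the most likely need (here, healing at low health) is surfaced first.

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        category: str          # "healing", "weapon", "ammo", ...

    def priority(item, health, in_combat):
        """Higher score = shown more prominently. Thresholds are placeholders."""
        score = 0.0
        if item.category == "healing" and health < 0.3:
            score += 10                    # low health: surface med kits/potions
        if item.category == "ammo" and in_combat:
            score += 5
        return score

    def arrange_inventory(items, health, in_combat):
        # Stable sort: items with equal scores keep their learned, familiar order,
        # so the layout only shifts when the context genuinely calls for it.
        return sorted(items, key=lambda i: -priority(i, health, in_combat))

    inventory = [Item("Pistol", "weapon"), Item("Medical Kit", "healing"),
                 Item("9mm Rounds", "ammo")]
    print([i.name for i in arrange_inventory(inventory, health=0.15, in_combat=True)])
    # -> ['Medical Kit', '9mm Rounds', 'Pistol']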