As well as looking to art history itself, the rise and history of interaction design also offers an interesting perspective. The invention of the Graphical User Interface (GUI, pronounced ‘Gooey’) was a significant turning point in design, hugely influencing how we interact with technology and—consequently—the influence of that technology on our lives. As video games are fundamentally interactive experiences, the advancements of the GUI and other developments within Human–Computer Interaction (HCI) play a crucial role in the evolution of gameplay.
To trace the roots of the GUI, it may seem logical to travel back to the dawn of Personal Computers (PCs). After all, what use would we have for a graphical interface before then; what would we possibly put it on? However, ideas of such an interface can indeed be traced back much further than the personal computer, far before technology was capable of realising them.
The idea dates back to the late 1930s, when Vannevar Bush began writing about a hypothetical device named the Memex, eventually described in his 1945 essay As We May Think. His idea would have far-reaching influence on interface design, long after its time.
The Memex, pictured above, was envisioned as a desk with two slanted display screens, a keyboard and a scanner attached to it. The idea was to give the user access to all human knowledge through associative connections very similar to the hyperlinks we are familiar with today. That this idea was conjured as early as the 1930s is hugely interesting, and the way in which technology has panned out since is remarkably close to Bush’s vision.
“The irony here, is that a middle-aged army scientist, writing thirty years before the first PC, understood interactivity better than all the Web titans in Silicon Valley. […] After all, sometimes the best way to understand a technology is to approach it with no expectations, no preconceived ideas. Unhampered by any historical precedent”
—Steven Johnson, Interface Culture
We do, in the present day, have access to a huge expanse of human knowledge through the internet. Whether this is exactly what Bush had in mind or not is unclear, but the concept is certainly not too far a stretch — especially considering that our method of interacting with that knowledge is through graphical displays, input devices and a system of hyperlinks, all very much like what Bush described.
It is innovators like Vannevar Bush who we can thank for paving the way towards the methods of interaction we often take for granted now. In the words of Frank Chimero, innovators “do not stand on the inside of what is possible and push; they imagine what is just outside of what we deem possible and pull us towards their vision of what is better. They can see through the fog of the unexplored spaces and notice a way forward”.
I think this is a quintessential way of describing not only the ideas of Bush, but of all the other hugely influential innovators of interaction, some of whom I will briefly cover in the following sections. In fact, I could go as far as to say that Chimero’s words epitomise the very core sentiment of this entire paper.
One of those innovators who pulled us toward their vision with great impact is Douglas Engelbart. Although the Memex was never built, owing to the limits of the technology of its time, its ideas proved hugely influential later in the century. Engelbart, often considered the ‘father’ of the GUI, began work on a machine which would serve to augment human intellect. He drew on Vannevar Bush’s essay to conceptualise such a machine, one where the user could build models of information graphically and navigate around them dynamically.
In 1962 this was a huge leap of thinking, undoubtedly difficult for most people to comprehend; the computers of the day were room-filling mainframes operated only by specialists. Despite being a difficult concept to sell, by 1968 his ideas, technology and team had matured sufficiently, and he demonstrated his work publicly in front of over a thousand computer professionals.
This was the “public debut of the computer mouse”, but it was only “one of many innovations demonstrated that day” [Source]. The mouse was mechanically different to modern mice, but the way in which the user interacted with it is virtually identical. This demo would spark the widespread adoption of the GUI and therefore “dramatically changed the way in which humans and computers interact” according to Johnson (1997), who continues: “The visual metaphors that Doug Engelbart’s demo first conjured up in the sixties probably had more to do with popularizing the digital revolution than any other software advance on record”
Johnson’s comments certainly carry weight, as can be seen simply by noting the similarities between Engelbart’s 1968 demo and modern technology. Engelbart was undoubtedly one of the most influential figures in interface design; his technological advancements provided a solid base for designers to work upon in the coming years. Although his work was focused more on mechanics and physical technology than on design, without his contributions GUIs would not have had the technological foundation they needed in order to eventually become what we experience today.
Researchers at Xerox PARC were amazed by Engelbart’s demonstration, which inspired their creation of the Xerox Alto in 1973. Despite being a commercial failure, the Alto is widely considered a major influence on interaction design, making some important breakthroughs with the design of its GUI. It was widely used for research purposes [Source] and therefore allowed for further developments within Human–Computer Interaction.
The Alto began with an interface that resembled a command terminal more than a desktop environment, but it eventually hosted Smalltalk, developed through the early-to-mid 1970s. Originally conceived as a programming environment, Smalltalk went on to become the first modern GUI, introducing the earliest icons and pop-up menus.
As well as this, the Alto also demonstrated the first use of the diagonal-pointing bitmap cursor we recognise in modern computing today. What is most notable about this particular cursor is its behaviour – it alternated between different shapes depending on the task. For example, today the cursor may change into a hand when grabbing; a watch, spinning wheel or hourglass when loading; various arrows for resizing; and so on. This is a significant early example of visual feedback, a crucial element of interface design.
A final advancement to note is that the Alto also inspired the creation of Alto Trek, one of the first networked multiplayer video games, which was also the first game to utilise the mouse, and which would later inspire the creation of Microsoft’s Allegiance [Source]. It is clear that Xerox contributed many important and varied developments within interaction design.
One of the most crucial of all these developments, though, occurred by inspiring the work of another important innovator — Steve Jobs.
As established, the influences of the Xerox team were far-reaching, but their effect on the developments at Apple was one of—if not the—most crucial of all; what Steve Jobs would go on to create from the revelation he experienced during a visit to Xerox PARC would alter the landscape and direction of user interface and experience design indefinitely.
Development of the Apple Lisa began in 1978, with some members of the team being former members of the Xerox PARC group. The project was to design a powerful personal computer with a Graphical User Interface, targeted toward business users.
The Lisa used a desktop metaphor and saw the birth of the first pull-down menu bar, with each menu always appearing horizontally across the top of the screen. This is just one convention created then that still exists, almost entirely unchanged, in Mac OS X today (at the time of writing, the current version is OS X 10.11 El Capitan, as shown in the figure above). The Lisa also introduced many other elements which we take for granted today: check-marks for selected menu items; keyboard command shortcuts; greyed-out inactive items; the trash can; the use of icons to represent the entire file system; drag-and-drop; double-clicking — just to name a few. The developments here allowed for progress towards a universal structure for organising information on the screen, in a way that is familiar, versatile and user-friendly.
However, it wasn’t exactly the Lisa itself that went on to make history. Despite being such an advanced machine, sales were limited mostly due to the $10,000 price tag and difficulty of writing software for it. This called for a much more simplified, lower cost version of the Lisa. Steve Jobs took the task upon himself and achieved this goal with the original Apple Macintosh, which was introduced to the world in dramatic and iconic fashion in 1984, retailing for $2,495. It retained most of the GUI features of the Lisa, and even shared some of its low-level code, but the operating software itself was written from scratch to fit in the small memory footprint. It was this machine that would succeed, and mark a significant turning point for interface design.
Questions surrounding who invented what, or who stole from whom, are often hotly debated within the technology industry. However—regardless of personal opinions or accusations—if artists, entrepreneurs and inventors didn’t take influence from one another, that would be a true hindrance to innovation. Influence is an intrinsic, fundamental element that is necessary for innovation to happen.
This sentiment can, in fact, be best described by a haiku written by Yosa Buson.
“Lighting one candle
with another candle—
spring evening.”
—Yosa Buson

In the words of Frank Chimero,
“Buson is saying that we accept the light contained in the work of others without darkening their efforts. One candle can light another, and the light may spread without its source being diminished.”
As creators, we must accept that creation and innovation is a cumulative effort – one that is ever progressing. Our own ideas, along with everyone else’s, snowball together, collecting new thoughts and developments along the way — a movement that never ceases. As our malleable inspirations travel down the infinite branches of the thoughts of others, they become reshaped — moulded into something new. An improvement here and a new perspective there; the result is perpetual growth and change. At each stage, no one can claim ownership of an idea, for it is the combined product of a hundred others. Did Karl Benz, Édouard Michelin, or Henry Ford steal the wheel from the Sumerian people of the Bronze Age? Equally, did Douglas Engelbart steal from Vannevar Bush? Or did their personal contributions build upon an ever-progressing concept; the summation of the ideas and contributions of many, always pushing the boundaries of what we know, drawing us closer to that which lies beyond what we currently see?
This is how we innovate; we take inspiration and then develop it. This is what distinguishes innovation from thievery—personal, individual development. Steve Jobs took a mouse that cost Xerox $300 to develop and made it cost $15, while also simplifying it and improving its ease of use. “If you lined up Engelbart’s mouse, Xerox’s mouse, and Apple’s mouse, you would not see the serial reproduction of an object. You would see the evolution of a concept.” —Malcolm Gladwell, Creation Myth [Source]. For an idea not to be stolen, it must be built upon. Developed, adapted, improved.
As Isaac Newton said,
“If I have seen further it is by standing on the shoulders of giants”
But even this was adapted from another,
“Bernard of Chartres used to say that we are like dwarfs on the shoulders of giants”
— John of Salisbury.
Creation requires influence (Kirby Ferguson, Everything is a Remix). The forces that shape our lives can’t be attributed to individual owners; we are the product of everyone before us. In order to see beyond what we know, we must stand on their shoulders.
Various sketches, notes and workings-out made while developing this Research Report, included here to show my thought processes and methods of working through ideas.
“In 1989 [or 1987?], Apple released a celebrated video entitled The Knowledge Navigator, which deposited a genteel, tuxedoed actor in the upper-right-hand corner of a Powerbook.”
What is even more interesting:
“The “professor” video was set in September 2011. In October 2011, Apple re-launched Siri, a voice activated personal assistant software vaguely similar to that aspect of the Knowledge Navigator.”
In doing this, Apple were pushing the boundaries of how we access and interact with computers and information. They brought forward ideas of how technology like this could fit seamlessly within our day-to-day lives, embodying a variety of tasks – from calling a friend or colleague to finding in-depth scientific research journals. It’s not as though Apple were the first to think up the digital assistant – think of sci-fi like Star Trek’s Data character. But that was exactly that — science fiction. Apple’s video took these ideas out of science fiction and into our households, in the same way they did with the personal computer. I think this is testament to the importance of pushing innovation and not being afraid to step beyond what we currently know to be possible; even if the technology available at the time cannot physically realise an idea yet, we should continue to dream it. If Apple had not had these thoughts in 1987, would we have something as sophisticated as Siri today?
“Much of the GUI’s celebrated ease of use derives from its aptitude for direct manipulation […] But agents don’t play by those rules. They work instead under the more elusive regime of indirect manipulation”
“The original graphic-interface revolution was about empowering the user—making “the rest of us” smarter, and not our machines”
“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another”
—Steven Johnson, Interface Culture
(Did they predict the ubiquity of ‘the cloud’?)
“The ultimate goal of the more ambitious agent enthusiasts, however, goes well beyond software that dutifully does what it’s told to do—book airline tickets, sell stock. The real breakthrough, we’re told, will come when our agents start anticipating our needs”
—Steven Johnson, Interface Culture
Siri and other voice-activated agents might not be there just yet, but the fact that they exist at the sophisticated level they do raises the question “why is this not perfect?”, which in turn inspires continued innovation until it is just that — a perfect response to our needs. It’s likely that in the future, Siri (and others) will integrate even more seamlessly into our daily lives. Instead of just responding to our commands — find me coffee, wake me up in 8 hours, call Steve, etc — they will anticipate our needs — “Wake me up in 8 hours” “Ok, I’ll wake you up in 8 hours, but you haven’t locked your front door yet or turned off the downstairs lights. Would you like me to?” (integration with HomeKit). Or perhaps “Siri, find me coffee” “Ok, the nearest is Starbucks, head north and turn left” — but what if Siri knows you don’t like Starbucks, and that you prefer checking out local independents rather than a chain? Maybe the response will be followed by “…but there’s an independent an extra mile away. Head west and turn right”.
This links to Firefly, a music recommendation service founded in 1995. Johnson states that “What makes the [Firefly] system truly powerful is the feedback mechanism built into the agent”. The fact that the agent responded to your ratings of various records to further tailor its subsequent recommendations is what set it apart and gave it an edge. In other words – it was the ability to adapt. Feedback, in its many forms, is a recurring principle of powerful interaction design. I would call this kind of feedback Adaptive Feedback.
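As a toy sketch of such a feedback loop — the class, catalogue format and update rule below are invented for illustration, not Firefly’s actual algorithm — each rating could nudge a taste profile that then reorders future recommendations:

```python
from collections import defaultdict

class AdaptiveRecommender:
    """Sketch of a Firefly-style feedback loop: every rating nudges
    the user's taste profile, which reorders future recommendations."""

    def __init__(self, catalogue):
        # catalogue: {item_name: set of genre tags} (hypothetical format)
        self.catalogue = catalogue
        self.taste = defaultdict(float)  # tag -> learned weight

    def rate(self, item, score):
        # score in [-1.0, +1.0]; spread it across the item's tags
        for tag in self.catalogue[item]:
            self.taste[tag] += score

    def recommend(self, n=3):
        # rank every item by summing the learned weights of its tags
        def score(item):
            return sum(self.taste[t] for t in self.catalogue[item])
        return sorted(self.catalogue, key=score, reverse=True)[:n]
```

The essential point is the loop itself: output (recommendations) feeds on input (ratings), so the interface adapts rather than staying static.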
A small link to this is a minor, but very useful, aspect of Apple’s ‘Save’ dialog. The dialog uses Progressive Disclosure to show the user more or fewer customisation options when saving a file to their hard disk.
Giles Colborne sums up the merits of this design in his book, Simple and Usable:
“The Save dialog box is a classic example of this. The basic feature is nothing more than two core questions:
- what would you like to call this file?
- where, from a list of options, would you like to save it?
But experts want something richer: extended options to create a new folder for the document, to search your hard disk for places to save the document, to browse your hard disk in other ways, and to save the file in a special format.
Rather than show everything, the Save dialog box opens with the main-stream version but lets users expand it to see the expert version.
The box remembers which version you prefer and uses that in the future. This is better than automatic customization because it’s the user who chooses how the interface should look. This is also better than regular customizing because the user makes the choices as she goes, rather than having a separate task of creating the menu. This means mainstreamers aren’t forced to customize. That model, of core features and extended features, is a classic way to provide simplicity as well as power.”
This is only a very minor example of an interface simply showing characteristics of Adaptive Feedback. The true potential of this type of feedback and anticipation of user needs is even greater, but it’s important to consider if details like this could help on a smaller scale too.
But how does this link to video game interfaces? A quick example: imagine a player has just finished a tough wave of combat and has taken cover nearby to protect themselves, as their health is very low. The player quickly opens up their inventory. Perhaps the interface of the game can interpret this, and anticipate that the player’s priority is probably to use a Medical Kit or Health Potion to heal themselves. The interface could then bring this option to the forefront or highlight it somehow—similar to how Google and Apple gave me my most recent documents first—to save the player time in this crucial, tense moment of running low on health. That is, of course, if the game wants to help the player. Some gameplay may benefit from making healing during combat more difficult, rather than easier, in order to more accurately convey a feeling of desperation, tension or realism. As well as this, what if constantly changing where something sits in the inventory actually hindered the player? Once they had learnt where things were, it wouldn’t work too well if the game changed this each time (not dissimilar to how supermarkets move things around to encourage shoppers to look around more). These are all questions whose answers depend on the particular design in question.
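As a rough sketch of how such anticipation could work — every name, threshold and data structure below is hypothetical, not taken from any real game — the interface might keep the inventory order fixed (avoiding the supermarket-style reshuffling problem raised above) and instead decide which slots to highlight from the player’s current state:

```python
def highlight_items(inventory, player):
    """Context-aware anticipation sketch: rather than reordering the
    inventory (which would break the player's learned layout), keep
    the order fixed and return the indices of slots to highlight."""
    highlights = set()
    # low health -> surface healing items
    if player["health"] < 0.25 * player["max_health"]:
        highlights |= {i for i, item in enumerate(inventory)
                       if item["kind"] == "healing"}
    # out of ammo -> surface ammo pickups
    if player["ammo"] == 0:
        highlights |= {i for i, item in enumerate(inventory)
                       if item["kind"] == "ammo"}
    return highlights
```

Highlighting instead of reordering is one possible compromise between anticipation and spatial consistency; a horror game might deliberately do neither.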
While reading Interface Culture, I came across Steven Johnson’s mention of an early (1996) Apple concept known as V-Twin.
“In early 1996, Apple began showing a functional demo of its new Finder software, in which all file directories include a category for “most representative words”. As you change the content of the document, the list of high-information words adjusts to reflect the new language. At first glance, this may seem like a superficial advance—a nifty feature but certainly nothing to write home about. And yet it contains the seeds of a significant interface innovation”
“Apple’s list of high information words raises the stakes dramatically: for the first time, the computer surveys the content, the meaning of the documents”
“Apple’s new Finder was the first to peer beyond the outer surface, to the kernel of meaning that lies within. And it was only the beginning.”
“It is here that the real revolution of text-driven interfaces should become apparent. Apple’s V-Twin implementation lets you define the results of a search as a permanent element of the Mac desktop—as durable and accessible as your disk icons and the subfolders beneath them.”
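Johnson does not describe how V-Twin actually computed its word lists, but one plausible sketch of a ‘most representative words’ feature is the classic tf–idf weighting: score each word by how frequent it is in the document and how rare it is across the rest of the corpus. The function and corpus format below are illustrative assumptions, not Apple’s method:

```python
import math
from collections import Counter

def representative_words(doc_words, corpus, top_n=5):
    """Return the top_n 'high-information' words of a document:
    frequent within the document (tf) but rare across the corpus (idf).
    corpus is a list of documents, each a list of words."""
    tf = Counter(doc_words)
    n_docs = len(corpus)

    def idf(word):
        containing = sum(1 for d in corpus if word in d)
        return math.log((1 + n_docs) / (1 + containing)) + 1

    scored = {w: tf[w] * idf(w) for w in tf}
    return [w for w, _ in sorted(scored.items(),
                                 key=lambda kv: kv[1], reverse=True)[:top_n]]
```

Common words like “the” appear in every document, so their idf collapses toward zero weight; words unique to one document float to the top — which is exactly what makes the list feel like a summary of meaning rather than of mere word counts.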
With Apple’s OS X Tiger in 2005, the original idea behind V-Twin and Views shipped as Apple’s new ‘Smart Folders’. I find it interesting to note that it took nearly a decade for this innovation to reach and be received by the masses.
With Views came a few breaks in consistency, as the function of these folders differed from regular folders, and therefore so did their various interactive behaviours. Certain actions may not happen the way the user is accustomed to, based on their existing knowledge of Apple’s set-in-stone interface conventions. “Wasn’t the user experience supposed to be all about consistency?” “The fact that the View window departs so dramatically from the Mac conventions indicates how radical the shift really is, even if it seems innocuous at first glance.”
I think this is particularly relevant when it comes to my questioning of ‘Innovation vs. Convention’, which I plan to discuss in detail towards the end of my Research Report. Here, in Apple’s example, breaking convention was a necessary result of innovation. Certain conventions could not physically exist within this innovation, as they directly contradicted the function of the innovation itself.
Further Views reading can be found on Johnson’s website.
“The contents of the view window, in other words, are dynamic; they adapt themselves automatically to any changes you make to the pool of data on your hard drive.”

“If there is a genuine paradigm shift lurking somewhere in this mix—and I believe there is—it has to do with the idea of windows governed by semantics and not by space. Ever since Doug Engelbart’s revolutionary demo back in 1968, graphic interfaces have relied on spatial logic as a fundamental organisational principle.”
I find a link to modern interface design here. For example, I have a multitude of documents in my Google Drive. When I open up the Google Docs web app, I am faced first with the option to create a new document, followed by a list of all of my documents. However, Google doesn’t automatically order these by Name or Type—as some apps may default to—but by Date.
The above screenshot illustrates this. When I arrive here, once I have decided that creating a new document is not my goal, I focus my attention on the next ‘chunk’ of information (highlighted by the red). Google has decided that it’s very likely I’ll want to resume working on something I have opened recently. Even if I haven’t opened the document I want in the very recent past, chances are it’s one of those I opened in the past month (orange). Failing that, the search bar is only a few pixels away at the top of the screen, so I can search precisely for what I want.
The majority of the time, though, Google’s first instinct is in fact exactly right, and the very document I came in search of has pride of place in the hierarchy of the information. This is an example of the application organising information (or files) by a meaning that it has perceived through interpreting more static, regimented data (when a user last opened a file). The application associates ‘recently used’ with ‘higher priority’. As I said, the majority of the time this is very accurate – it is probably my research report files (for this exact report) that I want to access, as that is almost solely what I am working on currently. However, sometimes that may not be the case. Perhaps I am working on my report, but I want to dig out an older document that I find relevant to what I’m working on now, in order to reference it. This organisation of information does not interfere with that, as I explained with the orange and yellow – everything else is still close at hand.
Apple also does this in their modern interfaces. Let’s take a look at my (suitably and conveniently disorganised) desktop at this present moment.
A few moments before taking the screenshot above, I’d taken another screenshot. At the time I took that screenshot, it’s likely I was planning to use it for something very soon after — whether that was to share it, move it to a different location, or open it in an image editing app to annotate it. I know that the screenshots save to my desktop, so my first step would be to open my desktop (in this case, I’ve opened it as a folder, rather than going to the actual desktop itself. It is also worth mentioning I could have opened the ‘folder’ ‘All my files’ and the result would be the same). As you can see, Apple have kindly organised my desktop files by Date Last Opened. This places my recently taken screenshot, again, in a prominent position set aside from the rest – it’s right at the top, under a section specifically for files I have created today. It is the first file I see; this makes the workflow of Save Screenshot → Find Screenshot → Share Screenshot (something I do often) about as streamlined as it has ever been.
The same principle would apply in a range of different scenarios, for example if I had saved an image from the web or from another application. It is also worth mentioning that I can further organise the screen above. Apple gives you the option to organise the files here firstly by Date Last Opened (sectioning them into Today, 7 Days, 30 Days, as above), but within those sections you can further organise them by Name, Kind, etc. So, you might know that the file you are looking for was opened in the last week, and you also know it begins with a particular letter, so you can use those details combined with Apple’s intuitive sorting to find it in a fraction of the time it would take if you were faced with your entire desktop listed A → Z.
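As a minimal sketch of this two-level ordering — the bucket boundaries and data format are assumptions for illustration, not Apple’s actual implementation — files could be grouped into recency buckets first and sorted alphabetically within each:

```python
from datetime import datetime, timedelta

def group_files(files, now=None):
    """Finder-style two-level ordering sketch: bucket files by how
    recently they were opened, then sort by name inside each bucket.
    files is a list of (name, last_opened_datetime) pairs."""
    now = now or datetime.now()
    buckets = {"Today": [], "Last 7 Days": [], "Last 30 Days": [], "Older": []}
    for name, opened in files:
        age = now - opened
        if age < timedelta(days=1):
            buckets["Today"].append(name)
        elif age < timedelta(days=7):
            buckets["Last 7 Days"].append(name)
        elif age < timedelta(days=30):
            buckets["Last 30 Days"].append(name)
        else:
            buckets["Older"].append(name)
    for names in buckets.values():
        names.sort()  # secondary sort: alphabetical within each bucket
    return buckets
```

The primary sort encodes perceived priority (recent means relevant) while the secondary sort preserves a predictable, learnable order within each section.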
This is just a small example of how modern interface designers are streamlining our workflows by interpreting and extracting meaning from data. This particular example isn’t even as complex as interpreting the content of documents, merely the time they were created or last opened. It also stands as an example of how very early interface design (going back to Apple’s 1996 Views) paved the path of innovation with their own breakthroughs along the way — not just in the form of new metaphors or visuals, but by questioning the way we think about and utilise data and information. The notion of a semantic—rather than solely spatial—file system is one of these.
“What the view window does, in effect, is say this: why not organise the desktop according to another illusion? Instead of space, why not organise around meaning?”
The integrated nature of the diegetic UI within the gameplay meant that it was used to enhance the story-driven gameplay, not just laid over the top as a means of control. (As explained by the Lead UI Artist above.) The broken, dystopian future of Dead Space was emphasised by details such as the interface being broken or unpredictable at times, with static, scanlines, flickering lights, etc. These details did not necessarily make logical sense in a futuristic world (why would this advanced technology suffer from these analogue traits?) but were included for the sake of enhancing the atmosphere and the ability to tell the story. These elements communicated feeling and emotion to the player and increased their immersion within this world.
This is an example of innovation in interaction/UI design creating a more successful experience for the player. As a result of these decisions, the player had a more seamless existence within an imaginary sci-fi world. Usually, interactions such as checking your remaining life, opening a door, or fast travelling on a map can break that sense of immersion and temporarily bring a player out of the imaginary world. However, in the case of Dead Space, the designers remedied this by innovating and thus creating an experience where the player didn’t need to leave the game world to perform these actions. This was done by making the in-game character show their health on their back, interact with maps in game, etc. This level of immersion benefits the game by intensifying the emotions the designers set out to instil – fear, or ‘a horror experience’.
(The minor exceptions to this are screens such as the pause menu, settings page, etc which do not ever ‘belong’ to the in-game avatar but solely to the player of the game. The designer mentions how this was approached by ensuring these interfaces were always set behind Isaac, the in-game player avatar)
🔗 Links to: Skeuomorphic Design
(Using familiarity by retaining ornamental design cues that were necessary in the original form of an object but no longer technically are) E.g. Using scanlines in Dead Space, the paper texture background in Apple’s notes app.
The above screenshot shows the original UI that needed to be implemented but the team realised this distracted too much from the gameplay and broke the immersion. The player’s attention would never be focused where it needed to be. The ‘rig’ is the answer to this, implementing the elements diegetically into the game.
As shown above, with the player navigation system, the team made many innovations throughout the design process in order to stay truly committed to their diegetic interface. It seemed illogical to allow this diegetic illusion to break at any point, as it would reduce the effect of the rest of the design.
However, the design of the ‘Bench’ made it very difficult to stick to these self-set conventions. Trying to create a more immersive method of upgrading equipment using the ‘Bench’ resulted in favouring full diegesis at the expense of usability.
“Dead Space’s workbench began as a way to tie Clarke’s engineering background to the game as he created weapons from what he found in the environment, Ignacio said. Its redesign in Dead Space 3, while offering more traditional weapons, was also a way to push the idea of Clarke as engineer farther by allowing him to actually craft his own weapons.
The first attempt to redefine the workbench for Dead Space 3, which included Clarke in frame and multiple windows on the bench, was “unusable,” he said.
“You know when you’ve screwed up a system,” he said, when those working on the game would rather use the debug system than the one in the game.
The compromise involved using a more traditional UI element that took over the whole screen. Despite a break with the diegetic design principles, it’s a decision he stands by.
“At the end of the day, none of that is important if your users really can’t interact with your game,” he said. “The bottom line is that fun and usability are more important than the bullshit I was talking about in the beginning.””
The bottom line is that sometimes you have to let go of the conventions you’ve set yourself, or the previously established industry conventions, in order to ultimately create a better experience for the user. When the ‘Bench’ system the team were designing was failing, they needed to let go of their stubborn desire to 100% stick to a completely diegetic interface and instead settle for a full screen, more traditional UI screen. The result was much more usable for the player, despite not being as seamlessly embedded in the in-game surroundings. This compromise was worthwhile and had a positive effect on the overall experience. This is a case where breaking the rules was evidently the right choice to make.
Above are some images showing the iterations of the inventory design. The end result was simplified as much as possible, which was necessary for the diegetic nature of the design to function. It was important to keep Isaac on the screen to maximise the effect, as the interface is being projected as a hologram from his equipment. Consequently, though, this reduces the ‘real estate’ available for the UI to use. It was important to focus on readability and keep the features minimal. While doing this, it was also clearly important for the team to keep to the aesthetic style and theme of the game. Mostly solid colours and lines were used to minimise confusion and clutter, with each section clearly in its own ‘chunk’. However, there are still more subtle elements of texture and pattern, which could be considered decorative (and therefore possibly not truly minimal), but it can be justified that they do serve an important function – as explained above, these details are a subtle example of using skeuomorphs such as scanlines to contribute to the atmosphere.
My personal reflection is that in a situation such as this, it is important to prioritise simplicity, usability, readability and clarity above all else, especially given the limited screen space (as I would argue the designers did at the time). Most of the time, this involves removing everything that is not strictly necessary for the function of the interface, which would generally include the more ‘decorative’ elements. However, it’s also important not to sacrifice elements that serve the story, atmosphere and overall feel of the experience. If scanlines and other superficial details allow the design to blend into the game world more effortlessly, then they are equally important, provided they don’t interfere, for example by obscuring text. I feel such details should be added in the least obtrusive ways. The subtle shapes of the corners and lines, for instance, add a technological, futuristic feel to the interface while also making the headings stand out, so the player can skim the interface and locate the section they are looking for more quickly. A colour palette of muted grey-blues with highlights of bright blue and white (making clever use of varying opacity) also contributes to the sci-fi theme while creating text that is easy to read against the background of the game world.
Having said this, if I were to design the interface for the next Dead Space game, I would opt for something even simpler. It’s easy to say that details such as texture and scanlines add to the ‘sci-fi’ look, and this is true based on established conventions and trends, but it is almost becoming too reliant on stereotypes. It might be beneficial to consider this from a different perspective: when I think of a ‘futuristic’ user interface, I think of innovation. By recycling the nature and characteristics of older, analogue technology, is this not going in the opposite direction? In the Dead Space 3 inventory in particular, the combination of patterns, shaped borders, scanlines, textures and detailed item thumbnails begins to teeter on the verge of clutter. From this point, it may be beneficial to take an aesthetic direction akin to the new Star Wars: Battlefront interface, or perhaps Destiny.
The Star Wars: Battlefront 2015 Beta interface, shown above, makes use of:
The design avoids feeling bland, clinical or plain by using full-scale, high-definition artwork in the background space (blurred when necessary to retain the clarity of the overlaid options). The large character and asset models in the background rotate slowly, adding subtle life to the interface without becoming distracting. These backgrounds also take the opportunity to showcase the high-quality modelling and artwork featured in the game, giving the player a chance to look more carefully and close-up, as opposed to the fast-paced battle action in which they would usually gloss over the rich details.
I personally feel that this clean and slick style could be combined very effectively with the Dead Space team’s experience in implementing diegetic interfaces. This is one aspect the SWBF interface did not take advantage of: its interface is entirely non-diegetic. [It is worth noting that in the following, I am thinking purely in terms of game design, ruling out any preconceived biases about the studios themselves, their signature styles, usual methods of working, available resources, capabilities, creative freedom, etc.] The lack of diegesis in Star Wars may come from wanting the safer choice of a more traditional UI for such a large-scale, online multiplayer FPS, a genre that may not be able to take as many risks and is more bound to established conventions and patterns in its UI.
More prominently, however, diegesis could pose consistency issues. Players can take control of a variety of characters from both sides of the battle. My first thought was that the designers could have exploited the fact that a character such as a Stormtrooper wears a helmet, and used a diegetic helmet HUD, or at least slanted and fish-eyed the existing HUD to replicate looking through a visor (see: Destiny). The problem this creates is: what if the player were playing as Luke, a character without a helmet? Or piloting a spacecraft, or a vehicle such as an AT-AT? These would call for very different HUD designs, which would likely result in either a) far too much work or b) inconsistency, and therefore a lack of usability. These are all considerations when trying to streamline a design to best suit its context.
On the other hand, a new Dead Space would (if in keeping with the first three games) most likely feature a single, unchanging playable character, who could continue to make seamless use of diegesis through holograms or similar in-game technology. As noted above, I feel that if this were combined with a more contemporary design aesthetic like that of Star Wars, the result could be an even more seamless and functional interface, still in keeping with the sci-fi theme but without falling prey to outdated sci-fi stereotypes.
The book Game Development Essentials: Game Interface Design also covers the Dead Space interface and its use of diegesis. [To be updated with scans from the book]
I have finalised my Learning Agreement and Work Schedule drafts to be approved.
By planning my work in this way I can:
I now feel more confident in what I can achieve throughout this project and have a better idea of how manageable it will be. These documents will be useful to keep referring back to, making sure my work stays on the right path.
Below is my work-in-progress draft for my Research Report Learning Agreement.
Above is a collection of excerpts I’ve saved from my reading of The Shape of Design by Frank Chimero. I have highlighted anything I find particularly relevant or notable that I want to remember and may want to include in my Research Report. I will use quotations as the basis of the points I make, or to support and evidence them.
Revui is another YouTube series offering analysis and critique of various games and their UI & UX.
The videos are very informative and I will use them in the same way that I have been watching UXP. The overall design and editing of the videos is very effective and clear, as well as aesthetically strong. As shown in the screenshot above, the speaker gives his critique and then offers a concise, logical improvement on the issues he finds. I think this is particularly important and reminds me of a quote from Donald Norman:
“I make it a rule never to criticize something unless I can offer a solution.”
—Donald Norman, The Design of Everyday Things
What is particularly useful for me is how the content of the videos leads to many new paths for me to find more research material.
The channel features some ‘Roundup’ videos discussing current happenings in design within video games. Although the videos are fairly old now, and the channel no longer seems to be active, it remains a useful archive of information that still feels current despite its age. While working on my report, any information examining design within video games is valuable.