Monthly Archives: November 2015

UE4 Experiment — 2D Sidescroller

I used starter content and followed tutorials to experiment with a 2D side-scroller. I did this to improve my knowledge of a wider variety of Blueprints, as well as to start playing with a basic collectible script whose logic I could later reuse for my 3D prototype.

  • Two different methods of spawning the coin - either constantly at a set interval, or on a button press
  • Using Arrays to create more random spawning of coins
  • Using Target points to designate where coins can spawn
  • Ensuring that the coins spawn on the correct plane, as it is a 2D game (a rough sketch of this spawning logic follows below)
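
To pin down the logic for myself, here is a rough C++ sketch of the interval-based spawning described above. It is not the Blueprint I actually built, and the names (ACoinSpawner, CoinClass, SpawnPoints) are placeholders for illustration only:

    // Hypothetical C++ equivalent of the coin-spawning Blueprint logic.
    #include "GameFramework/Actor.h"
    #include "Engine/TargetPoint.h"
    #include "CoinSpawner.generated.h"

    UCLASS()
    class ACoinSpawner : public AActor
    {
        GENERATED_BODY()

    public:
        UPROPERTY(EditAnywhere) TSubclassOf<AActor> CoinClass;      // the coin Blueprint/class to spawn
        UPROPERTY(EditAnywhere) TArray<ATargetPoint*> SpawnPoints;  // Target Points placed in the level
        UPROPERTY(EditAnywhere) float SpawnInterval = 2.0f;         // "every 2 seconds"

        virtual void BeginPlay() override
        {
            Super::BeginPlay();
            // Method 1: spawn constantly at a set interval.
            // Method 2 (button press) would simply bind an input event to SpawnCoin instead.
            GetWorldTimerManager().SetTimer(SpawnTimer, this, &ACoinSpawner::SpawnCoin, SpawnInterval, true);
        }

        void SpawnCoin()
        {
            if (!CoinClass || SpawnPoints.Num() == 0)
            {
                return;
            }
            // Pick a random Target Point from the array for more varied spawning
            const int32 Index = FMath::RandRange(0, SpawnPoints.Num() - 1);
            FVector Location = SpawnPoints[Index]->GetActorLocation();
            Location.Y = 0.0f; // keep the coin on the 2D plane (assuming the side-scroller moves in X/Z)
            GetWorld()->SpawnActor<AActor>(CoinClass, Location, FRotator::ZeroRotator);
        }

    private:
        FTimerHandle SpawnTimer;
    };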

[Imgur embed]

Coins spawning every 2 seconds

[Imgur embed]

Showing the use of text strings printed to the screen to report when the player has touched the coin – this helped me figure out the logic of determining when the coin had been triggered, which could then be developed to add the coin to the player’s total count and then destroy it.
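
The equivalent logic, written as a hypothetical C++ snippet rather than the Blueprint I actually used (names are illustrative only), would be roughly:

    // Hypothetical C++ version of the coin's overlap logic
    #include "Kismet/GameplayStatics.h"

    void ACoin::NotifyActorBeginOverlap(AActor* OtherActor)
    {
        Super::NotifyActorBeginOverlap(OtherActor);

        // Only react when the player pawn touches the coin
        APawn* PlayerPawn = UGameplayStatics::GetPlayerPawn(GetWorld(), 0);
        if (OtherActor && OtherActor == PlayerPawn)
        {
            // The on-screen text I used to confirm the trigger was firing
            GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Yellow, TEXT("Coin collected"));

            // Later: increment a coin counter on the player here, then remove the coin
            Destroy();
        }
    }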

Overall, doing small tests like this is helping me expand my knowledge in a varied way. It will help me apply what I learn more effectively and start to figure out how to create things more independently.

UE4 – Movement Controller Progress

Pitch Rotation to match Vertical Movement

My focus today was to iterate on my movement controller based on the evaluation in my previous post. Firstly, I wanted to make the Pitch of the Actor rotate based on vertical movement, so that when Weedy swims downwards his nose tilts downwards, and vice versa, to increase the feeling of realistic swimming movement. Otherwise, it appears as if he is just hovering up and down. The solution to this turned out to be very simple.

[Screenshot: Tilt_BP]

In the Movement Sequence Blueprint, I have highlighted the edits I made to achieve this. The script already did a similar action for MoveRight and Roll, and I used this to figure out how to do the same for MoveUp and Pitch. My basic understanding of this is:

  • When Weedy moves up
  • Multiply A (the input value of the movement axis) by B (the Y Pitch value)
  • Create a rotation on the Y axis (Pitch) to tilt the Pawn using that value
  • Return this value to the target (see the sketch below)
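
Put another way, here is a loose C++ sketch of what I think those nodes are doing (the class and variable names are mine, not taken from the actual script, which lives entirely in Blueprints):

    // Hypothetical C++ equivalent of the highlighted Blueprint edit. CurrentUpInput and
    // MaxPitchDegrees are assumed float members.
    void AWeedyPawn::MoveUp(float AxisValue)
    {
        CurrentUpInput = AxisValue; // A: input value of the movement axis
    }

    void AWeedyPawn::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        // A * B: the axis input scaled by a pitch value gives the target tilt
        const float TargetPitch = CurrentUpInput * MaxPitchDegrees;

        // Create a rotation on the Y axis (Pitch) and blend towards it,
        // then apply (return) the result to the target, i.e. the Pawn itself
        FRotator NewRotation = GetActorRotation();
        NewRotation.Pitch = FMath::FInterpTo(NewRotation.Pitch, TargetPitch, DeltaSeconds, 2.0f);
        SetActorRotation(NewRotation);
    }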

Although this was successful and it functions in the way I wanted, I do feel like I need a better understanding of exactly why. I have a rough idea of how the nodes work, as outlined above, but I’m not entirely confident in it. I think the main focus for me right now should be to continue learning Blueprints and improve my knowledge much more, so I can more confidently make changes and understand how to create the desired functionality for my prototype.

[Imgur embed]

The Gif above shows this in action. When I move Weedy vertically, his body tilts in the right direction, giving more realism and life to the movement.

What I have noticed, though, is that in the above graph the nodes that tilt the Pawn in the direction of forward movement are disconnected. This may have happened accidentally while editing the BP, and it is something I need to look into. It is another reason why I need to improve my BP knowledge, so I can quickly understand and fix details like this.

Otherwise though, I am pleased with the progress and will simply keep working on various details of the script in order to refine both my understanding and the functionality of the prototype.

Collision

Another area that needs improvement is how the script handles collisions. There is an area of the graph which deals with what happens when the actor collides above a certain velocity, in order to release particles and carry out other behaviour. Right now, the collision can be a little extreme and I feel this makes sense in terms of a spaceship or plane, but not an animal swimming through water. When Weedy collides with the wall, I would like to achieve a much softer impact.

[Imgur embed]


As you can see above, the collision can sometimes be quite buggy and unstable, causing clipping with the camera and sending the player spinning off. My end goal will be to soften the collision, possibly by having the actor absorb most of the impact and simply rotate him gently away from the wall. Upon colliding, I would like small clouds of dust and bubbles to be emitted in the form of particles, instead of the mechanical sparks the controller initially had. I have very briefly begun looking into particle systems, which are, understandably, much more complicated than the simplicity of Unity 2D, so I need to decide whether creating these from scratch is realistic for me or whether I can find some starter assets to download and make use of.
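
As a note to myself on where I might take this, here is a rough C++ sketch of the kind of "soft impact" I am imagining (every name here is a placeholder, and the real controller is Blueprint-based):

    // Sketch: above a speed threshold, emit a gentle dust/bubble burst and ease the Pawn's
    // rotation away from the wall, instead of letting physics send it spinning.
    #include "Kismet/GameplayStatics.h"

    void AWeedyPawn::NotifyHit(UPrimitiveComponent* MyComp, AActor* Other, UPrimitiveComponent* OtherComp,
                               bool bSelfMoved, FVector HitLocation, FVector HitNormal,
                               FVector NormalImpulse, const FHitResult& Hit)
    {
        Super::NotifyHit(MyComp, Other, OtherComp, bSelfMoved, HitLocation, HitNormal, NormalImpulse, Hit);

        if (GetVelocity().Size() > ImpactSpeedThreshold) // assumed float member
        {
            // Small clouds of dust and bubbles instead of mechanical sparks
            UGameplayStatics::SpawnEmitterAtLocation(GetWorld(), BubblePuffFX, HitLocation); // assumed UParticleSystem*
        }

        // Absorb most of the impact: gently face the Pawn away from the wall by blending
        // towards the hit normal (which points back towards the Pawn) rather than applying an impulse.
        const FRotator AwayFromWall = HitNormal.Rotation();
        const FRotator Softened = FMath::RInterpTo(GetActorRotation(), AwayFromWall,
                                                   GetWorld()->GetDeltaSeconds(), 1.5f);
        SetActorRotation(Softened);
    }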

Update camera to follow vertical pitch movement

Currently, when moving vertically, the camera remains looking at the back of the Pawn. Instead, I would like to test out how it feels if the camera rotates towards the direction that Weedy is facing when he moves vertically up and down (LMB/RMB). I feel this may be helpful for the player if they can see where they are going when rising or sinking through tunnels within a level. I did some research into this and began playing around with the script. I made some minor progress but struggled to achieve much. I found a helpful answer that broke down the necessary steps to achieve something like this (below), so I can use this as a starting point, but as I have said before – I need a better understanding of Blueprints first, so I think it’s wise to take a few steps back before attempting too much at once.

[Screenshot: Pitch_Camera_Answer]
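
From my rough reading of that answer, the idea boils down to something like the sketch below. This is not code I have implemented yet, and all of the names (CameraBoom, CameraPitch, FollowSpeed) are placeholders:

    // Sketch: ease the camera boom's pitch towards the Pawn's pitch so the camera
    // looks where Weedy is heading when he rises or dives. Called from Tick.
    void AWeedyPawn::UpdateCameraPitch(float DeltaSeconds)
    {
        // Blend the stored camera pitch towards the Pawn's current pitch
        CameraPitch = FMath::FInterpTo(CameraPitch, GetActorRotation().Pitch, DeltaSeconds, FollowSpeed);

        // Apply it to the camera boom (assumed to be a USpringArmComponent)
        CameraBoom->SetRelativeRotation(FRotator(CameraPitch, 0.0f, 0.0f));
    }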

I think a good idea might be to see if I can attend some extra UE4 Drop-in sessions while also going back and trying some more BP tutorials before I progress.


(Extra Ref for later) Underwater Scene Tutorial

UE4 – 3D Third Person Character Movement – Development

Building on my initial tests, I have been further developing my 3D Prototype Controller. I started by creating a simple 3D model in Blender to use for the prototype in order to better visualise how the movement is working with my particular character, Weedy the Sea Dragon.

  • Using a reference image of mine to begin modelling a rough shape
  • Adding fins and refining the shape
  • Adding the finishing touches to the model

By modelling manually like this I am improving my ability to quickly knock out prototype models for a faster workflow. Previously I have made most of my models by creating Voxels in Sproxel and then using the Decimate Modifier in Blender to generate a low poly model, which results in much less control over the shape and less desirable topology. I think it’s important to improve my ability to model more traditionally and manually. Although I have little knowledge of 3D and my topology is far from ideal, it still functions perfectly fine for a prototype, which is all I need.

A turntable of the final model.

[Screenshot]

Importing the model into UE4

I exported my model from Blender as an FBX and then imported it into Unreal Engine. The image above shows the model as a Skeletal Mesh, but I soon realised a Static Mesh would be fine for this stage and re-imported it. If I need to rig or animate at all later (which is likely depending on how I progress with the movement controls), then I can make use of a Skeletal Mesh. For now though, I wanted to keep things as simple as possible so I can focus on one thing at once.

[Imgur embed]

I first opened up the Flying Starter Content and replaced the mesh with my own, just to quickly test how those controls felt as a starting point. It worked fine and I could tell this would be a good place to start editing the Blueprints to refine the movement myself.

[Imgur embed]

However, I also wanted to try the “Space Shooter” Starter Pack I had found in my earlier post. It took a fair amount of trial and error, and lots of extra research and troubleshooting, to get the content into my new project and tweak it to work with my model and my ideas. I studied the blueprints and other assets carefully, removing what I didn’t need in order to give myself the simplest possible resources to start working with. This has taught me a lot more about UE4 and Blueprints. The above image shows an early test, controlling the character after setting up my model with the Blueprints and other content I had imported, and after making several tweaks.

[Imgur embed]

I continued to play around, refining what I had and learning more about how it works. I adjusted some Speed values and also the Input Bindings quite a lot, resulting in a control scheme that I feel works more fluidly — although there is still a lot I would like to continue to change. For example, I altered it so that up/down movement was controlled with the Left/Right Mouse Button. The player would already be using the mouse to steer the camera, along with WASD to steer the Pawn, so it makes sense to make use of LMB/RMB. Previously, this was set to use Shift and Control, but I found this much less natural.
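
To note roughly how that rebinding maps onto the engine (the axis names here are illustrative rather than the exact ones used in the starter pack), the setup amounts to something like:

    // Sketch of the input setup after my changes. The key assignments themselves live in
    // Project Settings -> Input, where I moved vertical movement from Shift/Ctrl to LMB/RMB.
    void AWeedyPawn::SetupPlayerInputComponent(UInputComponent* InputComponent)
    {
        Super::SetupPlayerInputComponent(InputComponent);

        InputComponent->BindAxis("MoveForward", this, &AWeedyPawn::MoveForward); // W / S
        InputComponent->BindAxis("MoveRight",   this, &AWeedyPawn::MoveRight);   // A / D
        InputComponent->BindAxis("MoveUp",      this, &AWeedyPawn::MoveUp);      // LMB (+1) / RMB (-1)
        InputComponent->BindAxis("Turn",        this, &AWeedyPawn::Turn);        // Mouse X (steer camera)
        InputComponent->BindAxis("LookUp",      this, &AWeedyPawn::LookUp);      // Mouse Y
    }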

The above image shows the current state of my prototype. I began adding in some blocks to practice navigating around in order to get a better feel for how the control is working. The game would most likely feature tunnel-based levels so it’s important to create a controller that works well with tighter spaces. It’s also important to experiment with the up and down movement, not just the forward/back/left/right, as the vertical movement in the water is something that is important for me to get right.

A few details I would like to work on from this point are:

  • Further refining the input bindings to feel more intuitive
  • Creating fluid controls across both Game Pads and Keyboard/Mouse
  • Increasing the speed of up/down movement
  • Increasing the Yaw rotation when turning left/right
  • Adding Pitch rotation when moving vertically up/down
  • Re-introducing the speed boost function
  • Creating a more elaborate grey-boxed level
  • Building a better understanding of the Blueprints used so far

It is important to me that I end up with a movement controller that is uniquely my own. Although it is very useful to have found starter content to use as a starting point, it’s necessary that I fully understand it and am not relying too heavily on the work of someone else. In order to do this I will continue to deconstruct the Blueprints and try rebuilding them from scratch, so I can break down each section individually and examine how it works. I will then continue to build my own adjustments into it until the product is something unique to myself and my concept.

I also think it is very important to begin collecting feedback from others at this stage. The way that people interact with control schemes varies a lot from person to person, and some feedback would be very valuable in order to understand where I should put my focus next and what details need more tweaking. It may be necessary here to add the option to invert the controls, as some people may require this in order to give accurate feedback. The way in which the camera moves is similar to how the controls of an FPS or in-game flight work – these are examples where the lack of an option to invert can sometimes be completely game-breaking for players, leaving them feeling very frustrated.

My next steps should be to address the areas above and look into creating a set of play-testing questions regarding the control so far.

UE4 – 3D Third Person Character Movement – Initial Tests

My current focus is experimenting with 3D Third Person movement control in Unreal Engine 4, in order to work towards experimenting with a prototype (and gauge whether this is possible for me to complete). The UE4 ‘Flying’ starter project was a good place to start and I have played around with this project, viewed other people’s work based off it and sought out as many tutorials as I could find on the subject.

  • The 'Flying' Starter Content Blueprints, offering valuable insight on how individual elements are handled and can be adjusted
  • A screenshot of the content. The ship has very basic movement but offers a good starting point; however, I am trying to find as many different variations as possible to work from and learn from

Although the starter project focuses on a flying spaceship pawn, this is actually still relevant to my prototype idea – an attempt to create an underwater swimming controller (see Ecco the Dolphin for reference). If you imagine that the empty space around the flying ship is in fact water, the movement is almost identical.

I have been looking at games that use 6DoF (Six Degrees of Freedom: forward/back, up/down, left/right, pitch, yaw, roll) controllers to gain a better understanding of how this movement can work, how I want my player to move and whether this will be suitable or not. I’ve been looking at games such as Descent, Retrovirus, Shattered Horizon, etc. These games are mostly space-themed first-person shooters that involve zero gravity or otherwise flying upside down.

I feel like this method of movement is very close to how I would like my player to move, but possibly just without the full Roll (so, no swimming upside down). However, this is exactly what a prototype will allow me to figure out. This type of movement works best with tunnel style levels, which is also what I have had in mind for my prototype – or at least areas of it.

[Imgur embed]

The image above shows gameplay footage from Descent, featuring 6DoF in the first person. [Source] This research eventually led me to the following demo of a 6DoF controller in UE4 by Tom Shannon.

[Imgur embed]

I found an accompanying tutorial for this type of movement but unfortunately couldn’t make it fully functional, due both to it being written for a much earlier build of the engine and to my limited knowledge of Blueprint. Despite this, following the tutorial still taught me much more about using UE4 and Blueprint, and I did manage to create a partially working controller.

The tutorial I followed to emulate this movement, which was unfortunately too outdated for me to fully make work


[Imgur embed]

An initial result of mine after following Tom’s tutorial – the basic ship movement is functional (WASD), but the mouse input wouldn’t work. In a forum post, I found an updated version of the graphs he used and tried those as well. The mouse input was responsive but juddered and wouldn’t allow you to look around 360 degrees, feeling very restricted. After troubleshooting, I decided to continue researching and experimenting with alternative reading and tutorials.


The Blueprint Graphs I ended up with after my first (unsuccessful) attempt – shown above

  • Updated Graph - better, but I still couldn't get it to function entirely correctly

After moving on from this initial test, I found this “Space Shooter Starter Pack” from a user on the Unreal forums. The post is also fairly old and a little outdated, but I did manage to get it to work in 4.9 after a little tweaking, allowing me to play around with the controller and also delve into its Blueprints to see exactly how it works. This project is a much improved version of the starter flying controller bundled with the engine, which I have also been playing with. It isn’t perfect for my needs but is definitely a valuable resource to improve my UE4 knowledge and to get me started on my own creation. Below I have included a set of gifs I recorded while testing out the project files myself.

[Imgur embed]


I have broken down the controls to look at them individually, and see more closely exactly how they function. This will help me get an understanding of details which I would like to use myself and areas I could focus on for improvement. It also improves my understanding of the technical side of movement and how various details can come together to produce a controller which is much more fluid and satisfying to use.

  1. Using W, A, S and D to move forwards, backwards, left and right (or a combination)
  2. Using Spacebar to levitate/rise upwards and Ctrl to sink down
  3. Using the mouse to move the camera/look around – which also updates the direction the ship is facing and therefore traveling
  4. Using Shift to boost forwards and temporarily increase speed

[It is also worth noting that the UFO can fire projectiles and use an abduction beam which are not shown above, as I am mostly focusing on player movement for now and will move onto these kinds of mechanics later]

Overall, the controls feel fluid and the functionality is very close to what I would like to end up with in my own project. I found that using Shift/Ctrl to rise and sink was a little cumbersome on the keyboard but I can imagine this being much nicer on a game pad – however, I would like to create a control system that works seamlessly between input devices – not tailored to just one. I also need to keep in mind that the ideal platform of my concept is touchscreen tablets such as the iPad, so the control system would be much simpler (as detailed in my earlier project’s Pitch Doc).



A small section of the Graph Editor for the Pawn’s controller


Understandably, the Blueprints are much more complex than what I have been dealing with so far, but the author has helpfully commented and arranged them in a logical way, so I can begin to pick them apart. This also means that if I would like to know how to create, for example, the speed boost, there is a neatly commented section that I can view to see exactly how it works. There are various more complicated techniques used to improve the overall fluidity of the control – such as changing the FOV (field of view) of the camera depending on speed, and using a spring arm to re-position the camera more effectively as the player moves.
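
As a reminder to myself of how the speed-based FOV trick works, here is a loose sketch of the idea in C++ (placeholder names throughout; the pack's actual implementation is in Blueprints and I have not copied it):

    // Sketch: widen the camera's field of view as speed increases to exaggerate the sense
    // of motion. BaseFOV, MaxFOV, MaxSpeed and FollowCamera are assumed members.
    void AShipPawn::UpdateCameraFOV(float DeltaSeconds)
    {
        // 0 when stationary, 1 at (or above) top speed
        const float SpeedAlpha = FMath::Clamp(GetVelocity().Size() / MaxSpeed, 0.0f, 1.0f);

        // Blend between resting and boosted FOV, easing so the change never feels jarring
        const float TargetFOV = FMath::Lerp(BaseFOV, MaxFOV, SpeedAlpha);
        FollowCamera->SetFieldOfView(FMath::FInterpTo(FollowCamera->FieldOfView, TargetFOV, DeltaSeconds, 2.0f));
    }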

I think a wise next step for me would be to begin trying to recreate this controller in a blank project using these project files as reference, but minus the extra details such as the more complex animations and 3D model (so I can focus purely on movement). I would like to create and use a simplified, prototype Sea Dragon model (not rigged) to help me visualise it with my own concept. This will allow me to strip the Blueprints down to the essentials and worry about polish such as animation later, once the functionality is where I would like it to be. I will also be able to build a more thorough understanding of each element. I think it would be logical to take these project files one section at a time and focus on remaking it myself. Hopefully this will not prove too problematic as a result of my limited Blueprint knowledge. If I find this too ambitious or challenging, I can scale back my goals and take a step backwards to build a stronger foundation of UE4 knowledge first.

Next steps:

  • Create a very simple Sea Dragon prototype model and import into UE4 (can just be primitive shapes at this stage)
  • Attempt to recreate the sample controller, step-by-step, in a new project using my own model
  • Implement a grey-boxed environment for testing (comparing movement within tunnels and open spaces, etc)

Notes, Sketches and Planning

Various sketches, notes and workings out done while working through my Research Report. Included here in order to show my thought processes and methods of working through ideas.

Apple’s Knowledge Navigator, Voice Agents and Adaptive Feedback

Apple’s 1987 video of the Knowledge Navigator (a conceptual idea at the time)

“In 1989 [or 1987?], Apple released a celebrated video entitled The Knowledge Navigator, which deposited a genteel, tuxedoed actor in the upper-right-hand corner of a Powerbook.”

What is even more interesting:

“The “professor” video was set in September 2011[citation needed]. In October 2011, Apple re-launched Siri, a voice activated personal assistant software vaguely similar to that aspect of the Knowledge Navigator.[4]”

Wikipedia

In doing this, Apple were pushing the boundaries in how we access and interact with computers and information. They brought forward ideas of how technology like this could fit seamlessly within our day to day lives, embodying a variety of tasks – from calling a friend or colleague to finding in-depth scientific research journals. It’s not as though Apple were the first to think up this digital assistant – think of Sci-Fi like Star Trek’s Data character. But that was exactly that: Science Fiction. Apple’s video took these ideas out of science fiction and into our households, in the same way they did with the Personal Computer. I think this is testament to the importance of pushing innovation and not being afraid to step beyond what we currently know to be possible. Even if the technology available at the time cannot physically realise an idea yet, we should continue to dream it. If Apple had not had these thoughts in 1987, would we have something as sophisticated as Siri today?

“Much of the GUI’s celebrated ease of use derives from its aptitude for direct manipulation […] But agents don’t play by those rules. They work instead under the more elusive regime of indirect manipulation”

“The original graphic-interface revolution was about empowering the user—making “the rest of us” smarter, and not our machines”

“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another”

—Steven Johnson, Interface Culture

(Did they predict the ubiquity of ‘the cloud’?)

“The ultimate goal of the more ambitious agent enthusiasts, however, goes well beyond software that dutifully does what it’s told to do—book airline tickets, sell stock. The real breakthrough, we’re told, will come when our agents start anticipating our needs”

—Steven Johnson, Interface Culture

Siri and other voice-activated agents might not be there just yet, but the fact that they exist at the sophisticated level they do raises the question “why is this not perfect?”, which in turn inspires continued innovation until it is just that: a perfect response to our needs. It’s likely that in the future, Siri (and others) will integrate even more seamlessly into our daily lives. Instead of just responding to our commands (find me coffee, wake me up in 8 hours, call Steve, etc.), they will anticipate our needs: “Wake me up in 8 hours.” “Ok, I’ll wake you up in 8 hours, but you haven’t locked your front door yet or turned off the downstairs lights. Would you like me to?” (integration with HomeKit). Or perhaps “Siri, find me coffee.” “Ok, the nearest is Starbucks; head north and turn left.” But what if Siri knows you don’t like Starbucks, and you prefer checking out local independents rather than a chain? Maybe the response will be followed by “…but there’s an independent an extra mile away. Head west and turn right.”

This links to Firefly, a music recommendation service founded in 1995. Johnson states that “What makes the [Firefly] system truly powerful is the feedback mechanism built into the agent”. The fact that the agent responded to your ratings of various records to further tailor its subsequent recommendations is what set it apart and gave it an edge. In other words, it was the ability to adapt. Feedback, in its many forms, is a recurring principle of powerful interaction design. I would call this kind of feedback Adaptive Feedback.

A small link to this is a minor, but very useful, aspect of the Apple pop-down ‘Save’ menu. The menu uses Progressive Disclosure to show the user more or less customisation options when saving a file to their hard disk.

Giles Colborne sums up the merits of this design in his book, Simple and Usable:

“The Save dialog box is a classic example of this. The basic feature is nothing more than two core questions:

  • what would you like to call this file?
  • where, from a list of options, would you like to save it?

But experts want something richer: extended options to create a new folder for the document, to search your hard disk for places to save the document, to browse your hard disk in other ways, and to save the file in a special format.

Rather than show everything, the Save dialog box opens with the mainstream version but lets users expand it to see the expert version.

The box remembers which version you prefer and uses that in the future. This is better than automatic customization because it’s the user who chooses how the interface should look. This is also better than regular customizing because the user makes the choices as she goes, rather than having a separate task of creating the menu. This means mainstreamers aren’t forced to customize. That model, of core features and extended features, is a classic way to provide simplicity as well as power.”

This is only a very minor example of an interface simply showing characteristics of Adaptive Feedback. The true potential of this type of feedback and anticipation of user needs is even greater, but it’s important to consider if details like this could help on a smaller scale too.

But how does this link to video game interfaces? A quick example could be this: Imagine a player just finished a tough wave of combat and has taken cover nearby to protect themselves as their health is very low. The player quickly opens up their inventory. Perhaps the interface of the game can interpret this, and anticipate that the player’s priority is probably to use a Medical Kit or Health Potion and heal themselves. The interface could then put this option in the forefront or highlight it somehow—similar to how Google and Apple gave me my most recent documents first—to save the player time in this crucial, tense moment of running low on health. That is of course, if the game wants to help the player. Some gameplay may benefit from making healing during combat more difficult, rather than easy, in order to more accurately convey a feeling of desperation, tension or realism. As well as this, what if the constant changing of where something is in the inventory actually hindered the player? Once they learnt where things were, it wouldn’t work too well if the game went and changed this each time (not dissimilar from how supermarkets move things around to encourage shoppers to look around more). These are all questions whose answers are dependent on the particular design in question. 

Dynamic interfaces and innovation in how we view information

While reading Interface Culture, Steven Johnson mentioned an early (1996) concept of Apple’s known as V-Twin.

“In early 1996, Apple began showing a functional demo of its new Finder software, in which all file directories include a category for “most representative words”. As you change the content of the document, the list of high-information words adjusts to reflect the new language. At first glance, this may seem like a superficial advance—a nifty feature but certainly nothing to write home about. And yet it contains the seeds of a significant interface innovation”

“Apple’s list of high information words raises the stakes dramatically: for the first time, the computer surveys the content, the meaning of the documents”

“Apple’s new Finder was the first to peer beyond the outer surface, to the kernel of meaning that lies within. And it was only the beginning.”

“It is here that the real revolution of text-driven interfaces should become apparent. Apple’s V-Twin implementation lets you define the results of a search as a permanent element of the Mac desktop—as durable and accessible as your disk icons and the subfolders beneath them.”

In Apple’s OS X Tiger, released in 2005, the original idea behind V-Twin and Views shipped as Apple’s new ‘Smart Folders’. I find it interesting to note that it took nearly a decade for this innovation to reach and be received by the masses.

With Views came a few breaks in consistency, as the function of these folders differed from regular folders, and therefore so did their various interactive behaviours. Certain actions may not happen the way the user is accustomed to, based on their existing knowledge of Apple’s set-in-stone interface conventions. “Wasn’t the user experience supposed to be all about consistency?” “The fact that the View window departs so dramatically from the Mac conventions indicates how radical the shift really is, even if it seems innocuous at first glance.”

I think this is particularly relevant when it comes to my questioning of ‘Innovation vs. Convention’, which I plan to discuss in detail towards the end of my Research Report. Here, in Apple’s example, breaking convention was a necessary result of innovation. Certain conventions could not physically exist within this innovation, as they directly contradicted the function of the innovation itself.

Further Views reading can be found on Johnson’s website.

“The contents of the view window, in other words, are dynamic; they adapt themselves automatically to any changes you make to the pool of data on your hard drive.” “If there is a genuine paradigm shift lurking somewhere in this mix—and I believe there is—it has to do with the idea of windows governed by semantics and not by space. Ever since Doug Engelbart’s revolutionary demo back in 1968, graphic interfaces have relied on spatial logic as a fundamental organisational principle.”

I find a link in modern interface design here. For example, I have a multitude of documents in my Google Drive. When I open up the Google Docs Web App, I am faced with, firstly, the option to create a new document and, secondly, a list of all of my documents. However, Google doesn’t automatically order these by Name or Type—as some apps may default to—but by Date.

A screenshot from my Google Docs home-page, showing the hierarchy of where I will go when searching for the document I want.

The above screenshot illustrates this. When I arrive here, once I have decided that creating a new document is not my goal, I focus my attention on the next ‘chunk’ of information (highlighted by the red). Google has decided that it’s very likely I’ll want to resume working on something I have opened recently. Even if I haven’t opened the document I want in the very recent past, chances are it’s one of those I opened in the past month (orange). Failing that, the search bar is only a few pixels away at the top of the screen, so I can search precisely for what I want.

The majority of the time, though, Google’s first instinct is in fact exactly true, and the exact document I came in search for has pride of place in the hierarchy of the information. This is an example of the application organising information (or files) by a meaning that it has perceived through interpreting more static, regimented data (when a user has opened a file). The application associates ‘recently used’ with ‘higher priority’. As I said, the majority of the time, this is very accurate – it is probably my research report files (for this exact report) that I want to access, as that is almost solely what I am working on currently. However, sometimes that may not be the case. Perhaps I am working on my report, but I want to dig out an older document that I find relevant to what I’m working on now, in order to reference it. This organisation of information does not interfere with that, as I explained with the Orange and Yellow – everything else is still close at hand.

Apple also does this in their modern interfaces. Let’s take a look at my (suitably and conveniently disorganised) desktop at this present moment.

A few moments before taking the screenshot above, I’d taken another screenshot. At the time I took that screenshot, it’s likely I was planning to use it for something very soon after — whether that was to share it, move it to a different location, or open it in an image editing app to annotate it. I know that the screenshots save to my desktop, so my first step would be to open my desktop (in this case, I’ve opened it as a folder, rather than going to the actual desktop itself. It is also worth mentioning I could have opened the ‘folder’ ‘All my files’ and the result would be the same). As you can see, Apple have kindly organised my desktop files by Date Last Opened. This places my recently taken screenshot, again, in a prominent position set aside from the rest – it’s right at the top, under a section specifically for files I have created today. It is the first file I see; this makes the workflow of Save Screenshot → Find Screenshot → Share Screenshot (something I do often) about as streamlined as it has ever been.

The same principle would apply in a range of different scenarios, for example if I had saved an image from the web or maybe from another application. It is also worth mentioning that I can further organise the screen above. Apple gives you the option to organise the files here firstly by Date Last Opened (sectioning them into Today, 7 Days, 30 Days, as above) but within those sections you can further organise them by Name, Kind, etc. So, you might know that the file you are looking for was opened in the last week, and you also know it begins with a particular letter, so you can use those details combined with Apple’s intuitive sorting to then find it in a fraction of the time than if you were faced with your entire desktop listed A → Z.

This is just a small example of how modern interface designers are streamlining our workflows by interpreting and extracting meaning from data. This particular example isn’t even as complex as interpreting the content of documents, merely the time they were created or last opened. It also stands as an example of how very early interface design (going back to Apple’s 1996 Views) paved the path of innovation with their own breakthroughs along the way — not just in the form of new metaphors or visuals, but by questioning the way we think about and utilise data and information. The notion of a semantic—rather than solely spatial—file system is one of these.

“What the view window does, in effect, is say this: why not organise the desktop according to another illusion? Instead of space, why not organise around meaning?”

Dead Space Analysis


Video: Designing Dead Space’s immersive user interface

Dead Space’s Diegetic UI

The integrated nature of the diegetic UI within the gameplay meant that it was used to enhance the story-driven gameplay, not just laid over the top as a means of control (as explained by the Lead UI Artist above). The broken, dystopian future of Dead Space was emphasised by details such as the interface being broken or unpredictable at times, with static, scanlines, flickering lights, etc. These details did not necessarily make logical sense in a futuristic world (why would this advanced futuristic technology suffer these analogue traits?) but were included for the sake of enhancing the atmosphere and the ability to tell the story. These elements communicated feeling and emotion to the player and increased their immersion within this world.

This is an example of innovation in Interaction/UI Design creating a more successful experience for the player. As a result of these decisions, the player had a more seamless existence within an imaginary sci-fi world. Usually, interactions such as checking your remaining life, opening a door, or fast traveling on a map can break that sense of immersion and temporarily bring a player out of the imaginary world. However, in the case of Dead Space, the designers remedied this by innovating and thus creating an experience where the player didn’t need to leave the game world to perform these actions. This was done by making it so that the in-game character showed their health on their back, interacted with maps in game, etc. This level of immersion benefits the game by intensifying the emotions the designers set out to instill – fear, or ‘a horror experience’.

(The minor exceptions to this are screens such as the pause menu, settings page, etc which do not ever ‘belong’ to the in-game avatar but solely to the player of the game. The designer mentions how this was approached by ensuring these interfaces were always set behind Isaac, the in-game player avatar)

🔗 Links to: Skeuomorphic Design

(Using familiarity by retaining ornamental design cues that were necessary in the original form of an object but no longer technically are) E.g. Using scanlines in Dead Space, the paper texture background in Apple’s notes app.

[Screenshot]

The above screenshot shows the original UI elements that needed to be implemented, but the team realised this distracted too much from the gameplay and broke the immersion. The player’s attention would never be focused where it needed to be. The ‘rig’ is the answer to this, implementing the elements diegetically into the game.

  • The initial design for helping the player navigate, which was later deemed 'unsalvageable'
  • The innovation to solve this dilemma - the 'locator', a much simpler, still fully diegetic, glowing line on the floor
  • Showing how the locator has evolved over time into the later games. Much more clarity is achieved here, by dimming the rest of the world and lighting up the focus of the player's vision with bright blue lighting

As shown above, with the player navigation system, the team made many innovations throughout the design process in order to stay truly committed to their diegetic interface. It seemed illogical to allow this diegetic illusion to break at any point, as it would reduce the effect of the rest of the design.

[Screenshot]

However, the design of the ‘Bench’ made it very difficult to stick to these self-imposed conventions. Trying to create a more immersive method of upgrading equipment using the ‘Bench’ resulted in favouring full diegesis at the expense of usability.

Dead Space’s workbench began as a way to tie Clarke’s engineering background to the game as he created weapons from what he found in the environment, Ignacio said. Its redesign in Dead Space 3, while offering more traditional weapons, was also a way to push the idea of Clarke as engineer farther by allowing him to actually craft his own weapons.

The first attempt to redefine the workbench for Dead Space 3, which included Clarke in frame and multiple windows on the bench, was “unusable,” he said.

“You know when you’ve screwed up a system,” he said, when those working on the game would rather use the debug system than the one in the game.

The compromise involved using a more traditional UI element that took over the whole screen. Despite a break with the diegetic design principles, it’s a decision he stands by.

“At the end of the day, none of that is important if your users really can’t interact with your game,” he said. “The bottom line is that fun and usability are more important than the bullshit I was talking about in the beginning.”

Polygon article on Dino Ignacio, Visceral Games’ lead UI Designer

The bottom line is that sometimes you have to let go of the conventions you’ve set yourself, or the previously established industry conventions, in order to ultimately create a better experience for the user. When the ‘Bench’ system the team were designing was failing, they needed to let go of their stubborn desire to 100% stick to a completely diegetic interface and instead settle for a full screen, more traditional UI screen. The result was much more usable for the player, despite not being as seamlessly embedded in the in-game surroundings. This compromise was worthwhile and had a positive effect on the overall experience. This is a case where breaking the rules was evidently the right choice to make.

Above are some images showing the iterations of the inventory design. The end result was simplified as much as possible, which was necessary for the diegetic nature of the design to function. It was important to keep Isaac on the screen to maximise the effect, as the interface is being projected as a hologram from his equipment. Consequently, though, this reduces the ‘real estate’ available for the UI to use. It was important to focus on readability and keep the features minimal. While doing this, it is also clearly important for the team to keep to the aesthetic style and theme of the game. Mostly solid colours and lines were used to minimise confusion and clutter, with each section clearly in its own ‘chunk’. However, there are still more subtle elements of texture and pattern, which could be considered decorative (and therefore possibly not truly minimal), but it can be argued that they do serve an important function – as explained above, these details are a subtle example of using skeuomorphs such as scanlines to contribute to the atmosphere.

My personal reflection is that in a situation such as this, it is important to prioritise simplicity, usability, readability and clarity above all else, especially given the limited screen space (as I would say the designers did at the time). Most of the time, this would involve removing everything that is not entirely necessary for the function of the interface – which would generally include the more ‘decorative’ elements. However, it’s also important not to sacrifice elements that are in aid of the story, atmosphere and overall feel of the experience. If scanlines and other more superficial details allow the design to blend into the game world more effortlessly, then this is equally important, provided that they don’t interfere, such as by obscuring the text. I feel that it is important to add details like this in the least obtrusive ways. For example, the subtle shapes of the corners and lines add a technological, futuristic feel to the interface while also ensuring these headings stand out to the player – meaning they can quickly skim over the interface and locate the section they are looking for more quickly. As well as this, a colour palette of muted grey-blues with highlights of bright blue and white, making clever use of varying opacity, also contributes to the sci-fi theme while simultaneously creating text that is easy to read against the background of the game world.

Having said this, if I were to design the interface for the next Dead Space game, I would opt for something even simpler. It’s easy to say that details such as texture and scanlines add to the ‘sci-fi’ look, and this is true based on established conventions and trends – but it is almost becoming too reliant on stereotypes. It might be beneficial to consider this from a different perspective. When I think of a ‘futuristic’ user interface, I think of innovation. But by recycling the nature and characteristics of older, analogue technology, is this not going in the opposite direction? When it comes to the Dead Space 3 inventory in particular, the combination of patterns, shaped borders, scanlines, textures and detailed item thumbnails is beginning to teeter on the verge of clutter. I think from this point, it may become beneficial to take an aesthetic direction akin to the new Star Wars: Battlefront interface or perhaps Destiny.

The Star Wars: Battlefront 2015 Beta interface, shown above, makes use of:

  • Bright, solid, flat colours (and a limited, consistent palette)
  • Blurred and frosted backgrounds for clarity
  • Clear and consistent grid systems
  • Flat, informative pictograms over more detailed thumbnail artwork
  • Strong use of clear and modern typography
  • Spacious and generous use of white-space to avoid clutter

The design avoids feeling bland, clinical or plain by making use of full-scale, high-definition artwork occupying the background space (blurred when necessary to retain clarity of the overlaid options). The large character and asset models in the background rotate slowly, adding subtle life to the interface without becoming distracting. These backgrounds also take the opportunity to showcase the high-quality modelling and artwork featured in the game – giving the player a chance to look more carefully and close-up – as opposed to when they would usually be in fast-paced battle action, glossing over the rich details.

I personally feel that this clean and slick style could be combined very effectively with the Dead Space team’s experienced knowledge of implementing diegetic interfaces. This is one aspect the SWBF interface did not take advantage of – all of the interface is completely non-diegetic. [It is worth noting that in the following, I am thinking purely in terms of game design, ruling out any preconceived biases about the studios themselves, their signature styles, usual methods of working, their available resources, capabilities, creative freedom, etc.] The lack of diegesis in Star Wars may be due to wanting to stick to the safer choice of a more traditional UI for such a large-scale, online multiplayer FPS – something that may not be able to take as many risks or is more bound to established conventions and patterns within its UI.

However, more prominently, the use of diegesis could pose consistency issues. Players can take control of a variety of different characters – from both sides of the battle. My first thought was that the designers could have made use of the fact that a character such as a Stormtrooper wears a helmet, and could therefore use a diegetic helmet HUD – or at least have the existing HUD slightly slanted and fish-eyed to replicate looking through a helmet (see: Destiny). The problem created here is that… what if the player was playing as Luke, a character without a helmet? Or piloting a spacecraft or a vehicle such as an AT-AT? These would call for very different HUD designs, which would likely result in either: a) far too much work or b) inconsistency and therefore a lack of usability. These are all considerations for when trying to streamline a design to suit its context most appropriately.

On the other hand, a new Dead Space would (if in keeping with the first three games) most likely feature a single, unchanging playable character who can continue to seamlessly make use of diegesis through holograms or similar in-game technology. As said above, I feel that if this also took advantage of a more contemporary design aesthetic like that of Star Wars, the result could be an even more seamless and functional interface that was also in keeping with the sci-fi theme — without falling prey to any outdated sci-fi stereotypes.

The book Game Development Essentials: Game Interface Design also covers the Dead Space interface and its use of diegesis. [To be updated with scans from the book]

Studio Work Update + Planning

I have had to restructure and rethink my Studio Work (Creative Practice) due to my work placement next year. I am no longer following my original project plan, and therefore my deliverables for January will be much less substantial. I am still following the same idea (3D Sea Dragon Prototype) but will only aim to complete about 10% of what I originally had in mind, as the other 90% will be completed as part of my work placement.

I created a Progress Report in the form of a few slides to update lecturers and peers (as well as clarify things for myself) on where I am at currently and what my plans are now.

View the slides as a PDF: Progress Report BA3a

I am now mostly devoting time to my Research Report, but alongside this I am still researching and planning towards my newly decided Jan 4th Deliverables. For my January submission, I am aiming towards:

  • Character Controller (the beginning of, may still be experimental and unfinished)
  • Collectible Script, working with the character controller
  • Sketches and Initial Ideas for Branding (possibly some early logo type/mark designs)

To accompany these, I will also have:

  • Research + Reflective Journal (this blog)
    • Research and tutorials into UE4, Blueprints, 3D Prototyping, etc
    • Research and inspiration into Branding and Identity for Games (as well as more general findings)
  • Individual ‘Mini’ Tasks
  • Completed Art Test
  • Any applicable Sprung Studios prep work
    • e.g. Research into prototyping apps, journal entry on 1-week trial (December)

All of the above will contribute to my graded submission.