Posts By Aiden Lesanto

Notes, Sketches and Planning

Various sketches, notes and workings out done while working through my Research Report. Included here in order to show my thought processes and methods of working through ideas.

Apple’s Knowledge Navigator, Voice Agents and Adaptive Feedback

Apple’s 1987 concept video of the Knowledge Navigator (a purely conceptual idea at the time)

“In 1989 [or 1987?], Apple released a celebrated video entitled The Knowledge Navigator, which deposited a genteel, tuxedoed actor in the upper-right-hand corner of a Powerbook.”

What is even more interesting:

“The “professor” video was set in September 2011. In October 2011, Apple re-launched Siri, a voice activated personal assistant software vaguely similar to that aspect of the Knowledge Navigator.”

Wikipedia

In doing this, Apple were pushing the boundaries of how we access and interact with computers and information. They brought forward ideas of how technology like this could fit seamlessly into our day-to-day lives, handling a variety of tasks, from calling a friend or colleague to finding in-depth scientific research journals. It’s not as though Apple were the first to think up the digital assistant; science fiction got there earlier, with characters like Star Trek’s Data. But that was exactly that: science fiction. Apple’s video took these ideas out of science fiction and into our households, in the same way they did with the personal computer. I think this is a testament to the importance of pushing innovation and not being afraid to step beyond what we currently know to be possible. Even if the technology available at the time cannot physically realise an idea yet, we should continue to dream it. If Apple had not had these thoughts in 1987, would we have something as sophisticated as Siri today?

“Much of the GUI’s celebrated ease of use derives from its aptitude for direct manipulation […] But agents don’t play by those rules. They work instead under the more elusive regime of indirect manipulation”

“The original graphic-interface revolution was about empowering the user—making “the rest of us” smarter, and not our machines”

“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another”

—Steven Johnson, Interface Culture

(Did they predict the ubiquity of ‘the cloud’?)

“The ultimate goal of the more ambitious agent enthusiasts, however, goes well beyond software that dutifully does what it’s told to do—book airline tickets, sell stock. The real breakthrough, we’re told, will come when our agents start anticipating our needs”

—Steven Johnson, Interface Culture

Siri and other voice activated agents might not be there just yet, but the fact that they exist at the sophisticated level they do begs the question “why is this not perfect?”, which in turn inspires continued innovation until it is just that: a perfect response to our needs. It’s likely that in the future, Siri (and others) will integrate even more seamlessly into our daily lives. Instead of just responding to our commands (find me coffee, wake me up in 8 hours, call Steve) they will anticipate our needs: “Wake me up in 8 hours” / “Ok, I’ll wake you up in 8 hours, but you haven’t locked your front door yet or turned off the downstairs lights. Would you like me to?” (integration with HomeKit). Or perhaps “Siri, find me coffee” / “Ok, the nearest is Starbucks, head north and turn left”. But what if Siri knows you don’t like Starbucks, and that you prefer checking out local independents rather than a chain? Maybe the response will be followed by “…but there’s an independent an extra mile away. Head west and turn right”.
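Purely as a thought experiment, here is a minimal sketch of what such an anticipation rule could look like in code. Everything here is hypothetical: the home-state flags, the rule wording and the HomeKit-style integration are all invented for illustration.

```csharp
using System;
using System.Collections.Generic;

// Speculative sketch: an assistant that, after confirming a command,
// checks a set of "anticipation rules" against hypothetical home state.
class Assistant
{
    // Hypothetical smart-home state (names invented for illustration).
    static readonly Dictionary<string, bool> homeState = new Dictionary<string, bool>
    {
        { "FrontDoorLocked", false },
        { "DownstairsLightsOff", false },
    };

    static void SetAlarm(int hours)
    {
        Console.WriteLine($"Ok, I'll wake you up in {hours} hours.");

        // Anticipation: the user is going to bed, so surface anything
        // they would normally handle before sleeping.
        var forgotten = new List<string>();
        if (!homeState["FrontDoorLocked"]) forgotten.Add("locked your front door");
        if (!homeState["DownstairsLightsOff"]) forgotten.Add("turned off the downstairs lights");

        if (forgotten.Count > 0)
            Console.WriteLine($"...but you haven't {string.Join(" or ", forgotten)} yet. Would you like me to?");
    }

    static void Main() => SetAlarm(8);
}
```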

This links to Firefly, a music recommendation service founded in 1995. Johnson states that “What makes the [Firefly] system truly powerful is the feedback mechanism built into the agent”. The agent responded to your ratings of various records by further tailoring its subsequent recommendations, and this is what set it apart and gave it an edge. In other words, it was the ability to adapt. Feedback, in its many forms, is a recurring principle of powerful interaction design. I would call this kind of feedback Adaptive Feedback.
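As an illustration of that feedback loop (not Firefly’s actual algorithm, which Johnson doesn’t detail), here is a toy recommender where every rating the user gives adjusts the weights used for the next round of suggestions. The catalogue, tags and weighting scheme are all invented for the sketch.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A toy "adaptive feedback" recommender: every rating feeds back into
// the scores used for the next round of suggestions.
class AdaptiveRecommender
{
    // Hypothetical genre tags per record title.
    readonly Dictionary<string, string[]> catalogue = new Dictionary<string, string[]>
    {
        { "Kind of Blue",   new[] { "jazz", "cool" } },
        { "A Love Supreme", new[] { "jazz", "spiritual" } },
        { "Nevermind",      new[] { "rock", "grunge" } },
        { "In Utero",       new[] { "rock", "grunge" } },
    };

    // Learned preference weight per tag, adjusted by user ratings.
    readonly Dictionary<string, double> tagWeights = new Dictionary<string, double>();

    // Feedback mechanism: a rating from 1-5 nudges the weights of the
    // record's tags up or down around the neutral midpoint of 3.
    public void Rate(string title, int rating)
    {
        foreach (var tag in catalogue[title])
        {
            tagWeights.TryGetValue(tag, out double w);
            tagWeights[tag] = w + (rating - 3);
        }
    }

    // Recommendations adapt automatically: score each record by
    // summing the learned weights of its tags.
    public IEnumerable<string> Recommend() =>
        catalogue.Keys.OrderByDescending(t =>
            catalogue[t].Sum(tag => tagWeights.TryGetValue(tag, out double w) ? w : 0));
}

class Program
{
    static void Main()
    {
        var agent = new AdaptiveRecommender();
        agent.Rate("Kind of Blue", 5);  // loved it: "jazz" and "cool" gain weight
        agent.Rate("Nevermind", 1);     // disliked: "rock" and "grunge" lose weight
        Console.WriteLine(string.Join(", ", agent.Recommend()));
        // Jazz records now rank above the grunge ones.
    }
}
```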

A small link to this is a minor, but very useful, aspect of Apple’s pop-down ‘Save’ dialog. The dialog uses Progressive Disclosure to show the user more or fewer customisation options when saving a file to their hard disk.

Giles Colborne sums up the merits of this design in his book, Simple and Usable:

“The Save dialog box is a classic example of this. The basic feature is nothing more than two core questions:

  • what would you like to call this file?
  • where, from a list of options, would you like to save it?

But experts want something richer: extended options to create a new folder for the document, to search your hard disk for places to save the document, to browse your hard disk in other ways, and to save the file in a special format.

Rather than show everything, the Save dialog box opens with the mainstream version but lets users expand it to see the expert version.

The box remembers which version you prefer and uses that in the future. This is better than automatic customization because it’s the user who chooses how the interface should look. This is also better than regular customizing because the user makes the choices as she goes, rather than having a separate task of creating the menu. This means mainstreamers aren’t forced to customize. That model, of core features and extended features, is a classic way to provide simplicity as well as power.”

This is only a very minor example of an interface showing simple characteristics of Adaptive Feedback. The true potential of this type of feedback and anticipation of user needs is far greater, but it’s important to consider whether details like this could help on a smaller scale too.
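To make the pattern concrete, here is a minimal sketch of progressive disclosure with a remembered preference, along the lines Colborne describes. The class and member names are illustrative, not Apple’s actual implementation.

```csharp
using System;

// Progressive disclosure sketch: the dialog opens in whichever mode the
// user chose last time, and toggling it updates that stored preference.
class SaveDialog
{
    // Persisted between openings; in a real app this would be a user default.
    static bool prefersExpanded = false;

    bool expanded;

    public SaveDialog()
    {
        expanded = prefersExpanded;  // open with the version used last time
        Render();
    }

    public void ToggleDisclosure()
    {
        // The user's own action, not automatic customisation,
        // decides how the dialog looks from now on.
        expanded = !expanded;
        prefersExpanded = expanded;
        Render();
    }

    void Render()
    {
        // The two core questions every user sees...
        Console.WriteLine("Save As: [          ]  Where: [Documents v]");
        // ...and the richer expert options, only when expanded.
        if (expanded)
            Console.WriteLine("[New Folder] [Search] [Browse] Format: [PDF v]");
    }
}

class Program
{
    static void Main()
    {
        var first = new SaveDialog();  // opens in the basic version
        first.ToggleDisclosure();      // an expert expands it; choice is remembered
        var next = new SaveDialog();   // the next save opens expanded
    }
}
```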

But how does this link to video game interfaces? A quick example could be this: imagine a player has just finished a tough wave of combat and has taken cover nearby to protect themselves, as their health is very low. The player quickly opens up their inventory. Perhaps the interface of the game can interpret this and anticipate that the player’s priority is probably to use a Medical Kit or Health Potion and heal themselves. The interface could then bring this option to the forefront or highlight it somehow (similar to how Google and Apple give me my most recent documents first, as I discuss later) to save the player time in this crucial, tense moment of running low on health. That is, of course, if the game wants to help the player. Some gameplay may benefit from making healing during combat more difficult, rather than easier, in order to more accurately convey a feeling of desperation, tension or realism. As well as this, what if the constant changing of where something is in the inventory actually hindered the player? Once they had learnt where things were, it wouldn’t work too well if the game went and changed this each time (not dissimilar to how supermarkets move things around to encourage shoppers to look around more). These are all questions whose answers depend on the particular design in question.
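A minimal sketch of the ‘helpful’ version of that inventory could look something like this. The item list, health threshold and promotion rule are all hypothetical design choices, and, as noted above, promoting items at all may be the wrong call for some games.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Adaptive feedback in an inventory screen: when the player's health is
// critical, healing items are promoted to the front of the list.
class Inventory
{
    public float PlayerHealth = 0.15f;   // 15%: critically low

    readonly List<(string Item, bool Heals)> items = new List<(string, bool)>
    {
        ("Rifle Ammo", false), ("Grenade", false),
        ("Medical Kit", true), ("Health Potion", true),
    };

    public IEnumerable<string> Display()
    {
        // Anticipate the likely intent: surface healing items first when
        // health is low, otherwise keep the stable order the player learnt.
        var ordered = PlayerHealth < 0.25f
            ? items.OrderByDescending(i => i.Heals)
            : items.AsEnumerable();
        return ordered.Select(i => i.Item);
    }
}

class Program
{
    static void Main() =>
        Console.WriteLine(string.Join(", ", new Inventory().Display()));
        // Medical Kit, Health Potion, Rifle Ammo, Grenade
}
```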

Dynamic interfaces and innovation in how we view information

While reading Interface Culture, I came across Steven Johnson’s mention of an early (1996) Apple concept known as V-Twin.

“In early 1996, Apple began showing a functional demo of its new Finder software, in which all file directories include a category for “most representative words”. As you change the content of the document, the list of high-information words adjusts to reflect the new language. At first glance, this may seem like a superficial advance—a nifty feature but certainly nothing to write home about. And yet it contains the seeds of a significant interface innovation”

“Apple’s list of high information words raises the stakes dramatically: for the first time, the computer surveys the content, the meaning of the documents”

“Apple’s new Finder was the first to peer beyond the outer surface, to the kernel of meaning that lies within. And it was only the beginning.”

“It is here that the real revolution of text-driven interfaces should become apparent. Apple’s V-Twin implementation lets you define the results of a search as a permanent element of the Mac desktop—as durable and accessible as your disk icons and the subfolders beneath them.”

In Mac OS X Tiger (released in 2005), the original idea behind V-Twin and Views shipped as Apple’s new ‘Smart Folders’. I find it interesting that it took nearly a decade for this innovation to reach the masses.
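The excerpt doesn’t say how V-Twin actually ranked words, but a standard way to approximate “most representative words” is TF-IDF weighting: a word scores highly if it is frequent in the document at hand but rare across the rest. A small sketch, with an invented corpus:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Approximating "most representative words" with TF-IDF. This is a guess
// at the idea behind V-Twin; the excerpt doesn't give the algorithm.
class Program
{
    static string[] Tokenise(string text) =>
        text.ToLower().Split(new[] { ' ', '.', ',' }, StringSplitOptions.RemoveEmptyEntries);

    static void Main()
    {
        var documents = new[]
        {
            "the sea dragon prototype uses unreal blueprints",
            "the frogger clone uses unity sprites",
            "the research report covers interface culture",
        };

        var corpus = documents.Select(Tokenise).ToArray();
        var target = corpus[0];

        var scores = target.Distinct()
            .Select(word => (
                word,
                // Term frequency in this document...
                tf: (double)target.Count(w => w == word) / target.Length,
                // ...weighted by how rare the word is across all documents.
                idf: Math.Log((double)corpus.Length / corpus.Count(d => d.Contains(word)))))
            .OrderByDescending(s => s.tf * s.idf);

        // As the document's content changes, re-running this yields an
        // updated list of high-information words.
        foreach (var (word, tf, idf) in scores.Take(4))
            Console.WriteLine($"{word}: {tf * idf:F3}");
    }
}
```

Common words like “the” appear in every document, so their IDF (and score) collapses to zero, leaving the distinctive vocabulary at the top, which is exactly the behaviour Johnson describes.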

With Views came a few breaks in consistency. The function of these folders differed from regular folders, and therefore so did their various interactive behaviours: certain actions may not happen the way the user is accustomed to, based on their existing knowledge of Apple’s set-in-stone interface conventions. “Wasn’t the user experience supposed to be all about consistency?” As Johnson puts it, “The fact that the View window departs so dramatically from the Mac conventions indicates how radical the shift really is, even if it seems innocuous at first glance.”

I think this is particularly relevant when it comes to my questioning of ‘Innovation vs. Convention’, which I plan to discuss in detail towards the end of my Research Report. Here, in Apple’s example, breaking convention was a necessary result of innovation. Certain conventions could not physically exist within this innovation, as they directly contradicted the function of the innovation itself.

Further reading on Views can be found on Johnson’s website.

“The contents of the view window, in other words, are dynamic; they adapt themselves automatically to any changes you make to the pool of data on your hard drive.”

“If there is a genuine paradigm shift lurking somewhere in this mix—and I believe there is—it has to do with the idea of windows governed by semantics and not by space. Ever since Doug Engelbart’s revolutionary demo back in 1968, graphic interfaces have relied on spatial logic as a fundamental organisational principle.”
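The shift Johnson describes is easy to express in code: a view is not a container of files but a stored query, re-evaluated against the pool of data every time it is opened. A sketch (illustrative, not Apple’s implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A "view" as Johnson describes it: a stored query re-evaluated against
// the file pool each time it is opened, rather than a folder of files.
class SmartFolder
{
    readonly string name;
    readonly Func<string, bool> query;

    public SmartFolder(string name, Func<string, bool> query)
    {
        this.name = name;
        this.query = query;
    }

    // Contents are dynamic: they reflect whatever currently matches.
    public void Open(IEnumerable<string> allFiles)
    {
        Console.WriteLine($"{name}:");
        foreach (var file in allFiles.Where(query))
            Console.WriteLine($"  {file}");
    }
}

class Program
{
    static void Main()
    {
        var files = new List<string> { "report-draft.txt", "budget.xls" };
        var reports = new SmartFolder("Reports", f => f.Contains("report"));

        reports.Open(files);            // shows report-draft.txt
        files.Add("report-final.txt");  // change the pool of data...
        reports.Open(files);            // ...and the view updates itself
    }
}
```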

I find a link to modern interface design here. For example, I have a multitude of documents in my Google Drive. When I open up the Google Docs web app, I am faced with, firstly, the option to create a new document, followed by a list of all of my documents. However, Google doesn’t automatically order these by Name or Type, as some apps default to, but by Date.

A screenshot from my Google Docs home-page, showing the hierarchy of where I will go when searching for the document I want.

The above screenshot illustrates this. When I arrive here, once I have decided that creating a new document is not my goal, I focus my attention on the next ‘chunk’ of information (highlighted in red). Google has decided that it’s very likely I’ll want to resume working on something I have opened recently. Even if I haven’t opened the document I want in the very recent past, chances are it’s one of those I opened in the past month (orange). Failing that, the search bar is only a few pixels away at the top of the screen (yellow), so I can search precisely for what I want.

The majority of the time, though, Google’s first instinct is exactly right, and the very document I came in search of has pride of place in the hierarchy of the information. This is an example of the application organising information (or files) by a meaning that it has perceived through interpreting more static, regimented data (when a user last opened a file). The application associates ‘recently used’ with ‘higher priority’. As I said, the majority of the time this is very accurate: it is probably my research report files (for this exact report) that I want to access, as that is almost solely what I am working on currently. However, sometimes that may not be the case. Perhaps I am working on my report, but I want to dig out an older document that I find relevant to what I’m working on now, in order to reference it. This organisation of information does not interfere with that; as I explained with the orange and yellow areas, everything else is still close at hand.

Apple also does this in their modern interfaces. Let’s take a look at my (suitably and conveniently disorganised) desktop at this present moment.

A few moments before taking the screenshot above, I’d taken another screenshot. At the time, it’s likely I was planning to use it for something very soon after, whether that was to share it, move it to a different location, or open it in an image editing app to annotate it. I know that screenshots save to my desktop, so my first step would be to open my desktop (in this case, I’ve opened it as a folder rather than going to the actual desktop itself; I could also have opened the ‘folder’ ‘All My Files’ and the result would be the same). As you can see, Apple have kindly organised my desktop files by Date Last Opened. This places my recently taken screenshot in a prominent position, set aside from the rest: it’s right at the top, under a section specifically for files I have created today. It is the first file I see, and this makes the workflow of Save Screenshot → Find Screenshot → Share Screenshot (something I do often) about as streamlined as it has ever been.

The same principle would apply in a range of different scenarios, for example if I had saved an image from the web or from another application. It is also worth mentioning that I can further organise the screen above. Apple gives you the option to organise the files here firstly by Date Last Opened (sectioning them into Today, 7 Days and 30 Days, as above), but within those sections you can further organise them by Name, Kind, etc. So, you might know that the file you are looking for was opened in the last week, and that it begins with a particular letter; you can use those details, combined with Apple’s intuitive sorting, to find it in a fraction of the time it would take if you were faced with your entire desktop listed A → Z.
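The underlying mechanism is simple enough to sketch: group by a recency bucket first, then sort by name within each bucket. The file names, dates and bucket boundaries below are made up for illustration:

```csharp
using System;
using System.Linq;

// Finder-style grouping sketch: bucket files by Date Last Opened
// (Today / 7 days / 30 days), then sort by name within each bucket.
class Program
{
    static string Bucket(DateTime opened)
    {
        var age = DateTime.Now - opened;
        if (age.TotalDays < 1) return "Today";
        if (age.TotalDays < 7) return "Previous 7 Days";
        return "Previous 30 Days";
    }

    static void Main()
    {
        var files = new[]
        {
            (Name: "screenshot.png", Opened: DateTime.Now.AddHours(-1)),
            (Name: "essay.docx",     Opened: DateTime.Now.AddDays(-3)),
            (Name: "budget.xls",     Opened: DateTime.Now.AddDays(-20)),
            (Name: "notes.txt",      Opened: DateTime.Now.AddDays(-2)),
        };

        // Primary key: recency bucket. Secondary key: name, so a user who
        // knows roughly when they opened a file can scan alphabetically.
        var grouped = files
            .GroupBy(f => Bucket(f.Opened))
            .OrderBy(g => g.Min(f => DateTime.Now - f.Opened));

        foreach (var group in grouped)
        {
            Console.WriteLine(group.Key);
            foreach (var f in group.OrderBy(f => f.Name))
                Console.WriteLine($"  {f.Name}");
        }
    }
}
```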

This is just a small example of how modern interface designers are streamlining our workflows by interpreting and extracting meaning from data. This particular example isn’t even as complex as interpreting the content of documents, merely the time they were created or last opened. It also stands as an example of how very early interface designs (going back to Apple’s 1996 V-Twin demo) paved the path of innovation with breakthroughs of their own along the way: not just in the form of new metaphors or visuals, but by questioning the way we think about and utilise data and information. The notion of a semantic, rather than solely spatial, file system is one of these.

“What the view window does, in effect, is say this: why not organise the desktop according to another illusion? Instead of space, why not organise around meaning?”

Dead Space Analysis

Video: Designing Dead Space’s immersive user interface

Dead Space’s Diegetic UI

The integrated nature of the diegetic UI meant that it was used to enhance the story-driven gameplay, not just laid over the top as a means of control (as explained by the Lead UI Artist in the video above). The broken, dystopian future of Dead Space was emphasised by details such as the interface being broken or unpredictable at times, with static, scanlines, flickering lights, etc. These details did not necessarily make logical sense in a futuristic world (why would this advanced technology suffer these analogue faults?) but were included for the sake of enhancing the atmosphere and the ability to tell the story. These elements communicated feeling and emotion to the player and increased their immersion within this world.

This is an example of innovation in Interaction/UI Design creating a more successful experience for the player. As a result of these decisions, the player had a more seamless existence within an imaginary sci-fi world. Usually, interactions such as checking your remaining health, opening a door, or fast travelling on a map can break that sense of immersion and temporarily bring a player out of the imaginary world. In the case of Dead Space, however, the designers remedied this by innovating, creating an experience where the player didn’t need to leave the game world to perform these actions: the in-game character displays their health on their back, interacts with maps in-game, and so on. This level of immersion benefits the game by intensifying the emotions the designers set out to instil: fear, or ‘a horror experience’.

(The minor exceptions to this are screens such as the pause menu and settings page, which never ‘belong’ to the in-game avatar but solely to the player of the game. The designer mentions that this was approached by ensuring these interfaces were always set behind Isaac, the in-game player avatar.)

🔗 Links to: Skeuomorphic Design

(Using familiarity by retaining ornamental design cues that were necessary in the original form of an object but are no longer technically required.) E.g. the scanlines in Dead Space, or the paper texture background in Apple’s Notes app.

Screenshot: the original, non-diegetic UI design

The above screenshot shows the UI elements that originally needed to be implemented, but the team realised this design distracted too much from the gameplay and broke the immersion; the player’s attention would never be focused where it needed to be. The ‘rig’ is the answer to this, implementing the elements diegetically into the game.

  • The initial design for helping the player navigate, which was later deemed 'unsalvageable'
  • The innovation to solve this dilemma - the 'locator', a much simpler, still fully diegetic, glowing line on the floor
  • Showing how the locator has evolved over time into the later games. Much more clarity is achieved here, by dimming the rest of the world and lighting up the focus of the player's vision with bright blue lighting

As shown above, with the player navigation system, the team made many innovations throughout the design process in order to stay truly committed to their diegetic interface. It seemed illogical to allow this diegetic illusion to break at any point, as it would reduce the effect of the rest of the design.

Screenshot: the ‘Bench’ upgrade interface

However, the design of the ‘Bench’ made it very difficult to stick to these self-set conventions. Trying to create a more immersive method of upgrading equipment using the ‘Bench’ resulted in favouring full diegesis at the expense of usability.

Dead Space’s workbench began as a way to tie Clarke’s engineering background to the game as he created weapons from what he found in the environment, Ignacio said. Its redesign in Dead Space 3, while offering more traditional weapons, was also a way to push the idea of Clarke as engineer farther by allowing him to actually craft his own weapons.

The first attempt to redefine the workbench for Dead Space 3, which included Clarke in frame and multiple windows on the bench, was “unusable,” he said.

“You know when you’ve screwed up a system,” he said, when those working on the game would rather use the debug system than the one in the game.

The compromise involved using a more traditional UI element that took over the whole screen. Despite a break with the diegetic design principles, it’s a decision he stands by.

“At the end of the day, none of that is important if your users really can’t interact with your game,” he said. “The bottom line is that fun and usability are more important than the bullshit I was talking about in the beginning.”

Polygon article on Dino Ignacio, Visceral Games’ lead UI Designer

The bottom line is that sometimes you have to let go of the conventions you’ve set yourself, or the previously established industry conventions, in order to ultimately create a better experience for the user. When the ‘Bench’ system the team were designing was failing, they needed to let go of their stubborn desire to stick 100% to a completely diegetic interface and instead settle for a full-screen, more traditional UI. The result was much more usable for the player, despite not being as seamlessly embedded in the in-game surroundings. This compromise was worthwhile and had a positive effect on the overall experience. This is a case where breaking the rules was evidently the right choice to make.

Above are some images showing the iterations of the inventory design. The end result was simplified as much as possible, which was necessary for the diegetic nature of the design to function. It was important to keep Isaac on screen to maximise the effect, as the interface is being projected as a hologram from his equipment. Consequently, though, this reduces the ‘real estate’ available for the UI. It was important to focus on readability and keep the features minimal, while also keeping to the aesthetic style and theme of the game. Mostly solid colours and lines were used to minimise confusion and clutter, with each section clearly in its own ‘chunk’. However, there are still more subtle elements of texture and pattern, which could be considered decorative (and therefore possibly not truly minimal), but they can be justified because they serve an important function: as explained above, these details are a subtle example of using skeuomorphs such as scanlines to contribute to the atmosphere.

My personal reflection is that in a situation such as this, it is important to prioritise simplicity, usability, readability and clarity above all else, especially given the limited screen space (as I would say the designers did at the time). Most of the time, this would involve removing everything that is not strictly necessary for the function of the interface, which would generally include the more ‘decorative’ elements. However, it’s also important not to sacrifice elements that are in aid of the story, atmosphere and overall feel of the experience. If scanlines and other more superficial details allow the design to blend into the game world more effortlessly, then this is equally important, provided they don’t interfere, for instance by obscuring the text. I feel it is important to add details like this in the least obtrusive ways. For example, the subtle shapes of the corners and lines add a technological, futuristic feel to the interface while also ensuring the headings stand out to the player, meaning they can quickly skim over the interface and locate the section they are looking for. As well as this, a colour palette of muted grey-blues with highlights of bright blue and white (making clever use of varying opacity) also contributes to the sci-fi theme while simultaneously creating text that is easy to read against the background of the game world.

Having said this, if I were to design the interface for the next Dead Space game, I would opt for something even simpler. It’s easy to say that details such as texture and scanlines add to the ‘sci-fi’ look, and this is true based on established conventions and trends, but it is almost becoming too reliant on stereotypes. It might be beneficial to consider this from a different perspective. When I think of a ‘futuristic’ user interface, I think of innovation. But by recycling the nature and characteristics of older, analogue technology, is this not going in the opposite direction? When it comes to the Dead Space 3 inventory in particular, the combination of patterns, shaped borders, scanlines, textures and detailed item thumbnails is beginning to teeter on the verge of clutter. I think from this point, it may become beneficial to take an aesthetic direction akin to the new Star Wars: Battlefront interface, or perhaps Destiny’s.

The Star Wars: Battlefront 2015 Beta interface, shown above, makes use of:

  • Bright, solid, flat colours (and a limited, consistent palette)
  • Blurred and frosted backgrounds for clarity
  • Clear and consistent grid systems
  • Flat, informative pictograms over more detailed thumbnail artwork
  • Strong use of clear and modern typography
  • Spacious and generous use of white-space to avoid clutter

The design avoids feeling bland, clinical or plain by making use of full-scale, high-definition artwork occupying the background space (blurred when necessary to retain the clarity of the overlaid options). The large character and asset models in the background rotate slowly, adding subtle life to the interface without becoming distracting. These backgrounds also take the opportunity to showcase the high-quality modelling and artwork featured in the game, giving the player a chance to look more carefully and close-up, as opposed to when they are in fast-paced battle action, glossing over the rich details.

I personally feel that this clean and slick style could be combined very effectively with the Dead Space team’s experienced knowledge of implementing diegetic interfaces. This is one aspect the SWBF interface did not take advantage of: all of its interface is completely non-diegetic. [It is worth noting that in the following, I am thinking purely in terms of game design, ruling out any preconceived biases about the studios themselves, their signature styles, usual methods of working, their available resources, capabilities, creative freedom, etc.] The lack of diegesis in Star Wars may be due to wanting to stick to the safer choice of a more traditional UI for such a large-scale, online multiplayer FPS, a genre that may not be able to take as many risks and is more bound to established conventions and patterns within its UI.

More prominently, however, the use of diegesis could pose consistency issues. Players can take control of a variety of different characters from both sides of the battle. My first thought was that the designers could have made use of the fact that a character such as a Stormtrooper wears a helmet, and could therefore use a diegetic helmet HUD, or at least have the existing HUD slightly slanted and fish-eyed to replicate looking through a helmet (see: Destiny). The problem this creates is: what if the player is playing as Luke, a character without a helmet? Or piloting spacecraft or a vehicle such as an AT-AT? These would call for very different HUD designs, which would likely result in either a) far too much work or b) inconsistency and therefore a lack of usability. These are all considerations for when trying to streamline a design to suit its context most appropriately.

On the other hand, a new Dead Space would (if in keeping with the first three games) most likely feature a single, unchanging playable character who can continue to seamlessly make use of diegesis through holograms or similar in-game technology. As said above, I feel that if this also took advantage of a more contemporary design aesthetic like that of Star Wars, the result could be an even more seamless and functional interface that was also in keeping with the sci-fi theme — without falling prey to any outdated sci-fi stereotypes.

The book Game Development Essentials: Game Interface Design also covers the Dead Space interface and its use of diegesis. [To be updated with scans from the book]

Studio Work Update + Planning

I have had to restructure and rethink my Studio Work (Creative Practice) due to my work placement next year. I am no longer following my original project plan, and therefore my deliverables for January will be much less substantial. I am still following the same idea (3D Sea Dragon Prototype) but will only aim to complete about 10% of what I originally had in mind, as the other 90% will be completed as part of my work placement.

I created a Progress Report in the form of a few slides to update lecturers and peers (as well as clarify things for myself) on where I am at currently and what my plans are now.

View the slides as a PDF: Progress Report BA3a

I am now mostly devoting time to my Research Report, but alongside this I am still researching and planning towards my newly decided Jan 4th Deliverables. For my January submission, I am aiming towards:

  • Character Controller (the beginning of, may still be experimental and unfinished)
  • Collectible Script, working with the character controller
  • Sketches and Initial Ideas for Branding (possibly some early logo type/mark designs)

To accompany these, I will also have:

  • Research + Reflective Journal (this blog)
    • Research and tutorials into UE4, Blueprints, 3D Prototyping, etc
    • Research and inspiration into Branding and Identity for Games (as well as more general findings)
  • Individual ‘Mini’ Tasks
  • Completed Art Test
  • Any applicable Sprung Studios prep work
    • e.g. Research into prototyping apps, journal entry on 1-week trial (December)

All of the above will contribute to my graded submission.

Task 3: Frogger Clone Progress


The sprite artwork has been updated to reflect a different theme

I have made some progress with my Task 3, the Frogger clone. I am now working towards creating something more original, starting with new artwork. I made some quick space-themed sprites, aiming for an arcade shooter feel: I replaced the trucks and cars with asteroids and meteors, and the logs became light-beam platforms. This dramatically changed the theme, but the gameplay still fits. I would like to work further on the artwork, but first I will turn my focus to improving the functionality. My next steps will be to add some kind of Score System and Win/Lose/Restart Screens, a first sketch of which follows below.
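This is how I might approach it in Unity. The component, field and tag names are placeholders, the win/lose screens are assumed to be simple UI panels toggled on and off, and I use the Unity 5 era Application.LoadLevel call for the restart.

```csharp
using UnityEngine;
using UnityEngine.UI;

// First sketch of a score system with win/lose/restart feedback.
// All field names and screens are placeholders for now.
public class GameManager : MonoBehaviour
{
    public Text scoreText;        // UI Text showing the current score
    public GameObject winScreen;  // "You win" panel, hidden by default
    public GameObject loseScreen; // "You lose" panel, hidden by default

    int score;

    public void AddScore(int points)
    {
        score += points;
        scoreText.text = "Score: " + score;
    }

    public void Win()
    {
        winScreen.SetActive(true);
        Time.timeScale = 0f;  // freeze the action behind the screen
    }

    public void Lose()
    {
        loseScreen.SetActive(true);
        Time.timeScale = 0f;
    }

    // Hooked up to a 'Restart' button on the win/lose screens.
    public void Restart()
    {
        Time.timeScale = 1f;
        Application.LoadLevel(Application.loadedLevel);  // reload current level
    }
}
```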

Later down the line, I would like to improve the artwork further by adding some simple animations to the projectiles, platforms and possibly the background too. This will bring the scene to life a bit more and add visual interest for the player. It would also be interesting to experiment with some visual feedback in the sprite when the player loses, such as the ship falling or exploding; right now it simply disappears instantly, which is quite anticlimactic.

Task 3: Frogger Clone

The third mini-task we have been set is to create a Frogger clone.


The project so far in Unity 5

It made more sense for me to approach this task in Unity rather than UE4, as Unity feels more suited to 2D/sprite-based games and my existing Unity experience means I can take the concept a little further. Although I would like to improve my very minimal knowledge of UE4 and Blueprints, for this task in particular it felt most logical to use Unity.


Gif showing the game in action

The gif above shows the current state of my Frogger clone. I made use of tutorials to get to this point, using pre-made sprites while I got the game functioning correctly. Now, I would like to make the game more original and build in my own features and artwork.

A few things I would like to introduce next are:

  • Custom Art – Sprites and Animation (potentially changing the theme)
  • ‘You win’ and ‘You lose’ feedback
  • Points Scoring system with High Score
  • Collectibles for extra points (see the sketch after this list)
  • ‘Start’ and ‘Restart’ GUI buttons
  • Additional level(s) with increased difficulty
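For the collectibles item above, a first sketch of a 2D pickup script could look like this. It assumes the player object is tagged “Player” and a scoring manager along the lines of the GameManager sketched in the progress post above; both are placeholders at this stage.

```csharp
using UnityEngine;

// Sketch for the collectibles idea: a 2D trigger that awards
// points and removes itself when the player touches it.
public class Collectible : MonoBehaviour
{
    public int pointValue = 100;  // extra points awarded on pickup

    void OnTriggerEnter2D(Collider2D other)
    {
        if (!other.CompareTag("Player")) return;  // assumes a "Player" tag

        // Hypothetical hook into the scoring manager sketched earlier.
        var manager = FindObjectOfType<GameManager>();
        if (manager != null) manager.AddScore(pointValue);

        Destroy(gameObject);  // the pickup disappears once collected
    }
}
```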

 

Task 2: Art Test


Both Gemstones and Reference Image for Comparison


Detail at 400% Zoom of Green Gemstone


Detail at 400% Zoom of Red Gemstone


W.I.P. Shot of Red Gemstone showing workflow process

Final Learning Agreement + Work Schedule

I have finalised my Learning Agreement and Work Schedule drafts to be approved.

Preview of Learning Agreement

Preview of Weekly Work Schedule

By planning my work in this way I can:

  • Ensure I am planning a reasonable and achievable amount of work
  • Keep on track throughout the year
  • Avoid spending too long on certain tasks and not leaving enough time for others
  • Communicate my plans to others
  • Achieve the set Learning Outcomes
  • Utilise my available time most efficiently

I now feel more confident in what I can achieve throughout this project and have a better idea of how manageable it will be. These documents will be useful to keep referring back to, making sure my work stays on the right path.

Blueprints and UE4 Learning

I attended an extra session today in order to begin learning Blueprints and UE4 first-hand. We covered:

  • Basics, Interface etc
  • Blueprints for:
    • Character Controller and Movement
    • Firing Projectiles
    • Ammo pick-up
    • First person and Third Person
    • Jump pads


I now feel much more confident about approaching my prototype. The Learning Agreement I’ve been working on now seems much more achievable: I can more accurately think about what is and isn’t going to be possible, and more reliably set schedules and goals for myself. My next step will be to plan a weekly schedule, setting myself mini deadlines or goals for what I would like to have completed each week. I will also continue experimenting with UE4 in my own time, as well as attending any extra sessions I can.

Learning the basics of Blueprints today means I can now begin experimenting with ideas more closely related to my own project. I can work the ammo pick-up into my own item collection script, for example. I can also try using the built-in Flying game template, observing how it’s built, and then try to edit it myself now that I have more of an understanding of how things function.