Above is a quick clip of my up-to-date prototype in action. The movement is now much closer to how I envisioned it; I simply need to carry out playtesting while continuing to iterate (during the next project), and then I can perfect it and possibly even look to develop the full concept. This easily meets the goals I set myself for this project submission, so I am pleased with the progress I have made up to this point.
There is only one issue left in the prototype, which unfortunately I don’t believe I can fix before submission, but I will continue trying and can at least fix it between now and April. When using the mouse, the camera often ‘judders’, which can be really frustrating and reduces the fluidity of the movement. I believe this could be to do with the various mouse-based interactions conflicting with each other, but it will take more research and most likely some guidance to fix. Other than this, the prototype is in a good state to submit.
Various sketches, notes and workings out done while working through my Studio Work. Included here in order to show my thought processes and methods of working through ideas.
I created the above Branding Document to showcase my finalised design and briefly summarise the concept behind it.
Since my last update, I refined the logo very slightly after taking in some feedback from others. I liked the look of the Icosahedron placed behind the ‘D’, but was concerned this might cause it to be misread as an ‘O’. I slightly tweaked the shape to be less rounded and decided to use Magenta (the colour of Weedy’s eggs) for the shape itself – creating further distinction. As well as this, I added some glitched details to the shape which integrates it into the design and theme better and also improves the overall aesthetic.
I’m pleased with how the design has turned out but so far, by just creating the logo itself, I have still only scratched the surface of the overall brand. Further down the line, in order to create a coherent identity and brand (rather than just a logo) I will need to consider and focus on:
For now, considering this project is focusing on Pre-Production, I am satisfied with the progress of the design and have actually overshot my target (earlier, I had only planned to submit some initial sketches for the logo, not a finalised design).
Since last working on my prototype I have started afresh with a slightly different and much simpler script (with the help of George). This should make editing and troubleshooting easier, as there is much less complex script to cause errors or confusion. I can build up more features and in-depth movement later on, such as in BA3b. For now, though, I am still experimenting with different methods and figuring out how everything should function.
As shown above, the current prototype:
The character successfully tilts on up and down movement. I had some problems with this initially, where the actor was rotating but the camera wasn’t. Various attempts at fixing this are shown in the gallery above, but I eventually managed to fix it by simply attaching the Rotation to the Camera as a second target, as well as to the FlippyMesh (for some reason it didn’t work when attached to the SpringArm, only the Camera).
However, this also causes the camera to rotate on MoveForward as well. This is a problem I still need to fix, as I only want the camera rotation to happen on MoveUp (on MoveForward, the Mesh should rotate but not the Camera).
My next step to fix this will be to try copying the logic and hooking each function up separately, making one version rotate both components while the other rotates only the Mesh. This should fix my issue.
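The planned split can be sketched in engine-independent terms. This is illustrative Python, not UE4 Blueprint or C++; the component names (FlippyMesh, Camera) mirror my Blueprint, but everything else here is a simplified assumption:

```python
# Sketch of the fix: two separate rotation handlers instead of one
# shared one, so MoveUp tilts both components but MoveForward only
# tilts the mesh. Illustrative Python, not engine code.

class Component:
    def __init__(self):
        self.pitch = 0.0

def rotate_on_move_up(mesh, camera, target_pitch):
    """MoveUp: both the mesh AND the camera should tilt."""
    mesh.pitch = target_pitch
    camera.pitch = target_pitch

def rotate_on_move_forward(mesh, camera, target_pitch):
    """MoveForward: only the mesh tilts; the camera stays level."""
    mesh.pitch = target_pitch
    # camera deliberately left untouched

mesh, camera = Component(), Component()
rotate_on_move_forward(mesh, camera, 15.0)
print(mesh.pitch, camera.pitch)  # mesh tilts to 15.0, camera stays at 0.0
```

In Blueprint terms this just means duplicating the rotation logic and wiring each input event to its own copy, with the camera target removed from the MoveForward copy.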
Next steps (goals before submission):
As well as looking to art history itself, the rise and history of interaction design also offers an interesting perspective. The invention of the Graphical User Interface (GUI, pronounced ‘Gooey’) was a significant turning point in design, hugely influencing how we interact with technology and—consequently—the influence of that technology on our lives. As video games are fundamentally interactive experiences, the advancements of the GUI and other developments within Human Computer Interaction (HCI) play a crucial role in the evolution of gameplay.
To trace the roots of the GUI, it may seem logical to travel back to the dawn of Personal Computers (PCs). After all, what use would we have for a graphical interface before then; what would we possibly put it on? However, ideas of such an interface can indeed be traced back much further than the personal computer, far before technology was capable of realising them.
Particularly, the late 1930s, when Vannevar Bush began writing about a hypothetical device called the Memex, an idea later published in his 1945 article As We May Think. His idea would have far-reaching influence on interface design, long after its time.
The Memex, pictured above, was envisioned as a desk with two display screens, a keyboard and a scanner attached to it. The idea was to allow the user access to all human knowledge, using connections very similar to the hyperlinks we are familiar with today. The fact that this idea was conjured as early as the 1930s is hugely interesting, and the way in which technology has panned out since is remarkably close to Bush’s ideas.
“The irony here, is that a middle-aged army scientist, writing thirty years before the first PC, understood interactivity better than all the Web titans in Silicon Valley. [..] After all, sometimes the best way to understand a technology is to approach it with no expectations, no preconceived ideas. Unhampered by any historical precedent”
—Steven Johnson, Interface Culture
We do, in the present day, have access to a huge expanse of human knowledge through the internet. Whether this is exactly what Vannevar had in mind or not is unclear, but the concept is certainly not too much of a stretch — especially considering our method of interacting with that knowledge is through graphical displays, input devices and a system of hyperlinks, all very much like what Bush described.
It is innovators like Vannevar Bush who we can thank for paving the way towards the methods of interaction we often take for granted now. In the words of Frank Chimero, innovators “do not stand on the inside of what is possible and push; they imagine what is just outside of what we deem possible and pull us towards their vision of what is better. They can see through the fog of the unexplored spaces and notice a way forward”.
I think this is a quintessential way of describing not only the ideas of Bush, but of all the other hugely influential innovators of interaction, some of whom I will briefly cover in the following sections. In fact, I could go as far as to say that Chimero’s words epitomise the very core sentiment of this entire paper.
One of those innovators who pulled us toward their vision with great impact is Douglas Engelbart. Although the Memex was never built, owing to the lack of technology at the time, its ideas proved hugely influential later in the century. Engelbart, often considered the ‘father’ of the GUI, began to work on a machine which would serve to improve human intellect. He recalled Vannevar Bush’s essay when conceptualising such a machine, in which the user could build models of information graphically and navigate around them dynamically.
In 1962 this was a huge leap of thinking, undoubtedly difficult for most people to comprehend; the computers which existed at this time were room-filling mainframes operated by specialists only. Despite being a difficult concept to sell, by 1968 his ideas, technology and staff had grown sufficiently that he could demonstrate his work publicly in front of over a thousand computer professionals.
This was the “public debut of the computer mouse”, but it was only “one of many innovations demonstrated that day” [Source]. The mouse was mechanically different to modern mice; however, the way in which the user interacted with it is virtually identical. This demo would spark the widespread adoption of the GUI and therefore “dramatically changed the way in which humans and computers interact”, according to Johnson (1997), who continues: “The visual metaphors that Doug Engelbart’s demo first conjured up in the sixties probably had more to do with popularizing the digital revolution than any other software advance on record”
Johnson’s comments certainly carry weight, which can be seen simply by noting the similarities between Engelbart’s 1960s demo and modern technology. He was undoubtedly one of the most influential figures in interface design; his technological advancements provided a solid base for designers to work upon in the coming years. Although his work focused more on mechanics and physical technology than on design, without his intellect GUIs would not have had the foundational technology on which to build, which was necessary for them to eventually achieve the result we experience today.
Researchers at Xerox PARC were amazed by Engelbart’s demonstration, which inspired their creation of the Xerox Alto in 1973. Despite being a commercial failure, the Alto is widely considered a major influence on interaction design, making some important breakthroughs with the design of its GUI. It was widely used for research purposes [Source] and therefore allowed for further developments within Human Computer Interaction.
The Alto began with an interface that resembled a command terminal more than a desktop environment, but eventually led to the creation of Smalltalk in 1975. Originally conceived as a programming environment, Smalltalk went on to become the first modern GUI, introducing the earliest icons and pop-up menus.
As well as this, the Alto also demonstrated the first use of the diagonal-pointing bitmapped cursor we recognise in modern computing today. What is most notable about this particular cursor is its behaviour – it alternated between different shapes depending on the task. For example, today the cursor may change into a hand when grabbing; a watch, spinning wheel or hourglass when loading; various arrows for re-sizing; and so on. This is a significant example of early visual feedback, a crucial element of interface design.
A final advancement to note is that the Alto also inspired the creation of Alto Trek, one of the first networked multiplayer video games and also the first game to utilise the mouse; it would later inspire the creation of Microsoft’s Allegiance [Source]. It is clear that Xerox contributed many important and varied developments within interaction design.
One of the most crucial of all these developments, though, occurred by inspiring the work of another important innovator — Steve Jobs.
As established, the influences of the Xerox team were far-reaching, but their effect on the developments of Apple was one of—if not the—most crucial of all; what Steve Jobs would go on to create from the spark of revelation he experienced during a visit to Xerox PARC would alter the landscape and direction of User Interface and Experience design indefinitely.
Development of the Apple Lisa began in 1978, with some members of the team being former members of the Xerox PARC group. The project was to design a powerful personal computer with a Graphical User Interface, targeted toward business users.
The Lisa used a desktop metaphor and saw the birth of the first pull-down menu bar, with each menu always appearing horizontally across the top of the screen. This is just one convention created then that still exists, almost entirely unchanged, in Mac OS X today (at the time of writing, the current version is 10.11 OS X El Capitan, as shown in Fig above). The Lisa also introduced many other elements which we take for granted today: check-marks for selected menu items; keyboard command shortcuts; greyed out inactive items; the trash can; the use of icons to represent the entire file system; drag-and-drop; double-clicking — just to name a few. The developments here allowed for progress towards a universal structure for organising information on the screen, in a way that is familiar, versatile and user-friendly.
However, it wasn’t exactly the Lisa itself that went on to make history. Despite being such an advanced machine, sales were limited mostly due to the $10,000 price tag and difficulty of writing software for it. This called for a much more simplified, lower cost version of the Lisa. Steve Jobs took the task upon himself and achieved this goal with the original Apple Macintosh, which was introduced to the world in dramatic and iconic fashion in 1984, retailing for $2,495. It retained most of the GUI features of the Lisa, and even shared some of its low-level code, but the operating software itself was written from scratch to fit in the small memory footprint. It was this machine that would succeed, and mark a significant turning point for interface design.
Questions surrounding who invented what, or who stole from whom, are often hotly debated within the technology industry. However—regardless of personal opinions or accusations—if artists, entrepreneurs and inventors didn’t take influence from one another, that would be a true hindrance to innovation. Influence is an intrinsic, fundamental element that is necessary for innovation to happen.
This sentiment can, in fact, be best described with a Haiku written by Yosa Buson.
“Lighting one candle
with another candle—
spring evening.”
In the words of Frank Chimero,
“Buson is saying that we accept the light contained in the work of others without darkening their efforts. One candle can light another, and the light may spread without its source being diminished.”
As creators, we must accept that creation and innovation is a cumulative effort – one that is ever progressing. Our own ideas, along with everyone else’s, snowball together to collect new thoughts and developments along the way — a movement that never ceases. As our malleable inspirations travel down the infinite branches of the thoughts of others, they become reshaped — moulded into something new. An improvement here and a new perspective there; the result is perpetual growth and change. At each stage, no one can claim ownership of an idea, for it is the combined product of a hundred others. Did Karl Benz, Edouard Michelin, or Henry Ford steal the wheel from the Sumerian people of the Bronze Age? Equally, did Douglas Engelbart steal from Vannevar Bush? Or did their personal contributions build upon an ever-progressing concept; the summation of the ideas and contributions of many, always pushing the boundaries of what we know, drawing us closer to that which lies beyond what we currently see?
This is how we innovate; we take inspiration and then develop it. This is what distinguishes innovation from thievery—personal, individual development. Steve Jobs took a mouse that cost Xerox $300 to develop and made it cost $15, while also simplifying it and improving its ease of use. “If you lined up Engelbart’s mouse, Xerox’s mouse, and Apple’s mouse, you would not see the serial reproduction of an object. You would see the evolution of a concept.” —Malcolm Gladwell, Creation Myth [Source]. For an idea not to be stolen, it must be built upon. Developed, adapted, improved.
As Isaac Newton said,
“If I have seen further it is by standing on the shoulders of giants”
But even this was adapted from another,
“Bernard of Chartres used to say that we are like dwarfs on the shoulders of giants”
— John of Salisbury.
Creation requires influence (Kirby Ferguson, Everything is a Remix). The forces that shape our lives can’t be attributed to individual owners; we are the product of everyone before us. In order to see beyond what we know, we must stand on their shoulders.
I began sketching and brainstorming initial ideas while also gathering reference and images to take inspiration from. I mixed between sketching ideas on paper and digitally, occasionally taking an idea from paper into Photoshop or Illustrator in order to see whether it works in practice or not.
When brainstorming ideas for a typeface based on geometry, it occurred to me that the letter ‘W’ is visible in the edges of an Icosahedron. I thought it may work well to base the logo on a single Icosahedron with the ‘W’ highlighted along the edges. In the sketches above you can see some attempts at visualising how this could work, as well as experiments with the faces of the shape ‘exploding’ outwards and the ‘W’ being revealed in the gaps.
It then became apparent that each letter could be made from various combinations of edges. This idea instantly stood out to me, with a strong geometric foundation that’s closely linked to the shapes and themes of the game. The shapes of the letterforms are very similar to the stylistic nature I had in mind and are reminiscent of the “futuristic cave painting” style alphabet I designed for the in-game world.
It’s important to me that the design has a strong, meaningful foundation – even if implicit evidence of that is subtle in the final design. I want the design to have solid grounding and reasoning. Details such as this are what drew me to this concept.
Initially, I intended to keep the 3D images of the Icosahedra behind the letters in order to show the viewer where the letter shapes actually come from. However, I also wanted to experiment with the type being isolated from the Icosahedra. This allows for a more minimal design, which is always important when it comes to identity and logo design; I want to keep it minimal while also injecting as much personality as possible. I was concerned that the varying dimensions would make the logotype look unbalanced, but this actually works quite well for the concept. The main concept of the game revolves around a universal and environmental imbalance that Weedy journeys to fix, so it is fitting for a sense of this to come through in the design.
I experimented with various different stroke styles and weights for the letters before settling on the medium weight above. I wanted a fairly chunky logotype that would give enough substance to work with and display well at a variety of sizes (bearing in mind the likely display on small mobile screens).
I utilised some of my previous text ideas to experiment with how to handle the rest of the title. The type above uses letters made entirely from the lines of an isometric cube, continuing the theme of type created from geometric forms. My inspiration for this comes from Sol LeWitt’s Variations of Incomplete Open Cubes, 1974 and other artists’ subsequently inspired works.
Typeface Sol was created as a continuation of Sol LeWitt’s 1974 project entitled ‘122 variations on incomplete open cubes’ which consisted of 122 views of unfinished cubes constructed from wooden planks. The character set of Sol is defined by the spatial potential of a cube. The new definitions are dependent on the viewer’s imagination and ability to recognize letters in seemingly abstract composition.
Although I do really like the concept, the actual visual result wasn’t working well for me – the varying angles between the heading and the subtext were clashing and the geometry didn’t flow or seem balanced and harmonious. I continued to experiment with other ideas.
I wanted to use ‘Weedy’ as the main type, with ‘the seadragon’ supporting it as sub-text (see examples above). This creates a strong, memorable association with the main protagonist while also providing some more context on the theme. By using just one short word (Weedy) as the main stylistic element, I can work more into the design of this one word without it becoming too complex and long – as it would if I tried to design the other three words in the same style.
I decided to leave the sub-text for the time being and focused on the main type. This would allow me to focus more on shape and form and then create sub-text to match later on, which is a more logical way of approaching it. I took inspiration from the glitchy, melting text in my earlier Mood board and experimented with how I could recreate this effect in Illustrator. I iterated until I was happy with the overall weight and form. I was aiming for a piece of text that evoked a sense of liquid and water while also retaining its original geometric nature.
Once I had the ‘Water’ element in place, I also wanted to add the ‘Glitch’. I broke up, fragmented and further distorted the type to create the result above. I then moved on to create the rest of the text in a style that would complement it.
I decided to proceed with the third design and begin iterating on it further. I wanted to mock the design up in different settings so I could ensure it would be versatile. When designing logos, I always bear in mind a few fundamental rules:
The design should always be able to function in one solid colour, allowing it to be perfectly versatile well into the future. As well as this, trying to implement colours, shading, gradients and other decorative effects in the early stages only distracts from the core design itself.
The design should be versatile for any application – whether it’s the size of a postage stamp, on a mobile phone screen or on a billboard.
I have adopted this kind of theory over time from studying the work and reading the books of designers such as David Airey (in particular, his book Logo Design Love), Paul Rand, Alan Fletcher, and others.
At this stage, I feel like I have created some strong designs, but I will need to gather feedback to help get a better idea of which is the most effective.
I used starter content and followed tutorials to experiment with a 2D side-scroller. I did this to improve my knowledge of more varied blueprints, as well as to start playing with a basic collectible script whose logic I could later reuse for my 3D prototype.
Coins spawning every 2 seconds
Showing the use of text strings to return a message when the player has touched the coin – this helped me figure out the logic of determining when the coin had been triggered, which could then be developed to add the coin to the player’s total count and destroy it.
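The collectible logic I'm describing can be sketched outside the engine. This is a hypothetical Python model, not the Blueprint itself; the two-second interval comes from the test above, everything else is a simplified assumption:

```python
# Minimal sketch of the collectible logic: a coin spawns every 2
# seconds, and touching one removes it and increments the player's
# count. Illustrative Python, not a UE4 Blueprint.

class World:
    SPAWN_INTERVAL = 2.0  # seconds between coin spawns

    def __init__(self):
        self.coins = []
        self.timer = 0.0
        self.player_count = 0

    def tick(self, dt):
        """Advance time; spawn a coin for each full interval elapsed."""
        self.timer += dt
        while self.timer >= self.SPAWN_INTERVAL:
            self.timer -= self.SPAWN_INTERVAL
            self.coins.append(object())  # spawn a new coin

    def on_overlap(self, coin):
        """The 'player touched the coin' trigger."""
        if coin in self.coins:
            self.coins.remove(coin)   # destroy the coin...
            self.player_count += 1    # ...and add it to the total

world = World()
world.tick(4.0)                 # 4 seconds pass: two coins spawn
world.on_overlap(world.coins[0])
print(len(world.coins), world.player_count)  # 1 coin left, count is 1
```

In the Blueprint version, `tick` corresponds to a spawn timer and `on_overlap` to the coin's overlap event, which is where my debug text string currently fires.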
Overall, doing small tests like this is helping me expand my knowledge in a varied way. This will help me know how to apply my knowledge better and start to figure out how to create things more independently.
My focus today was to iterate on my movement controller based on my evaluation in my previous post. Firstly, I wanted to make the Pitch of the Actor rotate based on vertical movement. So, when Weedy swims downwards his nose will point forward and vice versa, to increase the feeling of realistic swimming movement. Otherwise, it appears as if he is just hovering up and down. I found the solution to this very simple.
In the Movement Sequence Blueprint, I have highlighted the edits I made to achieve this. The script already did a similar action for MoveRight and Roll, and I used this to figure out how to do the same for MoveUp and Pitch. My basic understanding of this is:
Although this was successful and it functions in the way I wanted, I do feel like I need a better understanding of exactly why. I have a rough idea of how the nodes work, as outlined above, but I’m not entirely confident in it. I think the main focus for me right now should be to continue learning Blueprints and improve my knowledge much more, so I can more confidently make changes and understand how to create the desired functionality for my prototype.
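To check my own understanding, the tilt behaviour can be sketched as plain maths outside the engine. All the values here (max tilt, interp speed) are illustrative guesses, not the actual Blueprint defaults, and the easing function only approximates what I believe the interpolation nodes do (similar to UE4's FInterpTo):

```python
# Rough sketch of the MoveUp -> Pitch tilt: the input axis value maps
# to a target pitch, and the current pitch is eased toward it each
# frame. Illustrative Python, not engine code.

MAX_TILT = 30.0  # assumed degrees of nose-up/down tilt at full input

def f_interp_to(current, target, dt, speed):
    """Frame-rate-aware ease toward a target, like UE4's FInterpTo."""
    alpha = min(dt * speed, 1.0)
    return current + (target - current) * alpha

pitch = 0.0
move_up_axis = -1.0        # full downward input: nose should point down
for _ in range(60):        # simulate one second at 60 fps
    target = move_up_axis * MAX_TILT
    pitch = f_interp_to(pitch, target, dt=1 / 60, speed=5.0)

print(round(pitch, 1))     # pitch has eased most of the way toward -30
```

Seeing it written out like this makes the Blueprint graph easier to reason about: the axis event supplies `move_up_axis`, and the interpolation node is what gives the tilt its smooth, gradual feel rather than snapping instantly.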
The Gif above shows this in action. When I move Weedy vertically, his body tilts in the right direction, giving more realism and life to the movement.
What I have noticed, though, is that in the above graph, the nodes to tilt the Pawn in the direction of Forward movement are disconnected. This is something I need to look into and may have happened accidentally when editing the BP. This is another reason why I need to improve my BP knowledge, so I can quickly understand and fix details like this.
Otherwise though, I am pleased with the progress and will simply keep working on various details of the script in order to refine both my understanding and the functionality of the prototype.
Another area that needs improvement is how the script handles collisions. There is an area of the graph which deals with what happens when the actor collides above a certain velocity, in order to release particles and carry out other behaviour. Right now, the collision can be a little extreme and I feel this makes sense in terms of a spaceship or plane, but not an animal swimming through water. When Weedy collides with the wall, I would like to achieve a much softer impact.
Currently, when moving vertically, the camera remains looking at the back of the Pawn. Instead, I would like to test how it feels if the camera rotates towards the direction Weedy is facing when he moves vertically up and down (LMB/RMB). I feel this may be helpful for the player, as they can see where they are going when rising or descending through tunnels within a level. I did some research into this and began playing around with the script. I made some minor progress but struggled to achieve much. I found a helpful answer that broke down the necessary steps to achieve something like this (below), so I can use this as a starting point, but as I have said before – I need a better understanding of Blueprints first, so I think it’s wise to take a few steps back before attempting too much at once.
I think a good idea might be to see if I can attend some extra UE4 Drop-in sessions while also going back and trying some more BP tutorials before I progress.
Building on my initial tests, I have been further developing my 3D Prototype Controller. I started by creating a simple 3D model in Blender to use for the prototype in order to better visualise how the movement is working with my particular character, Weedy the Sea Dragon.
By modelling manually like this I am improving my ability to quickly knock out prototype models for a faster workflow. Previously I have made most of my models by creating Voxels in Sproxel and then using the Decimate Modifier in Blender to generate a low poly model, which results in much less control over the shape and less desirable topology. I think it’s important to improve my ability to model more traditionally and manually. Although I have little knowledge of 3D and my topology is far from ideal, it still functions perfectly fine for a prototype, which is all I need.
A turntable of the final model.
I exported my model from Blender as an FBX and then imported it into Unreal Engine. The image above shows the model as a Skeletal Mesh, but I soon realised a Static Mesh would be fine for this stage and re-imported it. If I need to rig or animate at all later (which is likely depending on how I progress with the movement controls), then I can make use of a Skeletal Mesh. For now though, I wanted to keep things as simple as possible so I can focus on one thing at once.
I first opened up the Flying Starter Content and replaced the mesh with my own, just to quickly test how those controls felt as a starting point. It worked fine and I could tell this would be a good place to start editing the Blueprints to refine the movement myself.
However, I also wanted to try the “Space Shooter” Starter Pack I had found in my earlier post. It took a fair amount of trial and error and lots of extra research and troubleshooting to get the content into my new project and tweak it to work with my model and my ideas. I studied the blueprints and other assets carefully, removing what I didn’t need in order to give myself the most simple resources to start working with. This has taught me a lot more about UE4 and Blueprints. The above image shows an early test, controlling the character after setting up my model with the Blueprints and other content I had imported, and after making several tweaks.
I continued to play around, refining what I had and learning more about how it works. I adjusted some Speed values and also the Input Bindings quite a lot, resulting in a control scheme that I feel works more fluidly — although there is still a lot I would like to continue to change. For example, I altered it so that up/down movement was controlled with the Left/Right Mouse Button. The player would already be using the mouse to steer the camera, along with WASD to steer the Pawn, so it makes sense to make use of LMB/RMB. Previously, this was set to use Shift and Control, but I found this much less natural.
The above image shows the current state of my prototype. I began adding in some blocks to practice navigating around in order to get a better feel for how the control is working. The game would most likely feature tunnel-based levels so it’s important to create a controller that works well with tighter spaces. It’s also important to experiment with the up and down movement, not just the forward/back/left/right, as the vertical movement in the water is something that is important for me to get right.
A few details I would like to work on from this point are:
It is important to me that I end up with a movement controller that is uniquely my own. Although it is very useful that I have found starter content I can make use of as a starting point, it’s necessary that I actually do fully understand it and am not relying too heavily on the work of someone else. In order to do this I will continue to deconstruct the Blueprints and try rebuilding them from scratch so I can break down each section individually and examine how it works. I will then continue to build my own adjustments into it until the product is something unique to myself and my concept.
I also think it is very important to begin collecting feedback from others at this stage. The way that people interact with control schemes varies a lot from person to person, and some feedback would be very valuable in order to understand where I should put my focus next and which details need more tweaking. It may be necessary here to add an option for inverting the controls, as some people may require this in order to give accurate feedback. The way in which the camera moves is similar to how the controls of an FPS or in-game flight work – these are examples where the lack of an invert option can sometimes be completely game-breaking for players, leaving them feeling very frustrated.
My next steps should be to address the areas above and look into creating a set of play-testing questions regarding the control so far.
My current focus is experimenting with 3D Third Person movement control in Unreal Engine 4, in order to work towards experimenting with a prototype (and gauge whether this is possible for me to complete). The UE4 ‘Flying’ starter project was a good place to start and I have played around with this project, viewed other people’s work based off it and sought out as many tutorials as I could find on the subject.
Although the starter project focuses on a flying spaceship pawn, this is actually still relevant to my prototype idea – an attempt to create an underwater swimming controller (see Ecco the Dolphin for reference). If you imagine that the empty space around the flying ship is in fact water, the movement is almost identical.
I have been looking at games that use 6DoF (Six Degrees of Freedom: forward/back, up/down, left/right, pitch, yaw, roll) controllers to gain a better understanding of how this movement can work, how I want my player to move and whether this will be suitable or not. I’ve been looking at games such as Descent, Retrovirus, Shattered Horizon, etc. These games are mostly space-themed first person shooters, that involve zero gravity or otherwise flying upside down.
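The six degrees listed above can be written down as a minimal state sketch. This is purely illustrative Python to clarify the terminology, not anything from the games or engine:

```python
# The six degrees of freedom as a state sketch: three translation
# axes plus three rotation axes. Illustrative only.

from dataclasses import dataclass

@dataclass
class SixDoFState:
    x: float = 0.0      # left/right
    y: float = 0.0      # forward/back
    z: float = 0.0      # up/down
    pitch: float = 0.0  # nose up/down
    yaw: float = 0.0    # turn left/right
    roll: float = 0.0   # bank, including flying/swimming upside down

# Restricting the controller (e.g. no swimming upside down) just
# means locking or clamping one of the six axes:
state = SixDoFState(y=1.0, pitch=-15.0)
state.roll = 0.0  # roll locked for Weedy
```

Thinking of it this way helps clarify what "removing the full Roll" actually means for my controller: five free axes, with roll held at zero.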
I feel like this method of movement is very close to how I would like my player to move, but possibly just without the full Roll (so, no swimming upside down). However, this is exactly what a prototype will allow me to figure out. This type of movement works best with tunnel style levels, which is also what I have had in mind for my prototype – or at least areas of it.
I found an accompanying tutorial for this type of movement but unfortunately couldn’t manage to make it fully functional, because it was written for a much earlier build of the engine and my knowledge of Blueprint is still limited. Despite this, following the tutorial still taught me much more about using UE4 and Blueprint, and I did manage to create a partially working controller.
An initial result of mine after following Tom’s tutorial – the basic ship movement is functional (WASD), but the mouse input wouldn’t work. In a forum post, I found an updated version of the graphs he used and tried those as well. The mouse input was responsive but juddered and wouldn’t let you look around a full 360 degrees, feeling very restricted. After troubleshooting, I decided to continue researching and experimenting with alternative reading and tutorials.
After moving on from this initial test, I found this “Space Shooter Starter Pack” from a user on the Unreal forums. The post is also fairly old and a little outdated, but I did manage to get it to work in 4.9 after a little tweaking, allowing me to play around with the controller and also delve into its Blueprints to see exactly how it works. This project is a much improved version of the starter flying controller bundled with the engine, which I have also been playing with. It isn’t perfect for my needs but is definitely a valuable resource to improve my UE4 knowledge and also get me started on my own creation. Below I have included a set of gifs I recorded while testing out the project files myself.
I have broken down the controls to look at them individually and see exactly how each functions. This will help me understand which details I would like to use myself and which areas I could focus on for improvement. It also improves my understanding of the technical side of movement, and of how various details come together to produce a controller that is much more fluid and satisfying to use.
[It is also worth noting that the UFO can fire projectiles and use an abduction beam which are not shown above, as I am mostly focusing on player movement for now and will move onto these kinds of mechanics later]
Overall, the controls feel fluid and the functionality is very close to what I would like to end up with in my own project. I found that using Shift/Ctrl to rise and sink was a little cumbersome on the keyboard, but I can imagine this being much nicer on a gamepad. However, I would like to create a control system that works seamlessly across input devices, not one tailored to just one. I also need to keep in mind that the ideal platform for my concept is touchscreen tablets such as the iPad, so the control system would be much simpler (as detailed in my earlier project’s Pitch Doc).
Understandably, the Blueprints are much more complex than what I have been dealing with so far, but the author has helpfully commented and arranged them in a logical way, so I can begin to pick them apart. If I want to know how to create, for example, the speed boost, there is a neatly commented section I can study to see exactly how it works. Various more advanced techniques are used to improve the overall fluidity of the control – such as changing the camera’s FOV (field of view) depending on speed, and using a spring arm to re-position the camera more smoothly as the player moves.
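The speed-based FOV trick can be sketched very simply: linearly map the current speed onto an FOV range so the camera widens as the player accelerates. The numbers below are placeholder values of my own, not the ones used in the Space Shooter Starter Pack, and in the engine the result would feed something like the camera's field-of-view setting each frame.

```cpp
#include <cmath>
#include <cassert>

// Hedged sketch: lerp between a base and a maximum FOV based on how
// close the current speed is to top speed. All values are placeholders.
float SpeedToFOV(float Speed, float MaxSpeed,
                 float BaseFOV = 90.0f, float MaxFOV = 110.0f) {
    // Normalise speed to [0, 1] and clamp so overspeed doesn't overshoot.
    float t = std::fmin(std::fmax(Speed / MaxSpeed, 0.0f), 1.0f);
    return BaseFOV + (MaxFOV - BaseFOV) * t;
}
```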
I think a wise next step would be to begin recreating this controller in a blank project, using these project files as reference but leaving out the extra details such as the more complex animations and 3D model (so I can focus purely on movement). I would like to create and use a simplified, prototype Sea Dragon model (not rigged) to help me visualise it with my own concept. This will allow me to strip the Blueprints down to the essentials and worry about polish such as animation later, once the functionality is where I would like it to be. I will also be able to build a more thorough understanding of each element. I think it would be logical to take these project files one section at a time and focus on remaking each part myself. Hopefully this will not prove too problematic given my limited Blueprint knowledge. If I find it too ambitious or challenging, I can scale back my goals and take a step back to build a stronger foundation of UE4 knowledge first.