THE GREAT WAR - RECREATING THE WESTERN FRONT FROM THE AIR - PART 4 by sandy sutherland

In this post, I cover the first part of the process involved in populating my terrain with objects

A tiny tease of substance terrain textures in engine - Colour, roughness and a normal map for each tile. There is also a water mask map that will be used to boost specular. 

SET DRESSING - APPLYING AN OBJECT LIBRARY TO REAL WORLD DATA

The key to set dressing this giant terrain was to keep things as procedural as possible. I wanted to minimize manual placement of objects at all costs. The kinds of objects I was placing were things like trees, buildings, trench lines, barbed wire, craters, rubble, vehicles, artillery, fires, that kind of thing. I would be creating a library of these objects and then distributing several million copies of them all over the terrain.

Secondly, I needed a combination of randomly placed objects and very precisely placed objects - not just position, but precise rotation and scale as well. Thirdly, I wanted to use real world data to inform the distribution of trees and buildings, along with other terrain features and landmarks. Again, this is where the Open Street Maps data comes into play.

Finally, for performance reasons, I needed to reuse objects and meshes as much as possible. I made heavy use of Unreal Engine's Hierarchical Instanced Static Meshes (HISMs). For a great overview of how to use HISMs, I highly recommend this video by @TechArtAid.

 

THE FIRST APPROACH - STORING ARBITRARY DATA IN MESHES

I'll warn you up front that this is not the method I ended up using. I've included this description just for people's interest - it's a method that could work, but there is an easier way I only discovered very recently. I also wrote this description down a while before posting, as will become obvious...

So as I've discussed in my previous post on texturing, I had all sorts of data extracted from Open Street Maps. This included buildings and trees, probably the two most common objects my terrain would be covered with.

BUILDINGS:

So I had all these building footprint polygons, and I needed to somehow get them into a format where I could replace them with several hundred thousand instanced mesh buildings in Unreal Engine. Each instanced building had to have the same alignment/rotation, size, and building type tag as the ones in the .osm files. Typically, instanced meshes in Unreal Engine are used for foliage - things like grass and trees - often placed by painting onto terrains and meshes with totally random size and placement. You can go in and manually move and rotate individual instances after the fact, but there was no way that was an option in this case; I needed everything placed correctly up front.

So the problem had two parts: for each building, I first had to extract the building's rotation/size/type properties, and then I had to define that building's position, all in a format that could be read and accessed in Unreal.

My solution - generate two new meshes in Houdini, and store each building's key values as point data in the meshes. Then, once the meshes are imported into Unreal, make a Blueprint that loops over each point, extracts the required info, and generates a mesh instance building at each point position.

HOW IT WORKS:

The Houdini graph I built loops over each building footprint polygon and does a few different things (a rough Python sketch of the same steps follows the list):

  • First, resample the polygon edges, adding more points
  • Make each point's normal aim at its neighbour, then convert the normal vectors to a rotation value
  • Tally up the most common rotation values, and define the most common rotation
  • This becomes the overall rotation direction for that building - store it as a float value called "rotation"
  • (The above step is very much the same process I used for defining field rotations in a previous blog post)
  • Next, rotate the polygon by the negative of the rotation value - this cancels out the rotation, making the polygon world aligned
  • Find the bounding box X and Y size of the polygon (Z-up world). Store these 2 values as "xSize" and "ySize"
  • Rotate the polygon back as it was
  • Replace the building polygon with a single point at its centroid. Store these 3 new values as attributes on the point
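
Here's a rough Python sketch of those steps, stripped of the Houdini specifics: it takes the resampled (x, z) border points of one footprint plus the dominant rotation from the tally step, and returns the values that end up stored on the replacement point. The function name and record layout are my own, purely for illustration.

```python
import math

def footprint_to_record(points2d, rotation_deg):
    """points2d: resampled (x, z) border points of one building footprint.
    rotation_deg: the dominant direction found by the tally step above."""
    rad = math.radians(-rotation_deg)                 # rotate by the negative -> world aligned
    aligned = [(x * math.cos(rad) - z * math.sin(rad),
                x * math.sin(rad) + z * math.cos(rad)) for x, z in points2d]
    xs, zs = zip(*aligned)
    cx = sum(x for x, _ in points2d) / len(points2d)  # centroid of the original shape
    cz = sum(z for _, z in points2d) / len(points2d)
    return {"pos": (cx, cz),
            "rotation": rotation_deg,
            "xSize": max(xs) - min(xs),               # bounding box of the world-aligned shape
            "ySize": max(zs) - min(zs)}
```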

The bit that might be slightly abstract at first is that I'm not using the position attribute to actually store position values. An attribute is simply a container. Position is just a container for a vector, which itself is just 3 floating point numbers (X,Y,Z). 

So what I can do now is create two meshes, one for storing the actual building positions, and the other one can store my 3 values for rotation/xSize/ySize in the position attribute. Visually this second mesh looks like a total mess, but it doesn't matter, I'm using the mesh itself as a container that Unreal can read and understand. 

So to clarify, I create:

Position Mesh : The points of this new mesh just define building positions

Data Mesh :  This mesh stores the rotation, and X/Y size info for each building

So the 3 XYZ position components of the data mesh actually store

  • X = rotation
  • Y = xSize
  • Z = ySize

The two meshes have corresponding point counts and point IDs.

Right, I think I've covered that enough!

I needed to be able to access these values in Unreal. There is a very handy Blueprint function I will cover in a moment, but vertex positions, UVs, normals and tangents can all be accessed per point in Unreal. Different attributes have different levels of precision - positions and UVs, for example, are more precise than normals - so I didn't use normals to store data.

You may remember I also had another attribute I was creating, which defined the building type. I can store that value in the U component of my UV coordinates.
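
To make the "attributes are just containers" idea concrete, here's a hypothetical Python SOP snippet in the spirit of the data mesh. The hou calls are standard, but the record layout and attribute names are just my illustration, not the exact network I built.

```python
import hou

geo = hou.pwd().geometry()
geo.addAttrib(hou.attribType.Point, "uv", (0.0, 0.0, 0.0))

# records like the one returned by the footprint sketch above, plus a "type" index
buildings = [{"rotation": 37.5, "xSize": 12.0, "ySize": 8.0, "type": 2}]

for b in buildings:
    pt = geo.createPoint()
    # X = rotation, Y = xSize, Z = ySize - position is just a 3-float container here
    pt.setPosition((b["rotation"], b["xSize"], b["ySize"]))
    # the building type index rides in the U of the uv attribute
    pt.setAttribValue("uv", (float(b["type"]), 0.0, 0.0))
```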

Now I should point out that there are probably a bunch of other ways I could have got this data into Unreal - saving it as a text file or an .xml file or something. I guess this is a good example of how I think visually. It just seemed like saving data in mesh points was the most logical thing to do, especially as Houdini is so powerful at processing geometry.

These 2 meshes never actually get rendered in UE4, again, they are just data containers.

The last thing I did was to project the positions mesh I had created down onto the height map mesh in Houdini. The data out of Open Street Maps was totally flat, so projecting it down meant it now fit the contours of the height map. When imported into Unreal, the position mesh lined up nicely with the terrain, and my Blueprint could then do its thing, copying instanced buildings onto the points of the mesh.

Green shapes are building footprints as they are in Open Street Maps. The yellow boxes are the bounding cubes I generate and align to each building after I've generated a rotation value for the building.  

 

THE OTHER WAY - HOUDINI CSV FILES AND UNREAL ENGINE DATA TABLES

So if you bothered reading the bit above, you will see I had a hunch that storing my object data in a spreadsheet format was probably a better way to go. Thankfully, the guys at SideFX are always a few steps ahead and there is a CSV file exporter built into Houdini, which I only discovered in the last week. 

All my building data was ready to go - all I had to do was save it out as a CSV instead of storing that data in meshes. I decided I would save CSVs for each tile, initially for buildings, but I'll do the same for trees and all other objects I want to place.
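
Houdini's built-in exporter does the actual writing, but purely for illustration, a hand-rolled per-tile export from a Python SOP would look something like the sketch below. The column names are examples of the kind of fields the Unreal-side struct has to mirror, not my exact schema.

```python
import csv
import hou

geo = hou.pwd().geometry()
path = hou.expandString("$HIP/export/tile_35_buildings.csv")  # one CSV per tile

with open(path, "w") as f:
    writer = csv.writer(f)
    # the first column becomes the row name in the Unreal Data Table
    writer.writerow(["Name", "posX", "posY", "posZ",
                     "rotation", "xSize", "ySize", "buildingType"])
    for pt in geo.points():
        pos = pt.position()
        writer.writerow([pt.number(), pos[0], pos[1], pos[2],
                         pt.attribValue("rotation"),
                         pt.attribValue("xSize"),
                         pt.attribValue("ySize"),
                         pt.attribValue("buildingType")])
```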

In Unreal Engine, all I had to do was define a data structure that corresponded to the data I was writing out from Houdini, import the CSVs, and Unreal would create a Data Table asset that I could then access.

I made some quick changes to the blueprint I had made to extract the data from the old meshes, and instead directed it to read the data from the data tables. All that was needed was a tiny bit of adjustment to some of the values to account for Houdini and UE4 having different coordinate systems. Then it all just kind of worked - I had buildings that lined up exactly to the substance textures on the terrain. Woo!
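
For reference, that adjustment usually boils down to an axis swap, a metres-to-centimetres scale and possibly a flipped rotation sign. The exact convention depends on how the data was authored, so treat the sketch below as a starting point to verify against a single test building, not a definitive mapping.

```python
def houdini_to_unreal(pos, yaw_deg):
    """Houdini: metres, Y-up, right-handed. UE4: centimetres, Z-up, left-handed."""
    hx, hy, hz = pos
    ue_pos = (hx * 100.0, hz * 100.0, hy * 100.0)  # swap Y/Z, scale m -> cm
    ue_yaw = -yaw_deg                              # the yaw sign often needs flipping - check in engine
    return ue_pos, ue_yaw
```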

Now, I haven't actually made the building assets yet - right now it's all just cubes - but again, this is just getting the workflow in place.

One of the awesome benefits of this method is just how quickly I can update things. I can make changes in Houdini - say, to the type of buildings I want in an area - save off the CSV again, refresh the asset in UE4 and boom, the new building distribution is in engine. The blueprint takes about half a second to process and spawn the buildings onto the terrain.

The Unreal Engine blueprint that reads the data tables and spawns each individual building. 

 

IN ENGINE

And finally, here is a small handful of in-engine pics.

Box Buildings, in engine!

Another cluster of towns

The view from a higher altitude. This is only showing 2 tiles worth of terrain textures and their buildings. 

The Great War - Recreating the Western Front from the air - Part 3 by sandy sutherland

This post continues on from Part 1 and Part 2.

Here I cover an entirely new approach to texturing my terrain.

This is a BIG post. I've pretty much redone all of the work covered in the last 2 posts. Strap in.

Rebuilding everything, using OPEN STREET MAPS

So as I have covered in earlier blog posts, I have been working on recreating a massive area of Northern France as seen from the air, as it might have appeared 100 years ago.

I had originally bashed together the road network that defines the fields and farmland from screen-captured areas of Google Maps, processed through a site called Snazzymaps. It wasn't ideal: I was already starting with a source that was too low resolution and wasn't vectorised. It was a pain to put together, it wasn't procedural, and it needed a bunch of clean up work when imported into Houdini and converted to polygons. I lost much of the finer shape detail through fusing points and resampling. I felt like I needed to take a step back and find better, more manageable source data. Open Street Maps was the answer.

At first I was just going to use Open Street Maps to gather better roads data, but once I had gathered all the .osm tiles I needed, I realised that all the data I wanted was in these files. Roads, buildings, water, forest areas - everything. It all lined up, it all came from the same source, it was vectorised and tagged with all sorts of useful info - it was obvious - why not derive everything from these .osm files?

So, I bit the bullet and decided to start fresh -  I was going to pretty much throw away all the previous data I had generated. It would mean a lot of doing things over again, but it would pay off because I wouldn't need to manually line things up from different sources, I wouldn't be fighting my existing, slightly dodgy source files.

This was an instance where a procedural workflow really pays off - I already had my Houdini scenes for manipulating and analysing the fields, they were still valid and reusable, I would just be feeding them better source data, from the .osm files.

On the left is a browser view of Open Street Maps (OSM), which can be downloaded as vectorised data. On the right is Google's satellite view of the same area. It shows the kind of textural detail and complexity I would need to create.

 

Open Street Map (OSM) + HOUDINI

As luck would have it, an Open Street Map importer has been released as part of Houdini's game development toolset. It lets you download regions of OSM data, read them into Houdini as polygon primitives, and tag the primitives with all sorts of useful data from OSM.

The core idea I had here was to extract the various types of data from the .osm files, such as buildings, wooded areas, waterways and roads. This data would then be used in a few different ways, such as texture masks and asset placement for Unreal Engine. The first step was to download all the info I needed from Open Street Maps. There is a great intro video over on the Houdini page on how the process works. Because the area I'm working with is so big, I had to go through and download smaller regions bit by bit. I stored the coverage coordinates of each tile in the .osm file name, along with the zoom level.

For example - amiens_zoom300m_n50.0783_e2.6182_s49.6935_w1.9940.osm

Each file was fairly big, averaging around 400mb. I loaded them all into Houdini, and then started the process of breaking down the files.
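
Stashing the coverage in the file name pays off later, because any tool in the chain can recover a tile's bounds without opening a 400mb file. A tiny sketch of how that might be parsed (assuming the naming pattern shown above):

```python
import re

# e.g. "amiens_zoom300m_n50.0783_e2.6182_s49.6935_w1.9940.osm"
PATTERN = re.compile(r"_n(?P<n>-?[\d.]+)_e(?P<e>-?[\d.]+)"
                     r"_s(?P<s>-?[\d.]+)_w(?P<w>-?[\d.]+)\.osm$")

def tile_bounds(filename):
    """Recover the north/east/south/west coverage from a tile's file name."""
    match = PATTERN.search(filename)
    return {k: float(v) for k, v in match.groupdict().items()} if match else None

tile_bounds("amiens_zoom300m_n50.0783_e2.6182_s49.6935_w1.9940.osm")
# -> {'n': 50.0783, 'e': 2.6182, 's': 49.6935, 'w': 1.994}
```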

PROCESSING THE OSM DATA AND FITTING TO THE HEIGHT FIELD TERRAIN

TRANSFORMING:

My terrain height field was already in Houdini (see previous blog post). So the next big step was to get the .osm data into the same transform space as the terrain. This involved a certain amount of lining things up by eye. I had already perfectly aligned each .osm tile to its neighbour, and fused any overlapping points. Then I just had to align the .osm data to the heightmap. The old road network I had assembled also came in handy here as a guide. In the end it's not 100% perfect, but it's very, very close - certainly much more accurate than before.

FILTERING:

There is lots of info stored in each .osm file, and I only needed some of it. The first step was to filter and split the data into things like roads, buildings, forest areas, railways, urban and industrial zones, bridges, key landmark buildings like churches and cathedrals, and waterways. I could remove certain things that were not required, like modern day highways, footpaths, very small or very large buildings etc. I also took the added step of colour coding everything so it was easier to read visually in Houdini. Once I had the filters, I created groups of primitives for each filter. All the geo from each .osm got merged together, as did the groups, and I saved it all off in Houdini's native geo format before I continued working.
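
As a rough illustration of the filter-and-split step, a Python SOP along these lines would do the sorting. It assumes the OSM importer exposes OSM tags as primitive string attributes - the real attribute names, and my actual groups and colours, will differ.

```python
import hou

geo = hou.pwd().geometry()

# group name -> (attribute to test, required value or None for "any value"), display colour
filters = {
    "roads":     ("highway",  None,     (0.8, 0.2, 0.2)),
    "buildings": ("building", None,     (0.9, 0.8, 0.2)),
    "forest":    ("landuse",  "forest", (0.2, 0.6, 0.2)),
    "water":     ("waterway", None,     (0.2, 0.3, 0.9)),
    "rail":      ("railway",  None,     (0.9, 0.4, 0.7)),
}

geo.addAttrib(hou.attribType.Prim, "Cd", (1.0, 1.0, 1.0))
groups = {name: geo.createPrimGroup(name) for name in filters}

for prim in geo.prims():
    for name, (attrib, wanted, colour) in filters.items():
        if not geo.findPrimAttrib(attrib):
            continue
        value = prim.attribValue(attrib)
        if value and (wanted is None or value == wanted):
            groups[name].add(prim)           # sort into a group per filter
            prim.setAttribValue("Cd", colour)  # colour code for easy visual reading
            break                              # first matching filter wins
```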

TAGGING WITH ATTRIBUTES:

The buildings in the .osm data only exist as a basic outline shape describing the perimeter of each building. Some are tagged with extra info like how many storeys the building has, or whether it's a landmark like a church, but for the most part all I have to work with is each building's size and shape. It was up to me to decide what kind of building I would place in the location of each building footprint. I would be replacing the many thousands of real world buildings with a library of maybe 20-50 simplified structures, copied over and over again.

So I came up with some pretty simple rules - how big, how square or long, how densely surrounded by other buildings etc. - and then assigned a simple number index to each building, stored as an attribute. These numbers would later correspond to the building type assigned to each building footprint.
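
The actual rules don't matter much - the point is that each footprint gets reduced to a handful of numbers and then bucketed. A toy version, with completely made-up thresholds:

```python
def building_type(x_size, y_size, neighbour_count):
    """Map a footprint's rough stats to a library index. Thresholds are invented."""
    area = x_size * y_size
    aspect = max(x_size, y_size) / max(min(x_size, y_size), 1e-6)
    if area > 600.0:
        return 3        # big industrial / civic block
    if aspect > 2.5:
        return 2        # long barn or terrace type
    if neighbour_count > 8:
        return 1        # dense village house
    return 0            # generic rural house
```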

One more thing I did at this point was augment some of the data. For example, I knew I wanted lines of trees along the sides of roads, which weren't in the .osm files. So at this point I scattered points along certain sections of roadsides, and then grouped and tagged them like the rest of the incoming data.

Lastly, the polygons created from the .osm files were often messy, so to clean things up I extruded their edges and intersected them with my height field mesh. Using Houdini's awesome boolean tool in shatter mode, that gave me clean polygons to work with again, with all the required attributes from the .osm data still attached.

This data preparation was a slow process, but when I look at the variety of detail in the files, and how natural the distribution of everything is given it comes from the real world, it makes it totally worth it. There was a lot of back and forth comparing my OSM data against Google Maps satellite view as I tried to make sure I was getting all the key features you might see from the air.

So now the osm data was all prepared. It could now be used for creating texture masks, and later on I'll cover how I've used it for object placement.

Purple nodes are each loading in an OSM data file for each area of the map. The transforms nudge them into alignment with the neighbouring regions. The blue nodes are each a sub-network that reads attribute data on the geometry, sorts it into groups, and then colour codes each group. Finally each region's outputs are merged back together, and all regions are saved off to disk via the green nodes at the bottom.

 

Structuring my Houdini file and working in tiles

I spent some time cleaning up my Houdini working file, organizing things into containers that related to the order of operations. This keeps things separated and neat. There was also some overlap between steps. For example, I had aligned the OSM data to the height field. Then I went back a step to the heightfield processing, loaded the roads and waterways data into the heightfield, and used them as masks to influence features. The roads were used as a blur mask, smoothing the height field where roads exist, and a similar thing was done for waterways, leveling off areas of water.

Instead of working on the whole data set all the time, I broke it down into a grid of 8x8 tiles, which matched the number of landscape components I was using in Unreal Engine. This made things much easier to work on, as I could focus on one tile at a time.

This is all pretty dry stuff, but it's the only way I could stay on top of so much data and, just as crucially, make sure the output for each region was consistent.

My Houdini working file - The red CONTROLS node is my main switch that toggles which tile I'm currently working from (0-64), covering the full 200 km x 200 km area I'm working with. The nodes that follow each cover a particular step in the process, and contain their own graphs inside. There was so much data to crunch and keep organised that trying to be as meticulous as I could was crucial. Each step often required rendering from a particular camera for that step, so the camera nodes on the left correspond with the process on the right.

TEXTURING

Starting fresh with Substance Designer

I wasn't entirely happy with my previous method of randomly applying tiling textures to each field. Instead, I wondered if there was a way I could procedurally create a texture for each field, one that reacted to the unique shape, size and coverage of objects that any one field may contain. It seemed like the perfect excuse to pick up and learn Substance Designer. 

Using the .osm data, I set up a very simple render scene in Houdini. It moved the camera to the middle of each field, and rendered it in isolation. Various data from the .osm file was colour coded and used as a component of the output image. Each colour could then be used as a mask in Substance. I ended up creating two mask images, the first with the following:

  • RED = roads
  • YELLOW = buildings
  • GREEN = broadleaf forest
  • BLUE-GREEN = forest
  • WHITE = roadside trees
  • BLUE = waterways
  • CYAN = wetland
  • PINK = railways
  • VIOLET = railyards
  • LIME = urban area
  • PURPLE = industrial area
  • ROSE = splitters (additional boundaries needed to get around some Substance flood fill issues...)

In a second image, I rendered out the field itself, just the main shape, without any of the detail features from the first image.

This gave me two input mask images for every single field that I could feed into Substance Designer. I then UV mapped each field with a UV projection from the Houdini camera view, so when I later reapply the generated texture, it fits perfectly to the field. I didn't scale fields to fill the camera view; the wasted space didn't matter as it basically gets cut out by the shape of the field's polygon. Also, by not scaling, I was sure that each field was receiving the same texel density.

One optimisation I made was to filter out very small fields, and very small urban fields (fields in close proximity to buildings), deciding that they could just get a generic field texture later on. This cut down the number of fields from around 900K to 40K.
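
The cull itself is about as simple as it sounds. Assuming each field primitive carries an area attribute (from a Measure SOP, say) and an urban flag set earlier, the decision is just:

```python
import hou

geo = hou.pwd().geometry()
MIN_AREA = 2500.0   # square metres - illustrative threshold, not the real one

unique, generic = [], []
for prim in geo.prims():
    small = prim.attribValue("area") < MIN_AREA
    urban = prim.attribValue("urban") > 0      # flagged earlier by proximity to buildings
    (generic if small or urban else unique).append(prim)

# "unique" prims get their own Substance render; "generic" ones share a stock texture
```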


THE SUBSTANCE

This was my first real go at making a substance. I gathered a great big pile of screenshots from Google Maps, giving me a good guide on what kinds of different crop field and farmland patterns I would need to make. I tried to boil down the key features to as few controls as possible. I then created master parameters for all the things I would want to adjust, set lower and upper limits for all the values and then exposed them so they could be manipulated. I won't go into a huge amount of detail on the substance itself here (maybe in a future post), but basically I broke it down into 3 possible types of fields (plowed fields, fields with crops, and natural untouched fields). I then created a bunch of other controls that did fun things using the colour mask info from the input images.

There is some method to this madness: the two input mask images are loaded in on the left, then they are split into their colour coded components. Most of the complex stuff happens up the top of the graph where the farmland patterns are generated. Each blue box contains the main cluster of nodes to generate the texture for each element, and then the output results for each element are gathered in the green boxes. The 3 diagonal clusters of nodes on the right are where all the elements are merged together for the final output as colour, roughness and normal maps.

 

DRIVING SUBSTANCE WITH HOUDINI + SUBSTANCE AUTOMATION TOOLKIT

Like many other 3D apps, Houdini has a Substance plugin, so you can load and manipulate the controls of a Substance file. I set up a Houdini graph that would feed the substance file with the correct input mask images for each field, along with randomised values for all the other input control parameters I had exposed on the Substance. At first I thought I would just be able to then run a render, getting maps exported for every single field. It turns out there isn't a way to do that natively in Houdini for substances - so this was a bit of a dead end.

Instead, I used the Substance Automation Toolkit, which among other things uses Python scripts to automate batch jobs for Substance. They have an example on how to run out variations of a substance file and write out all the export maps here. So after hacking my way through the example script, I was able to make a script that could chew through all the input mask maps I had rendered from Houdini and, for each field, render out base colour, roughness and a normal map. For each field the script would pick randomised values for the parameters I had defined, creating a huge amount of variety. It took roughly 3 or 4 hours to render out the approximately 1000 fields that would make up a tile, so I would just let it run overnight. The end result of this step was another folder of renders for each tile, but this time textured with Substance.
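
The script is essentially a loop around the toolkit's renderer, along the lines of the sketch below. This is a paraphrase from memory rather than my actual script - the sbsrender flag names and the parameter/input names on the .sbsar are illustrative, so check them against the Automation Toolkit docs.

```python
import glob
import random
import subprocess

for mask in sorted(glob.glob("render/tile35/field_mask_*.png")):
    field_id = mask.rsplit("_", 1)[-1].split(".")[0]
    subprocess.run([
        "sbsrender", "render",
        "--inputs", "farmland.sbsar",                          # the compiled substance
        "--set-entry", "field_mask@" + mask,                   # feed in this field's mask render
        "--set-value", "crop_density@%.3f" % random.random(),  # randomise an exposed parameter
        "--output-path", "render/tile35/substance",
        "--output-name", "field_" + field_id + "_{outputNodeName}",
    ], check=True)
```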

The usual suspects - as you can see not all the main features come from the OSM data directly. The substance did a bunch of subdivision of fields into smaller fields, and then assigned different criteria to each sub field which defined whether it was planted with crops, or maybe it was plowed dirt, or perhaps it was left natural and wasn't farmland at all. 

 

Stitching it all together

Once all the individual maps were rendered, I applied a material to each field primitive in Houdini. Houdini has a concept of local material overrides, so I could define the 3 input maps as a local override for each field. For example, the field with the primitive number 287 would use the col/rough/normal textures numbered 287 for the given tile...
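
In practice the override is just a naming convention: field primitive N on a tile points at texture set N in that tile's folder. A trivial helper shows the idea (the paths are examples, not my actual folder layout):

```python
def field_textures(tile, prim_num):
    """Build the col/rough/normal paths for one field primitive on one tile."""
    base = "tex/tile_%02d/field_%04d" % (tile, prim_num)
    return {"basecolor": base + "_basecolor.png",
            "roughness": base + "_roughness.png",
            "normal":    base + "_normal.png"}

field_textures(35, 287)
# -> {'basecolor': 'tex/tile_35/field_0287_basecolor.png', ...}
```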

So now every single field had a mapping to its own unique set of substance maps. I then just had to render them all together as a series of textures - one colour/roughness/normal set for each tile.

From here I was free to do whatever touch ups, filtering and processing that I wanted in Photoshop for each tile map. I saved them off as 4K textures, ready for import into UE4.

Tuning the look

While I was putting together the substance, I was only ever looking at 1 field at a time. It wasn't until I had all the fields for a given tile together that I could get a good sense of how it looked at a macro level. Focusing on 1 tile, I went through 10 or so iterations tweaking all sorts of values, constantly comparing it against screenshots from google maps, until I had everything in the right ranges.

Bit by bit, it started to look more like the reference screenshots I gathered from google maps. Getting an exact match was never the goal, instead I was trying to get the right balance of distributions at a macro level

A full tile all together!

A closer look at a smaller area

The real location for comparison - The town of Doullens, north of Amiens, France.

The texture itself still needs a bit of dialing in, but for the most part the core process is in place.

One last thing that caught me out was how to make sure each tile joined seamlessly with its neighbour tiles. The problem was that fields near the edge of a tile would overlap into the next tile. However, Substance was being told to randomise all of its input parameters, so the same field polygon would get a completely different look applied each time it was used across tiles. My solution in the end was to always render the first version of any one field, by just copying the geometry from a tile and rendering it slightly in front...

  • So I might render tile 35 first, with tile 35's folder of substance textures...
  • Let's say tile 35 has a field polygon numbered 666, that crosses over into tile 36...
  • Then I might render tile 36, but patch it with field 666 from tile 35, along with all the other overlap fields from tile 35...

It sounds convoluted, and maybe it is, but it was all set up to happen automatically in the end and was a non-issue.

NO MAN'S LAND

This, of course, is only the first part of the texturing process. I'm creating this area as it was in the middle of World War 1, which means a giant strip of war-torn and heavily cratered no man's land is going to cut right through the middle of it. I'm still working out a few possible ways to create that, and I'll cover it here when the time comes.

The main limitation of my current workflow is that once I decide upon a particular range of settings for the substance, there is a huge amount of processing to get all the renders out for all the tiles. I'm currently just working on a single i5 PC with 32GB of RAM and a 1060 GPU. There is, however, no reason this could not be automated further and sent to the cloud to be processed.

I plan to implement the no man's land battle damage textures and objects in a way that leaves them paintable and editable once in Unreal Engine.

An early Quixel Mixer test for a no man's land texture

 

 

SUMMARY

That covers the key aspects of the new texturing workflow. It turned into a bit of a science project, but it was really interesting to see how far I could push things, at such a scale. I think it's pretty cool that there is not a single repeating texture on the entire terrain. Something else that was interesting was that I was trying to design a texture as it would be seen from no closer than 500m-1km away. It was all about the macro details, not the micro details. Each single tile is a 25kmx25km area, and there are 64 of them, all entirely unique. 

As I was working through all of this I kept wondering if I was spending far too much time on it. I possibly was, worrying about details that might not be very important. It's not even an area of the game that the player will directly interact with, and you could argue that it's just the basis of an elaborate skybox. However, it was a great test of perseverance. There were really not that many issues that cropped up that I didn't see ahead of time, which was also reassuring. Lastly, when you combine this with Truesky, the plugin I'm using for clouds and dynamic sky lighting, I now have a huge portion of my entire game's environment completed. I have a space that I can use as a sandbox for fleshing out more gameplay stuff.

 

KEEP AN EYE OUT FOR PART 2 VERY SOON, WHERE I'LL SHOW HOW IT ALL COMES TOGETHER IN ENGINE.


The Great War - Recreating the Western Front from the air - Part 2 by sandy sutherland

Following on from the previous blog post, this one covers how I used Houdini to generate all the fields and crop distributions that cover the majority of the terrain.

Houdini?

If you are not familiar with what Houdini is, it is a 3D content creation package, known for its powerful procedural toolset. It's probably better described as a tool that lets you build other 3D tools. By wiring together nodes that perform very specific tasks, you can build up a chain of processes that crunch data and spit out pretty pictures, simulations and models that would often be impossible to make manually. If I were to sit and try to paint all the natural looking divisions and distributions of fields that cover my terrain, it would never get done. Using Houdini I can churn out revisions in a couple of hours by focusing on defining the "rules" that lead to the fields being where they are.

PIXELS TO GEO

So I had the image I had assembled of all the roads covering my terrain in Photoshop. It was a simple black and white mask. Houdini has a handy node called "Trace" that reads in an image and cuts up a piece of geo based on the image. So I dropped down a simple grid, cut it up with a trace node and then I was left with a single polygon for each "field" area between my roads. All in all there are about 19,000 of them. From here I did some simple edge re-sampling and clean up.

USING ROADS TO DEFINE FIELD ALIGNMENTS

It's important to note that in the previous step, I'm basically getting 1 field = 1 polygon. This makes it easier to set up an operation that loops over each input polygon one at a time. The polygon by itself is a bit messy - it has too many edges and is non-triangulated - but it doesn't matter right now; the 1:1 relationship is the important bit.

Looking at real maps, I decided that it would be logical for fields to be aligned to the main direction of the field's shape, or perpendicular to that direction. It's not a random distribution, but it does appear to be different field to field. So, I had to work out a way to calculate the main "direction" of each field.

Roads, and the 19,000+ "fields" between them

The basic method was this (a small Python sketch follows the list):

  • Get the normal of each point along the polygon border. Re-sampling the edges from before gives me more points to work with
  • Make the normal point towards the neighbouring point
  • Convert the point normal direction value to a rotation value
  • Down sample the potential rotation values into a smaller range of options (15 degree increments for example)
  • Tally up the rotation values rounded up to sit within the smaller range of increments
  • The rotation value with the highest number of points rotated in that direction is the dominant direction for the field
  • Finally, store that rotation as a colour value in the blue channel of the per polygon colour attribute
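
Stripped back to plain Python, that tally looks something like this. The 15 degree bin size is the same as above, and because the edges were resampled first, longer edges naturally cast more votes:

```python
import math

def dominant_rotation(points, step=15.0):
    """points: resampled (x, z) border positions of one field, in order."""
    votes = {}
    for i, (x0, z0) in enumerate(points):
        x1, z1 = points[(i + 1) % len(points)]              # the neighbouring point
        angle = math.degrees(math.atan2(z1 - z0, x1 - x0)) % 180.0
        binned = (round(angle / step) * step) % 180.0       # snap to 15 degree increments
        votes[binned] = votes.get(binned, 0) + 1
    return max(votes, key=votes.get)                        # the most common direction wins
```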

Now I'm not totally sure how sound that logic is but it seemed to work pretty well. I'm sure in the real world the rules that govern how any one field is planted are more complex. Sun direction, slope of the terrain, wind etc I'm sure are all factored in. They are things I could work out but for now, this will do. I made a loop that does that calculation for every single polygon, and stores the resulting direction value as a custom attribute on the polygon.

The field "direction" mask. The texture itself probably isn't that useful, but the attribute it represents is used for rotating UV coordinates per polygon

Roads and Field rotations together

Now, originally I planned to render out a texture map with this blue value stored, and use it to rotate the UV's inside my material in Unreal. This was a bad idea from the get go, because texture compression was always going to corrupt the rotation values per pixel. 

In the end, doing the same kind of operation - rotating the UVs in Houdini - was the way to go, and it simplifies the Unreal material anyway. Win-win.

 

FIELD & FIELD CLUSTER GENERATION

So now I have each polygon UV mapped and its rotation calculated. Time to generate some crop patterns! This was a pretty simple process, but it did go through a few revisions to get the right kind of look. The beauty of doing it in Houdini is that it's super simple to experiment without breaking anything.

Always work with reference! Here is a small patch taken from Google maps, sized to fit a stretch of the Somme river on my map. It shows a tiny sample of the field complexity I need to create.

The idea was to make a square image, rendered as an RGB mask, with a unique field arrangement on it. Then I would render out 500 or so variations, and assemble them again into a cluster of combined field arrangements, giving another level of complex variation. Then in turn I rendered out 1000 of those combinations.

Generating the patterns simply involved mixing and merging simple grids with randomly generated numbers of rows and columns. Then I might rotate the whole grid 90 degrees. This maybe/maybe-not switching was controlled by switch nodes that picked a stream at random. Then, a range of red colour values were assigned per primitive, so each polygon in the grid got a random colour. Then, I resampled the polygon edges to get some more detail and applied a very subtle amount of noise, just to break up the straight lines. Then, I looped through each polygon again and applied a tiny amount of jitter to its position and rotation. I also added a variable amount of smoothing to the edges of each polygon, so the fields had slightly rounded corners. The last thing to do was to stick a flat grid colour under the whole thing to fill any gaps, and when you render it out you get these kinds of random combinations. This was all super quick and dirty - I didn't care about geo overlap or nicely fused edges, none of that mattered. I just wanted random shape arrangements.
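
A plain-Python caricature of that generator - random row/column counts, a coin-flip 90 degree rotation, a random red value per cell and a touch of jitter. It skips the resampling, smoothing and noise, but it's the same shape of logic:

```python
import random

def field_pattern(seed):
    rng = random.Random(seed)
    rows, cols = rng.randint(1, 6), rng.randint(1, 6)
    rotate_90 = rng.random() < 0.5                         # maybe/maybe-not rotate the whole grid
    cells = []
    for r in range(rows):
        for c in range(cols):
            u, v = c / float(cols), r / float(rows)
            if rotate_90:
                u, v = v, u
            cells.append({
                "corner": (u + rng.uniform(-0.01, 0.01),   # tiny positional jitter
                           v + rng.uniform(-0.01, 0.01)),
                "size": (1.0 / cols, 1.0 / rows),
                "red": rng.uniform(0.1, 1.0),              # feeds the gradient map later
            })
    return cells
```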

A small snippet of a typical, albeit messy, Houdini graph. The blue nodes are switches that randomly pick which chain of the graph above to read in as data flows down the chain.

So here is the kind of thing I would get back from the first pass of just creating field patterns

Field patterns

Then I read them in as textures and further combine them to create field clusters

The first pass of field patterns applied per polygon to another grid

Now for the fun bit - I apply the cluster textures WITH the rotation values from the first step to each polygon field. It gives me something like this:

A bit of a jumbled mess, but it's promising

RENDERING AND PHOTOSHOP ASSEMBLY

I render out the entire terrain covered with these field patterns as 4 x 16k tiles. I then have a single 32k Photoshop file where I'm assembling all the layers. It's a massive document, but I have some Photoshop actions that cut the layers into 8k tiles that fit together as described in my last blog entry. There are lots of layers including roads, trees, waterways, height and slope masks, city and town placements, various noise textures etc. All of these will get mixed and enhanced with hand painted details at some point.

MASK AND GRADIENT MAPPING

Now that I have this field texture made up of shades of red, it's time to do some gradient mapping. I took a screenshot from Google Maps and sampled it to make a new gradient in Photoshop. Remapping the reds to match this new gradient, and then pasting the Google screengrab on top, looks a bit like this.

The google screengrab in the middle almost melts away as it mixes with the surrounding Houdini generated fields. Pretty cool! Still needs some colour tweaks and noises mixed in but it's a good result I think. 

That's it for now, next post will cover tree distribution, detailing the colour maps, normal map generation, and other good stuff. Stay tuned!

The Great War - Recreating the Western Front from the air - Part 1 by sandy sutherland

This is the first part of a "making of" series of posts I will do, covering the process of how I put together this giant terrain. The goal is to share knowledge, but also to show how things are often not a straight path to a finished result.

The Goal

Flak Jack is a game that takes place entirely in the air. Even so, to provide the proper context and backdrop for the action in the sky I wanted to depict the landscape, battlefields, and towns of the Somme and Western Front, to a reasonable degree of accuracy. There are no guides in the game or magic arrows that point you to your next objective, so using geographical landmarks like the Somme river, or trench lines will be all the player has to get their directional bearings.

I also wanted vast view distances as you fly around in the clouds, giving the player a sense of the scale of the battlefields, but also the sense of freedom and loneliness fighter pilots felt high above the trenches. VR is naturally quite a claustrophobic experience and the goal is to counteract that as much as possible.

Given my target frame rate is at least 90fps, making such a vast landscape is a big technical challenge, taxing the game's render resources even before I add any planes, characters, FX etc.

So some quick stats on what I'm currently working towards

  • A 200km x 200km area of Northern France and the Somme river, derived from Satellite DEM terrain data
  • A view distance of 90-130 km, with the player flying at an average altitude of 1.5-3km
  • 8K terrain textures for base colour, roughness and maybe normals
  • Real world road data
  • Procedurally generated crop and field patterns and tree distributions
  • Dressing the terrain with possibly millions of trees, basic buildings for towns, mesh based trench lines, towers of smoke FX breaking up the horizon and ground battle FX
  • A playable flying area of approx 50km at any one time, varied from mission to mission
  • Truesky driven dynamic lighting, with clouds casting shadows over huge areas of terrain
  • A mechanic in the game that limits the player getting any closer to the ground than 1-15km above the terrain, with a story based transition effect that occludes the landscape at that close range.

The last bullet point there is a design choice I made in order to save time, meaning I don't need to create terrain that holds up at a close distance. The game is all about the flying, there is no air-to-ground combat, no take offs, no crashing into the ground. The landscape is just a backdrop, there to provide context and atmosphere.

 

Reference material

I'm not trying to be a slave to reality, but I do want to create an environment that feels correct. Anyone who has been in a plane has a sense of what it's like to fly over farmland, what it feels like to pass through the clouds etc. With that in mind I decided to use a mix of real world data and procedural generation. The real world will give me all the natural scales and distributions; the details will come procedurally. I also have the added complication of creating a real place, but as it existed 100 years ago. On top of that, huge areas of my landscape will be heavily damaged, dotted with artillery craters and trench lines. Luckily, there is a stack of material available online. Aerial photographic reconnaissance was actually a huge role for aircraft of the time, and many of the photos taken then still exist. They are amazing and terrifying in equal measure, as they give a true sense of the scale of destruction that occurred along The Western Front.

Here is a small sample of the reference I've gathered so far. Some is from the area I'm trying to create, some is from other wars like the Vietnam War, some is just good reference for scale, lighting and atmosphere. Other games of course are also an interesting point of reference (The Battlefield 1 concept art book is brilliant inspiration!).

 

 

Try and try again

When I started this, I didn't know exactly what approach I would take. It has been a huge amount of trial and error. I've started over from scratch a few times now - sometimes because the results didn't look good, other times because the approach didn't allow for much revision, and also because the methods might not be very efficient and optimised once in engine.

I grapple with technical tasks in a strange way - I don't have a very solid maths or computer science background, so I don't often see the clearest path from the get go. I have just enough grasp on the technical side of things to be able to try different ideas, and more often than not I do manage to get the result I was after, but it's far from an efficient path. Most of what I write on this blog is going to be the messy, unfiltered break down of my process.

Before going too much further, I have to thank my friend Dan for the great guide he put together for getting height maps into UE4 at the correct scale - check it out!

Below is a quick summary of things I've tried and then started over again on, in regards to making a 200km x 200km terrain:

First attempt

  • Get DEM data, assemble it into tiles as per UE4 terrain docs
  • Cover it with the maximum allowable sized single texture (8k), generated from the height information in the DEM data.
  • Add roads extracted from screengrabs taken from Snazzymaps, assembled into an 8k texture mask
  • No good solution for generating the field patterns at this stage...
  • Tested procedural foliage placement tool for trees. Good start, can use texture masks, not totally sure how well it will scale at this point
  • Basic truesky added for atmospheric perspective test. View distance feels good. Set base cloud height to 1.5km

Outcomes

  • Assembling and processing the DEM data involved jumping between a few different apps. Not very procedural
  • Way too many terrain tiles! Good for a streaming tile system, but I need all tiles loaded all the time because of my huge view distance
  • Nowhere near enough resolution trying to cover the whole terrain with a single 8k map
  • Roads took ages to assemble, and then the base image resolution still wasn't high enough, resulting in blocky, very wide roads. Throws off scale.
  • Shadows from clouds in Truesky are done as a light function. At the time of this test light functions don't yet work with the new forward renderer in UE4.
First attempt - 1024 individual landscape tiles! A single 8k colour map stretched over the entire 200km x 200km terrain. The terrain itself has its height values doubled across the whole area to exaggerate the shapes. In the real world it's a very flat area. From this distance it *kinda* works, but any closer and the texture resolution falls apart.

Second attempt

So after getting to this point I decided a few things could certainly be done better. It was obvious texture resolution was going to be a problem. I didn't know how I was going to generate the farmland crop patterns yet. The number of terrain tiles seemed silly. So I came up with a few revisions.

  • Use Houdini - especially the new terrain tools in Houdini 16
  • I sourced the DEM data from a higher resolution source. 30m res instead of 90m I think it was...
  • Cut back the terrain tile count. I'm not streaming them in, so I don't need anywhere near as many. I think each tile is a draw call, so the less the better
  • Make a field patch generator in Houdini
  • Refine the roads texture mask, so the roads were thinner
  • Cover the entire terrain with a series of smaller, tiled textures. The middle of the map could be higher resolution than the outer half
  • Treat all these base textures as masks for distributing tiling detail textures
  • Use Houdini to work out the general "direction" of each crop field, so that the field patch textures could be aligned in a logical way based on the direction of the roads that border them
  • Maybe generate a texture where this direction angle is stored as a colour, and then applied as a rotation value going into the UVs of my material textures

I prepared the newer, higher res DEM data into a single image, and loaded it into Houdini. I enhanced a few areas a bit, just making some of the features more exaggerated. I also tripled the height range towards the very edge of the terrain. I figured it would help break up the horizon, which is an area of the map the player will never get to anyway. I was also able to output some textures from Houdini including height, slope and erosion masks etc.

 

Simple processing of the height map in Houdini

I made some big changes to how I mapped materials and textures to the whole terrain. Instead of using 1 material, I took advantage of the fact that the terrain was already split into tiles, and then made material instances that were mapped to groups of those tiles. In terms of textures, I split into 4 middle tiles and then 4 outer tiles. The middle tile textures are 8k, the outer are 4k. They overlap of course, but when applied to the groups of terrain tiles the overlap is not a problem. I originally tried "stitching" the tiled textures all in the one material. This presented me with a big problem where textures were smearing outside their UV borders. Eventually I realised this was because of mip mapping and general texture compression side effects. I also had a bunch of fiddly UV offsetting to manage to get the texture tile to sit where I wanted it. I have to thank Cory from Morepork Games for helping me out here, explaining the problem. Masking the texture borders could have been done in the material, but it would have added unwanted shader complexity, and in the end his material instancing / mapping to tile groups suggestion was the way to go. Thanks Cory!

So the texture tile breakdown looks something like this:

This is the full 200km x 200km area. The inner 4 coloured squares are each covered by an 8k texture. The larger, more transparent 4 tiles are also covered by 8k textures, but cover twice the area in each direction so are effectively 4k. The blue lines in the top left quarter show the landscape tile divisions in UE4 (8x8 = 64 landscape tiles). The rings are distance markers: 100km, 50km, 10km and 1km. Each colour tinted tile has a material instance applied, so 8 instances in total.

That is probably enough for this post, part 2 will be coming soon covering how I made these crazy field patterns in Houdini!

Look Ma, I'm a Farmer!

Somme Terrain progress by sandy sutherland

Things have been moving along nicely in regards to terrain work. I pretty much went back to square one, after trying several different processes. The main hurdle has been trying to pack 40,000 sq km worth of texture data into a manageable set of textures that give a respectable level of resolution. Things like getting the width of the roads to feel approximately correct has a big influence on how you perceive the scale of the world.

As I've mentioned a huge tutorial will be done outlining the whole process, but until then here is a small update video. Hopefully it gives a little sense of the atmosphere and scale I hope to create.

 

The Vintage Aviator - Gallery by sandy sutherland

As mentioned in the last blog post, earlier this year I got the chance to visit Hood Aerodrome in Masterton, home of The Vintage Aviator. This is an amazing collection of WW1 aircraft, many of which are still fully functional flying machines. It was a great chance to find out about the history of the planes, the early design ideas of the time, and also just how fast things developed over the period of the war. This was a time when the rules of flight were still being figured out.

I took a stack of photos during the visit, with the goal of recording a lot of the small details of the planes, what materials they are made of and how the surfaces react to light. I also took a bunch of video which I may post at some stage.

Enjoy!

Follow this link to go to the full page gallery - The Vintage Aviator GALLERY

A long overdue update by sandy sutherland

It's a little embarrassing to notice my last blog post was August last year. I've actually been thinking about and working on Flak Jack a whole lot since that time, but my day job at Weta has taken over. For almost a year I've been working on a major sequence for the film War for the Planet of the Apes. We are days away from finishing, which hopefully means more free time to dive back into Unreal Engine.

Scope vs time

At this stage, progress on the game has been very slow in terms of actual on-the-box development time. I'm being very ambitious, and available time to work on the game has certainly been less than I would have liked. Having said that, I'm not overly worried. I don't have a deadline I'm being held to, and I have plenty of experience working on things that just take time, so motivation and persistence are on my side. I've also had time to document a lot of ideas and spend time thinking about systems. I have a solid day job that pays the bills, even if it requires crunch periods that take time away from other things. Basically, this thing is for fun, it's not something I have to do to stay off the streets. Time is just starting to free up again so I'll be able to block in a few core game systems like flying and basic shooting, as well as a solid pass on the environment art.

Design progress

I've done a huge amount of writing and re-designing the structure of the game, really thinking about ways I can make the game a smooth, seamless, immersive experience in VR. On one hand, I'm making "just" another third person shooter game. However, there are a whole lot of really strange ideas I'm planning to try that I hope will set the game apart. It's cinematic but with no cutscenes. It's a small character study on a massive stage; it's silly and fun but with some darker undertones. One positive side effect of work getting in the way is that I've had time to think over ideas and concepts. Many have evolved into much better ideas. Some faded and died. Overall though, the ideas have had time to just sit with me, and I can say that I'm still super excited about what I'm aiming for. Words are cheap of course, so until I start to implement some of these designs, who knows how well they will really click together. It's going to be really hard to strike just the right tone. One thing is for sure, nobody has made what I'm going to try, especially in VR. That is both terrifying and exciting. More details will come down the line once some implementations are further developed.

Vive!

Late last year I applied for a developer grant from Epic Games. I shot them a quick summary of my game, and then kind of forgot about it. A few weeks later, I get an email back saying they were going to send me out a Vive headset. Nice! Thank you Epic! I could say goodbye to the Oculus DK2, and Razer Hydra. 

The Vintage Aviator / RAF Museum Visit / Imperial War Museum

Earlier this year we had a chance to go and visit Hood Aerodrome in Masterton, a couple of hours north of Wellington. This is the home of The Vintage Aviator, one of the best collections of functional, flying WW1 airplanes on Earth. We were there for one of their flying weekends, where you are given incredible access to the planes, there are flying demonstrations, and a very informative walking tour of each plane as they sit out on the field. It was an extremely valuable opportunity to gather photo, video and audio reference. Some of the planes in the collection are the only ones in existence in the whole world, and they are only a few hours' drive from my house. Amazing.

In February, we went on a 3 week trip to the UK. I took a day to myself to go and visit the RAF Museum in North London. They have a relatively new exhibition focusing on the aircraft of WW1 and the history of aviation as part of the war effort. It's an amazing display, and I had the whole thing almost to myself for a few hours. I also paid a visit to the Imperial War Museum which itself has an amazing exhibition of WW1 stories and relics. WW1 sure was an insane time period.

I will post a gallery of the reference photos and video I took on these trips very soon!

Terrain / Environment development

One major goal for the game is to give you that vast sense of scale, of what it feels like to be 3 km up in the air in an open cockpit wood and fabric plane, cold air and engine oil blasting you in the face, high above the vast wasteland of The Western Front. To achieve that I need a space with huge view distances; right now I'm targeting an area 200 km x 200 km in size, with an accessible play space maybe 100 - 150 km across. I'm using real world satellite terrain data, and then detailing that procedurally. I will eventually do a far more detailed blog post about it, but right now I'm going through a second revision of the terrain process as I try to use Houdini 16's new heightfield tools, combined with Quixel Megascan textures and Substance Designer textures. I am still using Truesky for my clouds and lighting solution, which is due to receive a pretty major update in the not too distant future. Some very cool stuff.

Some early progress on terrain. 200km square. Real world scale and real roads.

A very quick test of mixing textures in Megascan Studio. No man's land craters + puddles


Bonnie snaps - Some visual references for Flak Jack by sandy sutherland

Thought I would post some of the images I've found as inspiration for Flak Jack. Some of these are targets for what I want the game to look like, some show character relationships, some point to the sense of humour I'm going for. Others are just weird and wonderful things I've discovered looking into WW1 history, early aviation and Scotland.

 *** NONE OF THIS IS MY WORK - It's all pulled from the net and made by people far more talented than I.

"It's nae a skirt!" - the first 5 months developing Flak Jack by sandy sutherland

What? Why?

Back in early April, I got slapped in the face with a ridiculous idea. It was so vivid from the get-go that I knew I had to make it:

"A comedic VR game where you fly a plane with your head movements, and shoot with motion controllers in air to air dogfights, set in WW1. The pilot is a mad Scotsman. He has a co-pilot dog. It's called Flak Jack."

That all came to me in that very first instant. It was so daft, I had to dive right in. Nobody else was going to dream that one up.

Early reference images - The 3 main elements of Flak Jack

 

I've been tinkering with game development for a couple of years now, ever since the release of Unreal Engine 4. Spurred on by a couple of friends at Weta who are also interested in game dev, I've been stepping it up, working away on various things several nights a week. In that time I've realised that the bit I really enjoy most is game design itself: dreaming up worlds and gameplay systems. So, after a fair amount of time playing with certain aspects of UE4 and learning the engine, I decided I was just going to go all in and make a complete game. I was also going to try and do it using only UE4's Blueprint system for visual scripting. That means I would be typing next to no code.

It's a stupidly ambitious thing to try and do. Even a "simple" game is more complex than most people would imagine. Still, if there is one thing I've learned from 13+ years of film VFX, it's that I've got a pretty solid sense of how to break complex things down into manageable pieces and construct a system. I know how to create spectacle. I've got a fairly good idea of how drama and pacing work. In that way, there is a lot of crossover between VFX and game development. More than that though, after years of making other people's stories and worlds, I felt it was time to put something out there that was my idea.

Where the hell did that come from?

At first glance it seems like a pretty random game concept, but ideas do not come from a vacuum. My Dad's side of the family is Scottish and I grew up exposed to Scottish culture (I grew up in Australia). My Mum's father built model airplanes and I used to build model kits as a kid. I spent many a school holiday at their house playing flight sims...and Doom. We played lots of Doom. It was my grandfather who first introduced me to 3D graphics; he used to draw plans for his model builds in AutoCAD. I've played plenty of shooter games over the years, and despite the vast numbers of shooters out there, I feel I've found a somewhat fresh take on things - especially for a VR game. There are dozens of other references and influences that have since fed into the game idea; Tintin and Snowy. Han and Chewie. Wallace and Gromit. John Woo movies. Tarantino. Slapstick comedy. Indiana Jones. Bagpipes. Sea Shanties. Homing pigeons. The advancement of technology. It's a good ol' mix.

Get on with it.

So I started, armed with UE4, an Oculus DK2 VR headset, and a Razer Hydra - hand-held motion controllers. Beginning with the flying game template provided by Epic, I took their crude flying saucer demo and wired it up so that the pitch/roll/yaw of the DK2 controlled the pitch/roll/yaw of the spaceship. Then I swapped out the spaceship for a dodgy model of a plane I found online. Already I could fly around a level steering the plane with my head, the plane sitting in front of the view, moving at a constant speed.

VR Camera + Plane + Distance Markers

The absolute basics of flight were now working. There is a huge amount still to do on flight mechanics but I'll save that for another day.
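For anyone curious what that wiring looks like in code terms: I'm doing it all in Blueprints, so the following is just a rough C++ sketch of the equivalent logic. Every name in it (AFlakPlanePawn, HeadSensitivity, ForwardSpeed) is a placeholder for illustration, not my actual setup.

```cpp
// Rough sketch only: read the HMD orientation each tick and turn the head's
// deviation from straight-ahead into pitch/roll/yaw rates for the plane.
#include "HeadMountedDisplayFunctionLibrary.h"

void AFlakPlanePawn::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    FRotator HeadRotation;
    FVector HeadPosition;
    UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HeadRotation, HeadPosition);

    // Scale the head angles by a player-tunable sensitivity value (see the UI section below).
    const FRotator SteerRate(HeadRotation.Pitch * HeadSensitivity,   // pitch
                             HeadRotation.Yaw   * HeadSensitivity,   // yaw
                             HeadRotation.Roll  * HeadSensitivity);  // roll

    AddActorLocalRotation(SteerRate * DeltaTime);

    // Constant forward speed for now, just like the flying template.
    AddActorLocalOffset(FVector(ForwardSpeed * DeltaTime, 0.f, 0.f), true);
}
```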

Clouds

One of the main goals I have for Flak Jack is to be able to use 3D volumetric clouds as a play space, not just a backdrop. Clouds and skies in many games are either an image mapped onto a giant sphere that surrounds you, or 2D cards/sprites with a cloudy texture applied. Neither was going to be enough in my case. UE4 does not have a native method for creating volumetric FX like clouds. They are typically heavy to render and require lots of data. So, I had an idea and fired up Houdini. The basic concept was to make cloud volumes in Houdini, render them from a bunch of angles with lighting baked in, and then use UE4's imposter sprites to draw the clouds in the game. Imposters are a trick that gives the impression of a 3D object by rendering it from a series of angles up front, then swapping which image is shown based on the current viewing angle. Here is an early test result.

Houdini volume renders with lighting baked, as imposter sprites in UE4

This was an interesting test, but I knew it was going to be limited and time consuming to generate enough variety. 
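For the curious, the core of the imposter trick boils down to picking which pre-rendered view to show based on where the camera is relative to the cloud. Below is a minimal sketch of that selection step, assuming the frames are spread evenly around a horizontal circle; none of these names come from an actual UE4 API, they are just illustrative.

```cpp
// Sketch only: given NumFrames views of the cloud rendered around a full
// horizontal circle, return the index of the frame rendered from the angle
// closest to the current camera direction.
int32 PickImposterFrame(const FVector& CloudLocation, const FVector& CameraLocation, int32 NumFrames)
{
    const FVector ToCamera = (CameraLocation - CloudLocation).GetSafeNormal2D();

    // Angle around the up axis, converted to the 0..360 degree range.
    float Yaw = FMath::RadiansToDegrees(FMath::Atan2(ToCamera.Y, ToCamera.X));
    if (Yaw < 0.f)
    {
        Yaw += 360.f;
    }

    // Map that angle to the nearest pre-rendered frame.
    return FMath::RoundToInt(Yaw / 360.f * NumFrames) % NumFrames;
}
```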

Then I discovered Truesky. It's a plugin for UE4 that allows for the creation of very natural looking clouds, with automatic sun light control and atmospherics. It's also fully volumetric and controllable with Blueprints. It runs great in VR, and they have an indie licensing option. Boom! Clouds. Sorted.

Early Truesky test with a terrain ripped from Google Maps. Promising!

Blueprints / UI

I realised fairly early on that I still had a huge amount to learn about UE4's Blueprints system. Mainly, how various blueprints should talk to one another. After I hooked up the headset/flight control I saw straight away that some kind of sensitivity control should be exposed to the player so that the plane movement wasn't too twitchy. So I got started on a UI and made a slider to control sensitivity. In VR games you really need any kind of UI to be a physical thing in the 3D world; you can't just slap it on the screen with no distance from your eyes, otherwise it makes you want to puke and you can't focus on it.

The Oculus Store has a nice UI interaction system where the selection cursor is locked to the middle of your view, so you just look at the thing you want to click on. I decided on the same approach for Flak Jack. I made my UI buttons, stuck them on a 3D widget, set the widget to sit a bit in front of my game camera, and then did a line trace from the camera to the widget to work out which button was sitting in line with my gaze. I added a crosshair graphic, and made the buttons change colour when hovered over. I decided to use the sensitivity slider purely as visual feedback on the UI - the user changes it with the left stick on the controller. It's not a slider that you click on and grab. I did this because it makes it faster and easier to change mid flight. Originally, the slider was visible whenever you called the game UI, but now it only shows up when you are actually in flight and call the UI.

Oculus style menu selection for VR UI, with slider feedback
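For anyone wondering what that gaze trace looks like under the hood: again, my version lives in a Blueprint, so this is just a hedged C++ sketch, and names like GazeDistance, UIWidgetComponent and HighlightButtonAtHit are made up for illustration.

```cpp
// Sketch only: trace from the VR camera along its forward vector and see what
// the player is looking at. If it's the 3D UI widget, highlight the button
// sitting under the crosshair.
void AFlakPlanePawn::UpdateGazeSelection()
{
    const FVector Start = CameraComponent->GetComponentLocation();
    const FVector End   = Start + CameraComponent->GetForwardVector() * GazeDistance;

    FHitResult Hit;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this); // don't hit our own plane mesh

    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        if (Hit.GetComponent() == UIWidgetComponent)
        {
            // Hypothetical helper that maps the hit location to a button and changes its colour.
            HighlightButtonAtHit(Hit);
        }
    }
}
```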

Motion controllers / Jack IK

Flak Jack is a third person game. Jack flies standing up in the cockpit so he can shoot out over the top of the plane's wing with hand held weapons. I wanted his arm movement to be in sync with how the player was moving the motion controllers. I set up some really basic arm IK, and did some calibration and offsetting of the Hydra controllers, so that the player's hand motion was mapped from their position onto Jack, who stands in front of them. Eventually I'll need to do something where I get the player's maximum reach / arm length and adjust the IK and Jack's animation to match. The plan is to incorporate gesture controls for reloading and things like that.
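The core of the idea is just a transform remap, something like the C++ sketch below. The real thing is a Blueprint, and every name here (RightController, JackOffset, JackAnimInstance, SetRightHandIKTarget) is purely illustrative.

```cpp
// Sketch only: take the motion controller's position relative to the player's
// tracking space and re-apply it at Jack's position, a fixed offset ahead of
// the player, then feed that into the arm IK.
void AFlakPlanePawn::UpdateHandIKTargets()
{
    // Controller location expressed in the pawn's local space.
    const FVector RightHandLocal =
        GetActorTransform().InverseTransformPosition(RightController->GetComponentLocation());

    // Re-apply that local motion at Jack's offset, then convert back to world space.
    const FVector JackHandTarget =
        GetActorTransform().TransformPosition(RightHandLocal + JackOffset);

    // Hypothetical hook into Jack's anim blueprint (e.g. a Two Bone IK effector).
    JackAnimInstance->SetRightHandIKTarget(JackHandTarget);
}
```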

July rebuild

After a couple of months of slapping things together, my blueprints and project were already getting messy even at this early stage. I stopped adding new shit and did some housekeeping. I had originally just added lots of things as components on the flying pawn: the plane mesh, Jack, all the motion controller stuff, particle FX, the UI, etc etc. I began to bundle things into their own blueprints and then added those blueprints as child components to the flying pawn. I then moved all the required functions off the flying pawn and into each of those blueprints. It's now much cleaner and more logical, and is more in line with how I should have done things to begin with. I began to get more comfortable with casting, and getting all the blueprints talking to each other. All rookie errors I'm sure, but I'm glad I caught them early.
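In C++ terms, the post-cleanup structure is roughly equivalent to a pawn that just owns a handful of focused sub-objects, as in the sketch below. Class and variable names are placeholders, not my actual Blueprint names.

```cpp
// Sketch only: after the refactor the flying pawn owns one component per area
// of responsibility, rather than every mesh and function living on the pawn itself.
AFlakPlanePawn::AFlakPlanePawn()
{
    PlaneMesh       = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("PlaneMesh"));
    RootComponent   = PlaneMesh;

    JackChild       = CreateDefaultSubobject<UChildActorComponent>(TEXT("Jack"));            // Jack + his logic
    UIWidget        = CreateDefaultSubobject<UWidgetComponent>(TEXT("UIWidget"));            // 3D menu widget
    RightController = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("RightHand")); // motion controls

    JackChild->SetupAttachment(RootComponent);
    UIWidget->SetupAttachment(RootComponent);
    RightController->SetupAttachment(RootComponent);
}
```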

Mission control

One goal is to make the steps from first opening the game to flying an actual mission as seamless as possible. It needs to ease you into things so you can get comfortable being in VR - once you are actually flying and shooting it's going to get pretty hectic. The game loads up with you positioned just above the clouds. The idea is you pick a mission type and then various aspects of the mission are randomly generated, one of those being the clouds and time of day. So, before the mission actually starts, the game goes through a little 15 second sequence I'm calling the "pre trip". It gives the player a moment to get settled in. During the pre trip a timelapse happens and the clouds/sky change, you get transported to the mission area, and Jack swoops in from behind you, does some funny things or starts telling a story, and you are gradually given control of the plane as it settles in front of you. It's visually cool, gives a chance for some character development with Jack, allows time for the mission elements to load, and basically takes you from point A (the start menu) to point B (the mission start) in one smooth move. Sweet. I'm still sussing out the clouds timelapse and need to smooth out the travel speed, but I think the basic idea will work.

I created a blueprint called "Mission Control" that handles all of this pre trip stuff, sets all the parameters for the selected mission, and just generally initiates and keeps track of the current mission setup.
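As a rough illustration of what Mission Control does, the pre trip flow boils down to something like the sketch below. The real version is a Blueprint, and every class, function and enum name here is a stand-in.

```cpp
// Sketch only: set up the randomised mission, run the ~15 second pre trip on a
// timer, then hand flight control to the player.
#include "TimerManager.h"

void AMissionControl::StartMission(EMissionType MissionType)
{
    GenerateMissionParameters(MissionType); // clouds, time of day, mission area, etc.
    BeginSkyTimelapse();                    // drive the Truesky time-of-day change over the pre trip
    MovePlayerToMissionArea();              // the smooth travel from the menu space to the mission
    CueJackIntro();                         // Jack swoops in and does his thing

    GetWorldTimerManager().SetTimer(PreTripTimerHandle, this,
        &AMissionControl::FinishPreTrip, 15.0f, false);
}

void AMissionControl::FinishPreTrip()
{
    // Hand the plane over to the player (instant here, gradual in the real design).
    EnablePlayerFlightControl();
}
```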

Art - I mean "Art"

Over the years my art skills have got pretty rusty, but I started to flesh out a few stand-in art assets to act as placeholders. The first of those was the logo, which I wanted to look like trails in the sky. I also mocked up a super simple model of the plane you will fly in the game - the mighty Bristol F2b.

OOOOOOOOOO purty

Probably the best 3D model ever made

Where it's at 

So that covers a lot about what's been involved in getting things to this point. Right now it's all still something I'm just bashing out a few evenings a week, but I'm happy with the progress. It's got a long, long way to go, but here is a video of it all together.