Stephen C Wright Creative Development

Regarding Oculus and "GiantGate"

Yesterday was the kick-off for E3 2016, the premier gaming event of the calendar year for both players and developers. The event is marked by flashy press conferences from the big players such as Microsoft, Sony, EA and Ubisoft, as well as hundreds of announcements and teasers from smaller studios and indie developers about their upcoming releases. In spite of the obvious overhyping that the larger publishers tend to do (“E3 Quality”), it’s an event full of promise and excitement.

I was pretty pumped when I started the day. Bethesda announced some cool bits, especially Fallout 4 VR; EA was…well, more of the same really. Microsoft knocked it out of the park with 2 new console announcements and “Play Anywhere” (although still too many “exclusives”). Sony had lots of VR content for PSVR, and the PC Gaming Show continued its tradition of announcing cool and quirky titles.

Then it all fell apart.

Just before I was heading to bed (towards the end of the PCGaming show), I scanned Reddit to see if there was anything I had missed. One post in particular suddenly caught my eye:


To my horror, one of my most anticipated games for VR, Giant Cop, had silently dropped support for OpenVR (SteamVR/Vive) and gone Oculus-exclusive. At first, I thought it must be an upload issue, and that the Reddit community was jumping to conclusions (no surprises there). However, this was not the case. About an hour later, Marc McGinley, the Design Director for Giant Cop (and someone who I’ve met and found to be a pretty cool and down-to-earth guy), posted an update in the Reddit thread:

Sorry guys, I’m not ignoring comments, I’m out of the office, I’m also not at E3 and I don’t make business decisions. I’m a designer, and I’m responsible for making a kick ass game.

To address your questions:

It’s a timed exclusive, meaning it will be on the HTC Vive. Anyone with a Humble preorder who is unhappy should be able to obtain a refund from them, if not they should contact us. We’ll make sure it’s sorted out. Please email for support requests.

Like I said before, I have nothing to do with this decision, I can’t comment on the why, how or any of that stuff. Please also understand that we’re a team of people who want to make an awesome game, we’re not after a money grab. With Giant Cop we’re making a piece of art so please respect us as human beings who love what we do.


Now, there are a couple of issues here that Other Ocean need to deal with. First and foremost, people who pre-ordered the game through Steam on the understanding that they would be able to play it on the Vive as soon as it was ready should be able to get a refund. It looks like Other Ocean are dealing with this, so that’s good. The second, slightly more dubious issue is that if Other Ocean received a Vive Pre for free from Valve/HTC for developing this as an OpenVR title, they are really acting against the best interests of the community and of the organisations that support it and give out hardware for free in good faith.

The bigger problem with Oculus

In spite of all of that, I don’t really blame Other Ocean for doing this, because I don’t know if I could have stuck to my morals if I was offered $100,000s for simply pushing out a release date on one platform by a couple of months. Indie studios have it tough enough as it is, and that much money would likely be a godsend. We also don’t know what position Other Ocean are in financially with this game, so it’s possible the game simply wouldn’t have released without the money.

For Oculus’s part though, they are effectively “poisoning the well” of the VR community, which, ironically, is something Oculus have said they wouldn’t do, and have accused others of doing before. This isn’t a situation where Oculus step in as a publisher for an indie studio, fund the game, and keep it exclusive to Oculus Home. Although I think that the gaming world needs fewer “walled gardens” of content distribution, I totally understand and respect a publisher’s decision to protect its investment.

No, this situation is pure and simple Oculus paying a developer to not release on its rival’s platform. The title is already on Steam, so they don’t need a publisher, and pretty much the entire VR community knows about it (bearing in mind at this point the VR community is no more than a couple of hundred thousand, based on the number of Vive/Oculus HMDs that have shipped). It’s anti-consumer, anti-competition, and just plain wrong. It’s the Zuckerberg behemoth swinging its “everything must be ours” net, and forcing consumers out of having a choice in their hardware.

There is absolutely no reason to engage in this behaviour. It would be like Razer saying “OK, there was this game which was developed for Windows, but we saw it was really popular and decided that for the first 6 months, you can only play the game if you own our DeathAdder mouse.” Rifts and Vives are peripherals, plain and simple. Once you have the runtime installed on your PC, there’s very little additional work that you need to do inside a game engine to get it to work (that hasn’t already been done by the engine developer). It’s not like you can even draw parallels between Xbox and PlayStation, because at least in those instances there is a tangible financial cost associated with choosing the architecture, as they’re both so different.

It needs to stop. Since the news went out about Giant Cop, one of the developers for Serious Sam VR commented saying that they too had been approached by Oculus with “a shitton of money”, but thankfully turned them down…

Setting up SteamVR support in Unreal Engine 4

A couple of days ago, a package arrived for me:


Hot diggity! I’ve spent the last couple of days playing different tech demos and calibrating it (Valve/HTC take note: you really need some HMD calibration instructions regarding IPD, strap positioning, FOV etc…)

Since I’ve had a couple of hours this morning to myself, I’ve started looking at setting up HMD and controller tracking in Unreal Engine 4. The process is surprisingly straightforward, but there are a few quirks with regards to the SteamVR assets that I’ve had to battle with, so I’ve outlined them below.

Setting up SteamVR with Unreal Engine 4.11.2

OK, so SteamVR support has been baked into Unreal Engine as of 4.9. You shouldn’t need to enable the SteamVR plugin to get stuff to work, and you shouldn’t really need to do anything particularly special, assuming that you follow this setup guide. One thing I’ve found with the Unreal Engine tutorials is that whilst they are pretty comprehensive, they can go out of date really fast, so best practice might change at some point in the future…

Once you’ve got the headset tracking set up (which is as simple as creating a new Pawn class, and attaching a SteamVRChaperone component to it), you can move on to setting up the controller tracking.

Now, on the face of it, this is really straightforward. In your VR_Pawn class (or any pawn class for that matter), you add a new Motion Controller component:


make sure that it’s assigned to the correct hand (left/right):


Then, you create a new Static Mesh component as a child of the Motion controller:


OK, now here’s the slightly tricky part if you want to use the Vive Pre controller models. If you assign basic cubes to the mesh at this point, everything works as expected: your controllers will rotate/translate as normal. However, if you want to use the Vive Pre controller models within your environment, you can import them from SteamVR. They live here:

C:\Program Files (x86)\Steam\steamapps\common\SteamVR\resources\rendermodels\vr_controller_vive_1_5

If you import any of the models in this directory (I use the vr_controller_vive_1_5.obj model), you will get a warning that the model is very small. Therefore, you want to import the model with a scaling factor of ~100 (I used 85, and then had to scale the controller up a bit more).

Also, you should probably fix the rotation of the model import so that you don’t have to do this:


I’ve had to rotate the controller on the X and Z axis in order to get the correct orientation.


Finally, there are a couple of gotchas that aren’t in the Unreal Engine wiki for setting up motion controllers.

Firstly, in order to get anything to display at all with the controllers, you have to set the Collision options on both the Static Mesh and the MotionController component to NoCollision. I don’t know if this is a quirk of using a Pawn for the player, rather than a Player Controller class, but if you have the collision options set to anything else, you won’t be able to see your controllers, because they won’t spawn (as they’re set to 0,0,0 as their initial location and thus intersect).

Also, if you don’t have a camera component in your Pawn class, your controllers will be offset compared to your head and arms. You don’t need a Camera component if you’re not using controllers, but if you are, it’s a requirement. (This one definitely is a result of using the Pawn class, but using the Player Controller class has…other issues)

Getting Back Into It

This is a bit of a story…

A little bit of history

Back in 2010 I graduated university with a first-class degree in “Computer Games Programming”. In the previous 4 years I had gone from knowing precisely nothing about actually developing software to being fairly fluent in C++, C#, and Java. I had dabbled in DirectX, OpenGL and a plethora of other 3rd-party graphics APIs, and had developed maths libraries based on my substantial knowledge of vector and matrix maths.

I’d like to say that I left university feeling pretty good about my prospects in the games industry. I’d like to say that I felt like my 4 years of study had resulted in a developer who could put his mind to any problem and solve it, given the right set of tools and time. But I can’t say that. I left university feeling worse about my job prospects than when I had entered in 2006. My final year nearly destroyed me, a situation caused mostly by a pretty disastrous dissertation, in which I attempted to write a rigid-body physics engine based on real mathematical formulae. My formulae were actually fine, but a few weeks before the end of the project I realised that I had coded myself into a black hole of epic proportions, and my engine wouldn’t be able to cope with more than a couple of rigid-body collisions at the same time.

I remember the exact moment I realised my mistake; I remember the blood draining from my face like I had discovered some horrifying secret or received some terrible news. I remember frantically changing, versioning and re-versioning my code in a vain attempt to correct the problem, but eventually I came to the realisation that my project was lost.

I don’t really have any recollection of the final few weeks of my course. I wrote up as much as I could and documented a large amount of the theory I had intended to include in the engine, and I think that alone was responsible for my overall grade (although I had done pretty well throughout the rest of the course, which obviously contributed). My then-girlfriend (now wife!) did her best to console me, and to her greatest credit I do think she kept me going through that period from a mental point of view, but overall my confidence in my own ability was completely gone, and I felt like no self-respecting development organisation would take me on.

Luckily, I had previously spent a year working at IBM, which, although not a games development organisation, gave me the way in to a career. For the last 6 years I have worked for the Storage development organisation there, slowly making my way back into development, through test and team leadership.


On the 1st of August 2012, a company called Oculus VR started a Kickstarter campaign for a device known as the “Rift”. The Kickstarter raised $2.4M and allowed Palmer Luckey’s team to develop one of the most ground-breaking peripheral devices of this generation. In 2014 Facebook bought Oculus VR for $2B. Through successive iterations of the technology, another player entered the field: HTC and Valve, with the HTC Vive. Cut to 2016, and having tried both the Rift and the Vive, I was hooked. The spark that had left me was re-ignited, and I wanted to do nothing else but develop games for these incredible pieces of technology. Unfortunately, I’ve pretty much forgotten everything that I knew about developing games, so I’m re-teaching myself everything from the ground up.

The next few blog posts will likely be about my baby-steps back into the universe of games development, and boy am I excited about it!

Workflow on Windows

I’ll admit, most of my development is done on OSX. My primary “work”station at home is my early 2015 15” rMBP, and I’ve got a pretty optimized workflow on there for web development. However, for application development and games development my new desktop is my powerhouse…and it is a powerhouse!

Node, Ruby and Git

That being the case, I don’t feel like Windows has the right command-line integration to truly be as optimised for the kind of gem and npm work that comes with web development. So the first thing that we’re going to want to do is get Windows playing happily with those two.

Node & NPM

First off, you want to install node and npm. To do that, head over to the Node.js website and download whichever version of Node you want (either the LTS version or Current). If you’re confused about which one you need, this link has the details of the release plan. The Node installer comes with NPM now (it didn’t used to), so once you run the installer, you should be all set!

To validate node, open Command Prompt or Windows Powershell and run:

node -v

You should get a response that looks something like:


To check NPM, it’s the same thing:

PS C:\Users\Stephen> npm -v

OK Cool! So that’s Node/NPM installed!

Ruby & GEM

The next thing you’re going to want to do is install Ruby. This is slightly more tricky than Node, but not by much.

First, grab the Ruby Installer. You’re going to want the Ruby Installer for Windows (I chose the 2.2.4 x64 variant), and you’re also going to want the DevKit-mingw64-64-4.7.2-20130224-1432-sfx.exe (The Ruby Development Kit, you’ll see why in a sec..)

So go ahead and install Ruby. It’ll install it somewhere like C:\Ruby2.2.4, which is fine. Check that it’s installed by issuing ruby -v, which should output something like:

ruby 2.2.4p230 (2015-12-16 revision 53155) [x64-mingw32]

Once that’s done, you’re going to want to install the Ruby Development Kit that you also downloaded. This one is slightly more tricky.

  • Firstly, run the .exe that you downloaded and have it extract somewhere that it’s going to live (I extracted mine to C:\RubyDevKit).
  • When it’s finished extracting, you need to cd into that directory and run ruby dk.rb init, which generates a config file containing the path to your Ruby installation.
  • Once you’ve validated the config.yml file that it creates (you can use ruby dk.rb review for that), run ruby dk.rb install to complete the installation.
  • Finally, to validate that you’ve installed GEM correctly, you can do something like:

    gem install json --platform=ruby

  • and then

    ruby -rubygems -e "require 'json'; puts JSON.load('[42]').inspect"

Which will confirm that both GEM and Ruby are working.

So now you also have Ruby installed on the command-line. Good Job!


The final piece of this puzzle (from a toolchain point of view) is Git. On Linux and OSX this is really straightforward (either sudo apt-get install git, or install the Xcode tools), but on Windows it’s a separate application that hooks (optionally) into cmd.

Head over to the git website and download the git tools for Windows. Once downloaded, you need to start the installation, but change a couple of the install options along the way.

  • Make sure that the “Use git on Windows Command Prompt” option is selected, otherwise you will only be able to use the git tools in git-bash (a custom bash instance)
  • Make sure that you use “pull files as-is, push as Unix line-endings” if you are doing cross-platform development
  • Make sure that you allow Git to use the command prompt options
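For reference, the line-endings option above corresponds to git’s core.autocrlf setting; “pull files as-is, push as Unix line-endings” should be equivalent to the following (assuming git is on your PATH):

```shell
# "Checkout as-is, commit Unix-style line endings"
git config --global core.autocrlf input

# Verify the setting took effect
git config --global core.autocrlf
```

The installer just writes this into your global git config for you.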

Once that’s installed, you can validate it by doing:

PS C:\Users\Stephen> git --version
git version


With these three tools, you should be able to happily develop feature-rich full-stack web applications using either Ruby on Rails or JavaScript, and use the power of git to control your source code versioning.

Getting Started with GitHub Pages

Until last week I had 4 sites hosted on a Linode VPS. Personally, I have never had any issues with Linode, and I would actually recommend them as a VPS provider, but there have been recent issues, and although I found that in general they dealt with this problem pretty promptly, the whole thing has made me rethink how I host stuff.

Out with the old…

Having 4 sites is sort of unnecessary for where I am as a developer right now. I was previously hosting:

  • This personal blog
  • The landing page for Trailio
  • Our wedding website
  • My sister’s Wordpress-based portfolio

So I decided to shut down the wedding website, get my free-loading sister to move her Wordpress-based portfolio off my VPS(!), and move both my personal blog and the landing page for Trailio over to GitHub Pages. My “organizational” page is hosting my blog (which my custom domain points at), and the project page for Trailio sits as a subdomain of that by default.

In with the new!

I didn’t really care about any of the content from my old personal blog, but I have it as a backup from my VPS should I choose to host it here again some time in the future. (For reference, I was previously using a Ghost-powered blog, which was pretty cool!)

GitHub Pages has good integration with Jekyll, the “hacker” static blog generator. What it actually does is compile a bunch of Markdown and layout pages, and serve the result out as a static site. I don’t really blog enough to need a powerful CMS, so this suits me, and it also allows me to write posts in Sublime Text!
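To illustrate the idea (with hypothetical files — real Jekyll handles front matter, Liquid templates and Markdown properly), a static site generator is essentially doing this:

```shell
# A toy "post" with YAML front matter, like Jekyll uses
mkdir -p _posts _site
cat > _posts/post.md <<'EOF'
---
title: Hello
---
Some **markdown** content.
EOF

# Pull the title out of the front matter
title=$(sed -n 's/^title: //p' _posts/post.md)

# Everything after the second '---' is the post body
body=$(awk '/^---$/{c++; next} c==2' _posts/post.md)

# Render the body into a trivial "layout" and write a static page
printf '<html><body><h1>%s</h1><p>%s</p></body></html>\n' \
  "$title" "$body" > _site/post.html
cat _site/post.html
```

Jekyll does the same thing at scale, with proper Markdown rendering and themeable layouts, and GitHub runs it for you on every push.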

Getting Jekyll up and running was a fairly simple task. I chose Lanyon as my Jekyll theme, as it’s quite similar to Uno, my old theme for Ghost. You basically create a new repository called <username>.github.io, clone the Lanyon repo into that repository, commit it, and GitHub does the rest!

The slightly tricky bit was mapping my custom URL to my organizational page. My domains are currently with 1and1. I’m not linking them because frankly they’re complete shit and I’m looking to move each of my domains over to Namecheap as soon as each comes up for renewal. At any rate, in order to map your domain to GitHub Pages, this is what you need to do:

  • Remove any existing DNS settings from the domain (mine were currently set to Linode’s DNS)
  • If your domain supports ALIAS records, fantastic! Map your ALIAS record to your URL and you’re done! Otherwise…
  • Create a new subdomain for www (stupid, I know, but 1and1 don’t support ALIAS records…)
  • For the APEX (@) domain, you need to map the A record for it to one of the two GitHub IPs ( in this case)
  • For the www subdomain, map the A record for it to the other GitHub IP ( in this case)
  • In your pages repository, create a new file called CNAME, and put a single line in there for your URL (in this case,
  • Once you’ve done all that, wait for various DNS caches to flush, and you should be up and running!
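The CNAME step above can be sketched from the repository root (example.com is a stand-in for your actual domain):

```shell
# GitHub Pages reads a file literally named CNAME in the repository root;
# it should contain just the bare custom domain on a single line.
echo "example.com" > CNAME
cat CNAME   # -> example.com
```

Commit and push the file as usual, and GitHub picks it up on the next build.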

To double check, you can use the following command: dig +nostats +nocomments +nocmd

It should look something like this:

; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> +nocmd +nostats +nocomments
;; global options: +cmd
;			IN	A
		3600	IN	A

For a project page, the steps are exactly the same. The important piece of work in this is the creation of the CNAME file. (Don’t ask me how it maps, I have no idea!)

Speaking out

In order to post, all you need to do is create a new file in the _posts directory called YYYY-MM-DD-title.md, and you will be able to write Markdown to your heart’s content! Then commit the file, and the Jekyll engine running on GitHub will auto-magically convert everything to static pages and host it for you.
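Putting that together, a new post looks something like this (hypothetical date and title — Jekyll requires the YYYY-MM-DD-title.md naming convention):

```shell
# Create a new post with minimal front matter
mkdir -p _posts
cat > _posts/2016-04-20-my-first-post.md <<'EOF'
---
layout: post
title: My First Post
---
Write your Markdown to your heart's content!
EOF
```

Commit and push, and GitHub Pages rebuilds the site automatically.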

Job Done!