Battle Among The Stars, Day 8

It had been a while since I had found time to work on this project – I had been busy with a few other things in between, and while I was inactive the Unreal Engine progressed further – and I got a better understanding of how a few things are supposed to work.

So when I decided to start with Day 8, my agenda was:

  • Get the game ported to Unreal Engine 4.9
  • Rebuild the way the camera is handled
  • Start using components the way they are supposed to be used – to add behaviour to objects

Porting the Game to Unreal Engine 4.9

This point was actually quite straightforward: the automated conversion by the engine did a mostly good job and only left me with two problems, both being error messages that were displayed but actually had no impact. In both cases this seemed to be related to artefacts from previous versions of the engine / previous upgrades.

One was solved simply by deleting the folder in which UE saves some extra information (Saved/Config); the other required me to edit an ini file (Config/DefaultEngine.ini) to remove some references/aliases that were set and pointed to the path of the 4.8 version.

After those two things were taken care of, the game ran flawlessly within Unreal Engine 4 – in fact, the upgrade also seems to have eliminated a bug that I had been having problems with since 4.7.

Rebuild the way the camera is handled

I was unhappy with how the camera worked – panning around with the mouse felt rather awkward, and you couldn’t move the camera by just getting the mouse close to the edge of the screen.

To save some time I decided to buy the “RTS Camera PC and Mobile” for about 22 Euros. When I tried to add it to my project I got my first surprise: it wouldn’t let me add it to a 4.9 project. To get around this I added it to a 4.8 project and then exported it from there into my game.

When I tried out the demo level that came with it, it didn’t work though – after a while I figured out that for some reason (maybe the way I got the files into my project?) the hit boxes it created for moving the camera when the mouse is near the edge had their names set to “Unknown”, while the rest of the blueprint relied on the names being set to Left/Right/Bottom/Top.

After some time I realized that using this camera would require me to rebuild the way controls and my player are handled (the RTS Camera PC and Mobile is built as a pawn; in BATS, however, the pawn is the ship that the player controls). It also had a few further problems: moving the mouse close to the edge moves the camera, but moving it all the way to the edge stops it, and it requires that the game mode which comes with it is used – for no apparent reason. So in the end I changed my own camera instead, and threw away the stuff that I had just bought.

Start using components like they are supposed to be used

I figure this is a lesson I learned from working with Unity a bit. As I was running out of time, I just wanted to have used components once, to learn how to – so all I got to do was build a component that can be added to blueprint objects and makes them rotate at a random speed and in a random direction.
This was extremely straightforward and, as you can see in the video, worked like a charm.

Slim Framework 3 + king23/di (replacing pimple)

I’ve promised this post to a few guys on the Slim Framework IRC channel (#slimframework), so here we go.

As this post is not an introduction to Slim Framework 3 itself, I might skip over some information that might be important to you if you don’t know the framework. If something is unclear, just have a look at the code of the example app and/or the documentation of Slim Framework 3.

This post is basically in 3 pieces:

  1. the slim3-skeleton as an example – a very quick introduction to the relevant parts of slim3, and how it is “normally” done
  2. replacing pimple with king23/di
  3. ok, what’s the benefit?

the slim3 skeleton as an example

to start this off, let’s take akrabat’s slim3-skeleton as a quick example, as it basically does all the groundwork a simple app would do.

you can get the slim3-skeleton from github (note that I linked a specific commit; the skeleton might have been improved/advanced after this post, so if you want to use it, look up the changes and use the latest version ;)

Now if you look at line 4 of the app/routes.php file, you will find something like $app->get('/', 'App\Action\HomeAction:dispatch'), which defines a route for GET requests to / (the first parameter) that is routed to ‘App\Action\HomeAction:dispatch’ – meaning an instance of \App\Action\HomeAction will have its dispatch() method called.

If you have a look at the \App\Action\HomeAction class, you will notice that the constructor actually requires a \Slim\Views\Twig and a \Psr\Log\LoggerInterface instance to be instantiated,
so let’s have a quick look at where this instance is created: in the app/dependencies.php file, on line 44.
The app/dependencies.php file is used in the skeleton to configure all dependencies that the dependency injection container (pimple) takes care of.
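To illustrate, the pimple-style configuration looks roughly like this – a sketch from memory, not the skeleton’s exact code (the logger key and the Monolog usage are assumptions):

    // app/dependencies.php (pimple-style) -- rough sketch, not the skeleton's exact code
    $container = $app->getContainer();

    // a shared service, registered under a string key
    $container['logger'] = function ($c) {
        return new \Monolog\Logger('app');
    };

    // an "action factory": pimple has to be told explicitly how to
    // construct the action and which dependencies to pass to it
    $container['App\Action\HomeAction'] = function ($c) {
        return new \App\Action\HomeAction($c['view'], $c['logger']);
    };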

So what happens when a request to / is routed: Slim will break up the ‘App\Action\HomeAction:dispatch’ string, ask the DI container to provide an instance of App\Action\HomeAction, and then call the dispatch method on the resulting object.

replacing pimple with king23/di

If you read the documentation of Slim Framework 3, you will notice that they made the wise choice of allowing the DI container to be replaced: all the new DI container must do is implement container-interop/container-interop and provide the few basic dependencies that every slim3 application has.

When I wrote my post about Slim Framework 3 + King23/Castle23 + React a few weeks ago, I basically already wrote the stuff necessary for this; however, I decided to improve the interop part a bit (covering exceptions as well) and release a king23/di-interop package, which conveniently makes king23/di available to all frameworks supporting the interop interface.

So, what’s left to do?

  1. add king23/di-interop to composer
  2. edit public/index.php so it will use king23/di-interop as a container
  3. edit app/dependencies.php so it uses king23/di syntax for configuring the container

Hint: this is mostly to explain how the container is switched; if you want to use this, you don’t have to follow it step by step – you can simply use ppetermann/slim3-skeleton.

add king23/di-interop to composer

Now this one is simple enough: we add king23/di-interop: "~1.0" to the require block of the composer.json, then bump the requirement for the twig view up to ~2.0, as older versions depend on pimple-specific stuff – and in theory we are done.

Now what I like to do here is to add an additional “replace” block, where I claim that the package replaces pimple – but be careful with following that example, as it is a bit of an abuse of the replace feature that composer offers.

The idea is: I have king23/di-interop in my app, so I don’t need pimple anymore – so why have those files litter my vendor directory?
“replace” allows you to define that a package replaces another package in composer. The thing is, it’s meant to let the package you are writing the composer.json for say “I replace package x”, so in this case our skeleton app is replacing pimple with itself. As it requires king23/di-interop instead, and uses it, this does not cause any problems, but I think it’s important to keep in mind that it was meant to be used slightly differently.
(The way the documentation is written, it seems to be meant for packages that are drop-in replacements for others – say, if king23/di were a fork of pimple that could replace it while being 100% compatible on the public interface – but that’s a bit off from the reality of replaceable components. A better solution would be what I described a while ago in my post about Composer & virtual packages, but at the moment that’s not supported by all packages involved.)
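Put together, the relevant parts of the composer.json look roughly like this (a sketch; I’m assuming the twig view package name is slim/twig-view here):

    {
        "require": {
            "king23/di-interop": "~1.0",
            "slim/twig-view": "~2.0"
        },
        "replace": {
            "pimple/pimple": "*"
        }
    }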

edit public/index.php so it will use king23/di-interop as a container

When you want to replace the container used by Slim Framework 3, you do that by handing it the container as a parameter to the constructor. This parameter defaults to an empty array, and in the skeleton it actually takes the contents of app/settings.php.
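In practice, the relevant part of public/index.php ends up looking roughly like this (a sketch; file layout assumed to match the skeleton, with the container built in a new app/container.php as described below):

    <?php
    // public/index.php -- sketch of the container switch
    require __DIR__ . '/../vendor/autoload.php';

    // build the interop container instead of passing a settings array
    $container = require __DIR__ . '/../app/container.php';

    $app = new \Slim\App($container);

    require __DIR__ . '/../app/routes.php';

    $app->run();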

So, to make this work, we remove the line that loads the settings (we will do that elsewhere), and instead inject the return value of a new file called app/container.php, where we instantiate the king23/di-interop container (King23\Di\Interop\InteropContainer) and add the minimum required dependencies for slim.
Then we rewrite the app/dependencies.php file to use king23/di syntax for configuring it, and delete the ‘Action Factories’ section of it (yes, that’s right).
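For comparison with the pimple version above, the rewritten configuration looks roughly like this (a sketch; the registration method names are king23/di’s as I remember them, and the Monolog/Twig details are assumptions):

    // app/dependencies.php -- sketch using king23/di syntax
    $container->register(\Psr\Log\LoggerInterface::class, function () {
        return new \Monolog\Logger('app');
    });

    $container->register(\Slim\Views\Twig::class, function () {
        return new \Slim\Views\Twig(__DIR__ . '/../templates');
    });

    // note: no "action factories" here -- king23/di reflects on the
    // constructor typehints of HomeAction and injects these itself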

ok, what’s the benefit?

Well, first of all – running it with pimple is totally fine, so don’t take this as “pimple sucks” or anything in that direction.
What I don’t like is the “action factories” configuration that I removed at the end of the last step. Having the DI container use the classes (ideally interfaces, as with the logger) that are typehinted in the constructor to figure out what it should inject makes the whole thing feel a lot more “rounded” to me, and I can spend my time writing actual logic rather than repeating the same boilerplate code for the initialization of Actions (and other classes using the DI) over and over again.

a bit of salt

There are two downsides to using this:

  1. all the main examples in the slim documentation and in the skeletons out there assume you are using pimple. While that in most cases shouldn’t be an issue, you should know that not all of this has been written with the container-interop interface in mind – quite often you will find code that accesses the container through ArrayAccess, something that’s not available in container-interop (and not in king23/di either). I’m sure this will get better with time, but every now and then you will run into code that you have to refactor before you can use it.
  2. if king23/di does not know a key, but it can load a class by the name of that key, it will – and it will then use reflection to determine what to inject into the constructor. That’s how it can create the Actions in this post without being configured for each Action; however, it means that each time an object is created this way there is some reflection overhead that wouldn’t be there with the pimple solution. In the end it shouldn’t be much, but you should know about it.


King23/Castle23/ReactPHP/Slim Framework 3.0 – for the fun of it

Sometimes I sit down and just write some code for the fun of it. Yesterday I took the time to marry my own framework with Slim Framework 3.0beta1 – just for the fun of it. The result is quite fast.


I’m not even sure if I should make this post. Most certainly I shouldn’t make it without this warning: almost everything you will see in this post is unstable, so if you get the idea to use this in production, keep in mind that you have been warned, and that you will probably burn in hell (or have to invest some time to get this stuff stable).
Also: this is not a tutorial.

The Idea

The idea is simple: load Slim Framework 3.0 into my own framework, to allow it to use my framework’s React-based webserver.

Why? For fun, and to see how fast it gets.

Technologies used


ReactPHP

ReactPHP is a framework for event-driven, non-blocking I/O with PHP, which has been around for a while. I’ve toyed with it before, and even had some form of React integration done for the pre-rewrite king23.
One of my websites has been running the API for its route planner on a React-based app for about 10 months, and while I’ve had little to no trouble with it, I still feel ReactPHP is not exactly where it should be yet. It seems that many people in the PHP world either don’t care for non-blocking I/O, or go the “nodejs already has that, so let’s use nodejs instead” route. Sadly, there still aren’t non-blocking libraries for many of the I/O tasks one needs.

React’s code being less than well documented probably plays its part too, but personally I hope it will gain more momentum as it matures.
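For context, a minimal ReactPHP HTTP server (a sketch based on the react/http 0.x API as it was around this time) looks something like this:

    <?php
    // minimal react/http server -- sketch, react/http 0.x-era API
    require __DIR__ . '/vendor/autoload.php';

    $loop = React\EventLoop\Factory::create();
    $socket = new React\Socket\Server($loop);
    $http = new React\Http\Server($socket);

    // the same PHP process stays alive and serves every request
    $http->on('request', function ($request, $response) {
        $response->writeHead(200, ['Content-Type' => 'text/plain']);
        $response->end("Hello World\n");
    });

    $socket->listen(8080);
    $loop->run();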


King23

King23 is my own microframework – I created it in 2010 for a small statistics website that I built around a game I was playing back then. Ever since, I’ve been using it for almost every web project I’ve done in my free time, adding and changing things on the fly to match whatever I needed it to be.

As you might notice, you will find very little documentation on it, and its tests directory is still empty.

A few weeks ago I decided to rewrite parts of it to modularize it more, adding dependency injection (dropping the service locator it used before) and making it PSR-7 compliant. This spawned the king23/di package, and shortly after the knight23/core package as a starting point for console scripts.

As I’m writing this post, those changes (which are essential for this post) are still in the develop branch and have not been released yet (I’m struggling to put a 1.0.0 on it while I don’t feel it is fully stable yet).


king23/di

king23/di is the DI container I wrote a few months ago; it is probably the most stable component in this post, and its main purpose is to constructor-inject by interface.
In this post we will “abuse” it a bit, as we need it to be compatible with container-interop/container-interop to work with Slim Framework 3. More about this later on.

    $container = new \King23\DI\DependencyContainer();

    // register a service (the default for services is to be singletons)
    $container->register(MyInterface::class, function () {
        return new MyInstance();
    });

    // also you can register a factory (a fresh instance per injection)
    $container->registerFactory(MyOtherInterface::class, function () {
        return new MyOtherInstance();
    });

    // an example class to have our interface injected to
    class Example
    {
        public function __construct(MyInterface $myInstance)
        {
            // ...
        }
    }

    // get an instance with the injections done
    $instance = $container->getInstance(Example::class);

    // the same method can also be used to retrieve a dependency from the container
    // this will return the MyInstance instance
    $instance = $container->getInstance(MyInterface::class);

knight23/core

When I was rewriting parts of king23, I decided to change the way it handles the command line quite a bit from how it worked in 0.9.*. I also changed my understanding of it: from being a part of the framework (old) to being a framework for CLI apps, based on king23 (new).
The result will surely change over time, as much of it is still very rudimentary.


Castle23

So what’s castle23? After having used ReactPHP (see above) for a while, and even having had some previous integration in the old king23, I decided to build a small webserver within knight23, which would use ReactPHP and would be able to use PSR-7 compliant middlewares for everything. The main goal at that time was to run king23 applications within this webserver.

Slim Framework (3.0beta1)

Slim is a microframework which has been around for a while; lots of people know it, lots of people use it.

Lately the Slim guys have been rewriting huge parts of it, basically to allow incorporating other DI containers, to support PSR-7, and to modernize it. Part of the result is that Slim Framework 3.0 can be used as a PSR-7 compliant middleware – which made me look at the option of loading Slim into my middleware queue, replacing my own router with the whole framework, and thereby having an extremely easy and effortless way of loading Slim Framework 3.0 into castle23.

The Execution

I had castle23 already built, and the Slim Framework guys had released beta1 of 3.0 – so I figured all that was left to do was to build a configuration.

I will list a few files here and comment on what is done in them. I won’t go into much detail though, as this is pretty straightforward and the code should be easy enough to read.


bin/slim-castle23

The script bootstrapping and starting the actual knight23 application is almost identical to the one from plain castle23, the main difference being that there is no default route added (as the router is removed from the middleware queue, that wouldn’t make any sense), and the package name / version fields have been set to something more sensible.


This file is loaded by the bin/slim-castle23 script and basically loads all the files containing the dependencies to be injected by king23/di.
As most of the services are happy with the defaults from the castle23 example, I simply load those files from there, and only load the ones that differ from this project’s config/services/ folder.


This basically contains the middleware queue for castle23, which has the following entries:

  • Whoops – this will catch all uncaught exceptions and print them prettily
  • StaticFileMiddleware – this middleware will serve static files from a public/ folder (or whichever other folder is configured through the \King23\Core\SettingsInterface service)
  • The \Slim\App as it is registered in config/services/slim-app.php


Once I had the first take on castle23 + Slim Framework running, I decided to adapt the king23/di injection so it could be used within Slim Framework, replacing the one used by default.
As Slim has a handful of services that it requires by default, those are configured in this file. Basically everything except the notFoundHandler is identical to the Slim default setup; the notFoundHandler, however, had to be replaced with a simple one of my own, as Slim’s default tries to use a method on the request object which is not PSR-7 compliant, and thus doesn’t exist in the request object that’s given to the middleware.


config/services/slim-app.php

Basically I just register the Slim App here – under its own class name, as I have no interface for it. And as this is just an example, I go ahead and declare a basic hello world route here too. Then, when I added the DI container, I decided to use it at least once in the example to show that it works – hence I’m reading the package name and the version of the runner (which have been set in bin/slim-castle23).
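A sketch of what that registration can look like (the route and the details are assumed for illustration, using king23/di’s API as shown earlier):

    // config/services/slim-app.php -- sketch, details assumed
    $container->register(\Slim\App::class, function () use ($container) {
        $app = new \Slim\App($container);

        // a basic hello world route
        $app->get('/hello/{name}', function ($request, $response, $args) {
            return $response->write('Hello ' . $args['name']);
        });

        return $app;
    });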


So, when I decided to hand my own DI container to Slim, I needed to make sure the container-interop interface was implemented. The solution was to simply extend my container with a wrapper that maps the two required methods to something equivalent in my container.
That said: it still feels a bit messy, because while it works, it is now looking up instances by arbitrary strings, which is not exactly how it was meant to be used – once I’m over that I’ll probably move this file into its own package, so the DI can in theory be used by everything supporting container-interop.
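The wrapper boils down to something like this sketch (the class name and the has() logic are assumptions; container-interop only requires get() and has(), and a proper version would also throw the interop NotFoundException):

    // sketch of the interop wrapper
    use Interop\Container\ContainerInterface;

    class InteropContainer extends \King23\DI\DependencyContainer implements ContainerInterface
    {
        // container-interop's get(): map to king23/di's instance lookup
        public function get($id)
        {
            return $this->getInstance($id);
        }

        // container-interop's has(): assume anything resolvable counts
        public function has($id)
        {
            return interface_exists($id) || class_exists($id);
        }
    }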


Now, as you might have noticed, marrying the two frameworks was actually quite easy – but what does the result give us? Some Frankensteinish monster that loses tons of performance due to the amount of abstraction and frameworks?

To compare performance, I basically built the same app as the one described here; however, for the pure-slim approach I left out reading the package/version, as that information doesn’t even exist there. So the pure-slim approach is basically: get name from URI, print hello name; while the castle23-based solution is: get name from URI, print hello name, print package and version names.

I then ran Apache Bench with a concurrency of 100 and 1000 requests against the slim-only installation. This is over the network, so there might be slight fluctuations in the measurements.
In this case it is running through nginx as the webserver and HHVM as the runtime,
the command being:
ab -c 100 -n 1000

    Concurrency Level: 100
    Time taken for tests: 1.310 seconds
    Complete requests: 1000
    Failed requests: 0
    Total transferred: 219000 bytes
    HTML transferred: 11000 bytes
    Requests per second: 763.23 [#/sec] (mean)
    Time per request: 131.022 [ms] (mean)
    Time per request: 1.310 [ms] (mean, across all concurrent requests)
    Transfer rate: 163.23 [Kbytes/sec] received

763.23 requests per second – that’s not bad.

The next run was to start bin/slim-castle23 listening on port 8001:
hhvm bin/slim-castle23 serve 8001
and then run Apache Bench against that:
ab -c 100 -n 1000

    Concurrency Level: 100
    Time taken for tests: 0.476 seconds
    Complete requests: 1000
    Failed requests: 0
    Total transferred: 72000 bytes
    HTML transferred: 53000 bytes
    Requests per second: 2102.35 [#/sec] (mean)
    Time per request: 47.566 [ms] (mean)
    Time per request: 0.476 [ms] (mean, across all concurrent requests)
    Transfer rate: 147.82 [Kbytes/sec] received

2102? Now that is an awesome result right there.

So why is it that much faster?
The main difference should be that the application has to be compiled only once and then keeps running – so there is no initialization, class loading, etc. going on; once the first request has been served, it is just a matter of executing the same code over and over again.
I would like to give some credit to the non-blocking factor as well – however, with there being very little I/O here, there isn’t much of a difference to be made by that.


  • Running Slim Framework 3.0 apps within castle23 is simple
  • Running Slim Framework 3.0 apps within a React-based webserver can actually help with performance
  • Sometimes toying around with tons of unstable components just to see where it gets you can be a lot of fun.


PS: thanks for taking the time to read this; next time I might look into a more practical post again :)

A look at Elite: Dangerous – Powerplay

Powerplay is the latest update to Elite: Dangerous; it adds mechanics that deal with ownership of systems.

I’ve been waiting for this patch, as E:D had become a bit dull lately, and the basic ideas seemed to promise some conflict and maybe even some PvP.

Having seen that it was released yesterday, I decided to spend a few hours and see if/how it works.

So, when I logged in, I located the Powerplay menus quickly enough and had a hard decision to make: to which Power would I pledge my allegiance? Now, I hold the Knight rank within the Empire, so picking one of the Empire Powers seemed the right thing to do, as it would go well with my reputation / rank progression – or so I thought. So which of the Empire factions? I ended up with what felt like a tough choice: a woman who is supposed to increase bounties & payouts – so far my only income source – OR some dude who lowers the prices for high-tech weapons – something I have to throw a lot of money at, because my ship currently has only two pulse lasers. Well, I decided for the income.

Both had the benefit that two of their three activities are combat, so I expected a lot of combat missions.

At the time of pledging, however, I was still outside the influence sphere of A. Lavigny-Duval, but the system was in a status where I could have started a preparation – at least that’s what the Powerplay missions screen told me. For that I was supposed to get some documents proving corruption.
Well, ok, it didn’t really tell me where to get those – or I missed it – but I thought “well, why not, I guess I have to shoot up a few local officials and will get those documents”. All it gave me was some local bounty, so after a while I decided to dock one last time, upgrade my FSD and buy the cheapest fuel scoop I could find with my last money (my wallet was at ~450 credits after that), and head towards one of those controlled systems.

14 jumps later I had made the following experiences:

  • spending a lot of time orbiting a sun to scoop fuel
  • spending a lot of time in jump animations
  • being intercepted by 2 Eagles belonging to the new Powerplay pirate faction – as it turns out, one gave me nothing, and the other one 420 credits in a local voucher, which I decided not to collect, seeing that the station was 28k light seconds away.

So, after flying about 2k ls to the station in the controlled system, I finally looked at the Powerplay missions screen again. It tells me I can’t do fortifying because I don’t bring any goods for that. Hm, so more hauling crap. It also tells me that it accepts vouchers from expansion fights – so yay, fighting things.

Oh, and it also offered to sell me 10 of those corruption papers that I would have needed in the system I started in – so that’s how you get them.

A quick look at the map shows me that the next expansion system (of which there are three in total) is about 26 jumps away.

26 boring, mind-numbing, nothing-happening jumps (including extended scooping periods orbiting nameless suns) later, I arrive at a location where I can do “Crime Sweeps” – the expansion action within this faction. Now a bit excited, hoping this would turn into fun, I drop into one of those crime sweep signatures.
I’m constantly under fire from several quite heavy ships; I’m getting euphoric, thinking it would be my lucky day. The ships within that signature seem a bit tougher than the ones in the usual conflict zones, so one of the Anacondas I’m working on taking down actually manages to burn through my shield on various occasions and take me down to 36% hull – but in the end I’m the victor.

After cutting through 3 Anacondas, 2 Pythons, 2 Vipers, and 2 Eagles, I feel I should have earned quite a bit and should collect my rewards, maybe repair the ship and upgrade it a bit. A quick look at my list of rewards shocks me though – rather than something saying “Powerplay Rewards: 1000000 Credits”, it says “Powerplay: 9”. So it lists basically a reward of 1 Powerplay voucher per killed ship.

Hm, somehow that seems weird, as the Anacondas should give a lot more reward than an Eagle, right? So I’m thinking “maybe they just screwed up the display, and it will all resolve into cash-cash-cash once I turn ’em in”.

There’s a station 3 ls away from where I am; a quick docking maneuver later I learn that this station will not offer me an option to hand in the vouchers. So I figure “well, I guess I have to go back into one of the systems in the influence sphere of my Power”, and pick the closest system within the bubble that describes said sphere.
8 boring jump-n-fuelscoops later, the 36% of my ship that still move (and me) make it to a system that is “exploited” by the Power I’m working for – and as it turns out, exploited systems are not good enough for turning in the vouchers either.
So back to jump-n-fuelscooping, for another 13 jumps. Finally arriving in a “controlled” system again, I head to my Power’s contact – and finally get my reward for all this:

9 merits (one per voucher), and 900(!) credits – not even enough to repair my Vulture.

So what can I do with the 9 merits? Maybe there is some money in that?
Well, after some research:

Merits are basically something that helps you rank up within your Power’s rating system.
Right now I’m at Rating 1; that’s the one I entered with.

What does Rating 1 give me?

  • 1000 credits per week
  • 0 preparation nominations (those allow you to influence which system should be targeted for takeover next)
  • 10 Power commodity allocation every half hour (what’s that, you ask? Remember those corruption documents? That’s the cap on how many you can get – at my rating, a maximum of 10 per 30 minutes)

so basically no real reward either.

At 100 merits I would get Rating 2, which gives

  • 50,000 credits
  • 2 Preparation Nominations
  • 15 commodity allocation per 30 min
  • 20% more bounties in controlled/exploited systems (the first real benefit)

Rating 3 at 750 merits is barely worth talking about: 500k credits a week, 5 nominations, and 20 Power commodities per 30 min.

However, the real rewards would be at Rating 4, where you start to have a weekly income of 5,000,000 credits, and Rating 5, where you get 50 million credits weekly and double the bounties in controlled/exploited systems.
The catch is that you need 1500 merits for Rating 4 and 10,000 for Rating 5.

Now, if all that still seems reasonable to you because “oh, but you get quite a high passive income, even though you will lose money at first” – well, it’s not that either; let me quote from the rating screen of my Power: “To maintain your rating you must earn enough Merits to unlock it again this week”.

When playing Powerplay,
you get 1 merit and 100 credits for killing one ship, no matter what size. To make it worth your while you need to at least get to Rating 4, which means 1500 dead ships. Per week. So, just for the fun of it, let’s assume you manage to have the easiest ship to kill that I saw – the Eagle – constantly supplied, and you need a minute to take each Eagle down: then you have to constantly grind 25 hours to get to Rating 4. But killing 60 ships an hour is quite unrealistic; it’s more likely that you do 20 to 30 in an hour.

I assume (I haven’t tried, as I have no cargo hold) that delivering the Power commodities (the corruption papers) yields 1 merit per paper, which at Rating 1 would put you at about the same speed of making merits as the guy shooting ships, and at Rating 3 at about 1.5 times the speed (and at Rating 5 you’d reach 100 merits an hour) – but that’s pure speculation (and if I’m wrong, it would mean that once again the hauling/trading profession got the advantage – which is kinda odd for a combat expansion).

Powerplay’s basic idea is a good one: having constant conflicts driven by factions, and various ways to contribute to those efforts, seems a good way to keep people from getting too bored. However, the cost (+ risk) vs. reward balance is so far off that there is (at least for myself) simply no point in playing it.

building a game that reads your mind in 6 hours… OR the world’s first jump ‘n’ blink!

What happened before:

A few weeks ago, on the 17th of April, I attended the first European NeuroGame Jam, an event about creating games utilizing the ability to read brainwaves through an off-the-shelf EEG neuro-headset such as the Emotiv EPOC.

Sadly, the timeframe set for that game jam (just a couple of hours) was way too short to actually build something, so we ended up having Jens and Iris do their presentations – which taught the rest of us how the technology works and the basics of the toolchain Iris created. After that we spent most of the time coming up with and discussing ideas and aspects of what one could do with this technology, while some of us gave the devices a spin, and we had some fun comparing brainwaves – but in the end we didn’t have the time to actually build something.

Also, there are still a few problems with the technology – the two main points probably being:

  • While the machine learning that Iris built into the toolchain works extremely well, it has to learn all commands that can be used within a game, for every user, every time it is used (as taking the headset off and putting it on again might already change what the device picks up from you) – this means games built upon this would still have a ramp-up time of 10+ minutes from “I want to play” to “I can play now”.
  • Latency is too high.

I left having had a good evening with some very interesting discussions, but I thought the next time I’d hear about the technology would be in a few years, when it is more mature and more devices are available.

So how did I end up building a game?

A few weeks later Iris contacted me, as she still needed someone to implement something using her toolchain for the evaluation of her work.

Now the catch: the toolchain is built for Unity3D, which I had very little experience with – but the invite was still standing, so last Monday I visited her at the TECO, and after a short refresher on what I had learned during the first European NeuroGame Jam, we set out to build the game.

Knowing from the game jams I’ve been to that time is extremely valuable, and knowing that not knowing Unity would cost me some time, I set myself the following goals before thinking about an idea:

  • the game must be completely playable
  • as the game should have a quick “let me try that out” experience, doing a long learning-phase was a no-go.
  • asset creation? uhm.. not with less than 6 hours left

So I came up with a very, very simple game: the player’s character is a simple ball which needs to get from the floor to the ceiling using a set of steps – where some steps are white and some steps are black. The ball falls/jumps through white steps, while it can stand/move on black steps.

The controls for movement are regular keyboard commands (to avoid frustration factors like a learning phase for the user).

The twist? To get up to the ceiling, the player needs to switch the colors of the steps. The color change is linked to the game using Iris’ toolchain to watch the player’s brainwaves and find the very distinctive pattern that occurs when someone blinks. Whenever a blink is registered, the game changes the colors.

So to win, the player needs to jump from step to step, blinking at the right moment, thus allowing him to reach the top.

The main problem building it was my unfamiliarity with Unity, but since Unity uses C#, at least I didn’t have to learn an unfamiliar language. (I think what took me the most time was actually trying to figure out how to color the objects. What also cost me some time was that changes made while the game is in “play” mode are forgotten afterwards.)

Well, in the end the game worked, and we had quite some fun trying it out – multiplayer (or extra-hard mode) is when one person does the jumping and the other the blinking ;)

So where can you see this game?
Well, that is a bit of a problem at the moment; even if I provided a download, you would need the toolchain Iris built, as well as an Emotiv EPOC headset (which is quite expensive).

So the best I can offer is this video:

A few thoughts about composer and how people use it

composer has changed the PHP ecosystem like no other tool introduced before it – almost everyone is using it today. Now, I have written about composer before and have always been a big proponent of using it. However, as I have spent some time looking more closely at a few things, there are a few problems (some with composer, some with how people (ab)use composer) that I would like to write about.

I’m pretty certain that my point of view is not the only valid one, and that some of you will disagree with what I say – use the comments if you want to add something or have questions, and if you blog a response, please use a pingback so I’ll notice your post.

Please keep in mind: this is not a rant against composer. You should be using it.

Number 1 – composer gets slow and resource hungry

composer builds up quite huge dependency graphs to find out which packages to install, which means projects that depend on packages that have a lot of dependencies with history, and lax to no constraints on those 2nd-level dependencies, will cause composer update to eat tons of RAM and take quite a while.

One option to “fix” that is to add your 2nd-, 3rd-, etc. level dependencies to your root composer.json with tighter version constraints. While this actually limits the problem to some extent, it’s not a real fix, as it means sooner or later you end up doing composer’s job manually yourself.

Another option is to turn off your memory limit and keep supplying more memory to your machines; however, I’ve been told that this breaks for Windows users. Also, it gets quite cumbersome when a composer update starts taking longer than a cigarette/coffee break.

That’s not the only thing that keeps it slow – composer retrieves metadata (and the contents) in a blocking manner, which means that if one request is slow, the whole process is held up. Using something like reactphp could actually decrease the time composer needs for those tasks.

Number 2 – people are using composer as an installer

This is something I did wrong myself when I started working with composer. Not only does composer make it easy to be abused in that manner; having the option to have composer check your PHP version and your PHP extensions makes it seem like a logical step.

However, that is problematic for various reasons:

  • composer is a developer tool; as such it shouldn’t run on live machines
  • composer will fetch dependencies from a lot of sources, which means your live systems would need to be able to fetch from there (ask your admin what he thinks about that), and to not get rate-limited on github, for example, you would need to put credentials on your live machine…
  • if you don’t back up your dependencies (and when using composer as an installer you don’t have an explicit step for that), you might get a bad surprise when someone decides to remove their project. Packagist only stores metadata; if someone removes the git repository or the dist zip from github (or any other repository), you can only pray that you have cached it somewhere. I’ve seen a handful of packages on packagist that are not available on github anymore.
  • your deployment becomes dependent on systems that are not under your control – if github gets DDoSed again, or packagist’s provider messes up routing again during deployment, you might end up with a much longer maintenance window than anticipated.

Let me describe, from my perspective, how I think composer should be used:

  1. a developer initializes a new project using composer create-project
  2. a developer adds requires as needed and runs composer update to create the composer.lock file, which is added to the SCM. This developer is at this point responsible for verifying that the version(s) he updated to work as required.
  3. when pulling a changed composer.lock from the SCM, each developer working on the project runs composer install to get the correct setup.
  4. once a release is necessary, the person holding the release manager role creates a build of the project (it does not matter whether he runs a build script or triggers a build in your build system; in the end someone is responsible for deciding when something should be released). Most likely this build will do something like composer install --prefer-dist --no-dev -o; the result of this step should be a package (directory, zip, tarball, .deb, or whatever else your deployment system needs).
  5. the admin uses the result of 4. to install the release on a live machine

Now obviously this does not cover all variants. For example, if your release is a library that you want to publish on github, then step 4 is the process where you tag your version number, and step 5 is not relevant. Or you have more processes in between (staging/QA etc.). But this should illustrate how responsibilities should be separated, and it should prevent you from ending up in a half-installed-live-system situation.

Number 3 – people use their own paths

Now, I’m pretty sure this is one that quite a few people will disagree with me on.

Composer offers the ability to use custom installers, and even provides the composer/installers package, which offers an easy way to install packages into other folders.

Quite a few frameworks and CMSes make use of this, usually for one or two of three reasons:

  1. to install assets somewhere
  2. to keep its own directory structure (for legacy reasons)
  3. to distinguish various types of packages (regular dependencies, modules, themes, plugins… you name it)

where 3. seems to be the most common use case.
Obviously it’s convenient: by slapping those dependencies into different directories, your module loaders etc. can very easily detect a new module or a new plugin.

So why am I listing this as a problem?

  • own folders lower the interoperability of packages (ok, granted, this is bean counting – your blog’s theme wouldn’t work with my forum anyway)
  • own folders lower transparency (vendor code belongs in vendor/, own code in the project),
  • and they raise the learning curve (ok, if you only ever use one framework, you can probably remember 5 paths)

As you see, the argument against it is actually a bit weak; however, to me those are still valid arguments, and there are better solutions available (puli, assetic (for assets), or your installer managing a configuration file vs. using a directory to find all modules).

Number 4 – people don’t adhere to semver

Semantic Versioning actually makes a lot of sense – the version constraints in composer follow the assumption that semantic versioning is used; still, people choose not to use it. Why? I don’t know. But if you release BC-breaking changes in a PATCH release, then there is a special place in hell for you.
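To illustrate why this matters: a constraint like the following (package name hypothetical) tells composer to accept any 1.x release from 1.2 upwards, because under semver none of those releases are allowed to break backwards compatibility:

    {
        "require": {
            "acme/somelib": "~1.2"
        }
    }

If 1.2.1 of such a package breaks BC, every consumer using this constraint gets broken by a simple composer update.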

Number 5 – people don’t tag their releases / don’t release

This goes hand in hand with Number 6 – I’ve seen packages on packagist without a single tagged version. The thing is, the moment you put a package on packagist, you should in theory have a first version. If you don’t tag it, people can’t properly set their dependencies, which means they shouldn’t be using it.

Don’t be scared of releasing: release often, and release majors if needed.

Also, as I’ve seen this lately: there are people who use branches instead of tags – at least that’s how it looks to me (maybe someone wants to enlighten me about the reason) – which means you end up with versions like “2.3.x-dev, 2.2.x-dev, 2.1.x-dev”, and thus you will always be on unstable versions.

Number 6 – people release packages with dependencies to unstable versions

Do I even need to say anything about this? To my surprise it is not exactly uncommon, and quite often a result of Number 5. Oh, the joy of requiring a package that has a stable version but relies on a dev-package, which itself relies on a dev-package from a -develop branch.
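As an illustration (names hypothetical): a tagged, “stable” release whose own dependency can only ever resolve to a moving branch looks like this:

    {
        "name": "acme/stablepackage",
        "require": {
            "acme/randompackage": "dev-develop"
        }
    }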

Those who know composer a bit better might already have spotted the hidden evil in this combination: your root package’s composer.lock is the only composer.lock that is relevant. When you run composer update on your root package, you might end up with a completely different version of dev-randompackage-develop than the developer who built the dev-otherrandompackage, and yet another one than the one who built stablepackage – which makes for nasty heisenbugs.

So, what now? / TL;DR

As you can see, the things I listed are quite different aspects.

Number 1: tackling this problem probably requires a major rewrite of composer, and maybe even rethinking how to use it.

Number 2: one can get around part of this problem using Toran Proxy or a local satis install, but in the end the real solution might be more tutorials on how to get the installing part right.

Number 3: I named some better options already.

Numbers 4, 5, 6: that’s us – all of us who publish packages, work on open-source projects etc. We have to show more responsibility and discipline here, and clean up the chaos we are having right now.

Battle Among The Stars: Day 7

So, this time I guess I don’t have a fancy screenshot to show.

Most of the work on Day 7 was restructuring more of the Blueprints I had already built (if you read the Day 6 post, you might notice that this has become a common theme, which is due to me still learning the language & engine), as well as learning a bit more about how the Unreal Engine integrates C++, as I have found a few things I’d like to do that don’t seem possible with Blueprint yet.

Also, I seem to have a problem with the engine not running the construction scripts of my ship characters in “standalone” mode – while it works in the editor and in a finished build (so this is mostly just messing up my own testing).

Besides that, I’ve looked into how to do networking, for which I started building a networked Pong (no, not going to release that) in the Unreal Engine, just to see how it works.

My next goal is to build an Asteroids level – basically the first playable release of something, so people who are interested can play (… asteroids) and give me feedback on how the controls work for them. (If you are interested, drop me a comment – it might still be a few weeks until that happens though.)


not so fancy pong screenshot: