I haven’t played EVE much lately, for many reasons. But whenever I look at Google now, it presents yet another juicy bit of drama to me, quite often leaving me in a state of uncontrollable giggling out of schadenfreude.

When I first read about the themittani.com Kickstarter for a book about the Fountain War, I tried to Google what that Fountain War was, as I hadn’t seen anything that I’d call a relevant, power-shifting war since the fall of the Northern Coalition. I came up mostly empty-handed – apparently some minor conflict in which goons had some stakes. I was a bit surprised they [goons] had managed to partner up with CCP, who are usually quite strict about their IP, for something that looked to me like a propaganda piece that mostly goons [+allies] and a few collectors would buy. On the other hand, why not? If any community is large enough to do so, it surely would be goons, who with enough members buying their merchandise would be able to pull it off alone. I didn’t pledge, as the comics CCP had published with Dark Horse had already given me enough of an impression of how true to history stories told from the goon perspective are.

So when I look at the drama now, I wonder how far off the expectations of some were. Did the guys behind it really think they were creating something for the EVE community, with their propaganda? How could they? Goons, especially some of those involved, have always displayed great pride in not being part of the EVE community, but belonging to their own [SA], and now they are blaming the EVE community for being poisonous? Why? For not buying goon merchandise? Or for calling it out as such? Tell you what, spend a while in a local chat with a goon fleet present, and you will read poisonous.

So what are the lessons to learn here?

About goons:

  • The goon community is falling apart; they have trouble pulling off their own project, even when taking part just means buying a book.
  • The Mittani and friends still have trouble estimating their own importance, or lack thereof, in the EVE community.

About making money with fanfiction in a community driven game:

  • Either distance the project from all community conflict, keeping it at an objective level,
  • Or only count on support from the community you are catering to.
  • Kickstart with realistic numbers.
  • Don’t make people who are hated by large parts of the community figure heads or representatives for your project.

And finally:

  • Don’t be a goon.

regards, PP

The Hammerzeit IV GameJam / Rack Wars

So this weekend was time for Hammerzeit IV, an internal GameJam at my workplace. The Hammerzeit has a special place in my heart, so I was looking forward to this for a while.

Theme & The Idea

The theme this time was “growth”, which right away got everyone exploding with ideas. Personally, I came up with the idea that we ended up doing in our team quite early in the 20-minute idea-finding phase, even though I still had a space setting in mind for it.

As I didn’t really find an idea that I wanted to do more, and some other teams had already mentioned a space setting, I adapted and pitched the following idea:

  • You are a hacker.
  • You are fighting with another hacker over resources on a network (servers or currency).
  • The game is multiplayer (networked), head-on 1v1 PvP.
  • The game is timeboxed, so the winner is decided when the countdown hits zero.

And since the jam’s rules say a minimum of 5 people, I opened the team for 4 more, graphics artists and sound dudes preferred.

The Team

I ended up with two students (this year for the first time we allowed a limited set of students to join our holy event), one who wanted to do code, and one who wanted to do graphics.

The other programmer did not have any game jamming experience, and no experience with any game engine. The graphics artist had done two prior game jams and, by his own account, failed at both.

Creating the game


We started off talking through the game idea, roughly drawing the game out on a whiteboard so we would all have an understanding of the task in front of us. Then we split our responsibilities for the first day. Our graphics artist started drawing a few sketches; while I created a basic scaffold for our game and then approached building the multiplayer server browser, our second programmer started out by watching a few tutorials about Unreal Engine 4. Once he had a basic idea of how the engine works, we agreed that the next day he would take care of game logic, while I would see to the networking and the UI.


With very little sleep (1.5h for myself) we started sprinting further towards the goal. However, at around 17:00 it started to become obvious that we were falling behind the time plan – while most of the UI and network stuff was working, we were behind with everything else.
However, when we ended the day at ~02:00 in the morning, things were looking better again – even though we didn’t have a playable level yet, and balancing was still on the todo list, it seemed as if we had figured out most problems. So we created a small todo list for Sunday and called it a night.


When I arrived in our office we had roughly 5 hours (+1 hour for final build uploading etc.) left, and we started putting the pieces together. While doing so, we discovered that multiple things were not working, or were only working on the host, but not on the client.
At about 15:00 (one hour before the final deadline) it looked like we could make it, we would just have to use up that one-hour buffer a bit more, but the closer we got to 16:00, the worse it looked. At about 6 minutes to 16:00, we had a merge conflict in one of our blueprint files, so the build did not work at all; even stuff that had worked before appeared completely broken.

We failed. We didn’t make the deadline, and there was no excuse for it.
I felt completely devastated.

While the other teams started presenting their games I went for a few cigarettes and had two beers.

The aftermath

I hate to fail – especially this close – so I decided that I’d at least want the game running properly before going home. I moved my machine from the office we had used to the office I normally work in, and started debugging.

As it turns out, it wasn’t really the merge that was responsible for the debacle. Yes, something broke during the merge – but that was a very simple fix. However, a lot of the things that didn’t work in that final build still wouldn’t work – it seems we didn’t test changes well enough, or (again) only tested them when hosting a session, not on the client.

About 40 minutes later I had a working version – not polished, not balanced, not in time – but working. Which means that even though we failed the game jam, I will be able to sleep tonight.

I’ll put up a video, a download link, and some more information on the game later / tomorrow.

Until then, have a good night.

Edit 1:
I’ve put a first video on YouTube, enjoy.

Edit 2:
I’ve built a version for Windows and Linux that actually works. It’s not balanced yet, but, well, at least it’s playable. Downloads etc.

Edit 3:
A “tutorial”, as the interface might not be very intuitive.

Edit 4:
One of my colleagues blogged about the Hammerzeit on the Making Games Blog

Battle Among The Stars, Day 8

It has been a while since I last found time to work on this project – I was busy with a few other things in between, and while I was inactive the Unreal Engine progressed further – and I got a better understanding of how a few things are supposed to work.

So when I decided to start with Day 8, my agenda was:

  • Get the game ported to Unreal Engine 4.9
  • Rebuild the way the camera is handled
  • Start using components like they are supposed to be used – to add behaviour to Objects

Porting the Game to Unreal Engine 4.9

This point was actually quite straightforward: the automated conversion by the engine did a mostly good job, and only left me with two problems, both being error messages that had no impact beyond being displayed. In both cases this seemed to be related to artefacts from previous versions of the engine / previous upgrades.

One was solved simply by deleting the folder in which UE saves some extra information (Saved/Config); the other required me to edit an ini file (Config/DefaultEngine.ini) to remove some references/aliases that were set and pointed to the path of the 4.8 version.

After those two things were taken care of, the game ran flawlessly within Unreal Engine 4.9 – in fact, the upgrade also seems to have eliminated a bug that I’d had problems with since 4.7.

Rebuild the way the camera is handled

I was unhappy with how the camera worked – panning around with the mouse felt rather awkward, and you couldn’t move it by just getting the mouse close to the edge.

To save some time I decided to buy the “RTS Camera PC and Mobile” asset for about 22 Euros. When I tried to add it to my project I had my first surprise: it wouldn’t let me add it to a 4.9 project. To get around this I added it to a 4.8 project, and then exported it from there into my game.

When I tried out the demo level that came with it, it didn’t work though. After a while I figured out that for some reason (maybe the way I got the files into my project?) the hit boxes it created for moving the camera when the mouse is near the edge had their names set to “Unknown”, while the rest of the blueprint relied on the names being set to Left/Right/Bottom/Top.

After some time I realized that using this camera would require me to rebuild the way controls and my player are handled (the RTS Camera PC and Mobile is built as a pawn; in BATS, however, the pawn is the ship that the player controls). It also had a few further problems (moving the mouse close to the edge moves the camera, but moving it all the way to the edge stops it), and it requires that the game mode that comes with it is used – for no apparent reason. So in the end I changed my own camera, and threw away the stuff I had just bought.

Start using components like they are supposed to be used

I figure this is a learning effect from working with Unity a bit. As I was running out of time, I just wanted to have used components once to learn how – so all I got to do with them was a component that can be added to blueprint objects and makes them rotate at a random speed and direction.
This was extremely straightforward and, as you can see in the video, worked like a charm.

Slim Framework 3 + king23/di (replacing pimple)

I’ve promised this post to a few guys on the Slim Framework IRC channel (#slimframework on irc.freenode.net), so here we go.

As this post is not an introduction to Slim Framework 3 itself, I might skip over some information that could be important to you if you don’t know the framework. If something is unclear, I recommend having a look at the code of the example app and/or the documentation of Slim Framework 3.

This post basically comes in 3 parts:

  1. the slim3-skeleton as an example – a very quick introduction to the relevant parts of Slim 3, and how it is “normally” done
  2. replacing Pimple with king23/di
  3. ok? what’s the benefit?

the slim3 skeleton as an example

To start this off, let’s take akrabat’s slim3-skeleton as a quick example, as it basically does all the groundwork a simple app would do.

You can get the slim3-skeleton from GitHub (note that I linked a specific commit; the skeleton might have been improved / advanced after this post, so if you want to use it, look up the changes and use the latest version ;)

Now, if you look at line 4 of the app/routes.php file, you will find something like $app->get('/', 'App\Action\HomeAction:dispatch'), which defines a route for GET requests to / (first parameter), routed to ‘App\Action\HomeAction:dispatch’. This means an instance of \App\Action\HomeAction will have its dispatch() method called.

If you have a look at the \App\Action\HomeAction class, you will notice that the constructor actually requires a \Slim\Views\Twig and a \Psr\Log\LoggerInterface instance to be instantiated,
so let’s have a quick look at where this instance is created: in the app/dependencies.php file, on line 44.
The app/dependencies.php file is used in the skeleton to configure all dependencies that the dependency injection container (Pimple) takes care of.

So, what happens when a request to / is routed: Slim will break up the ‘App\Action\HomeAction:dispatch’ string, ask the DI container to provide an instance of App\Action\HomeAction, and then call the dispatch method on the resulting object.
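Sketched in plain PHP, the whole resolution flow looks roughly like this. View, Logger and the container wiring are simplified stand-ins for the skeleton’s real classes, so the snippet runs without Slim or Pimple installed:

```php
<?php
// Self-contained sketch of the flow described above (stand-in classes,
// not the skeleton's actual code).

class View {}
class Logger {}

class HomeAction
{
    private $view;
    private $logger;

    public function __construct(View $view, Logger $logger)
    {
        $this->view = $view;
        $this->logger = $logger;
    }

    public function dispatch()
    {
        return 'dispatched';
    }
}

// Pimple-style wiring: one factory closure per action, building the
// constructor arguments by hand.
$container = [
    'view'       => function ($c) { return new View(); },
    'logger'     => function ($c) { return new Logger(); },
    'HomeAction' => function ($c) {
        return new HomeAction($c['view']($c), $c['logger']($c));
    },
];

// What Slim does with the 'HomeAction:dispatch' string: split it, resolve
// the class through the container, then call the method.
list($class, $method) = explode(':', 'HomeAction:dispatch');
$action = $container[$class]($container);
$output = $action->$method();
echo $output, PHP_EOL; // prints "dispatched"
```

The point to notice is the explicit factory for HomeAction – that is the boilerplate this post is about getting rid of.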

replacing pimple with king23/di

If you read the documentation of Slim Framework 3, you will notice that they made the wise choice of allowing the DI container to be replaced; all the new DI container must do is implement container-interop/container-interop and provide a few basic dependencies that every Slim 3 application has.

When I wrote my post about Slim Framework 3 + King23/Castle23 + React a few weeks ago, I had basically already written the stuff necessary for this; however, I decided to improve the interop part a bit (cover exceptions as well) and release a king23/di-interop package, which conveniently makes king23/di available to all frameworks supporting the interop interface.

So, what’s left to do?

  1. add king23/di-interop to composer
  2. edit public/index.php so it will use king23/di-interop as the container
  3. edit app/dependencies.php so it uses king23/di syntax for configuring the container

Hint: this is mostly to explain how the container is switched; if you want to use this, you don’t have to follow it step by step – you can simply use ppetermann/slim3-skeleton.

add king23/di-interop to composer

Now this one is simple enough: we add king23/di-interop: "~1.0" to the require block of the composer.json, then bump the requirement for the Twig view up to ~2.0, as older versions depend on Pimple-specific stuff – and in theory we are done.

Now, what I like to do here is add an additional “replace” block, where I claim that the package replaces Pimple – but be careful with following that example, as it is a bit of an abuse of the replace feature that Composer offers.

The idea is: I have king23/di-interop in my app, so I don’t need Pimple anymore – so why have those files litter my vendor directory?
“replace” allows you to define that a package replaces another package in Composer. The thing is, it’s meant to let the package you are writing the composer.json for say “I replace package x” – so in this case our skeleton app is replacing Pimple with itself. As it requires king23/di-interop instead, and uses it, this does not cause any problems, but I think it’s important to keep in mind that it was meant to be used slightly differently.
(The way the documentation is written, it seems meant for packages that are drop-in replacements for others – say, if king23/di was a fork of Pimple which could replace it while being 100% compatible on the public interface – but that’s a bit off from the reality of replaceable components. A better solution would be what I described a while ago in my post about Composer & virtual packages, but at the moment that’s not supported by all packages involved.)
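Abbreviated, the relevant parts of the resulting composer.json look roughly like this (version constraints approximate, all other fields omitted):

```json
{
    "require": {
        "slim/slim": "^3.0",
        "slim/twig-view": "~2.0",
        "king23/di-interop": "~1.0"
    },
    "replace": {
        "pimple/pimple": "*"
    }
}
```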

edit public/index.php so it will use king23/di-interop as a container

When you want to replace the container used by Slim Framework 3, you do that by handing it the container as a parameter to the constructor. This parameter defaults to an empty array, and in the skeleton it actually takes the contents of app/settings.php.

So, to make this work, we remove the line that loads the settings (we will do that elsewhere), and instead inject the return value of a new file called app/container.php, where we instantiate the king23/di-interop container (King23\Di\Interop\InteropContainer) and add the minimum dependencies required by Slim.
Then we rewrite the app/dependencies.php file to use king23/di syntax for configuring the container, and delete its ‘Action Factories’ section (yes, that’s right).

ok, what’s the benefit?

Well, first of all – running it with Pimple is totally fine, so don’t take this as “Pimple sucks” or anything in that direction.
What I don’t like is the “action factories” configuration which I removed at the end of the last step. Having the DI container use the classes (ideally interfaces, as with the logger) that are typehinted in the constructor to figure out what it should inject feels a lot more “rounded” to me, and I can spend my time writing actual logic rather than the same boilerplate initialization code for actions (and other classes using the DI) over and over again.

a bit of salt

There are two downsides to using this:

  1. all the main examples in the Slim documentation or in skeletons out there assume you are using Pimple. While in most cases that shouldn’t be an issue, you should know that not all of this has been written with the container-interop interface in mind – quite often you’ll find code that accesses the container through ArrayAccess, something that’s not available in container-interop (and not in king23/di either). I’m sure this will get better with time, but every now and then you will run into code that you have to refactor if you want to use it.
  2. if king23/di does not know a key, but can load a class by the name of the key, it will – and it will then use reflection to determine what to inject into the constructor. That’s how it can create the actions in this post without being configured for each action; however, it means that each time an object is created this way there is some reflection overhead that wouldn’t be there with the Pimple solution. In the end it shouldn’t be much, but you should know about it.
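To make point 2 concrete, here is a minimal sketch of what reflection-based autowiring does – the general idea only, not king23/di’s actual implementation (Logger and HomeAction are made-up example classes):

```php
<?php
// Sketch: build an object by reflecting over its constructor's type hints.

class Logger {}

class HomeAction
{
    public $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }
}

function getInstance($class)
{
    $ctor = (new ReflectionClass($class))->getConstructor();
    $args = [];
    if ($ctor !== null) {
        foreach ($ctor->getParameters() as $param) {
            // Resolve every type-hinted constructor parameter recursively.
            // This reflection work happens on every instantiation - the
            // overhead mentioned above.
            $args[] = getInstance($param->getType()->getName());
        }
    }
    return new $class(...$args);
}

$action = getInstance(HomeAction::class);
```

No per-class factory configuration is needed – which is exactly why the ‘Action Factories’ section could be deleted.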


King23/Castle23/ReactPHP/Slim Framework 3.0 – for the fun of it

Sometimes I sit down and just write some code for the fun of it. Yesterday I took the time to marry my own framework with Slim Framework 3.0beta1 – just for the fun of it. The result is quite fast.


I’m not even sure if I should make this post. Most certainly I shouldn’t make it without this warning: almost everything you will see in this post is unstable, so if you get the idea to use this in production, keep in mind that you have been warned, and that you will probably burn in hell (or have to invest some time to get this stuff stable).
Also: this is not a tutorial.

The Idea

The idea is simple: load Slim Framework 3.0 into my own framework, to allow it to use my framework’s React-based webserver.

Why? For fun, and to see how fast it gets.

Technologies used


ReactPHP is a framework for event-driven, non-blocking I/O with PHP, which has been around for a while. I’ve toyed with it before, and even had some form of React integration done for the pre-rewrite King23.
One of my websites (cmdr.club) has run the API for its route planner on a React-based app for about 10 months, and while I’ve had little to no trouble with it, I still feel ReactPHP is not exactly where it should be yet. It seems that many people in the PHP world either don’t care for non-blocking, or go the “nodejs already has that, so let’s use nodejs instead” way. Sadly, there aren’t yet non-blocking libs for many of the I/O tasks one has.

React’s code being less than well documented probably does its own part, but personally I hope it will gain more momentum as it matures.


King23 is my own microframework – I created it in 2010 for a small statistics website that I built around a game I was playing back then. Ever since, I’ve been using it for almost every web project I’ve done in my free time, adding and changing stuff on the fly, very much to what I needed it to be.

As you might notice, you will find very little documentation on it, and its tests directory is still empty.

A few weeks ago I decided to rewrite parts of it to modularize it more, adding dependency injection (dropping the service locator it was using before) and making it PSR-7 conformant. This spawned the king23/di package, and shortly after the knight23/core package as a starting point for console scripts.

As I’m writing this post, those changes (which are essential for this post) are still in the develop branch and have not been released yet (as I’m struggling to put a 1.0.0 on it while I don’t feel it to be fully stable yet).


king23/di is the DI container I wrote a few months ago; it is probably the most stable component in this post, and its main purpose is to constructor-inject by interface.
In this post we will “abuse” it a bit, as we need it to be compatible with container-interop/container-interop to work with Slim Framework 3. More about this later on.

    $container = new \King23\DI\DependencyContainer();

    // register a service
    $container->register(MyInterface::class, function () {
        return new MyInstance();
    });

    // also you can register a factory (the default for services is to be singletons)
    $container->registerFactory(MyOtherInterface::class, function () {
        return new MyOtherInstance();
    });

    // an example class to have our interface injected to
    class Example
    {
        public function __construct(MyInterface $myInstance)
        {
            // ...
        }
    }

    // get an instance with the injections done
    $instance = $container->getInstance(Example::class);

    // the same method can also be used to retrieve a dependency from the container
    // this will return the MyInstance instance
    $service = $container->getInstance(MyInterface::class);

When I was rewriting parts of King23, I decided to change the way the command line is handled quite far from how it worked in 0.9.*. I also changed my understanding of Knight23 from being a part of the framework (old) to being a framework for CLI apps, based on King23 (new).
The result will surely change over time, as much of it is still very rudimentary.


So what’s Castle23? After having used ReactPHP (see above) for a while, and even having had some previous integration in the old King23, I decided to build a small webserver within Knight23, which would use ReactPHP and be able to use PSR-7 conformant middlewares for everything. The main goal at that time was to run King23 applications within this webserver.

Slim Framework (3.0beta1)

Slim is a microframework which has been around for a while, lots of people know it, lots of people use it.

Lately the Slim guys have been rewriting huge parts of it, basically to allow incorporation of other DI containers, PSR-7 support, and general modernization. Part of the result is that Slim Framework 3.0 can be used as a PSR-7 conformant middleware – which caused me to look at the option of loading Slim into my middleware queue, replacing my own router with the whole framework, and thereby having an extremely easy and effortless way of loading Slim Framework 3.0 into Castle23.

The Execution

I had Castle23 already built, and the Slim framework guys had released beta1 of 3.0 – so I figured all that was left to do was to build a configuration.

I will list a few files here and comment on what is done in them; I won’t go into much detail though, as this is pretty straightforward and the code should be easy enough to read.


The script bootstrapping and starting the actual Knight23 application is almost identical to the one of plain Castle23, the main difference being that there is no default route added (as the router is removed from the middleware queue, that wouldn’t make any sense), and the package name / version fields have been set to something more sensible.


This file is loaded by the bin/slim-castle23 script, and basically loads all the files containing the dependencies to be injected by king23/di.
As most of the services are happy with the defaults from the Castle23 example, I simply load the files from there, and only load those where there are differences from this project’s config/services/ folder.


This basically contains the middleware queue for Castle23, which has the following entries:

  • Whoops – this will catch all exceptions that are uncaught, and print them pretty
  • StaticFileMiddleware – this middleware will serve static files from a public/ folder (or whichever other folder is configured through the \King23\Core\SettingsInterface service).
  • The \Slim\App as it is registered in config/services/slim-app.php.


Once I had the first take on Castle23 + Slim framework running, I decided to adapt the king23/di injection so it could be used within Slim framework, replacing the one used by default.
As Slim has a handful of services that it requires by default, those are configured in this file. Basically everything except the notFoundHandler is identical to the Slim default setup; the notFoundHandler, however, had to be replaced by a simple one of my own, as Slim’s default tries to use a method on the request object which is not PSR-7 conformant, and thus doesn’t exist in the request object that’s given to the middleware.


Basically, I just register the Slim App here – for its own classname, as I have no interface – and, as this is just an example, I go along and declare a basic hello world route here too. Then, when I added the DI container, I decided to use it at least once in the example to show that it works – hence I’m reading the package name and the version of the runner (which have been set in bin/slim-castle23).


So, when I decided to hand my own DI container to Slim, I needed to make sure that the container-interop interface was implemented. The solution was to simply extend my container with a wrapper that maps the two required methods to something equivalent in my container.
That said, it still feels a bit messy: while it works, it is now looking up instances by arbitrary strings, which is not exactly how it was meant to be used – once I’m over that, I’ll probably move this file to its own package, so the DI can in theory be used by everything supporting container-interop.
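The wrapper idea can be sketched like this. To keep it self-contained, ContainerInterface and InnerContainer below are simplified stand-ins (with made-up method names) for the container-interop interface and the king23/di container:

```php
<?php
// Sketch: adapt a container with its own API to the two methods
// container-interop expects (get/has). Stand-in types, not the real code.

interface ContainerInterface   // stand-in for the container-interop interface
{
    public function get($id);
    public function has($id);
}

class InnerContainer           // stand-in for the king23/di container
{
    private $services = [];

    public function register($key, $factory)
    {
        $this->services[$key] = $factory;
    }

    public function getInstance($key)
    {
        return $this->services[$key]();
    }

    public function knows($key)
    {
        return isset($this->services[$key]);
    }
}

class InteropWrapper implements ContainerInterface
{
    private $inner;

    public function __construct(InnerContainer $inner)
    {
        $this->inner = $inner;
    }

    // map interop's get()/has() onto the inner container's own methods
    public function get($id)
    {
        return $this->inner->getInstance($id);
    }

    public function has($id)
    {
        return $this->inner->knows($id);
    }
}

$inner = new InnerContainer();
$inner->register('settings', function () { return ['debug' => true]; });

$wrapper = new InteropWrapper($inner);
```

The string-keyed lookups (`'settings'`) are exactly the “arbitrary strings” mentioned above – interop requires them, even though king23/di was designed around interface names.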


Now, as you might have noticed, marrying the two frameworks was actually quite easy – but what does the result give us? Some frankensteinish monster which loses tons of performance due to the amount of abstraction and frameworks?

To compare performance, I basically built the same app as the one shown here; however, for the pure-Slim approach I left out getting the package / version, as that information doesn’t even exist there. So the pure-Slim approach is basically: get name from URI, print hello name; while for the Castle23-based solution it is: get name from URI, print hello name, print package and version names.

I then ran Apache Bench with a concurrency of 100 and 1000 requests against the Slim-only installation. This is over the network, so there might be slight fluctuations in the measurements.
In this case this is running through nginx as the webserver and HHVM as the runtime, the command being:

    ab -c 100 -n 1000 http://my.dev.machine/hello/Peter

    Concurrency Level: 100
    Time taken for tests: 1.310 seconds
    Complete requests: 1000
    Failed requests: 0
    Total transferred: 219000 bytes
    HTML transferred: 11000 bytes
    Requests per second: 763.23 [#/sec] (mean)
    Time per request: 131.022 [ms] (mean)
    Time per request: 1.310 [ms] (mean, across all concurrent requests)
    Transfer rate: 163.23 [Kbytes/sec] received

763.23 requests per second – that’s not bad.

So the next run was to start bin/slim-castle23 listening on port 8001:

    hhvm bin/slim-castle23 serve 8001

and then run Apache Bench against that:

    ab -c 100 -n 1000 http://slim.devedge.eu:8001/hello/Peter

    Concurrency Level: 100
    Time taken for tests: 0.476 seconds
    Complete requests: 1000
    Failed requests: 0
    Total transferred: 72000 bytes
    HTML transferred: 53000 bytes
    Requests per second: 2102.35 [#/sec] (mean)
    Time per request: 47.566 [ms] (mean)
    Time per request: 0.476 [ms] (mean, across all concurrent requests)
    Transfer rate: 147.82 [Kbytes/sec] received

2102? Now that is an awesome result right there.

So why is it that much faster?
The main speed difference should be that it compiles only once and keeps a running application – so there is no initialization, class loading, etc. going on; once the request has been run once, it is just a matter of executing the same stuff over and over again.
I would like to give a bit of credit to the non-blocking factor as well – however, with there being very little I/O, there isn’t much of a difference to be made there.


  • Running Slim Framework 3.0 apps within Castle23 is simple.
  • Running Slim Framework 3.0 apps within a React-based webserver can actually help with their performance.
  • Sometimes toying around with tons of unstable stuff just to see where it gets you can be a lot of fun.


PS: thanks for taking the time to read this; next time I might look into a more practical post again :)

A look at Elite: Dangerous – Powerplay

Powerplay is the latest update for Elite: Dangerous, adding mechanics which deal with ownership of systems.

I’ve been waiting for this patch, as E:D had turned a bit dull lately, and the basic ideas seemed to promise some conflict and maybe even some PvP.

Having seen that it was released yesterday, I decided to spend a few hours and see if/how it works.

So, when I logged in, I located the Powerplay menus quickly enough, and then had a hard decision to make: to which Power would I pledge my allegiance? Now, I hold the Knight rank within the Empire, so picking one of the Empire Powers seemed the right thing to do, as it would go well with my reputation / rank progression – or so I thought. So which of the Empire factions? Basically I ended up with what felt like a tough choice: a woman who is supposed to increase bounties & payouts – so far my only income source – or some dude who lowers the prices for high-tech weapons – something I have to throw a lot of money at, because my ship currently has only two pulse lasers. Well, I decided for the income.

Both had the benefit that two of their 3 activities are combat, so I expected a lot of combat missions.

At the time of pledging, however, I was still outside the influence sphere of A. Lavigny-Duval, but the system was in a status where I could have started a preparation – at least that’s what the Powerplay missions screen told me. For that I was supposed to get some documents proving corruption.
Well, ok, it didn’t really tell me where to get those – or I missed it – but I thought “well, why not, I guess I have to shoot up a few local officials and will get those documents”. Well, all it gave me was some local bounty, so after a while I decided to dock one last time, upgrade my FSD and buy the cheapest fuel scoop I could find with my last money (my wallet was at ~450 credits after that), and head towards one of those controlled systems.

14 jumps later i had made the following experiences:

  • spending a lot of time orbiting a sun to scoop fuel
  • spending a lot of time in jump animations
  • being intercepted by 2 Eagles belonging to the new Powerplay pirate faction – as it turns out, one gave me nothing, and the other one 420 credits in a local voucher, which I decided not to collect, seeing that the station was 28k light seconds away.

So, after flying about 2k ls to the station in the controlled system, I finally looked at the Powerplay missions screen again. It tells me I can’t do fortifying because I don’t bring any goods for that. Hm, so more hauling crap. It also tells me that it accepts vouchers from expansion fights – so yay, fighting things.

Oh, and it also offered to sell me 10 of those corruption papers that I would have needed in the system I started in – so that’s how you get them.

A quick look at the map shows me the next expansion system (of which there are three in total) is about 26 jumps away.

26 boring, mind-numbing, nothing-happening jumps (including extended scooping periods orbiting nameless suns) later, I arrive at a location where I can do “Crime Sweeps” – the expansion action for this faction. Now a bit excited, hoping that this would turn into fun, I drop into one of those crime sweep signatures.
I’m constantly under fire by several quite heavy ships; I’m getting euphoric, thinking it would be my lucky day. The ships within that signature seem a bit tougher than the ones in the usual conflict zones, so one of the Anacondas I’m working on taking down actually manages to burn through my shield on various occasions and take me down to 36% hull – but in the end I’m the victor.

After cutting through 3 Anacondas, 2 Pythons, 2 Vipers, and 2 Eagles, I feel I should have earned quite a bit and should collect my rewards, maybe repair the ship and upgrade it a bit. A quick look into my list of rewards shocks me though: rather than something saying "Powerplay Rewards: 1000000 Credits", it says "Powerplay: 9" – it basically lists a reward of 1 Powerplay per killed ship.

Hm, somehow that seems weird, as the Anacondas should give a lot more reward than an Eagle, right? So I'm thinking, "maybe they just screwed up the display, and it will all resolve into cash-cash-cash once I turn 'em in."

There's a station 3 ls away from where I am, so a quick docking maneuver later I learn that this station will not offer me an option to hand the vouchers in. So I figure, "well, I guess I have to go back into one of the systems in the influence sphere of my power", and pick the closest system within the bubble describing said sphere.
8 boring jump-n-fuel-scoops later, the 36% of my ship that still moves and I make it to a system that is "exploited" by the power I'm working for – and as it turns out, exploited systems are not good enough for turning in the vouchers either.
So back to jump-n-fuel-scooping for another 13 jumps. Finally arriving in a "controlled" system again, I head to my power's contact – and finally get my reward for all this:

9 merits (one per voucher) and 900(!) credits – not even enough to repair my Vulture.

So what can I do with the 9 merits? Maybe there is some money in that?
Well, after some research:

Merits are basically something that helps you rank up within your power's rating system.
Right now I'm at Rating 1 – that's the one I entered with.

What does Rating 1 give me?

  • 1000 credits per week
  • 0 preparation nominations (these allow you to influence which system should be targeted for takeover next)
  • 10 power commodity allocation every half hour (what's that, you ask? Remember those corruption documents? That's the cap on how many you can get – at my rating, a maximum of 10 per 30 minutes)

So basically no real reward either.

At 100 merits, I would get Rating 2, which gives:

  • 50 000 Credits
  • 2 Preparation Nominations
  • 15 commodity allocation per 30 min
  • 20% more bounties in controlled/exploited systems (the first real benefit)

Rating 3 at 750 merits is barely worth talking about: 500k credits a week, 5 nominations, and 20 power commodities per 30 min.

However, the real rewards start at Rating 4, where your weekly income becomes 5,000,000 credits, and Rating 5, where you get 50 million credits weekly plus double bounty in controlled/exploited systems.
The catch is that you need 1500 merits for Rating 4 and 10,000 for Rating 5.
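For reference, the rating ladder above condensed into a small Python sketch (the figures are the ones quoted; the structure and names are my own, and I only track the merit thresholds and weekly credits here):

```python
# The Powerplay rating ladder as described above (figures as quoted in-game;
# structure and names are my own for illustration).
RATINGS = {
    1: {"merits": 0,      "weekly_credits": 1_000},
    2: {"merits": 100,    "weekly_credits": 50_000},
    3: {"merits": 750,    "weekly_credits": 500_000},
    4: {"merits": 1_500,  "weekly_credits": 5_000_000},
    5: {"merits": 10_000, "weekly_credits": 50_000_000},
}

def rating_for(merits):
    """Highest rating whose merit threshold was reached this week."""
    return max(r for r, info in RATINGS.items() if merits >= info["merits"])

print(rating_for(9))      # my 9 merits from the crime sweep -> still Rating 1
print(rating_for(1_500))  # -> Rating 4
```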

Now, if that all still seems reasonable to you because "oh, but you get quite a high passive income, even though you lose money at first" – well, it's not that either. Let me quote from the rating screen of my power: "To Maintain your rating you must earn enough Merits to unlock it again this week".

When playing Powerplay, killing one ship – no matter what size – gets you 1 merit and 100 credits. To make it worth your while, you need to reach at least Rating 4, which means 1500 dead ships. Per week. So, just for the fun of it, let's assume you have the easiest ship to kill that I saw – the Eagle – constantly supplied, and you need a minute to take each one down: then you have to grind a constant 25 hours to reach Rating 4. But killing 60 ships an hour is quite unrealistic; it's more likely that you do 20 to 30 in an hour.
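Spelled out as a quick sanity check in Python (a sketch; the 1-merit-per-kill figure and the kill rates are the ones from above):

```python
# Grind time at 1 merit (and 100 credits) per kill – no matter the ship size.
def hours_to_merits(merits_needed, kills_per_hour):
    return merits_needed / kills_per_hour

# Rating 4 needs 1500 merits, i.e. 1500 dead ships – every single week.
print(hours_to_merits(1500, 60))  # best case, one Eagle per minute -> 25.0 hours
print(hours_to_merits(1500, 25))  # realistic 20-30 kills/hour -> 60.0 hours
```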

I assume (I haven't tried, as I have no cargo hold) that delivering the power commodities (the corruption papers) yields 1 merit per paper, which at Rating 1 would put you at about the same merit-earning speed as the guy shooting ships, and at Rating 3 at about 1.5 times that speed (and at Rating 5 you'd reach 100 merits an hour) – but that's pure speculation. (If it's true, it would mean that once again the hauling/trading profession gets the advantage – which is kind of odd for a combat expansion.)

Powerplay's basic idea is a good one: constant conflicts driven by factions, with various ways to contribute to those efforts, seems a good way to keep people from getting too bored. However, the cost (+risk) vs. reward balance is so far off that there is (at least for me) simply no point in playing it.

building a game that reads your mind in 6 hours… OR the world's first jump 'n' blink!

What happened before:

A few weeks ago, on the 17th of April, I attended the first European Neurogame Jam, an event about creating games utilizing the ability to read brainwaves through an off-the-shelf EEG neuro-headset such as the Emotiv Epoc.

Sadly, the timeframe set for that game jam (just a couple of hours) was way too short to actually build something, so we ended up having Jens and Iris do their presentations – which taught the rest of us how the technology works and the basics of the toolchain Iris created. After that we spent most of the time coming up with and discussing ideas and aspects of what one could do with this technology, while some of us gave the devices a spin and we had some fun comparing brainwaves – but in the end we didn't have the time to actually build something.

Also, there are still a few problems with the technology – the two main ones probably being:

  • While the machine learning that Iris built into the toolchain works extremely well, it has to learn all commands that can be used within a game for every user, every time it is used (as taking the headset off and putting it on again might already change what the device picks up from you) – this means games built upon this would still have a ramp-up time of 10+ minutes from "I want to play" to "I can play now".
  • Latency is too high.

I left having had a good evening with some very interesting discussions, but I thought the next time I'd hear about the technology would be in a few years, when it is more mature to work with and more devices are available.

So how did I end up building a game?

A few weeks later Iris contacted me: for the evaluation of her work, she still needed someone to implement something using her toolchain.

Now the catch: the toolchain is built for Unity3D, with which I had very little experience – but the invite was still standing, so last Monday I visited her at the teco, and after a short refresher on what I had learned during the first European Neurogame Jam, we set out to build the game.

Knowing from the game jams I've been to that time is extremely valuable, and knowing that not knowing Unity would cost me some of it, I set myself the following goals before thinking about an idea:

  • the game must be completely playable
  • as the game should offer a quick "let me try that out" experience, a long learning phase was a no-go
  • asset creation? uhm… not with less than 6 hours left

So I came up with a very, very simple game: the player's character is a simple ball which needs to get from the floor to the ceiling using a set of steps, where some steps are white and some are black. The ball falls/jumps through white steps, while it can stand/move on black steps.

The controls for movement are regular keyboard commands (to avoid frustration factors like a learning phase for the user).

The twist? To get up to the ceiling, the player needs to switch the colors of the steps. Changing the colors is linked to the game using Iris' toolchain, which watches the brainwaves of the player for the very distinctive pattern that occurs when someone blinks. Whenever a blink is registered, the game changes the colors.

So to win, the player needs to jump from step to step, blinking at the right moment, thus allowing him to reach the top.
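The real implementation lives in Unity/C#, but the rule set itself is tiny. A minimal Python sketch of it (all names are mine, and for brevity it ignores the ball falling back through a step that turns white underneath it):

```python
# Sketch of the jump 'n' blink rules – not the actual Unity code.
# True = black step (solid, the ball can stand on it),
# False = white step (the ball falls/jumps through it).

class JumpNBlink:
    def __init__(self, step_colors):
        self.steps = list(step_colors)  # step colors, bottom to top
        self.height = 0                 # 0 = floor; above the last step = ceiling

    def blink(self):
        """A registered blink flips the color of every step."""
        self.steps = [not solid for solid in self.steps]

    def jump(self):
        """Jump one level up; only a black (solid) step holds the ball."""
        target = self.height  # index of the step just above the ball
        if target >= len(self.steps) or self.steps[target]:
            self.height += 1  # landed on a solid step (or passed the last step)
        # jumping at a white step: the ball falls back, height unchanged

    @property
    def won(self):
        return self.height > len(self.steps)

game = JumpNBlink([True, False])  # bottom step black, top step white
game.jump()                       # lands on the black bottom step
game.blink()                      # a blink flips all colors
game.jump()                       # top step is now black -> lands on it
game.jump()                       # above the last step -> ceiling reached
print(game.won)                   # -> True
```

The multiplayer mode from below maps directly onto this: one person drives `jump()`, the other triggers `blink()`.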

The main problem building it was my unfamiliarity with Unity, but since Unity uses C#, at least I didn't have to learn an unfamiliar language. (I think what took me the most time was actually figuring out how to color the objects. What also cost me some time was that changes made while the game is in "play" mode are forgotten afterwards.)

Well, in the end the game worked, and we had quite some fun trying it out – multiplayer (or extra-hard mode) is when one player does the jumping and the other the blinking ;)

So where can you see this game?
Well, that is a bit of a problem at the moment: even if I provided a download, you would need the toolchain Iris built as well as an Emotiv Epoc headset (which is quite expensive).

So the best I can offer is this video: