Mashed Games – What happens when 3 devs discuss bit operations

Mashed Games is the result of a discussion between three colleagues about bit operations.

About two years ago, when Peter Molyneux’s Curiosity Cube hit the media, we were curious too and found the idea pretty interesting. Unfortunately, during the first days Curiosity was down a couple of times due to the media coverage and the number of people trying to play it. So we started wondering: what might be the bottleneck there?

We came to the conclusion that it was probably caused by the number of connections and by the update service, which has to keep track of the cube’s tiles and broadcast the changes.

The next thing that came to our minds was: what would we do if we had to store a huge tilemap?

Storing a huge tilemap in an array or even a database with geospatial indexing sounded like a huge waste of resources to us. We believed the solution had to be simpler. A single byte could store the state of 8 tiles (set/unset), and tiles could be unset with a few bit operations. So the logical consequence for us was to store everything in binary and set/unset single bits with bit operations. The number of tiles per axis just had to be a multiple of 8. This way a tilemap could easily be stored in a single file. To prove our idea could work, we hacked a few scripts together, and a bit later we had a simple web interface which displayed a map of 64×64 tiles in ASCII characters. The file on disk was just 512 bytes. By clicking on a tile, we calculated the offset of the byte, unset the corresponding bit on the server and broadcasted the change to all connected clients.
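
A minimal sketch of that offset math in Node.js – the 64×64 size and the in-memory Buffer mirror our prototype, while the function names are just for illustration:

// 64×64 tiles, 8 tiles per byte -> 512 bytes in RAM / on disk
var WIDTH = 64;
var map = new Buffer(WIDTH * WIDTH / 8);
map.fill(0xff); // start with all tiles set

function unsetTile(x, y) {
  var tileIndex  = y * WIDTH + x;       // linear tile number
  var byteOffset = tileIndex >> 3;      // which byte holds the tile (divide by 8)
  var bitOffset  = tileIndex & 7;       // which bit inside that byte (modulo 8)
  map[byteOffset] &= ~(1 << bitOffset); // clear the bit
}

function isSet(x, y) {
  var tileIndex = y * WIDTH + x;
  return (map[tileIndex >> 3] & (1 << (tileIndex & 7))) !== 0;
}
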
That was already pretty cool, but we weren’t satisfied. Now that we had the proof of concept working, we wanted to make more out of it and created a game: Mashed Games.
The goal of this game is to destroy the entire map and the player who destroys the last tile wins! To make it more fun, we added a few characters from games we loved in our youth. The characters can be unlocked while progressing on the map.
On the client side, we use ImpactJS, which is an awesome framework to create 2D games in JavaScript. The server is running a few node.js processes. One process is called the “map master” and keeps track of the entire map. Everything is stored in a buffer object in RAM and gets flushed to disk periodically.
The clients connect to a couple of processes which use SailsJs to provide the REST and WebSocket endpoints. Each of the Sails processes connects to our map master to pass on the deleted tiles, and the map master broadcasts the changes periodically to all connected clients (socket.io + Redis pub/sub interface). To keep the amount of data in transfer small, we split the map into chunks of 256×256 bits each. A player only receives updates for the chunks that are visible on his screen. To achieve that, we created one socket.io room per chunk, and while the player moves on the map, we make him join and leave these rooms in the background.
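
To give you an idea of the room bookkeeping, here is a rough sketch of how a tile coordinate could be mapped to a chunk room – the room naming and the helper functions are assumptions for illustration, not necessarily our exact implementation:

var CHUNK_SIZE = 256; // chunks of 256×256 tiles

// e.g. tile (300, 70) lives in room 'chunk_1_0'
function roomForTile(x, y) {
  return 'chunk_' + Math.floor(x / CHUNK_SIZE) + '_' + Math.floor(y / CHUNK_SIZE);
}

// When the player's viewport moves, leave rooms that scrolled out of
// view and join the rooms of the chunks that just became visible.
function updateRooms(socket, visibleRooms, previousRooms) {
  previousRooms
    .filter(function (room) { return visibleRooms.indexOf(room) === -1; })
    .forEach(function (room) { socket.leave(room); });
  visibleRooms
    .filter(function (room) { return previousRooms.indexOf(room) === -1; })
    .forEach(function (room) { socket.join(room); });
}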

Now you might wonder: that was two years ago? Did it really take you that long to build a simple game?

No, it actually didn’t take us two years to write this game, but as Paul Dix put it on HN the other day: “In software development there are lies, damn lies and timeline estimates”. We simply lost motivation over Christmas 2012 and totally forgot about it until a few weeks ago. The game was already working back in 2012 and only needed some more polishing, some bugfixes and a few more features to make it even more fun. It would have been a shame if we hadn’t finished it, especially since it was a lot of fun writing it. Eventually we finished it, and we are really happy with the result.

Now there’s only one question left:
Did we do it better, or would we suffer from the same issues the Curiosity Cube had when it went viral?
We don’t know yet, but we are eager to find out. So please spread the word!

We hope you enjoy playing this game as much as we enjoyed coding it.

Sails cross-domain socket.io+passport authorization

I think this is one of the most annoying issues you can face. You’ve written your fancy sails application using sockets, passport and all the other fancy node modules. Right before you launch, you think it might be a good idea to put your application behind Cloudflare, and then you notice: Cloudflare doesn’t support socket connections on non-enterprise plans.

So, the first thing that will certainly come to your mind is to create a subdomain which connects directly to your server for all the socket.io stuff. That’s the right thing to do, but what about authorization and cookies? That’s the challenging part!
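
On the client side the split itself is simple – something along these lines, assuming your REST calls keep going through the Cloudflare-proxied www domain while socket.io connects straight to a dedicated subdomain:

// REST requests stay behind Cloudflare
var apiBase = 'https://www.my-startup.io';

// socket.io bypasses Cloudflare via a subdomain pointing directly at your server
var socket = io.connect('https://socket.my-startup.io');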

First of all, sails supports cross-domain requests out of the box since version 0.10.x – which is great! But this might be a bit misleading. The CORS and socket configuration all refer to the connection itself; the docs don’t say anything about cookies.

When you make a request to Sails and you have sessions and authentication enabled (for instance using the sails-generate-auth module), your application issues a cookie called sails.sid. When you then open the socket connection, your cookie won’t get passed along, since the request goes to another domain. Although the socket connection actually gets a cookie issued, it is not the same cookie, and your passport authorization will fail. Your REST API will work fine, while the socket.io connection stays unauthorized.

The solution is actually quite simple: in config/session.js you have to define a domain cookie. By default, the sails cookie is limited to the exact domain of your application, for instance www.my-startup.io, so a socket connection to socket.my-startup.io can’t send it. A domain cookie, however, is valid not only for your www. domain but also for all other subdomains. With a domain cookie set, the browser will send the same cookie for both REST and socket.io requests.

To define a domain cookie, just put the following code into your config/session.js:

module.exports = {
  cookie: {
    // A leading dot makes the session cookie valid for all subdomains,
    // so socket.my-startup.io receives the same cookie as www.my-startup.io
    domain: '.my-startup.io'
  }
};

That’s it! Clear the cookies in your browser, reload, and your REST API as well as your socket connections should be authorized!

Sails.js without express

A few weeks ago I started working with Sails.js and, apart from the not (yet) very powerful ORM Waterline, I’m quite happy with it. A few days ago I came to a point where I needed to create a worker process which does some housekeeping while listening to RabbitMQ messages. This worker needed access to the Sails.js configuration, services and models, but didn’t require an HTTP (Express) server.

The solution is quite simple and I’m sure this will be useful for others too. Here’s the source:

var app = require('sails');

app.lift({

  log: {
    noShip: true
  },

  // Don't run the bootstrap
  bootstrap: function (done){ return done(); },

  hooks: {
    noRoutes: function (sails){
      return {
        initialize: function (cb){
          sails.log.verbose('Wiping out explicit routes to make worker load faster...');
          sails.config.routes = {};
          return cb();
        }
      };
    },
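    // Disable all the hooks this worker doesn't need (HTTP, sockets, views, ...):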
    grunt: false,
    csrf: false,
    cors: false,
    controllers: false,
    policies: false,
    i18n: false,
    request: false,
    responses: false,
    http: false,
    sockets: false,
    views: false,
    pubsub: false,
  }
}, function (err) {
  if (err) throw err;

  // Sails is lifted, do something here
});
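
Inside the lift callback you now have the full Sails runtime – configuration, services and models – without an HTTP server. A tiny sketch of what the housekeeping could look like (the Job model and the 30-day threshold are made up for illustration):

// Runs inside the lift callback: Waterline models are available even
// though no HTTP server was started.
Job.destroy({ createdAt: { '<': new Date(Date.now() - 30 * 24 * 3600 * 1000) } })
  .exec(function (err, removed) {
    if (err) { return sails.log.error(err); }
    sails.log.info('Housekeeping: removed ' + removed.length + ' stale jobs');
  });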

MongoDB Aggregation: Group by any time interval

Since version 2.4, MongoDB offers a powerful aggregation framework that provides great functionality without the hassle of map/reduce. You can group your results by any (calculated) field. Using the date operators you can, for instance, group your results by day and sum up a value field.

Assuming you have a collection with a MongoDB Date field and a value field, you can aggregate your data like this (note that the group key always has to be called _id):

db.collection.aggregate([
  {
    '$group': {
      '_id': {
        'year':  { '$year':       '$datefield' },
        'month': { '$month':      '$datefield' },
        'day':   { '$dayOfMonth': '$datefield' }
      },
      'sum_value': { '$sum': '$value_field' }
    }
  }
]);

This is pretty neat when it comes to days, months or years, but what if you want to group your data by an interval of 2 hours?

Instead of saving your dates as MongoDB Dates, you could save them as Unix timestamps and have Mongo calculate the intervals. MongoDB offers a set of arithmetic operators like $subtract and $divide, but unfortunately it doesn’t yet provide any round/floor/truncation functions. But wait! There’s a solution!

You can (ab)use the $mod operator to remove the decimal digits.

In pseudocode, the formula looks like this:

60 seconds * 60 minutes * 2 hours = 7200 seconds

bucket = timestamp / 7200 - ((timestamp / 7200) mod 1)

In MongoDB your aggregation function would look like this:

db.collection.aggregate([
  {
    '$group': {
      // truncate timestamp/7200 to an integer by subtracting its fractional part
      '_id': {
        '$subtract': [
          { '$divide': ['$timestamp_field', 7200] },
          { '$mod': [{ '$divide': ['$timestamp_field', 7200] }, 1] }
        ]
      },
      'sum_value': { '$sum': '$value_field' }
    }
  }
]);

When you receive the result set, you can simply multiply the returned _id by 7200 and voilà! That’s the Unix timestamp of the start of each 2-hour bucket!
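
For example, turning such a bucket back into a JavaScript Date on the client (result stands for one document of the aggregation output):

// _id holds timestamp/7200 truncated to an integer,
// so multiply by 7200 (seconds) and by 1000 (milliseconds)
var bucketStart = new Date(result._id * 7200 * 1000);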


#2013 – Lost connection to MySQL server during query – OR: MySQL VOODOO!

Today one of my scripts prompted the #2013 MySQL error while querying a huge InnoDB table (31 GB in ~154 million rows). Some queries worked, some just failed.

Looking at the logfiles, I saw the following message:

InnoDB: Page checksum 1840120551 (32bit_calc: 1224736073), prior-to-4.0.14-form checksum 1811838366
InnoDB: stored checksum 3031359782, prior-to-4.0.14-form stored checksum 1811838366
InnoDB: Page lsn 47 623631862, low 4 bytes of lsn at page end 623631862
InnoDB: Page number (if stored to page already) 68664,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 22
InnoDB: Page may be an index page where index id is 35
InnoDB: (index "PRIMARY" of table "tracking"."banner" /* Partition "p4" */)
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 68664.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.

I tried to check the table -> mysql failed.

I tried to recover the table -> mysql failed.

The filesystem as well as the RAID were in a healthy condition, and even rebooting the machine didn’t do the trick (it would have been odd if it had). So I asked my friend Google, and Google pointed me to an interesting post:

I ran into a problem where, when dealing with HUGE tables (location tables for http://Stiggler.com ), there was an innodb page error, and mysql would try over and over to repair it, and would inform me that it could not repair it (and would then try again, etc).

[…]

I put the force_recovery mode to 1, then restarted mysqld, exported the entire database (i expected to get an error when it got to the bad table, but i never had a problem). After dumping the database, i removed the force_recovery option from my.cnf and restarted the service, and after a few moments, it started back up, and the problem was gone.

I can’t remember where exactly I found this quote (sorry!), but what this guy was basically saying is: dump your database and wait for the magic to happen!

So I did, tried my query again, and it just worked! I dunno what magic voodoo effect dumping a database with mysqldump has, but it just works. Really, this seemed like the most stupid approach to fixing this issue, but it did the trick!


Next time you hit the “#2013 Lost connection” issue... try dumping your database!

Creating Website Screenshots on Linux

Today a customer asked for automated screenshots of his website. So the first thing to do was to ask Google how this could be accomplished on a Linux web server. Most of the results referred to installing an X server, using Firefox and so on. This sounded a bit tricky and, quite frankly, over the top.

The solution I came up with is (IMO) much easier and less prone to failure: wkhtmltopdf + ImageMagick’s convert.

wkhtmltopdf uses Qt with a WebKit widget – it renders HTML pages, including JavaScript, just like Apple’s Safari browser. So creating a PDF from a website becomes very easy:

Usage: wkhtmltopdf -L 0 -T 0 -R 0 -B 0 http://mywebsite.com/ test.pdf

That’s all you need to create a PDF from your website including all images, JavaScript-rendered contents and such. The parameters “-L -T -R -B” define the border margins of the PDF document. The default value is 10 mm – I prefer zero.

Now that we have a PDF of the website, we need to convert it into a JPG file. This can be accomplished using ImageMagick.

Usage:  convert -density 600 test.pdf -scale 2000x1000 test.jpg

In case your webpage is pretty long, it might happen that your PDF contains multiple pages. ImageMagick then generates one JPG file per page, named test-0.jpg, test-1.jpg and so on. All you need to do is combine the images in a second step, which is also pretty easy and straightforward:

Usage:  convert test-* -append single.jpg

That’s it… now you have a single image file from your website.
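
If you want to trigger the whole thing from code instead of a shell script, a minimal Node.js wrapper could look like this – the paths and the helper name are just for illustration:

var execFile = require('child_process').execFile;

function screenshot(url, outputJpg, cb) {
  // 1. Render the page to a zero-margin PDF
  execFile('wkhtmltopdf', ['-L', '0', '-T', '0', '-R', '0', '-B', '0', url, '/tmp/page.pdf'], function (err) {
    if (err) { return cb(err); }
    // 2. Rasterize the PDF, scale it and stack all pages into a single JPG
    execFile('convert', ['-density', '600', '/tmp/page.pdf', '-scale', '2000x1000', '-append', outputJpg], function (err) {
      cb(err);
    });
  });
}

screenshot('http://mywebsite.com/', '/tmp/screenshot.jpg', function (err) {
  if (err) { console.error('Screenshot failed:', err); }
});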

Creating screenshots with FFmpeg is slow?

Just a quick note for everyone who’s using FFmpeg for creating screenshots from video files. Today I noticed that FFmpeg can be VERY slow on large/long movie files, but there’s a pretty neat trick to speed up the screenshot generation.

I used to create my screenshots this way:

ffmpeg -i /var/www/input.mov -y -f image2 -ss 1234 -sameq -t 0.001 "/var/www/screenshot.jpg" 2>&1

and on huge files it sometimes took minutes. The solution is simple:

ffmpeg -ss 1234 -i /var/www/input.mov -y -f image2 -sameq -t 0.001 "/var/www/screenshot.jpg" 2>&1

Put the -ss parameter in front of the input file and FFmpeg seeks within the input before it starts decoding, so it jumps to the selected position almost instantly instead of decoding and discarding every frame up to that timestamp.

hf! :)

6 ways to kill your server – learning how to scale the hard way

Learning how to scale isn’t easy without any prior experience. Nowadays you have plenty of websites like highscalability.com to get some inspiration, but unfortunately there is no solution that fits all websites and needs. You still have to think on your own to find a concept that works for your requirements. So did I.

