How to Turn Your Phone Into a Simple Game Controller using Monaca

Monaca has just released two exciting new plugins: the HttpServer and WebSocketServer plugins. They open up a lot of possibilities: think of a chat application, a multiplayer game, a game controller; the only limit is your imagination. Here we will create a simple game controller, but before that, let me give a brief introduction to the two plugins.

HttpServer Plugin

With this plugin, you can turn your phone into a web server. Yes! It's a mobile web server! When it is turned on, you can point any browser at your phone's IP address and port and have your phone serve the contents of any folder in your project. We will use this plugin to serve our HTML5 game.

WebSocketServer Plugin

With this plugin, you can turn your phone into a WebSocket server, enabling real-time communication between the server and its clients. We will use this plugin to send data from the phone to any connected clients.


  • The WebServer plugin currently only works on Android.

  • The code needs to run on the Monaca platform. It is easy to get started with Monaca: just sign up at monaca.mobi and you are good to go.

The Game

To create a game controller, obviously we need a game to control.
I have chosen a simple demo from pixi.js: a walking boy who jumps when you click on the screen. You can find the demo here and the source code here. We will make a game controller that controls the boy's jumping. As can be seen in the figure below, when a user clicks the "Jump!" button, the boy jumps! It is simple, but good enough to illustrate the concept.

System Overview

The Project Structure

Since we are dealing with both client and server code, it is cleaner to separate the two. Here I put all the client code inside the "client" folder. We will tell the HttpServer plugin to serve files from this folder.

The rest of the folders contain code that runs on the phone.

The Client

We will put the game code in the client folder. I use WebDAV (see the bottom part of the screenshot) to transfer files from my machine to the IDE. It is a fast and easy way to transfer lots of files to the IDE.

If you open "client/index.html", you will see two script tags that point to a missing folder. You need to copy the "pixi" folder from the "../../src" folder to the "client" folder, then modify the two script tags as shown below.
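As a sketch, the adjusted tags would look something like this (the exact script file names depend on the pixi.js demo you copied, so treat them as placeholders):

<!-- client/index.html: point both script tags at the copied "pixi" folder -->
<script src="pixi/pixi.js"></script>
<script src="pixi/game.js"></script>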

The Server

Enable WebServer Plugin

First we need to enable the plugin. Click on “Config” and select “Plugin Settings...”.

Then make sure to check the "Monaca Http and WebSocket Server" plugin.


Now let's start the HttpServer with the following code, placed in "www/index.html".
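The original snippet is not reproduced here, so the following is only a minimal sketch: the monaca.HttpServer name, its start() signature, the "client" document root, and the port are all assumptions standing in for the real plugin API, so check the plugin documentation.

document.addEventListener("deviceready", function() {
    // Hypothetical plugin call: serve the "client" folder on port 8080
    // and report the resulting address in an alert.
    monaca.HttpServer.start("client", 8080, function(url) {
        alert("Server started at " + url);
    }, function(error) {
        alert("Failed to start server: " + error);
    });
}, false);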

Run the application. If nothing goes wrong, you will see an alert message showing the server's IP address and port.

Now that the server has started successfully, you can point any browser at the phone's IP address and port.

Hooray!! The server is working! So far so good, but before we go on, we need to organize our code structure before it grows out of control. Let's create an App namespace for the application so that the other application modules can attach to it. We will put it in app.js.
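The namespace can be as small as a single object literal. A minimal sketch of app.js:

// app.js: one global object that all other modules attach to
var App = App || {};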

We will also move our HttpServer-related code into http_server.js.
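Continuing the sketch (still using the hypothetical monaca.HttpServer call from above), http_server.js might attach to the namespace like this:

// http_server.js: wrap HTTP server start-up as a module on the App namespace
App.httpServer = {
    start: function(onStarted) {
        // Hypothetical plugin API; adjust to the actual HttpServer plugin.
        monaca.HttpServer.start("client", 8080, onStarted, function(error) {
            alert("Failed to start server: " + error);
        });
    }
};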

WebSocket Server

We will put our WebSocket-related server code in websocket_server.js.
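The plugin's exact interface is not shown in this post, so treat this as a sketch: monaca.WebSocketServer, its callback names, and the port 8081 are assumptions.

// websocket_server.js: track connected clients and broadcast commands to them
App.websocketServer = (function() {
    var clients = [];

    function start() {
        // Hypothetical plugin API: start listening and register callbacks.
        monaca.WebSocketServer.start(8081, {
            onopen: function(client) { clients.push(client); },
            onclose: function(client) {
                var i = clients.indexOf(client);
                if (i !== -1) { clients.splice(i, 1); }
            }
        });
    }

    function broadcast(message) {
        clients.forEach(function(client) { client.send(message); });
    }

    return { start: start, broadcast: broadcast };
})();

With this in place, the "Jump!" button handler only needs to call App.websocketServer.broadcast("jump").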

WebSocket Client

We need the client to listen for commands from the server. The code to focus on is the "onmessage" event handler. We will put the code in "client/websocket_client.js".
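On the client side we can use the browser's standard WebSocket API; only the port (8081, matching the sketch above), the "jump" message format, and the jump() helper are assumptions:

// client/websocket_client.js: connect back to the phone that served this page
var ws = new WebSocket("ws://" + location.hostname + ":8081");

ws.onmessage = function(event) {
    // When the phone broadcasts "jump", make the boy jump.
    if (event.data === "jump") {
        jump(); // hypothetical hook into the pixi.js demo's jump logic
    }
};

Since the game page is served by the phone itself, location.hostname is already the phone's IP address, so the client needs no hard-coded address.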

Complete Code

I have put the complete code of the project on GitHub. Feel free to give it a spin!


We have seen how easy it is to leverage Monaca's WebServer plugin to create a game controller application. I hope this helps you create the awesome app of your dreams.

Netapp NAS - Asial's unsung hero

Hello, Anthony here.
My last couple of blogs have been about Zabbix, the enterprise network monitoring system that we recently installed. As I mentioned in my last piece, after a few teething problems this is now working fine, and I think we all sleep easier at night knowing that any problems will be picked up before they become critical. And if the worst does happen, we'll know about it immediately.

So this month I'd like to discuss something that Zabbix indirectly highlighted and that has taken up quite a bit of my time over the last week or so: Netapp.

Now, Asial maintains installations in a couple of data centers, but we don't go there more often than we have to. For a start, almost everything that needs doing can be done remotely; secondly, they're inconvenient to get to; and finally, they are extraordinarily inconvenient to actually get into. Security is rigorous: you have to book in advance, take several forms of ID, and be escorted into and out of the building. To put it bluntly, the data centers do everything they can to discourage visitors. So while I was aware we had a Netapp server, I'd had very little to do with it.

For the uninitiated, a Netapp server can best be described as a large noisy box. It doesn't look particularly impressive, at least from the front, and anyone randomly wandering in off the street would guess it was fairly low down the digital pecking order. It certainly doesn't have all the flashing lights that, say, a router has. And it has lots of clumsy chunks of plastic poking out of it, not at all the sleek lines of our HP ProLiant servers. However, the fact is our Netapp server is critically important to us, probably the single most important item on the rack, because it provides high-availability, high-redundancy storage.

All our servers have a certain storage capacity of course, but all the really important stuff goes on Netapp. This means that while the OS runs on the server, in many cases the data itself will be on Netapp. Why? Well, behind each one of those chunks of plastic sits a hard drive, 14 of them in Asial's case, providing not just several terabytes of storage but, more importantly, speed and reliability. Each hard drive works with all the others to distribute load and maintain service. Netapp can take snapshots of data at regular intervals, so recovering from an accidental deletion is trivial. Indeed, the snapshot process is so fast and efficient that in many ways it renders traditional backups obsolete. Finally, in addition to those 14 active hard drives there are a number of spares. If Netapp detects a problem with one of the active drives, it will bring one of the spares online and deactivate the defective disk. In other words, if a disk dies, service is unaffected.

My problem, therefore, is how to know when a disk has failed. There are no signs, no interruption in service, no complaints from users, just a seamless switch from one disk to another. Getting this information, and various other important details, is what I've been up to recently. I've had to do quite a bit of reading around the Netapp OS, checking the settings currently in use. Even if I don't know what they all mean yet, they're still a useful benchmark against anything that might change in the future.

So what happens if Tokyo gets hit by a major earthquake and takes the entire Netapp installation out? Hmm... perhaps that's a topic for next month.

Further adventures with Zabbix

Following on from my last blog about installing Zabbix, I thought I'd go into it in a bit more depth this time because, as it turns out, getting it installed and running is really just the beginning.

The problem is that all the servers are doing different jobs and have subtle differences in the way they're configured. Therefore, while you can start getting feedback from Zabbix very quickly, I've had to spend a fair bit of time tweaking it for our environment.

The main issue is that the templates supplied by Zabbix are very detailed and the alerts have low trigger thresholds. This is exactly what you need to get started, but it doesn't take long to start collecting a large number of alerts, most of which will be false alarms. Getting a red alert that a server was down was alarming until I realised it was for a news server, something we don't actually run. Clearly some template editing was called for.

This can be quite formidable at first sight, but fortunately, because the supplied templates are so detailed, it's mostly a case of taking a hatchet to everything you don't need, at least until you're comfortable with Zabbix. So, of the (literally) thousands of things you can monitor, in almost all cases the important ones will be:

Disks and filesystems
CPU load
Memory
Services

Disk monitoring is really concerned with availability and I/O performance. It's always good to know there's enough free disk space on your partitions; I find it more helpful to show this as a percentage of available space than as an absolute figure in MB. You will also want to monitor reads and writes per second. The raw values are a bit geeky in themselves, but over time they build up into useful historical trends.

CPU performance? Well, clearly you need to know how hard the processors are working, so keep an eye on CPU load average and idle time. Load average is normally reported over 1, 5 and 15 minutes. A value of 0.7 or below per core (meaning the processor is at roughly 70% capacity) is good; occasional peaks as high as 3 are probably OK too; anything higher than that, especially if it's sustained, spells trouble. In the default configuration these metrics returned a blizzard of alerts, but they are now more or less under control (more about that in a moment).

Memory covers both physical RAM and virtual memory. Generally, what doesn't fit into RAM is swapped out, so you should keep an eye out for high swap rates.

Finally, services: what matters here depends on what function your server is performing, but Zabbix can poll your HTTPD or MySQL service regularly to make sure it's still there.
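In Zabbix, each of these checks is an item key on the monitored host. As a sketch (key names vary slightly between Zabbix versions, so treat these as illustrative):

vfs.fs.size[/,pfree]      free space on / as a percentage
system.cpu.load[,avg5]    five-minute load average
system.cpu.util[,idle]    CPU idle time
net.tcp.port[,3306]       is MySQL listening on its port?
net.tcp.service[http]     is the web server answering?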

Once everything seemed to be under control, I was pretty alarmed to discover that load on the Zabbix server itself had gone through the roof. My next job, therefore, was to reduce the load on the server.

This screen shows what happened when I deployed a fairly basic monitoring template, based on the supplied Linux one, across the servers.

As you can see, the Zabbix server struggled to keep up for a while before gradually losing the battle. Well, after a bit of research I found I only had to do two things.

First of all, a bit of tinkering with MySQL's configuration file (/etc/my.cnf).
Adding two lines there reduced CPU utilization by 50%!
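The exact lines from the original post aren't preserved, but settings like the following are common candidates for taming Zabbix's write-heavy MySQL load (the values are illustrative; size the buffer pool to your RAM, and weigh the durability trade-off of the flush setting):

# /etc/my.cnf: illustrative InnoDB tuning for a Zabbix database
innodb_buffer_pool_size = 256M
innodb_flush_log_at_trx_commit = 0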


The next step was to reduce the polling frequency for the monitored items dramatically. The default for many is every 30 seconds. Multiply this by 40 different metrics on 50 servers and it's not hard to see why the server was struggling to keep up.

So I throttled the polling back to once per minute for many values, and considerably less often for others; you really don't need to check free disk space more than once every 15 minutes or so. By doing this I was able to reduce CPU utilization by another 50%.

Between the two changes, load on the server fell to a fraction of what it was. Here is a screenshot of the result: right now MySQL is taking up 2% of CPU resources, against about 130% last week!

Zabbix to the rescue!

Hello, Anthony here, from infrastructure (& England).

Well, it's with great relief that I've been allowed to write this in English, so welcome to my first, and AFAIK Asial's first, English-language blog.

The big news here in infrastructure is that we're nearing the end of the rollout of the Zabbix network management system. This has been quite a protracted process, as well as a steep learning curve for me. But first let me give you a bit of background.

Well, as you might expect, Asial has dozens of servers. Some of them, like the mail server or the main website, are pretty high profile. Others, located in dark and far-flung corners of the organisation, are much less obvious, but the fact is they're all important to someone. Our problem is how to keep an eye on them all: making sure they're working properly, getting advance warning if they're about to go wrong, and immediate notification if they do.

Most of this information is available in the log files, of course, but who wants to spend all day trawling through those? Enter Zabbix. First you build your Zabbix server, then you install the Zabbix agent on all your other servers. The agent runs in the background collecting information about processor load, free disk space, running services and so on, and periodically sends this back to your Zabbix server. And it's not just servers, either: Zabbix can collect just about any information you want from just about any network device you can think of. Printers, routers, disk arrays: no problem.

The basic installation was pretty straightforward, and the basic templates are enough to get you started, but with all this information-gathering potential, configuration has taken a while. In fact I expect to be tweaking it for a good while yet.

Anyway, the good news is that it works, better than I ever expected. I can get an up-to-the-minute overview of server status at any time, as well as detailed information on specific servers, times of spikes in load, long-term historical trends and goodness knows what else. And an email or text message if anything goes horribly wrong. Blimey, this is great stuff.

So how much does Zabbix cost and where can I get it? Well, thanks to the hard work of Alexei Vladishev and the Open Source community, nothing: it's free! You can download it from here.

What did we do before Zabbix? I don't remember but it wasn't as pretty as this.

Thanks for reading, and I hope to have more for you next month.


Clustering with DRBD + heartbeat + LVM (on Fedora Core 10)



What we are aiming for here is an automatic failover cluster (non-failback configuration) of a data replication server pair (master/slave) built with DRBD.



Each server gets a second network interface, eth1, dedicated to the replication and heartbeat traffic between the two nodes:

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
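The original interface settings aren't preserved, so here is a sketch of a dedicated replication interface (all the values, in particular the addresses, are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth1: dedicated DRBD/heartbeat link
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes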


In addition, the servers' ttyS0 ports (RS-232C serial) are linked with a crossover cable for heartbeat.

1. Partition layout at server build time

The hard disk configuration is a single logical drive on hardware RAID 1+0.

     c0d0p0 LVM Volume
     c0d0p1 /boot     # boot loader

/dev/LVMVol1/lv_root /
/dev/LVMVol1/lv_var  /var  # split out on a whim; arrange this however you like
/dev/LVMVol1/lv_data /data # the volume replicated by DRBD; it will later be wrapped as /dev/drbd0
/dev/LVMVol1/lv_meta (none) # volume for storing DRBD's metadata




2. Installing DRBD

Here we assume a minimal installation (Base plus Vim or so).
We also assume that yum has been configured and that yum update has been run.

2.1 Preparing the tools needed for compilation

Fedora Core does not provide an RPM package for DRBD, so you have to compile it yourself.

# yum install make gcc glibc flex rpm-build



You also need the kernel headers and the kernel source package:

# yum install kernel-devel
# yumdownloader --source kernel
# rpm -ivh kernel-


2.2 Compiling and installing DRBD

The DRBD source can be downloaded from http://oss.linbit.com/drbd/.

# wget http://oss.linbit.com/drbd/8.3/drbd-8.3.2.tar.gz


# tar xvzf drbd-8.3.2.tar.gz
# cd drbd-8.3.2
# make rpm


You now have:
-rw-r--r-- 1 root root  220334 2009-08-14 02:09 dist/RPMS/x86_64/drbd-8.3.2-3.x86_64.rpm
-rw-r--r-- 1 root root 1079065 2009-08-14 02:09 dist/RPMS/x86_64/drbd-km-


# cd dist/RPMS/x86_64/
# rpm -ivh *.rpm

2.3 Notes after installing DRBD

If you have cron set up to run yum updates, it is a good idea to exclude the kernel from the update targets (the drbd-km module is built against a specific kernel) and to perform kernel maintenance manually.

# vi /etc/yum.conf
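For example, a single exclude line keeps automatic updates away from all kernel packages:

# /etc/yum.conf
exclude=kernel*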

3. Setting up DRBD


3.1 About DRBD metadata



3.2 Creating an LV for the metadata


First, check how many free physical extents the volume group has:

# vgdisplay |grep Free
  Free  PE / Size       209 / 6.53 GB


Then create the metadata LV from those free extents:

# lvcreate -l 209 -n lv_meta LVMVol1
  Logical volume "lv_meta" created

If you would rather specify the size in bytes, pass something like -L 6G instead of -l 209.

3.3 If the volume group has no free space

To shrink /dev/LVMVol1/lv_data (ext3):

# lvdisplay -C |grep lv_data
  lv_data LVMVol1 -wi-ao 292.97G

# umount /data
# resize2fs /dev/LVMVol1/lv_data 290G
# lvreduce -L 290G /dev/LVMVol1/lv_data
# mount /data



3.4 Configuring drbd.conf


# vi /etc/drbd.conf

resource r0 {
  protocol B;

  handlers {
    pri-on-incon-degr "halt -f";
  }

  startup {
    wfc-timeout 120;
    degr-wfc-timeout 120;
  }

  syncer {
    rate 100M;
  }

  disk {
    on-io-error detach;
  }

  net {
    cram-hmac-alg "sha1";
    shared-secret "HogeHoge";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict    disconnect;
  }

  on db1.example.com {
    device     /dev/drbd0;
    disk       /dev/LVMVol1/lv_data;
    address    192.168.1.1:7788;  # replication IP:port; placeholder value
    flexible-meta-disk /dev/LVMVol1/lv_meta;
  }

  on db2.example.com {
    device     /dev/drbd0;
    disk       /dev/LVMVol1/lv_data;
    address    192.168.1.2:7788;  # replication IP:port; placeholder value
    flexible-meta-disk /dev/LVMVol1/lv_meta;
  }
}
For detailed explanations of each item, see man drbd.conf or the Japanese translation (http://www.drbd.jp/documentation/drbd.conf.html).

Here I will explain the on sections, which you must change to match your environment.

First, the name after on (db1.example.com above) must be each server's hostname as reported by uname -n; set this on each target server. The address lines, which drbd.conf requires in each on section, must point at each node's replication interface; the 192.168.1.x:7788 values above are placeholders, not the original settings.




3.5 Creating the DRBD volume


Create the DRBD metadata on both nodes:

# drbdadm create-md r0
md_offset 0
al_offset 4096
bm_offset 36864

Found some data
 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm]


3.6 Notes when using internal metadata

If you want to keep the metadata area internal (on the data volume itself), simply change the flexible-meta-disk setting to internal.


4. Starting DRBD and initial synchronization


4.1 Loading the module


# modprobe drbd


Verify that the module is loaded:

# lsmod|grep drbd
drbd                  225992  0

4.2 Starting DRBD


# /etc/init.d/drbd start


Starting DRBD resources: [ d(r0) s(r0) n(r0) ]..........
 DRBD's startup script waits for the peer node(s) to appear.
 - In case this node was already a degraded cluster before the
   reboot the timeout is 120 seconds. [degr-wfc-timeout]
 - If the peer was available before the reboot the timeout will
   expire after 120 seconds. [wfc-timeout]
   (These values are for resource 'r0'; 0 sec -> wait forever)
 To abort waiting enter 'yes' [  52]:yes


Check the status:

# cat /proc/drbd
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by root@db1.example.com, 2009-08-14 02:09:08
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent B r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:307200000

As cs:Connected ro:Secondary/Secondary shows, both nodes are now connected, each as a secondary.

4.3 Promoting the primary and synchronizing data


Run the following on the node that is to become primary (here db1); it promotes the node and overwrites the peer's data:

# drbdadm -- --overwrite-data-of-peer primary r0


Synchronization then starts, and you can watch its progress in /proc/drbd.

# cat /proc/drbd
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by root@db1.example.com, 2009-08-14 02:09:08
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent B r----
    ns:8887460 nr:0 dw:0 dr:8888236 al:0 bm:542 lo:1710 pe:31 ua:1886 ap:0 ep:1 wo:b oos:298312664
        [>....................] sync'ed:  2.9% (291320/300000)M
        finish: 0:38:00 speed: 130,412 (80,064) K/sec


4.4 Mounting the DRBD volume and updating the configuration


On the primary, mount the DRBD device:

# mount -t ext3 /dev/drbd0 /data/


Next, edit /etc/fstab so that the underlying LV is no longer mounted directly: comment out the old lv_data entry,

# vi /etc/fstab

/dev/LVMVol1/lv_root    /                       ext3    defaults        1 1
#/dev/LVMVol1/lv_data    /data                   ext3    defaults        1 2
/dev/LVMVol1/lv_var     /var                    ext3    defaults        1 2

and use an entry for the DRBD device instead:

/dev/drbd0    /data                   ext3    defaults        1 0


5. Installing heartbeat and integrating it with DRBD


5.1 Installation

On Fedora Core 10, heartbeat can be installed with yum.

# yum install heartbeat


5.2 Configuration and DRBD integration



Copy the sample configuration files into /etc/ha.d/:

# cp /usr/share/doc/heartbeat-2.1.3/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/haresources /etc/ha.d/


Set up authentication between the nodes:

# vi /etc/ha.d/authkeys
auth 1
1 crc


authkeys must be readable only by root:

# chmod 600 /etc/ha.d/authkeys


# vi /etc/ha.d/ha.cf
keepalive 2
deadtime 30
warntime 10
initdead 120

udpport 694
baud    19200
serial  /dev/ttyS0 
ucast eth1

auto_failback off

node    db1.example.com
node    db2.example.com

The ucast eth1 line sends unicast heartbeats over eth1; append the peer node's IP address after eth1 (the address itself is not preserved in this post). The node lines must be set to the server names that uname -n returns on each node.



# vi /etc/ha.d/haresources

db1.example.com drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3

This one line names db1.example.com as the preferred node and tells heartbeat to manage two resources: drbddisk::r0 (promote DRBD resource r0 to primary via the drbddisk script) and Filesystem::/dev/drbd0::/data::ext3 (mount /dev/drbd0 on /data as ext3).




This has turned out rather long, but we now have the base system: an automatic failover cluster built from heartbeat + DRBD (+ LVM).

Properly configured, DRBD + heartbeat gives you all sorts of fine-grained control, so it is well worth making good use of.