Wifi on IP Cam EM 6220 does not work – or how not to design your UI

I bought an IP cam to spy on my little baby girl.

Camera to monitor the little one

The plan was to set up this camera above her bed. It has wifi, so the lack of network sockets in the bedroom should be no problem.

Setting up wifi on this camera goes like this:

  1. Connect the camera via a UTP cable
  2. Install the app on your phone
  3. Configure the app to connect to your camera
  4. Configure the camera’s wifi via the app
  5. Disconnect the UTP cable

I followed these steps. The app showed that the camera was connected to my wifi network.

Wifi configuration screenshot that says "Connected"

The camera is connected to wifi

But alas – after unplugging the UTP cable, the app could no longer find the camera.

I tried some things to get it to work, like restarting the camera, plugging and unplugging the network cable, looking at my DHCP server to find out which IP address it had… but nothing worked.

Looking closely at my DHCP server, it looked like the camera’s wifi connection did not get an IP address. Maybe I entered the wrong password in the camera’s wifi configuration? But the status clearly says “Connected”. Hmm.

To test this hypothesis, I re-entered my wifi password. The app showed:

Wifi configuration screenshot that says "Connected"

The camera is connected to wifi

Then I entered an incorrect password to see what the app would show:

Wifi configuration screenshot that says "Connected"

The camera is not connected to wifi

So, no matter what password I entered, the app always showed “Connected” as the status. But of course, wifi only worked with the correct password.

This makes me wonder whether the makers of this webcam have ever eaten their own dog food. I guess not, because how else could such a glaring bug end up in the software?

Posted in software, UX

Why we chose JSON-RPC over REST

If you don’t want to mess about with XML, REST is pretty much the industry standard for creating an API.

Initially, we had a REST(ish) API. But after using it internally at HomeRez, we were not very happy with how it worked. So we looked around for alternatives.

In the end, we decided to go for JSON-RPC.

What are the differences?

  • REST uses HTTP or HTTPS. JSON-RPC can use any transport protocol, for example plain TCP.
  • JSON-RPC has one endpoint URL for all requests. REST uses different URLs for different resources.
  • In JSON-RPC, every request is sent the same way (e.g. via HTTP POST) with the method and parameters in it. In REST, you use the HTTP verbs (GET, POST, PUT, DELETE) for different actions.
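To make the contrast concrete, here is a sketch of the request envelope. The jsonrpc, method, params and id fields come from the JSON-RPC 2.0 specification; the method names are from our own API, and the parameter names are made up for the example:

```javascript
// Sketch: building JSON-RPC 2.0 request envelopes. Two very different
// operations, yet both are plain POST bodies for one and the same URL.
function makeRequest(method, params, id) {
  return {
    jsonrpc: '2.0',   // protocol version, per the JSON-RPC 2.0 spec
    method: method,   // which operation to perform
    params: params,   // the arguments for that operation
    id: id            // correlates the response with this request
  };
}

var createRequest = makeRequest('reservation.create', { guestFirstName: 'John' }, 1);
var cancelRequest = makeRequest('reservation.cancel', { reservationId: 42 }, 2);
```

In REST, these two operations would instead need two different URLs and a discussion about which HTTP verb fits each one.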

So, what made us switch from REST to JSON-RPC?

The main reason is that we couldn’t find a good way to map all operations in our API to HTTP verbs. We have several operations that are not pure Create, Read, Update or Delete operations.

For example, we want a call to calculate the rent for staying at an apartment for a specific period. That isn’t really GET /apartment/rent, because we’re not retrieving a “rent” object that we can then PUT to update. It also isn’t something like POST /apartment/calculate_rent, because that isn’t very RESTish.

Cancelling a reservation is another operation that gave us doubts. Calling PUT /reservation/<id> with data { guestFirstName: "John" } seems very different compared to calling it with data { status: "CANCELLED" }. The first simply updates a field, while the second has a lot of side effects: emails being sent to the guest and the owner, payments being refunded, the apartment becoming available again, etc. Maybe POST /reservation/<id>/cancel would be OK, but that also doesn’t seem very RESTish – after all, you are modifying a reservation.

It became clear to us that we wanted to have an action-based API, where most of the calls perform actions. Many of those actions are different from the traditional CRUD operations.

One other thing that bothered us is GET requests with lots of parameters. For example, let’s say you want to search reservations by a guest named “John Doe”. In JSON, the search parameters could look something like this:

{
    "guest": {
        "firstName": "John",
        "lastName": "Doe"
    }
}

However, if you put this information in the GET parameters, it becomes a bit tricky. You need to take escaping into account (a & inside a value has to become %26). If you just use the traditional ?a=1&b=2 style of parameters, you don’t have support for sub-structures. You could turn your parameters into a JSON string, URL-encode it, and then decode it on the server, but why make it so complex?
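A sketch of that workaround, to show how clunky it gets (the /reservations path and the q parameter name are made up for the example):

```javascript
// Cramming a nested search object into a GET query string means
// JSON-encoding it and then URL-escaping the result...
var search = { guest: { firstName: 'John', lastName: 'Doe' } };
var url = '/reservations?q=' + encodeURIComponent(JSON.stringify(search));

// ...and the server has to undo both steps again:
var decoded = JSON.parse(decodeURIComponent(url.split('q=')[1]));
```

Two encode/decode round-trips, just to pass one structured argument.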

Yes, for a URL that you visit in your browser, it’s great that everything is in the address bar. It’s great that you can bookmark such a search. But we’re talking about an API here, not about a page being visited in the browser.

So, now we post all API calls to the same URL, with a method and a parameter object. Authentication fields are also sent in the parameter object, so we can easily switch our transport layer from HTTPS  to something else for better performance, if we want.

An additional advantage is that we can now easily use json-schema both to validate the incoming requests and to auto-generate most of our API documentation.
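As an illustration, this is roughly what a json-schema fragment for one call’s parameters could look like, together with a toy check of the required keyword. In real code a proper json-schema validator does this work, not a hand-rolled function, and the reservationId parameter name is made up for the example:

```javascript
// Illustrative json-schema fragment for the params of one call.
var cancelParamsSchema = {
  type: 'object',
  required: ['reservationId'],
  properties: {
    reservationId: { type: 'number' }
  }
};

// Toy stand-in for a real validator: only checks the "required" keyword.
function hasRequiredParams(schema, params) {
  return schema.required.every(function(key) {
    return params[key] !== undefined;
  });
}
```

Because every request is a plain JSON object, the same schema can feed both the validation step and the generated documentation.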

Examples of our calls are:

  • reservation.create  to create a reservation
  • reservation.quote  to get a quote (rent calculation) for a specific vacation home and period
  • reservation.cancel  to cancel a reservation
  • reservation.list  to get a list of reservations based on the search parameters
  • property.list  to get a list of properties (vacation homes) based on the search parameters
  • property.rate.list  to get all rates of a single vacation home
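Behind the single endpoint, calls like these can be routed by a plain lookup table. A minimal sketch (the handler bodies are placeholders, not our real code; -32601 is the standard JSON-RPC “Method not found” error code):

```javascript
// Minimal single-endpoint dispatcher: map method names to handlers.
var methods = {
  'reservation.cancel': function(params) {
    return { cancelled: params.reservationId };
  },
  'reservation.list': function(params) {
    return { reservations: [] };
  }
};

function dispatch(request) {
  var handler = methods[request.method];
  if (!handler) {
    // -32601 is JSON-RPC's standard "Method not found" error code.
    return { error: { code: -32601, message: 'Method not found' } };
  }
  return { result: handler(request.params) };
}
```

Adding a new action is just adding an entry to the table, with no debate about verbs or URL design.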

All in all, we are very happy with this switch.

Posted in software development

Checking log4js output in your node.js testsuite

This post will teach you how to test log4js  log statements in your code.

You can also find the code below in the `src/log4js-unittest` dir of my blog code repository.

Let’s say that we have written this simple node.js module, foo.js .

'use strict';

var log4js = require('log4js');
var logger = log4js.getLogger('foo');

function logSomething() {
  logger.info('something');
}

module.exports = {
  logSomething: logSomething
};

It would be good to verify that logSomething()  actually logs something. Let’s use this mocha test file skeleton foo.spec.js :

'use strict';

var foo = require('./foo');

describe('Foo logger', function() {
  it('should log something', function() {
    // Prepare to detect the log output here

    foo.logSomething();

    // Verify the log output here
  });
});

We obviously cannot test logSomething()’s return value, so we have to think of something else. Maybe we can somehow stub either log4js.getLogger() or logger.info()?

Normally, I use sinon to create spies, stubs and mocks. With sinon, you can replace object methods or even the objects themselves.

As far as I know, it is not possible to modify the logger  object inside foo.js , because foo.spec.js  doesn’t have access to the scope of foo.js . We could add the logger  object to the foo.js  exports, but that doesn’t seem to be a very elegant solution. I’d rather not modify foo.js  itself.

Let’s see if we can stub getLogger() :

'use strict';

var should = require('should');
var sinon  = require('sinon');

// First require log4js here, so we are in time to stub getLogger.
var log4js = require('log4js');

// Create a spy to see which logger.info calls were done.
var infoSpy = sinon.spy();

// Make sure log4js.getLogger() returns an object containing our spy.
var getLoggerStub = sinon.stub(log4js, 'getLogger');
getLoggerStub.returns({ info: infoSpy });

// Now we can require foo.js, which will use our getLogger stub.
var foo = require('./foo');

describe('Foo logger', function() {
  it('should log something', function() {
    foo.logSomething();

    infoSpy.calledOnce.should.equal(true);
    infoSpy.calledWithExactly('something').should.equal(true);
  });
});

Does it work?

czapka:~/src/blog/log4js-unittest$ mocha foo.spec.js

  Foo logger
    ✓ should log something

  1 passing (5ms)

Yes it does!

However, this method relies on being able to stub log4js.getLogger()  before foo.js  is required. What happens if we require foo.js  before stubbing?

'use strict';

// What if this file (or any other code file or spec file!)
// requires foo before we can create the getLogger stub?
var foo = require('./foo');

var should = require('should');
var sinon  = require('sinon');
var log4js = require('log4js');
var infoSpy = sinon.spy();
var getLoggerStub = sinon.stub(log4js, 'getLogger');
getLoggerStub.returns({ info: infoSpy });

describe('Foo logger', function() {
  it('should log something', function() {
    foo.logSomething();

    infoSpy.calledOnce.should.equal(true);
    infoSpy.calledWithExactly('something').should.equal(true);
  });
});

Unfortunately, this fails:

czapka:~/src/blog/log4js-unittest$ mocha foo.spec.js

  Foo logger
[2015-09-27 12:49:55.284] [INFO] foo - something
    1) should log something

  0 passing (7ms)
  1 failing

  1) Foo logger should log something:

      AssertionError: expected false to be true

The reason that this fails, is that node.js uses a module cache. The first time a module is required, it is loaded; any followup require()  calls simply return the loaded module from node.js’s module cache.

This makes the test very fragile. If any other module requires foo.js  before this test is run, the logger  object inside foo.js  has already been created before the getLogger()  stub can be used.

What else is there to try?

We could add an appender to log4js  that captures any logged lines for verification afterward. Since there did not seem to be such an appender yet, I decided to create one.
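The idea behind such an appender is simple. In the log4js API of that era, an appender is just a function that receives each logging event, so capturing comes down to pushing those events onto an array. The real log4js-test-appender differs in its details; this is only a sketch of the concept:

```javascript
// Sketch of a capturing appender: log4js calls an appender as a plain
// function with each logging event, so capturing is just storing them.
var capturedEvents = [];

function captureAppender(loggingEvent) {
  capturedEvents.push(loggingEvent);
}

function getLogEvents() {
  return capturedEvents;
}
```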

This is how to use it:

'use strict';

// We can now load foo.js at any point, because we are no longer
// stubbing log4js.getLogger().
var foo    = require('./foo');
var should = require('should');

var testAppender = require('log4js-test-appender');
testAppender.init();

describe('Foo logger', function() {
  it('should log something', function() {
    foo.logSomething();

    var logEvents = testAppender.getLogEvents();
    logEvents.should.have.length(1);
    logEvents[0].level.levelStr.should.equal('INFO');
    logEvents[0].data[0].should.equal('something');
  });
});

There we go, a robust, working test.

The API of log4js-test-appender  is not very elegant yet, so if you have any suggestions for improvement, feel free to comment or create a merge request! And if you have an alternative suggestion to test log4js  log statements, I’m all ears.

Posted in JavaScript, node.js, software development

Combining mongoose and Q in node.js

This post will teach you how to write promise-based mongoose code, using Kris Kowal’s Q library.

You can also find the code below in the `src/mongoose-and-q` dir of my blog code repository.

We have a mongo database with 3 collections: users, bicycles and cars. Bicycles and cars are owned by users. We will write a script that shows the bicycles and cars owned by rupert@example.com .

We’ll look at multiple versions of this script. This article focuses on callbacks and promises, so we’ll be using a hardcoded email address and we’ll not be handling errors very well. Those aspects of the code are for a different article.

The first version uses callbacks for the database calls.

'use strict';

var mongoose = require('mongoose');
var Schema   = mongoose.Schema;

var UserSchema = new Schema({
  email    : { type: String },
  firstName: { type: String },
});
var User = mongoose.model('User', UserSchema);

var CarSchema = new Schema({
  ownerId       : { type: Schema.Types.ObjectId, ref: 'User' },
  colour        : { type: String },
  numberOfWheels: { type: Number },
});
var Car = mongoose.model('Car', CarSchema);

var BicycleSchema = new Schema({
  ownerId       : { type: Schema.Types.ObjectId, ref: 'User' },
  colour        : { type: String },
  numberOfWheels: { type: Number },
});
var Bicycle = mongoose.model('Bicycle', BicycleSchema);

mongoose.connect('mongodb://localhost/blogtest');

var email = 'rupert@example.com';

function getVehicles(email, cb) {
  User.findOne({ email: email}, function(err, user) {
    if (err) {
      return cb(err);
    }
    Car.find({ ownerId: user._id }, function(err, cars) {
      if (err) {
        return cb(err);
      }
      Bicycle.find({ ownerId: user._id }, function(err, bicycles) {
        if (err) {
          return cb(err);
        }
        cb(null, {
          cars: cars,
          bicycles: bicycles
        });
      });
    });
  });
}

getVehicles(email, function(err, vehicles) {
  if (err) {
    console.error('Something went wrong: ' + err);
  }
  else {
    console.info(vehicles);
  }
  mongoose.disconnect();
});

Two things stand out in this code. First, the indenting in getVehicles . It looks bad. It is bad. And if we want to add more callbacks, it gets worse. Second, we are using error handling code in each individual callback.

We can rewrite this code to use Q .

'use strict';

var Q        = require('q');
var mongoose = require('mongoose');
var Schema   = mongoose.Schema;

var UserSchema = new Schema({
  email    : { type: String },
  firstName: { type: String },
});
var User = mongoose.model('User', UserSchema);

var CarSchema = new Schema({
  ownerId       : { type: Schema.Types.ObjectId, ref: 'User' },
  colour        : { type: String },
  numberOfWheels: { type: Number },
});
var Car = mongoose.model('Car', CarSchema);

var BicycleSchema = new Schema({
  ownerId       : { type: Schema.Types.ObjectId, ref: 'User' },
  colour        : { type: String },
  numberOfWheels: { type: Number },
});
var Bicycle = mongoose.model('Bicycle', BicycleSchema);

mongoose.connect('mongodb://localhost/blogtest');

var email = 'rupert@example.com';

function getVehicles(email) {
  var foundCars, foundUser;
  return Q(User.findOne({ email: email }).exec())
  .then(function(user) {
    foundUser = user;
    return Q(Car.find({ ownerId: user._id }).exec());
  })
  .then(function(cars) {
    foundCars = cars;
    return Q(Bicycle.find({ ownerId: foundUser._id }).exec());
  })
  .then(function(bicycles) {
    return {
      bicycles: bicycles,
      cars: foundCars
    };
  });
}

getVehicles(email)
.then(function(vehicles) {
  console.log(vehicles);
})
.catch(function(err) {
  console.error('Something went wrong: ' + err);
})
.done(function() {
  mongoose.disconnect();
});

The getVehicles function now returns the promise chain that starts with Q(User.findOne({ email: email }).exec()). The user is found, then the cars, then the bicycles, and then the promise resolves to the same object as in the first listing.

This version of getVehicles looks better regarding both indenting and error handling. The indenting stays on the same level, and the error handling – no matter where the error happens – is done by the catch  block after calling getVehicles .

However, there is also a disadvantage to this code. The vars from a then  block can’t be used in the next then  block, because each consecutive then  block has its own function scope. That’s why we have to assign user  and cars  to helper vars outside their scope.

We can do better. Let’s look at our improved version of getVehicles .

function getVehicles(email) {
  return Q(User.findOne({ email: email }).exec())
  .then(function(user) {
    return Q.all([
      Q(Bicycle.find({ ownerId: user._id }).exec()),
      Q(Car.find({ ownerId: user._id }).exec())
    ]);
  })
  .then(function(results) {
    return {
      bicycles: results[0],
      cars: results[1]
    };
  });
}

We use Q.all(arrayOfPromises) to retrieve both the cars  and the bicycles  in one go. Q.all  resolves to a list of fulfilment values, which end up in an array in the then  block.

This both makes the code shorter and gets rid of the helper variables. There is one minor improvement that we can still make.

function getVehicles(email) {
  return Q(User.findOne({ email: email }).exec())
  .then(function(user) {
    return [
      Q(Bicycle.find({ ownerId: user._id }).exec()),
      Q(Car.find({ ownerId: user._id }).exec())
    ];
  })
  .spread(function(bicycles, cars) {
    return {
      bicycles: bicycles,
      cars: cars
    };
  });
}

We replace .then  by .spread , so the bicycles  and cars  end up in one argument each, instead of an array of results. Additionally, because .spread  calls .all  initially, we can now remove Q.all and return just the array of promises. This code is more readable and maintainable than the previous version.

Posted in JavaScript, node.js, software development

Hiding your sshd with ufw and knockd on Ubuntu

I don’t like it if malicious programs or people try to hack my server.

I do, however, like to have access to my server, via ssh of course. Not just from home, but from wherever I happen to be.

Fortunately, it’s possible to have the best of both worlds on my Ubuntu server, using a simple piece of software called knockd .

When using knockd, your sshd can be firewalled by default. You can open up a temporary hole in the firewall by sequentially connecting to a few ports, as defined by you in knockd’s config file. Then you can ssh to your server.

To set this up with ufw , I started by closing port 22 to the world:

sudo ufw insert 1 deny from any to any port 22

This is my /etc/knockd.conf :

[options]
        UseSyslog

[SSH]
        sequence      = 17613,27791,20882,51313
        seq_timeout   = 5
        start_command = ufw insert 1 allow from %IP% to any port 22
        tcpflags      = syn
        cmd_timeout   = 10
        stop_command  = ufw delete allow from %IP% to any port 22

This configures knockd to listen for connections on the 4 specified ports, within 5 seconds of each other. Once the sequence is completed, a hole is opened for 10 seconds using the given ufw commands. This process can be easily followed in /var/log/syslog, which helped me to get this to work.

On my laptop, I made a small script, using the client side of knockd :

knock -d 300 server.example.com 17613 27791 20882 51313
ssh server.example.com

After these easy steps, I can connect to my server anytime, anywhere. Now all I need to do is remember the port sequence!

Posted in linux

Wifi USB adapter N300 in Ubuntu server

In my living room, I have a home theatre PC that runs Ubuntu 14.04 with Kodi (formerly XBMC). I wanted to reduce the amount of cables under my TV, so I bought a Sitecom N300 Wi-Fi USB adapter.

It turned out to be pretty easy to configure it from the command-line. Here’s a short guide for you to follow.

First, make sure the kernel driver is loaded: run lsmod | grep rtl and check that rtl8192cu appears in the output. Then install the wpasupplicant package, which can manage connections to WiFi base stations:

towel:~$ sudo apt-get install wpasupplicant

We need to create a config file for this package, as follows.

towel:~$ wpa_passphrase Blauwmutsenpad | sudo tee /etc/wpa_supplicant.conf
# reading passphrase from stdin
typeyourpassword
network={
	ssid="Blauwmutsenpad"
	#psk="aoeuaoeu"
	psk=52e832cb10afa74d405f66c12629d79e06da0a8abc3f6f963e4f617217fdd1b5
}

You should replace Blauwmutsenpad with your own network’s SSID, and typeyourpassword with the password of your network.

Make sure you remove the line with the plaintext password from the file /etc/wpa_supplicant.conf  after you’ve created it.

Then we need to tell the network system that there is a wireless network card. To do that, we add the following to /etc/network/interfaces :

auto wlan0
iface wlan0 inet dhcp
wpa-driver nl80211
wpa-conf /etc/wpa_supplicant.conf

After that, you can activate the wireless connection with:

sudo ifup wlan0

The wireless connection will also be activated automatically after booting.

Posted in linux

Processing an array of promises sequentially in node.js

This post describes how you can perform a sequence of promises sequentially – one after another – using Kris Kowal’s Q library in node.js. If you’re just interested in how to do this, and not in the other examples, scroll down to the last code snippet.

Imagine you have an array of filenames, and you want to upload those files to a server. Normally, you’d just fire off uploads asynchronously and wait for all of them to finish. However, what if the server you upload to has restrictions, for example maximum 1 concurrent upload, or a bandwidth limit?

In this post, we’ll be using the Q library to create and process promises, and log4js to get log lines with timestamps easily. First we create an uploader.js  module with a function uploadFile  that takes a filename, and uploads it to a server. For demonstration purposes, this function doesn’t actually upload a file, but simply waits for a random time, and then fulfills the promise.

var Q = require('q');
var log4js = require('log4js');
var logger = log4js.getLogger('uploader');

exports.uploadFile = function(filename) {
    var deferred = Q.defer();
    Q.fcall(function() {
        var delay = Math.random() * 4000 + 3000;
        logger.info("Starting upload: " + filename);
        setTimeout(function() {
            logger.info("Completed upload: " + filename);
            return deferred.resolve();
        }, delay)
    });
    return deferred.promise;
}

The following code uploads a single file:

var log4js = require('log4js');
var logger = log4js.getLogger('upload-example-1');
var uploader = require('./uploader');

var filename = 'file1.jpg';

uploader.uploadFile(filename)
    .then(function(result) {
        logger.info("The file has been uploaded.");
    })
    .catch(function(error) {
        logger.error(error);
    });

The output of this script is:

[14:15:28.128] [INFO] uploader - Starting upload: file1.jpg
[14:15:32.169] [INFO] uploader - Completed upload: file1.jpg
[14:15:32.170] [INFO] upload-example-1 - The file has been uploaded.

That’s uploading a single file. This is how you use Q to upload multiple files in parallel:

var Q = require('q');
var log4js = require('log4js');
var logger = log4js.getLogger('upload-example-2');
var uploader = require('./uploader');

var filenames = ['file1.jpg', 'file2.txt', 'file3.pdf'];
var promises = filenames.map(uploader.uploadFile);

Q.allSettled(promises)
    .then(function(results) {
        logger.info("All files uploaded. Results:");
        logger.info(results.map(function(result) { return result.state }));
    })
    .catch(function(error) {
        logger.error(error);
    });

Here, we have an array of filenames, which we turn into an array of promises by using the map  method. After that, we use Q.allSettled()  to wait until all promises have either been fulfilled or rejected. We don’t use Q.all()  here, because that would stop processing as soon as one of the promises is rejected.

This leads to the following output:

[14:49:09.598] [INFO] uploader - Starting upload: file1.jpg
[14:49:09.601] [INFO] uploader - Starting upload: file2.txt
[14:49:09.602] [INFO] uploader - Starting upload: file3.pdf
[14:49:13.788] [INFO] uploader - Completed upload: file3.pdf
[14:49:14.489] [INFO] uploader - Completed upload: file2.txt
[14:49:15.014] [INFO] uploader - Completed upload: file1.jpg
[14:49:15.014] [INFO] upload-example-2 - All files uploaded. Results:
[14:49:15.014] [INFO] upload-example-2 - [ 'fulfilled', 'fulfilled', 'fulfilled' ]

As you can see, the three uploads are started immediately after each other, and run in parallel.

To turn an array of filenames into a sequentially processed array of promises, we can use reduce . If you’ve never used reduce  before (here is the documentation), this will look a bit weird.

var Q = require('q');
var log4js = require('log4js');
var logger = log4js.getLogger('upload-example-3');
var uploader = require('./uploader');

var filenames = ['file1.jpg', 'file2.txt', 'file3.pdf'];

var lastPromise = filenames.reduce(function(promise, filename) {
    return promise.then(function() {
        return uploader.uploadFile(filename);
    });
}, Q.resolve())

lastPromise
  .then(function() {
    logger.info("All files uploaded.");
  })
  .catch(function(error) {
    logger.error(error);
  });

Let’s go through this. We call filenames.reduce() with two arguments, a callback and an initial value. Because we passed a second argument to reduce(), it will call the given callback for each element in filenames. The callback gets two arguments. The first time, the first argument is the initial value and the second argument is the first element of filenames. Each next time, the first argument is the return value of the previous callback invocation, and the second argument is the next element of filenames.

In other words, the first argument of the callback is always a promise, and the second is an array element. We use an “empty promise”, Q.resolve() , as “seed” for this chain.

Using this code, each next step in the reduce()  chain is only called when the previous step has been completed, as can be seen in the output:

[14:22:35.935] [INFO] uploader - Starting upload: file1.jpg
[14:22:39.814] [INFO] uploader - Completed upload: file1.jpg
[14:22:39.815] [INFO] uploader - Starting upload: file2.txt
[14:22:45.293] [INFO] uploader - Completed upload: file2.txt
[14:22:45.293] [INFO] uploader - Starting upload: file3.pdf
[14:22:48.657] [INFO] uploader - Completed upload: file3.pdf
[14:22:48.658] [INFO] upload-example-3 - All files uploaded.

The code turns out to do exactly what we want. The above reduce()  solution can be used to perform all kinds of promises sequentially, by inserting the right code in the callback to reduce() .
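For instance, the pattern can be extracted into a small general-purpose helper. This sketch uses native promises instead of Q so it stands on its own, and the name sequence is mine:

```javascript
// General-purpose version of the reduce() pattern above: run fn over
// each item strictly one after the other, collecting the results.
function sequence(items, fn) {
  var results = [];
  return items.reduce(function(promise, item) {
    return promise
      .then(function() { return fn(item); })
      .then(function(result) { results.push(result); });
  }, Promise.resolve())
  .then(function() { return results; });
}
```

Calling sequence(filenames, uploader.uploadFile) then gives the same one-at-a-time behaviour as the inline reduce() above.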

However, there is one thing to add, which is error handling. What if one of the promises is rejected? Let’s try that. We’ll make a failing-uploader.js, which we’ll rig to fail on one specific file.

var Q = require('q');
var log4js = require('log4js');
var logger = log4js.getLogger('failing-uploader');

exports.uploadFile = function(filename) {
    var deferred = Q.defer();
    Q.fcall(function() {
        var delay = Math.random() * 4000 + 3000;
        logger.info("Starting upload: " + filename);
        setTimeout(function() {
            if (filename === 'file2.txt') {
                logger.error("Timeout while uploading: " + filename);
                return deferred.reject("Timeout while uploading: " + filename);
            }
            else {
                logger.info("Completed upload: " + filename);
                return deferred.resolve();
            }
        }, delay)
    });
    return deferred.promise;
}

It turns out that an error stops the entire chain. When we modify Example 3 by changing require('./uploader') to require('./failing-uploader') and then run it, we get:

[18:04:47.576] [INFO] failing-uploader - Starting upload: file1.jpg
[18:04:53.896] [INFO] failing-uploader - Completed upload: file1.jpg
[18:04:53.903] [INFO] failing-uploader - Starting upload: file2.txt
[18:04:59.701] [ERROR] failing-uploader - Timeout while uploading: file2.txt
[18:04:59.702] [ERROR] upload-example-3 - Timeout while uploading: file2.txt

This might be what you want, or maybe you want to just register the error while continuing uploading the other files. In that case, you need to modify the callback to reduce() , for example like this:

var Q = require('q');
var log4js = require('log4js');
var logger = log4js.getLogger('upload-example-4');
var uploader = require('./failing-uploader');

var filenames = ['file1.jpg', 'file2.txt', 'file3.pdf'];
var results = [];

var lastPromise = filenames.reduce(function(promise, filename) {
    return promise.then(function() {
        results.push(true);
        return uploader.uploadFile(filename);
    })
    .catch(function(error) {
        results.push(false);
        logger.error("Caught an error but continuing with the other uploads.");
    });
}, Q.resolve());

lastPromise
    .then(function() {
        // Remove the first result, which is <true> returned by
        // the seed promise Q.resolve().
        // This is a clumsy way of storing and retrieving the results.
        // Suggestions for improvement welcome!
        results.splice(0, 1);
        logger.info("All files uploaded. Results:");
        logger.info(results);
    })
    .catch(function(error) {
        logger.error("Not all files uploaded: " + error);
    });

This will catch the rejection a level deeper, so the chain can continue. The output of this code is:

[18:15:55.659] [INFO] failing-uploader - Starting upload: file1.jpg
[18:15:59.883] [INFO] failing-uploader - Completed upload: file1.jpg
[18:15:59.884] [INFO] failing-uploader - Starting upload: file2.txt
[18:16:05.279] [ERROR] failing-uploader - Timeout while uploading: file2.txt
[18:16:05.279] [ERROR] upload-example-4 - Caught an error but continuing with the other uploads.
[18:16:05.279] [INFO] failing-uploader - Starting upload: file3.pdf
[18:16:10.600] [INFO] failing-uploader - Completed upload: file3.pdf
[18:16:10.601] [INFO] upload-example-4 - All files uploaded. Results:
[18:16:10.601] [INFO] upload-example-4 - [ true, false, true ]

And indeed, the second upload fails, but the chain continues, and at the end, you know what succeeded and what failed.

Our promises have been processed sequentially, with error catching, and continuing on errors anyway.
Posted in JavaScript, node.js, software development

Chatting at work for fun and profit

As I am writing this, I am employed as a software engineer at XS4ALL, the oldest consumer internet provider in the Netherlands.

One of the things that makes people at XS4ALL so effective, is the use of a company IRC server, only accessible by employees.

A what?

IRC is a very old chat protocol. It was developed in 1988 in Finland. What makes IRC different from other chat protocols, like Facebook chat, Skype and ICQ?

The main difference is that IRC is aimed at group conversations. IRC has channels, which can be seen as chat rooms. These channels have a name, and anyone can join them. Anything you say on a channel is broadcast to all people who have joined that channel. Channels persist as long as there is at least one person in them.

It is also possible to have private conversations on IRC.

So, how does this help XS4ALL?

We have several “official” channels on our office IRC server, for example a general channel for everyone, a helpdesk channel, and a sysadmin channel. In principle, everyone is welcome on any channel, but the people from the department that “own” the channel are the ones who mainly talk. The others mainly read.

If an employee has a question that they have not found an answer to elsewhere, they can ask it on the appropriate channel. Hopefully, at least one person from that department is reading along at that moment and can answer the question.

mIRC - A commonly used Windows IRC client

There are several reasons why this simple concept is very powerful.

First, asking the question in a channel instead of a private conversation makes it more likely that you will get a response quickly. You don’t have to depend on one person being active at that moment.

Second, other people will be able to read your question and the answer. They will therefore learn the answers to questions, so they won’t have to ask the same question at a later time.

Third, it is simpler and more informal than a shared email box or a ticketing system. This can lead to faster replies, which allows a customer’s problem to be solved while the customer is still on the phone, instead of the employee who asked the question having to call back the customer at a later time.

Fourth, if one person gives an answer, but a second person knows a better answer, you will also get this better answer. That is a better result than email or a ticket, because it is very rare to receive more than one answer for each inquiry done that way.

Fifth, it is easy to read back an IRC channel after a couple of hours of not reading it, assuming that you keep your IRC software connected to the IRC server. That gives additional opportunities for both learning and giving delayed or improved answers.

irssi - A commonly used unix IRC client

All in all, our IRC server contributes to a significant amount of knowledge sharing among employees. It always impresses me how easy it is to learn a few things, by simply being present on a few IRC channels and reading along.

If your company is not too big, and willing to experiment a little, I can highly recommend trying IRC, or another group-based chat protocol, to improve knowledge sharing and to experience fast, interactive problem solving.

Posted in work