Wifi on IP Cam EM 6220 does not work – or how not to design your UI

I bought an IP cam to spy on my little baby girl.

Camera to monitor the little one

The plan was to set up this camera above her bed. It has wifi, so the lack of network sockets in the bedroom should be no problem.

Setting up wifi on this camera goes like this:

  1. Connect the camera via a UTP cable
  2. Install the app on your phone
  3. Configure the app to connect to your camera
  4. Configure the camera’s wifi via the app
  5. Disconnect the UTP cable

I followed these steps. The app showed that the camera was connected to my wifi network.

Screenshot: the wifi configuration screen says “Connected” – the camera is connected to wifi.

But alas – after unplugging the UTP cable, the app could no longer find the camera.

I tried some things to get it to work, like restarting the camera, plugging and unplugging the network cable, looking at my DHCP server to find out which IP address it had… but nothing worked.

A closer look at my DHCP server suggested that the camera’s wifi connection never got an IP address. Maybe I had entered the wrong password in the camera’s wifi configuration? But the status clearly said “Connected”. Hmm.

To test this hypothesis, I re-entered my wifi password. The app showed:

Screenshot: the wifi configuration screen again says “Connected” – the camera is connected to wifi.

Then I entered an incorrect password to see what the app would show:

Screenshot: the wifi configuration screen still says “Connected” – even though the camera is not connected to wifi.

So, no matter what password I entered, the app always showed “Connected” as the status. But of course, wifi only worked with the correct password.

This makes me wonder whether the makers of this camera have ever eaten their own dog food. I guess not – how else could such a glaring bug end up in the software?


Why we chose JSON-RPC over REST

If you don’t want to mess about with XML, REST is pretty much the industry standard for creating an API.

Initially, we had a REST(ish) API. But after using it internally at HomeRez, we were not very happy with how it worked. So we looked around for alternatives.

In the end, we decided to go for JSON-RPC.

What are the differences?

  • REST uses HTTP or HTTPS. JSON-RPC can use any transport protocol, for example TCP.
  • JSON-RPC has one endpoint URL for all requests. REST uses different URLs for different resources.
  • In JSON-RPC, every request is sent the same way (e.g. via HTTP POST), with the method and parameters in it. In REST, you use the HTTP verbs (GET, POST, PUT, DELETE) for different actions.
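For example, cancelling a reservation (one of the calls we list later in this post) is a single HTTP POST to the one API endpoint. Assuming JSON-RPC 2.0, and with a made-up endpoint and parameter name:

    POST /api HTTP/1.1
    Content-Type: application/json

    {
        "jsonrpc": "2.0",
        "method": "reservation.cancel",
        "params": { "reservationId": 12345 },
        "id": 1
    }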

So, what made us switch from REST to JSON-RPC?

The main reason is that we couldn’t find a good way to map all operations in our API to HTTP verbs. We have several operations that are not pure Create, Read, Update or Delete operations.

For example, we want a call to calculate the rent for staying at an apartment for a specific period. That isn’t really GET /apartment/rent, because we’re not retrieving a “rent” object that we can then PUT to update. It also isn’t something like POST /apartment/calculate_rent, because that isn’t very RESTish.

Cancelling a reservation is another operation that gave us doubts. Calling PUT /reservation/<id> with data { guestFirstName: "John" } seems very different from calling it with data { status: "CANCELLED" }. The first simply updates a field, while the second has a lot of side effects: emails being sent to the guest and the owner, payments being refunded, the apartment becoming available again, etc. Maybe POST /reservation/<id>/cancel would be ok, but that also doesn’t seem very RESTish – after all, you are modifying a reservation.

It became clear to us that we wanted an action-based API, where most of the calls perform actions. Many of those actions are different from the traditional CRUD operations.

One other thing that bothered us is GET requests with lots of parameters. For example, let’s say you want to search for reservations from a guest named “John Doe”. In JSON, the search parameters could look something like this (the field names are just an illustration):
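    {
        "guest": {
            "firstName": "John",
            "lastName": "Doe"
        }
    }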

However, if you put this information in the GET parameters, it becomes a bit tricky. You need to take escaping into account (where & becomes %26). If you just use the traditional ?a=1&b=2 parameters, you don’t have support for sub-structures. You could turn your parameters into a JSON string, URL-encode it, and then decode it on the server – but why make it so complex?

Yes, for a URL that you visit in your browser, it’s great that everything is in the address bar. It’s great that you can bookmark such a search. But we’re talking about an API here, not about a page being visited in the browser.

So, now we post all API calls to the same URL, with a method and a parameter object. Authentication fields are also sent in the parameter object, so we can easily switch our transport layer from HTTPS to something else for better performance, if we want.

An additional advantage is that we can now easily use json-schema both to validate the incoming requests and to auto-generate most of our API documentation.
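For example, a minimal json-schema for the parameters of the cancel call could look like this (again with an illustrative parameter name):

    {
        "type": "object",
        "properties": {
            "reservationId": { "type": "integer" }
        },
        "required": ["reservationId"]
    }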

Examples of our calls are:

  • reservation.create to create a reservation
  • reservation.quote to get a quote (rent calculation) for a specific vacation home and period
  • reservation.cancel to cancel a reservation
  • reservation.list to get a list of reservations based on the search parameters
  • property.list to get a list of properties (vacation homes) based on the search parameters
  • property.rate.list to get all rates of a single vacation home
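For instance, a reservation.quote request body could look like this – the authentication field and the parameter names are made up for the example:

    {
        "jsonrpc": "2.0",
        "method": "reservation.quote",
        "params": {
            "apiKey": "your-api-key",
            "propertyId": 789,
            "arrival": "2015-07-01",
            "departure": "2015-07-08"
        },
        "id": 2
    }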

All in all, we are very happy with this switch.


Checking log4js output in your node.js testsuite

This post will teach you how to test log4js log statements in your code.

You can also find the code below in the src/log4js-unittest dir of my blog code repository.

Let’s say that we have written a simple node.js module, foo.js, along these lines:
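    // foo.js – the function and the log line are just an example
    var log4js = require('log4js');
    var logger = log4js.getLogger('foo');

    function logSomething() {
        logger.info('something');
    }

    module.exports = {
        logSomething: logSomething
    };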

It would be good to verify that logSomething() actually logs something. Let’s use this mocha test file skeleton, foo.spec.js:
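    // foo.spec.js
    var foo = require('./foo');

    describe('foo', function() {
        describe('logSomething', function() {
            it('should log something', function() {
                foo.logSomething();
                // But how do we verify that something was logged?
            });
        });
    });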

We obviously cannot test logSomething()’s return value, so we have to think of something else. Maybe we can somehow stub either log4js.getLogger() or logger.info()?

Normally, I use sinon to create spies, stubs and mocks. With sinon, you can replace object methods or even the objects themselves.

As far as I know, it is not possible to modify the logger object inside foo.js, because foo.spec.js doesn’t have access to the scope of foo.js. We could add the logger object to the foo.js exports, but that doesn’t seem to be a very elegant solution. I’d rather not modify foo.js itself.

Let’s see if we can stub getLogger():
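    // foo.spec.js – a sketch of the stubbing approach; note that
    // foo.js is required *after* the stub is in place
    var sinon = require('sinon');
    var log4js = require('log4js');

    describe('foo', function() {
        var infoStub = sinon.stub();

        before(function() {
            // Make getLogger() return a fake logger with a stubbed info().
            sinon.stub(log4js, 'getLogger').returns({ info: infoStub });
        });

        after(function() {
            log4js.getLogger.restore();
        });

        it('should log something', function() {
            var foo = require('./foo');
            foo.logSomething();
            sinon.assert.calledOnce(infoStub);
        });
    });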

Does it work?

Yes it does!

However, this method relies on being able to stub log4js.getLogger() before foo.js is required. What happens if we require foo.js before stubbing?
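    // foo.spec.js – the same test, but foo.js is required first
    var sinon = require('sinon');
    var log4js = require('log4js');
    var foo = require('./foo');   // foo.js is loaded before the stub exists

    describe('foo', function() {
        var infoStub = sinon.stub();

        before(function() {
            // Too late: the logger object inside foo.js was already
            // created by the real getLogger().
            sinon.stub(log4js, 'getLogger').returns({ info: infoStub });
        });

        after(function() {
            log4js.getLogger.restore();
        });

        it('should log something', function() {
            foo.logSomething();
            sinon.assert.calledOnce(infoStub);   // fails
        });
    });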

Unfortunately, this fails: the stubbed info() is never called.

The reason that this fails is that node.js uses a module cache. The first time a module is required, it is loaded; any follow-up require() calls simply return the loaded module from node.js’s module cache.

This makes the test very fragile. If any other module requires foo.js before this test is run, the logger object inside foo.js has already been created before the getLogger() stub can be used.

What else is there to try?

We could add an appender to log4js that captures any logged lines for verification afterward. Since there did not seem to be such an appender yet, I decided to create one: log4js-test-appender.

The idea is simple: the appender stores every logging event in an array, and the test asserts on that array afterwards. Sketched here with the pre-2.x log4js addAppender() API:
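    var assert = require('assert');
    var log4js = require('log4js');
    var foo = require('./foo');   // the require order no longer matters

    describe('foo', function() {
        var events = [];

        before(function() {
            // The appender is just a function that log4js calls
            // for every logging event.
            log4js.addAppender(function(loggingEvent) {
                events.push(loggingEvent);
            });
        });

        it('should log something', function() {
            foo.logSomething();
            assert.equal(events.length, 1);
            assert.equal(events[0].data[0], 'something');
        });
    });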

There we go, a robust, working test.

The API of log4js-test-appender is not very elegant yet, so if you have any suggestions for improvement, feel free to comment or create a merge request! And if you have an alternative suggestion for testing log4js log statements, I’m all ears.


Combining mongoose and Q in node.js

This post will teach you how to write promise-based mongoose code, using Kris Kowal’s Q library.

You can also find the code below in the src/mongoose-and-q dir of my blog code repository.

We have a mongo database with 3 collections: users, bicycles and cars. Bicycles and cars are owned by users. We will write a script that shows the bicycles and cars owned by rupert@example.com.

We’ll look at multiple versions of this script. This article focuses on callbacks and promises, so we’ll be using a hardcoded email address and we’ll not be handling errors very well. Those aspects of the code are for a different article.

The first version uses callbacks for the database calls.
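A sketch, with made-up models and fields – what matters here is the shape of getVehicles():

    var mongoose = require('mongoose');

    // Illustrative models; the real schemas don't matter here.
    var ownerType = mongoose.Schema.Types.ObjectId;
    var User = mongoose.model('User', new mongoose.Schema({ email: String }));
    var Car = mongoose.model('Car', new mongoose.Schema({ brand: String, owner: ownerType }));
    var Bicycle = mongoose.model('Bicycle', new mongoose.Schema({ brand: String, owner: ownerType }));

    function getVehicles(email, callback) {
        User.findOne({ email: email }, function(err, user) {
            if (err) { return callback(err); }
            Car.find({ owner: user._id }, function(err, cars) {
                if (err) { return callback(err); }
                Bicycle.find({ owner: user._id }, function(err, bicycles) {
                    if (err) { return callback(err); }
                    callback(null, { user: user, cars: cars, bicycles: bicycles });
                });
            });
        });
    }

    mongoose.connect('mongodb://localhost/test');
    getVehicles('rupert@example.com', function(err, result) {
        if (err) { console.error(err); } else { console.log(result); }
        mongoose.disconnect();
    });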

Two things stand out in this code. First, the indenting in getVehicles. It looks bad. It is bad. And if we want to add more callbacks, it gets worse. Second, we need error handling code in each individual callback.

We can rewrite this code to use Q.
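Something like this, where Q(...) wraps the promise returned by mongoose’s exec():

    var Q = require('q');

    function getVehicles(email) {
        var user, cars;
        return Q(User.findOne({ email: email }).exec())
            .then(function(foundUser) {
                user = foundUser;
                return Q(Car.find({ owner: user._id }).exec());
            })
            .then(function(foundCars) {
                cars = foundCars;
                return Q(Bicycle.find({ owner: user._id }).exec());
            })
            .then(function(bicycles) {
                return { user: user, cars: cars, bicycles: bicycles };
            });
    }

    getVehicles('rupert@example.com')
        .then(function(result) { console.log(result); })
        .catch(function(err) { console.error(err); })
        .finally(function() { mongoose.disconnect(); });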

The getVehicles function now returns the promise created by the chain of then calls. The user is found, then the cars, then the bicycles, and then the promise resolves to the same object as in the first listing.

This version of getVehicles looks better regarding both indenting and error handling. The indenting stays on the same level, and the error handling – no matter where the error happens – is done by the catch block after calling getVehicles.

However, there is also a disadvantage to this code. The vars from a then block can’t be used in the next then block, because each consecutive then block has its own function scope. That’s why we have to assign user and cars to helper vars outside their scope.

We can do better. Let’s look at our improved version of getVehicles:
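    function getVehicles(email) {
        return Q(User.findOne({ email: email }).exec())
            .then(function(user) {
                // Retrieve the cars and the bicycles in parallel.
                // Q.all also accepts plain values, so we can pass
                // the user along to the next then block.
                return Q.all([
                    user,
                    Q(Car.find({ owner: user._id }).exec()),
                    Q(Bicycle.find({ owner: user._id }).exec())
                ]);
            })
            .then(function(results) {
                return { user: results[0], cars: results[1], bicycles: results[2] };
            });
    }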

We use Q.all(arrayOfPromises) to retrieve both the cars and the bicycles in one go. Q.all resolves to a list of fulfilment values, which end up in an array in the then block.

This both makes the code shorter and gets rid of the helper variables. There is one minor improvement that we can still make.
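    function getVehicles(email) {
        return Q(User.findOne({ email: email }).exec())
            .then(function(user) {
                // .spread() below calls .all() on this array for us.
                return [
                    user,
                    Q(Car.find({ owner: user._id }).exec()),
                    Q(Bicycle.find({ owner: user._id }).exec())
                ];
            })
            .spread(function(user, cars, bicycles) {
                return { user: user, cars: cars, bicycles: bicycles };
            });
    }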

We replace .then by .spread, so the bicycles and cars end up in one argument each, instead of in an array of results. Additionally, because .spread calls .all initially, we can remove Q.all and return just the array of promises. This code is more readable and maintainable than the previous version.


Hiding your sshd with ufw and knockd on Ubuntu

I don’t like it if malicious programs or people try to hack my server.

I do, however, like to have access to my server, via ssh of course. Not just from home, but from wherever I happen to be.

Fortunately, it’s possible to have the best of both worlds on my Ubuntu server, using a simple piece of software called knockd.

When using knockd, your sshd can be firewalled by default. You can open up a temporary hole in the firewall by sequentially connecting to a few ports, as defined by you in knockd’s config file. Then you can ssh to your server.

To set this up with ufw, I started by closing port 22 to the world:
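    # Something like this – the exact rule depends on your existing
    # ufw setup. With a default-deny policy for incoming traffic,
    # removing the ssh allow rule is enough to close the port.
    sudo ufw default deny incoming
    sudo ufw delete allow ssh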

My /etc/knockd.conf looks something like this:
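    [options]
        UseSyslog

    [opencloseSSH]
        sequence      = 7000,8000,9000,10000
        seq_timeout   = 5
        tcpflags      = syn
        start_command = ufw insert 1 allow from %IP% to any port 22
        cmd_timeout   = 10
        stop_command  = ufw delete allow from %IP% to any port 22

(The ports above are placeholders – pick your own secret sequence.)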

This configures knockd to listen for connections on the 4 specified ports, within 5 seconds of each other. Once the sequence is completed, a hole is opened for 10 seconds using the given ufw commands. This process can easily be followed in /var/log/syslog, which helped me get this working.

On my laptop, I made a small script, using the client side of knockd:
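    #!/bin/sh
    # Knock on the secret port sequence, wait a moment for the
    # firewall hole to open, then start the ssh session.
    # (The hostname and ports here are placeholders.)
    knock myserver.example.com 7000 8000 9000 10000
    sleep 1
    ssh myserver.example.com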

After these easy steps, I can connect to my server anytime, anywhere. Now all I need to do is remember the port sequence!


Wifi USB adapter N300 in Ubuntu server

In my living room, I have a home theatre PC that runs Ubuntu 14.04 with Kodi (formerly XBMC). I wanted to reduce the amount of cables under my TV, so I bought a Sitecom N300 Wi-Fi USB adapter.

It turned out to be pretty easy to configure it from the command-line. Here’s a short guide for you to follow.

First, make sure that the kernel driver is loaded, by running lsmod | grep rtl and checking that rtl8192cu appears in the output. Then install the wpasupplicant package, which can manage connections to WiFi base stations:
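    sudo apt-get install wpasupplicant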

We need to create a config file for this package. The wpa_passphrase tool can generate it, as follows:
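    wpa_passphrase Blauwmutsenpad typeyourpassword | sudo tee /etc/wpa_supplicant.conf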

You should replace Blauwmutsenpad with your own network’s SSID, and typeyourpassword with the password to your network.

Make sure you remove the line with the plaintext password (the commented #psk line that wpa_passphrase generates) from /etc/wpa_supplicant.conf after you’ve created it.

Then we need to tell the network system that there is a wireless network card. To do that, we add the following to /etc/network/interfaces:
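    # The interface name (wlan0 here) may differ on your system.
    auto wlan0
    iface wlan0 inet dhcp
        wpa-conf /etc/wpa_supplicant.conf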

After that, you can activate the wireless connection with:
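    sudo ifup wlan0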

The wireless connection will also be activated automatically after booting.


Processing an array of promises sequentially in node.js

This post describes how you can process an array of promises sequentially – one after another – using Kris Kowal’s Q library in node.js. If you’re just interested in how to do this, and not in the other examples, scroll down to the last code snippet.

Imagine you have an array of filenames, and you want to upload those files to a server. Normally, you’d just fire off the uploads asynchronously and wait for all of them to finish. However, what if the server you upload to has restrictions, for example a maximum of one concurrent upload, or a bandwidth limit?

In this post, we’ll be using the Q library to create and process promises, and log4js to easily get log lines with timestamps. First we create an uploader.js module with a function uploadFile that takes a filename and uploads the file to a server. For demonstration purposes, this function doesn’t actually upload anything, but simply waits for a random time and then fulfills the promise:
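    // uploader.js – a stand-in for a real upload function
    var Q = require('q');
    var log4js = require('log4js');
    var logger = log4js.getLogger();

    function uploadFile(filename) {
        var deferred = Q.defer();
        logger.info('Started uploading ' + filename);
        // Pretend the upload takes between 0 and 2 seconds.
        setTimeout(function() {
            logger.info('Finished uploading ' + filename);
            deferred.resolve(filename);
        }, Math.random() * 2000);
        return deferred.promise;
    }

    module.exports = {
        uploadFile: uploadFile
    };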

The following code uploads a single file:
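    var uploader = require('./uploader');

    uploader.uploadFile('file1.txt')
        .then(function(filename) {
            console.log('Uploaded ' + filename);
        })
        .done();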

The output of this script shows the upload starting, and – after a random delay – finishing.

That’s uploading a single file. This is how you use Q to upload multiple files in parallel:
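    var Q = require('q');
    var uploader = require('./uploader');

    var filenames = ['file1.txt', 'file2.txt', 'file3.txt'];

    // Turn the array of filenames into an array of promises.
    // All three uploads start immediately.
    var promises = filenames.map(function(filename) {
        return uploader.uploadFile(filename);
    });

    Q.allSettled(promises)
        .then(function(results) {
            results.forEach(function(result) {
                console.log(result.state);   // 'fulfilled' or 'rejected'
            });
        })
        .done();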

Here, we have an array of filenames, which we turn into an array of promises by using the map method. After that, we use Q.allSettled() to wait until all promises have either been fulfilled or rejected. We don’t use Q.all() here, because that would stop processing as soon as one of the promises is rejected.

In the output, you can see that the three uploads are started immediately after each other, and run in parallel.

To turn an array of filenames into a sequentially processed array of promises, we can use reduce. If you’ve never used reduce before (here is the documentation), this will look a bit weird:
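    var Q = require('q');
    var uploader = require('./uploader');

    var filenames = ['file1.txt', 'file2.txt', 'file3.txt'];

    filenames.reduce(function(previousPromise, filename) {
        // Start this upload only when the previous one has finished.
        return previousPromise.then(function() {
            return uploader.uploadFile(filename);
        });
    }, Q.resolve())
        .then(function() {
            console.log('All uploads done');
        })
        .done();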

Let’s go through this. We call filenames.reduce() with two arguments, a callback and an initial value. Because we passed a second argument to reduce(), it will call the given callback for each element in filenames. The callback gets two arguments. The first time, the first argument is the initial value and the second argument is the first element of filenames. Each next time, the first argument is the return value of the previous callback invocation, and the second argument is the next element of filenames.

In other words, the first argument of the callback is always a promise, and the second is an array element. We use an “empty promise”, Q.resolve(), as the “seed” for this chain.

Using this code, each next step in the reduce() chain is only called when the previous step has been completed: in the output, each upload only starts after the previous one has finished.

The code turns out to do exactly what we want. The above reduce() solution can be used to perform all kinds of promises sequentially, by inserting the right code in the callback to reduce().

However, there is one thing to add, which is error handling. What if one of the promises is rejected? Let’s try that. We’ll make a failing-uploader.js, which we’ll rig to fail sometimes:
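    // failing-uploader.js – like uploader.js, but rigged so that
    // every second upload fails (one way to make it fail "sometimes")
    var Q = require('q');
    var log4js = require('log4js');
    var logger = log4js.getLogger();

    var uploadCount = 0;

    function uploadFile(filename) {
        var deferred = Q.defer();
        var thisUpload = ++uploadCount;
        logger.info('Started uploading ' + filename);
        setTimeout(function() {
            if (thisUpload % 2 === 0) {
                logger.warn('Failed to upload ' + filename);
                deferred.reject(new Error('Failed to upload ' + filename));
            } else {
                logger.info('Finished uploading ' + filename);
                deferred.resolve(filename);
            }
        }, Math.random() * 2000);
        return deferred.promise;
    }

    module.exports = {
        uploadFile: uploadFile
    };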

It turns out that an error stops the entire chain. When we modify Example 3 by changing require('uploader') to require('failing-uploader') and then run it, we see exactly that: after the first rejected upload, the remaining files are never uploaded.

This might be what you want, or maybe you want to just register the error while continuing to upload the other files. In that case, you need to modify the callback to reduce(), for example like this:
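    var Q = require('q');
    var uploader = require('./failing-uploader');

    var filenames = ['file1.txt', 'file2.txt', 'file3.txt'];
    var results = [];

    filenames.reduce(function(previousPromise, filename) {
        return previousPromise.then(function() {
            return uploader.uploadFile(filename)
                .then(function() {
                    results.push({ filename: filename, state: 'fulfilled' });
                })
                .catch(function(err) {
                    // Register the error, but let the chain continue.
                    results.push({ filename: filename, state: 'rejected', error: err });
                });
        });
    }, Q.resolve())
        .then(function() {
            console.log(results);
        })
        .done();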


This catches the rejection a level deeper, so the chain can continue.

And indeed, the second upload fails, but the chain continues, and at the end, you know what succeeded and what failed.

Our promises have been processed sequentially, with errors caught, and the chain continuing despite them.



Chatting at work for fun and profit

As I am writing this, I am employed as a software engineer at XS4ALL, the oldest consumer internet provider in the Netherlands.

One of the things that makes people at XS4ALL so effective, is the use of a company IRC server, only accessible by employees.

A what?

IRC is a very old chat protocol. It was developed in 1988 in Finland. What makes IRC different from other chat protocols, like Facebook chat, Skype and ICQ?

The main difference is that IRC is mainly aimed at group conversations. IRC has channels, which can be seen as chat rooms. These channels have a name, and anyone can join them. Anything you say on a channel is broadcast to all people who have joined that channel. Channels persist as long as there is at least one person in them.

It is also possible to have private conversations on IRC.


So, how does this help XS4ALL?

We have several “official” channels on our office IRC server, for example a general channel for everyone, a helpdesk channel, and a sysadmin channel. In principle, everyone is welcome on any channel, but the people from the department that “owns” the channel are the ones who mainly talk. The others mainly read.

If an employee has a question that he/she has not found an answer to elsewhere, he/she can ask it on the appropriate channel. Hopefully, at least one person from that department is reading along at that moment, and can answer the question.

mIRC - A commonly used Windows IRC client

There are several reasons why this simple concept is very powerful.

First, asking the question in a channel instead of a private conversation makes it more likely that you will get a response quickly. You don’t have to depend on one person being active at that moment.

Second, other people will be able to read your question and the answer. They will therefore learn the answers to questions, so they won’t have to ask the same question at a later time.

Third, it is simpler and more informal than a shared email box or a ticketing system. This can lead to faster replies, which allows a customer’s problem to be solved while the customer is still on the phone, instead of the employee who asked the question having to call the customer back later.

Fourth, if one person gives an answer but a second person knows a better one, you will get that better answer too. That beats email or a ticketing system, where it is very rare to receive more than one answer to an inquiry.

Fifth, it is easy to read back an IRC channel after a couple of hours of not reading it, assuming that you keep your IRC software connected to the IRC server. That gives additional opportunities for both learning and giving delayed or improved answers.

irssi - A commonly used unix IRC client

All in all, our IRC server contributes to a significant amount of knowledge sharing among employees. It always impresses me how easy it is to learn a few things, by simply being present on a few IRC channels and reading along.

If your company is not too big, and willing to experiment a little, I can highly recommend trying IRC, or another group-based chat protocol, to improve the sharing of knowledge, and experience fast, interactive problem solving.
