OpenHAB tricks for the Aeotec Zwave Doorbell

The Aeotec Zwave Doorbell

When we bought our house it didn’t have a doorbell. I hadn’t consciously considered the utility of a doorbell, but the house is in a cell phone dead zone, so the lack of one has left guests cooling their heels waiting for us to notice their arrival, or knocking frantically if we’re downstairs. Some first-time visitors end up wandering around wondering if they’re even in the right place.

As a result, one of the more practical items I picked up in a recent run of home automation tinkering was the Aeotec Gen5 Zwave doorbell. It plugged neatly into a wall socket that, until the doorbell arrived, had seemed oddly placed high up on the wall over our stairs. The doorbell remote connects to the base unit over a proprietary radio protocol, but the base station speaks the wireless zwave protocol and can connect to many smart home base stations. I had rolled my own base station from a RaspberryPi and an Aeotec Z-Stick Gen5 using the free OpenHAB software. Once the doorbell was paired into the zwave network I laid plans to incorporate it into my setup and make this one smarter than the average doorbell.

OpenHAB Setup

First I had to get OpenHAB talking to the new device. This section assumes you are already somewhat familiar with OpenHAB and administration of Linux packages with apt-get.

Hopefully this part will be much simpler in the future and can just be ignored, but as I was setting this system up the OpenHAB project was in the midst of its transition from 1.x to 2.x versions, such that 1.x versions were no longer being actively built and released while 2.x versions were not yet ready for production use. My challenge was that the Aeotec doorbell had been added to the top-of-tree zwave binding’s product database after the last official builds posted to the OpenHAB .deb repository.

To work around this issue I started out by pointing apt-get to the latest builds of OpenHAB and updating my RaspberryPi to the latest 1.x version of OpenHAB (which was version 1.8.1 at the time this was written).

deb stable main
deb testing main
deb unstable main
sudo apt-get install openhab-runtime/testing

Then it was off to find a build of the zwave binding I could get working with this setup. In the end I was able to download a .jar of a recent successful build of the top-of-tree version 1.9 zwave binding from a Jenkins continuous integration server. Installing the .jar file in my OpenHAB addons folder allowed the binding to find the device, but attempting to configure the device via the HABmin zwave binding web interface crashed the OpenHAB server every time. So… don’t do that. Just look up the doorbell’s zwave node id and code up any items and bindings as needed. Those seemed to work just fine.

Items and Rules

With the doorbell recognized by OpenHAB it was time to bind some items and start automating this sucker. My main motivation was to control the volume of the doorbell to effectively mute it when we didn’t want an audible notification. A bonus would be to instead push a notification to our cellphones, but that will have to wait for a later post.

I started by aiming for just manually controlling the volume and ended up with the following lines added to my OpenHAB items configuration file. When the ON payload is sent with the SWITCH_BINARY command it triggers the doorbell chime to play just like a press on the outside button, which is useful for testing. The doorbell is zwave node 4 on my network; you should replace that in the bindings below with the correct node for your network.

Switch zwave_house_doorbell "Doorbell" { zwave="4:command=SWITCH_BINARY" }
Number zwave_house_doorbell_volume "Doorbell Volume [%d]" { zwave="4:command=configuration,parameter=8" }
Switch rules_house_doorbell_mute "Doorbell Mute"

The second binding is for the doorbell’s volume. I looked up the zwave configuration information for the doorbell on Pepper, a clearinghouse for zwave device info, and found that the volume is configuration parameter 8. OpenHAB has a virtual command that lets you set configuration parameters. So by using a Number item and binding to the CONFIGURATION command with parameter 8 we’ve got an item which controls the volume on a scale from 0 to 10.

Finally, I added a Switch item to act as the mute button. This item has no direct zwave binding, instead this switch is used to control an OpenHAB rule that toggles the doorbell between 0 (no sound) and 10 (max volume).

rule "Doorbell Muter"
when
    Item rules_house_doorbell_mute received command
then
    if (receivedCommand == ON) {
      sendCommand(zwave_house_doorbell_volume, 10)
    } else if (receivedCommand == OFF) {
      sendCommand(zwave_house_doorbell_volume, 0)
    }
end

Trying it out

[Screenshot: Doorbell Controls OpenHAB UI]

At this point you’ll want to go ahead and add the three items above to your home’s OpenHAB sitemap and give it a try. You should see something like this in your default-themed OpenHAB web UI.

You’ll note that the Doorbell Mute doesn’t immediately update the UI for the Doorbell Volume. This is because the Volume item gets updated directly from the doorbell device itself. We could probably use the postUpdate() command in our muting rule to cause the UI to update immediately, but this way you can be sure that the value in the UI is being read from the device itself.
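If immediate UI feedback matters more to you, a variant of the muting rule could optimistically post the update itself. This is only a sketch, using the same item names as above; the device will still overwrite the value with its own report when it syncs:

```
rule "Doorbell Muter"
when
    Item rules_house_doorbell_mute received command
then
    if (receivedCommand == ON) {
      sendCommand(zwave_house_doorbell_volume, 10)
      postUpdate(zwave_house_doorbell_volume, 10)  // update the UI right away
    } else if (receivedCommand == OFF) {
      sendCommand(zwave_house_doorbell_volume, 0)
      postUpdate(zwave_house_doorbell_volume, 0)
    }
end
```

The tradeoff is that the UI can briefly show a value the device hasn’t actually accepted yet.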

Sausage and Potato Stew

Last Sunday we decided to cook for a friend coming over and had an interesting challenge: avoid gluten, yeast, and a few other gotcha ingredients, and ideally serve soft food that’s easy to chew. Striking out on the interwebs I came across a recipe for a “sausage and potato bisque”. Well, it’s not a bisque, it’s a stew, but it seemed like it would be yummy and could be made to meet all of the dietary checkboxes.

In short the recipe was scrumptiously yummy, easy to shop for, easy to prep in parallel if a guest is willing to help in the kitchen (we had 2 chopping stations going in parallel), and leaves a good 40 minutes of simply cooking down on the stove while you hang out.

The recipe seemed amenable to some minor modifications, as well. We ended up putting in a whole clove of chopped garlic, tripling the black pepper, and substituting chopped fresh rosemary for the thyme in the recipe.

I also happened to learn a neat trick for dicing an onion, specifically the “radial cut” method. When you’re chopping only part of an onion, this method seems easier to me than the horizontal slicing method you see elsewhere.

Fixing Verizon DSL’s incorrect DNS response behavior

TL;DR unbreak macports on a Verizon DSL connection by replacing the last octet of your Verizon DNS servers with .14 to disable redirects to a search page for domains which fail to resolve.

Over the holiday break I had reason to update macports on my venerable old laptop, and apparently it was the first time since moving to the new house which is currently served only by Verizon DSL. During the process I got a really unintuitive error in several packages that followed the form:

Error: Failed to configure poppler, consult /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_poppler/poppler/work/poppler-0.29.0/config.log

Error: org.macports.configure for port poppler returned: configure failure: command execution failed
Please see the log file for port poppler for details:

Starting to debug the problem, I noticed that several of the ports had only empty directories where tarball contents should have been. Then I noticed this gem in the logs:

Warning: Your DNS servers incorrectly claim to know the address of nonexistent hosts. This may cause checksum mismatches for some ports. See this page for more information:

Yep. That’s the problem. Fortunately Verizon lets you opt out of this behavior, which it calls Search Assist but which breaks internet standards.

I’ll save you some painful wandering through the site (it tries to lead you through based on modem and firmware version, but didn’t have mine) and just tell you all you have to do is replace the last octet of your Verizon DNS servers with .14 to disable redirects to a search page for domains which fail to resolve.

If you have DNS X.Y.Z.12 just change it to X.Y.Z.14. Done.
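On a machine with static DNS settings the change is a one-line edit per server. Here is a sketch of an /etc/resolv.conf after the change, using the reserved documentation address ranges (192.0.2.x, 198.51.100.x) as placeholders rather than Verizon’s actual server addresses:

```
# Verizon resolvers with the last octet changed from .12 to .14
# to opt out of Search Assist (placeholder addresses shown)
nameserver 192.0.2.14
nameserver 198.51.100.14
```

If your router hands out DNS via DHCP, make the equivalent change in the router’s DNS settings instead so every device on the network benefits.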

Then follow the instructions at the macports MisbehavingServers page for any port that seems to be broken or just plain wonky and everything should be back on track. I had broken versions of autoconf and gawk in my tree, so pretty much nothing worked, making the issue obvious, but if it had been some dependency deep in the build I might have stumbled around for hours.

New Music: Break Out the Old Funk

This track grew out of a study I was doing on classic breakbeat drum patterns and playing around with groove. The mix is not perfect, but then I’m not an expert mastering tech so I’m happy to live with what I can get.

I ended up pumping up the compression to really make the drums smack, and the driving gated bass line will reward anyone with good headphones or actual woofers on their speakers. The tone is set predominantly with subtractive poly synths running square and sawtooth waves with distortion layered on top and a breathy pad underneath in portions to support the mid-ranges. Enjoy!

A Retrospective Remix

I started writing electronic music quite a while ago, so I have a lot of old stuff lying around on my hard drive from the “early days”. This week I noticed one of these songs was last edited pretty much exactly 10 years ago. It’s simple, but not bad. Here it is:

So I decided it was time for a remix. I kept it at the same tempo, but went a bit more up-beat with a club-ish feel and some new bass grooves, melodies, and harmonies. Had some fun with gated filter effects on the original music loops as well.

Thrust Code Recipe: multi-dimensional segmented reduction

Recently I was attempting to port some scientific code to run on the CPU and GPU via Thrust. Thrust is a really powerful algorithms library in C++ similar to the Standard Template Library (STL), but it mainly operates on linear vectors of values. The code I was parallelizing didn’t seem to match this model well at first. Fortunately Thrust provides some real power-tools, and here’s the recipe I came up with.


for (double theta = thetaLow; theta < thetaHigh; theta += dtheta) {
   const double dphi = (phiHigh - phiLow)/100;
   const double outer_value = compute_outer(dphi, theta);
   for (int j = 0; j < numJ; j++) {
      for (int i = 0; i < numI; i++) {
         for (double phi = phiLow; phi < phiHigh; phi += dphi) {
            double value = compute_value(theta, phi, i, j, outer_value);
            result_buckets[i][j] += value;
         }
      }
   }
}

Several features make this code a poor match for flat-vector algorithms:
  • Multiple loop nests with non-canonical forms
  • Data structures that are multi-dimensional
  • The reduction is segmented
  • Outer loop values are shared across inner loop computations
  • Traversal of reduction segments is sparse, not dense

This problem seems like a weird combination of outer-product then reduction, similar to a form of MapReduce. It generates four dimensions of values and then integrates them down into two dimensions. Fortunately each computation is independent and the reduction is commutative, so we can extract all the parallelism in these loops.


The original loop nest from outer to inner was [theta][j][i][phi]; the functionality, however, is to do a summation over all [theta][phi] values for each [i][j] pair. To parallelize this we’ll turn that serial summation into a parallel reduction, and because each [i][j] pair is independent of the others we can do all numI × numJ of these reductions in parallel, too.

To achieve this we want to create a reduction operation across all [i][j][theta][phi] values but have the reduction operations routed to specific [i][j] buckets. By generating a key value for each [i][j] bucket we can use thrust::reduce_by_key to achieve the desired segmenting of the reduction.

Since the values we’re generating as part of the reduction are based on indices and reading stateless data in compute_value we can implement the body of the computation as a transform operation on the linearized index of all [i][j][theta][phi].


The recipe combines:
  • custom thrust::unary_function subclasses to both create the key segmentation for the reduction and to generate the actual computed values
  • explicit computation of the index space for the non-canonical loops
  • thrust::reduce_by_key to perform the reduction
  • thrust::counting_iterator to generate an implicit computed input sequence for iterating over (our flattened “loop indices”)
  • thrust::transform_iterator to get those unary functions to act on the reduction as well as perform kernel fusion (a good thing!)
  • thrust::discard_iterator so we don’t waste time or space on output that isn’t needed

Let’s do it!

Unary Functors!
We use two unary functions (functions that transform a single input into a single output) in this recipe. While they look like verbose C++, the functionality is straightforward.

The first unary function is responsible for transforming a sequence value from [i][j][theta][phi] into a key value representing an [i][j] bucket.

struct key_functor : public thrust::unary_function<int,int>
{
  const int Nphi, Ntheta;
  key_functor(int _Nphi, int _Ntheta) : Nphi(_Nphi), Ntheta(_Ntheta) {}

  __host__ __device__
  int operator()(int x) const { return x / (Nphi * Ntheta); }
};

The second unary function computes the pre-reduction value for every point in our iteration space; it implements the body of the loops we’re parallelizing. It operates on a single element of our [i][j][theta][phi] space, producing one output value per element as input to the final reduction.

struct value_functor : public thrust::unary_function<int,double>
{
  const int Nphi, Ntheta, Nj, Ni;
  const double dtheta, thetaLow;
  const double dphi, phiLow;

  value_functor(int _Nphi, int _Ntheta, int _Nj, int _Ni,
                double _dtheta, double _thetaLow,
                double _dphi, double _phiLow) :
                Nphi(_Nphi), Ntheta(_Ntheta), Nj(_Nj), Ni(_Ni),
                dtheta(_dtheta), thetaLow(_thetaLow),
                dphi(_dphi), phiLow(_phiLow) {}

  __host__ __device__
  double operator()(int index) const
  {
    // Recover the original loop indices from the linearized index.
    // "Loop" order is now [i][j][theta][phi].
    const int phiIdx   = index % Nphi;
    const int thetaIdx = (index / Nphi) % Ntheta;
    const int jIdx     = (index / (Nphi * Ntheta)) % Nj;
    const int iIdx     = (index / (Nphi * Ntheta * Nj)) % Ni;
    // Generate the original loop values from the base value and stride
    const double theta = thetaLow + thetaIdx * dtheta;
    const double phi   = phiLow + phiIdx * dphi;
    // Recompute the outer-loop value (replicate, don't communicate!)
    const double outer_value = compute_outer(dphi, theta);
    // Compute the value for this [i][j][theta][phi] element
    return compute_value(theta, phi, iIdx, jIdx, outer_value);
  }
};

Implicit iterators for efficient indices
Now that we have our unary functions implementing the behavior we want to parallelize, the next challenge is to invoke them properly. To do this we’ll want to create a linear sequence of indices covering all the [i][j][theta][phi] elements.

First we’ll need to figure out the range of theta and phi as those loops were not originally index based. Then, rather than store all of the index data in memory we’ll use an implicit iterator which can compute the needed value on-the-fly.

  const int numPhi   = (phiHigh - phiLow)/dphi; 
  const int numTheta = (thetaHigh - thetaLow)/dtheta;

  // Create a linearized index of [i][j][theta][phi]
  thrust::counting_iterator<int> index_begin(0);
  thrust::counting_iterator<int> index_end(numI * numJ * numPhi * numTheta);

Invoking it all
The final step once we have our input index sequence defined is to invoke the whole thing. Since our unary functions take in this index and then generate the needed bucket key or computational value we can use thrust’s transform_iterator to fuse these computations at compile-time with the implementation of the ultra-optimized thrust reduction operation. This is really important, because it means we won’t have to save off all the intermediate values generated for each element. Since we’ve got a lot of parallelism here, that’s a lot of data-shuffling to save.

using namespace thrust;

// Storage for our reduction output: a linear vector of [i][j] buckets
host_vector<double> output(numI * numJ);

// Capture all the input values needed to generate each element
value_functor value_generator(numPhi, numTheta, numJ, numI,
                              dtheta, thetaLow, dphi, phiLow);

// Do a reduction for each [i][j] bucket across the values for each [theta][phi]
reduce_by_key(
  // first [i][j] bucket key
  make_transform_iterator(index_begin, key_functor(numPhi, numTheta)),
  // last  [i][j] bucket key
  make_transform_iterator(index_end,   key_functor(numPhi, numTheta)),
  // generator of values
  make_transform_iterator(index_begin, value_generator),
  // the compacted key list output is not needed
  make_discard_iterator(),
  // the output of [i][j] buckets with summed values
  output.begin());

Here we’ve allocated a vector for the output; it could just as well be a device_vector. Then we created the value_functor object for value generation, capturing all the input values necessary.

Now we’re on to the invocation of the reduction operation. The first two arguments are transformations of the indices into our reduction bucket keys with the key_functor objects handling the duties. The third argument uses the same input index sequence with our value generating functor to create the inputs to the reduction operation. The fourth argument discards an unneeded output. Finally we provide an iterator argument for our output of the reduction.

Getting the data back out
If, as in my case, this code sits in the middle of a bunch of existing code that uses the multi-dimensional arrays elsewhere, you’ll want to pull the data back out of Thrust’s containers.

thrust::host_vector<double>::iterator iter = output.begin();
for (int i = 0; i < numI; i++, thrust::advance(iter, numJ))
  thrust::copy_n(iter, numJ, &result_buckets[i][0]);

And you’re done!


If you’re a JavaScript or other dynamic language programmer you won’t be used to crafting your continuations and closures by hand like this; the good news is that the lambda language feature in C++11 will slice right through this verbosity soon enough.

However you’ll still have to do the hard work of parallelizing: searching for data or loop dependencies, re-architecting the loop structure, and picking the right primitives for your parallel operation.

Doing this hard work with Thrust has some great payoffs, though. Parallel code that maps onto Thrust will be both portable and will leverage expert-optimized primitive routines on each platform. The code can be run in parallel on both CPUs and GPUs via Thrust’s TBB, OpenMP, and CUDA back-ends, or just serially on the CPU if you prefer to debug the code there (or just run valgrind on it!).

Night musica

Sometimes the composition just flows, it must come out. Seeping forth from a weary mind, eating up the hours. This one was six hours from first phrase recorded to complete with a quick mix and mastering to keep it from being too muddy. Had to do it before it was lost. I’m not sure why it’s named what it is, probably it was just a fun phrase to say. So, enjoy ‘Antithetical Hermetics’…

Night Owl Music

I tend to do a lot of my musical stuff at night. Neighbors probably haven’t appreciated that. In fact, I once got a note on my door after some late night electric guitar jamming which said, “It’s late and your guitar is very loud downstairs, but your technique is clearly improving.” Anyhow, here’s a track I wrote a few years ago out in Cali late at night noodling around with some sampled piano and dulcimer loops and a down-tempo beat. It’s “Gibbous Delight” and it’s kind of frenetic but relaxed at the same time, like composing at 2am. It’s also the only song I’ve written with a dulcimer break…

Anthem for returning

A piece constructed in one furious, late-night composing session on one of the last nights I spent in a loft in downtown St. Louis, MO. The track is ‘Something Nice (Synth Kid Mix)’.

Living downtown was an experience. There wasn’t really much down there in 2006, but the building was incredible with 18ft. tall windows, exposed brick, original wood flooring, no sound-proofing, and all the problems that you could imagine coming with it.

Starting with some Favs

Starting off with some favorite tracks. The first one, ‘Rainy Wednesday’ was actually written on a Wednesday when it was raining probably sometime in 2000 or 2001. Early on capturing something literal about the environment and the mood was a good way to get the composing process started. It’s ended up being one of my wife’s favorites. It has a pretty fast beat, but is also fairly laid back giving it a somewhat calm energy.

The second track, ‘Unknown (Danger Grip)’ is just a down-tempo electronic … thing, the layers and melodies of which developed over time in the mid 2000’s during time spent in Urbana, IL and St. Louis, MO. By the end of the process I was really happy with how all the different parts integrated with each other.