EBD-USB+ Electronic Load – Testing Power Supplies and Batteries

I watched a video from Andreas Spiess reviewing electronic loads; it identified one in particular, the EBD-USB+ from zketech.com, which can be purchased through Amazon or Banggood or wherever. The software is a little hard to find, but it’s available through this link. It comes as a .rar file, which you might need to unpack with 7-Zip or WinZip. You will also need the driver.

The resulting Windows-only software allows one to test two categories of things: batteries, by drawing a set current and recording both current and voltage vs. time, and power supplies, by drawing a steadily increasing current and recording voltage vs. current. The Spiess video explains it really well.

Here are a couple of results I got while trying to locate a power supply capable of furnishing 5V at 2.1A to power a Raspberry Pi 3B. None of the five wall-wart power supplies really made the grade.

The worst of them provided only 4.72V at 2.1A.
The best of them, the power supply that Adafruit sells and recommends for the Raspberry Pi, provided 4.88V at 2.1A.

Thoughts on serial communications

Maybe these issues have been solved many times before. I don’t think there is necessarily a very good solution, though.

The problem is that serial communication is imperfect. Characters get corrupted in transit between two devices. When they do, that in turn breaks carefully thought-out protocols, like “capture characters in a buffer until you see the line end; then process the buffer as a single message.”

It’s a small problem if a message character gets corrupted. The message probably won’t make sense; that can be handled with an error message and some recovery process.

But what if the line end itself gets corrupted? Then your program won’t see it as the conclusion of a message, so it will continue capturing characters into the buffer. Then what? Two messages are now corrupted. Still no big deal. But then you have an increased risk of buffer overflow. ( You are checking to avoid buffer overflow, right? ) How do you handle that? Flush the buffer and hope the next message is better? But the next message will most likely be broken too, because you probably flushed the buffer halfway through it, discarding the first part.

Finally, what do you do if the source goes away? There might be a partial message in the buffer. Should you have a timeout, after which you conclude that the source has crashed, and you need to flush the buffer so the next message that arrives, when the source has recovered, will not also be clobbered?
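One way to combine these defenses looks roughly like this ( an illustrative Python model, not any particular firmware; the buffer limit and timeout values are arbitrary placeholders ):

```python
class LineFramer:
    """Accumulate bytes until a line end; guard against overflow and stalls."""
    MAX_LEN = 64     # flush (and discard) if a "message" grows past this
    TIMEOUT = 2.0    # seconds of silence before a partial message is discarded

    def __init__(self):
        self.buf = bytearray()
        self.last_rx = None

    def feed(self, byte, now):
        """Process one received byte; return a complete message or None."""
        # Stale partial message: the source probably crashed mid-line,
        # so discard the fragment rather than clobber the next message.
        if self.last_rx is not None and self.buf and now - self.last_rx > self.TIMEOUT:
            self.buf.clear()
        self.last_rx = now
        if byte == ord('\n'):
            msg, self.buf = bytes(self.buf), bytearray()
            return msg
        self.buf.append(byte)
        # Overflow: a corrupted line end can make a "message" grow forever.
        if len(self.buf) > self.MAX_LEN:
            self.buf.clear()  # discard and resynchronize on the next line end
        return None
```

The two flushes are exactly the lossy recoveries described above: each one knowingly sacrifices a message or two in exchange for bounded memory and eventual resynchronization.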

Think of these possibilities. Writing embedded software is not necessarily as simple as we’d like. But if we consider all the possible ways something can go wrong, and deal with them carefully, we can make software work well and robustly.

Teensy, touch sensing, and ESP8266 – via MQTT and Raspberry Pi

Whee!

I have a demo/development setup on a small breadboard which powers an ESP-01 ( the small, cheap, 8-pin breakout board for an ESP8266 chip ). In addition to being a programming board, it has 3v3 voltage regulation, a pot, a temperature sensor, a simple LED, and a WS2812 3-color LED. I have the installed ESP8266 running a program to subscribe to an MQTT broker and light the LEDs according to publications on a certain topic.

I’ve developed another small program which listens on a 115200-baud serial connection and accepts JSON messages instructing it to connect to a WiFi network, to publish values to an MQTT topic, or to subscribe to an MQTT topic. It is intended as a general-purpose glue service that lets an otherwise dumb microcontroller communicate with remote telemetry.
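For illustration only — the command name and fields below are made up, not the actual protocol of my program — a dispatcher for such JSON lines might look like:

```python
import json

def dispatch(line, handlers):
    """Parse one JSON line and route it by its (hypothetical) 'cmd' field."""
    try:
        msg = json.loads(line)
    except ValueError:
        return False  # corrupted line: ignore it and wait for the next one
    handler = handlers.get(msg.get("cmd"))
    if handler is None:
        return False  # unknown command
    handler(msg)
    return True

# Example use, with made-up commands:
# dispatch('{"cmd": "publish", "topic": "leds", "value": 42}', handlers)
```

Returning False on a parse failure, rather than raising, fits the serial-robustness theme above: a garbled line costs one message, not the whole service.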

I also have a Teensy 3.1, which supports capacitive touch sensing. It interprets the touch values as control signals and publishes them via the text-to-MQTT glue service, whence they control the aforementioned LEDs. The program that does this is also on GitHub.

It all works, sort of. I’m considering it to be at the late-alpha stage of development.

Securing a Raspberry Pi IoT Gateway

I believe that the UMass IT policy forbids “rogue” WiFi gateways in order to prevent anonymous Internet access, so that nefarious actors can be identified.

I needed to create an IoT server for my lab, M5, and it needed to be accessible via WiFi. It also needed to have Internet access so that I could keep its Linux software up to date.

Securing it in accordance with the IT policy, and preventing tampering, required several actions:

  • disable forwarding of anything coming in from the WiFi port wlan0 to the ethernet port eth0
  • limit the user accounts to the minimum necessary
  • secure the privileged account “pi” by changing its password
  • disable remote root account login

I will address the first and the last of these actions.

Disabling of forwarding between the WiFi port and the ethernet port

There is a baffling mechanism called “iptables” that routes packets between the local host and its various network ports. Luckily, I had to deal only with the FORWARD chain. I simply had to flush the current rules out of the FORWARD chain:

sudo iptables -F FORWARD

and add one to drop anything arriving on wlan0 that would be forwarded out eth0:

sudo iptables -A FORWARD -i wlan0 -o eth0 -j DROP

Once the changes are made, they are made permanent by saving the tables into a rules file that is consulted at boot time ( on Raspbian, by the iptables-persistent package ):

sudo sh -c "iptables-save > /etc/iptables/rules.v4"

Disabling of remote root account login

Edit ( as root ) the file /etc/ssh/sshd_config and add the line

PermitRootLogin no

then restart the SSH service so the change takes effect.


Calculating the Axis of Nutation of my Solar Tracker

Following on the cluster analysis of the last post, I pairwise crossed (as in the vector cross product) the resulting cluster centers, and got three reasonably close vectors after normalization. These were:

0 x 1: (-0.36, 0.93, -0.10)
0 x 2: (-0.23, 0.97, -0.06)
1 x 2: (-0.20, 0.97, -0.14)

While satisfyingly close, I wanted to do better.

Next I decided to take 1M random pairs of points from the raw data, cross and normalize each pair, and then look at component-wise statistics. I skipped pairs closer together than my arbitrary minimum separation of 0.5. The results were not only satisfyingly tight, but also close to the values I got above.

N: 367187
M: (0.30, -0.94, 0.11) ( sum = (108760, -345150, 40391) )
S: (0.10, 0.08, 0.04)

I am now satisfied that I have the axis of nutation: (0.30, -0.94, 0.11).
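In outline, the random-pair procedure looks like this ( an illustrative Python sketch, not my actual analysis code; folding the two antiparallel cross-product families into one hemisphere before averaging is my assumption ):

```python
import math
import random

def cross(a, b):
    """3D cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def axis_estimate(points, n_pairs=100000, min_sep=0.5, seed=1):
    """Cross random pairs of gravity vectors, normalize each cross product,
    and return (count kept, component-wise mean, component-wise std dev)."""
    rng = random.Random(seed)
    kept, sums, sumsq = 0, [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    n = len(points)
    for _ in range(n_pairs):
        a = points[rng.randrange(n)]
        b = points[rng.randrange(n)]
        # Nearly coincident vectors give short, noisy cross products: skip them.
        if norm(tuple(ai - bi for ai, bi in zip(a, b))) < min_sep:
            continue
        c = cross(a, b)
        m = norm(c)
        if m == 0.0:
            continue
        c = tuple(x / m for x in c)
        if c[1] > 0:  # fold antiparallel sign families together (my assumption)
            c = tuple(-x for x in c)
        kept += 1
        for i in range(3):
            sums[i] += c[i]
            sumsq[i] += c[i] * c[i]
    mean = tuple(s / kept for s in sums)
    std = tuple(math.sqrt(max(0.0, sq / kept - mu * mu))
                for sq, mu in zip(sumsq, mean))
    return kept, mean, std
```

The kept-count, mean, and standard deviation correspond to the N, M, and S lines in the results above.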

Update…

Of course, I couldn’t leave well enough alone. Looking at the data over time, I found that data taken before 2018-01-14 16:41 was radically different. That was the time I mounted the monitor onto its arm on the solar tracker. <sheepish grin>

So now the cluster analysis yields

0 x 1: (-0.36, 0.93, -0.10)
0 x 2: (-0.42, 0.90, -0.12)
1 x 2: (-0.44, 0.90, -0.08)

and the random pair analysis yields

N: 272251
M: (0.350, -0.927, 0.099) ( sum = (95366, -252300, 26948) )
S: (0.07, 0.05, 0.02)

Finally, my analysis program now yields

Projection of Gravity Vectors...

normalized nutation axis 'mean': (0.352, -0.931, 0.099)
unnormalized rotation_axis (-0.93, -0.35, 0.00)
nutation_axis (0.352, -0.931, 0.099) x z_axis (0.00, 0.00, 1.00) -> rotation_axis (-0.935, -0.354, 0.000)
Sin theta = 0.995 -> Rotation angle 1.47 radians
Test Projection of nutation axis: (0.00, -0.00, 1.00)
Most recent ( 2018-01-29 18:39:45 ) theta: -10.0 degrees

So the panel, while its elevation is as low as it goes, has an elevation theta of -10 degrees. Here is the 3D view of the raw gravity vectors now. (For more about how to see this chart in 3D, see my previous post.)

and the trace of elevation over the length of one day (yesterday):

I’m not yet completely satisfied. The plot shows a range of elevation from about -7 to about -36 degrees, for a nutation range of 29 degrees. I believe this accounts for less than half of the roughly 70 degrees of travel I think I see.

More investigation to come!

Instrumenting and Analyzing My Solar Tracker

I’m an instrumentation and data collection geek. I love the challenge of developing ways to monitor the systems of my house and make sure they’re working properly. I’ve had a couple of bad experiences where a system failed and not knowing soon enough caused problems.

I have a solar tracker system, which follows the sun whenever it’s up. It once failed with a hydraulic leak, and it was a couple of days before I noticed. Never again!

I’ve built and installed a monitoring system including its own solar panel, a battery, a 9-degree-of-freedom MEMS device of the kind normally used for inertial navigation of unmanned aerial vehicles, a microcontroller, and a LoRa radio with enough range to cover the distance to my house.

I’ll go into the details in another post. What I want to put into this post is the 3D plot of the first telemetry data.

The solar tracker is on a pillar out in my yard; the whole assembly rotates around the vertical axis of the pillar (azimuth), and the panel tilts up and down (the motion is called nutation; the result is called elevation). The inertial navigation system (INS) device measures linear acceleration, angular velocity, and magnetic field, all in 3D. My first task is to develop a coordinate system with one axis parallel to the axis of nutation of the solar tracker.

I have collected thousands of data points, taken at 1 minute intervals, and including the 3 linear accelerations. These mostly reflect the gravity vector. I’ve now plotted these in 3D; they should all lie in a plane, perpendicular to the axis of nutation. I can see this in the plot. (Look at the picture with 3D glasses, red filter for left eye, cyan filter for right eye.)

My next step is to calculate cross products for many pairs of vectors. The cross products will be perpendicular to each of the vectors, and so all the cross products should be collinear with each other, and also with the axis of nutation.

With 40,000 data points, there would be n(n-1) ordered pairs, and hence about 1.6 billion cross products. That’s a lot of computation. But I reckon that crossing vectors that are close together will yield poor results and account for a lot of noise. So I’ll cross only those vectors that are farther apart than some arbitrary minimum distance.

Quaternions are fun!

Once upon a time, I was a math major. I shunned engineering as not pure enough — I wanted to be a scientist. And I didn’t see that most of the really good mathematics is developed in support of engineering and the other sciences, particularly physics. Fourier analysis. Eigenfunctions. Complex numbers. And linear algebra.

I’ve always loved computer graphics, and especially plotting of 3D clouds of points. In order to visualize a cloud of points in 3D, one has to generate a right-eye picture and a left-eye picture, and somehow present these so the correct eye gets the correct picture. The last bit is easy – put the pictures next to each other, the R picture on the L and vice versa, and cross your eyes to get it all working. There are a bunch of other techniques, but this is my own favorite. (Yeah, I haven’t played with 3D goggles yet, other than the venerable old View-Masters, which were absolute magic when I was 7 years old.)

The other part is a little more involved, mathematically. To generate an image from the perspective of a given eye, map the cloud so that the x-y plane of the display is perpendicular to the eyepoint vector. Then, if you want perspective, shrink the cloud in the distance by moving each point closer to the eyepoint vector by a proportion that varies with the eye distance.

Then do it all again for the other eyepoint.
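The two-eye pipeline can be sketched like this ( illustrative Python; the toe-in angle parameter and the simple perspective divide are my simplifications, not a full camera model ):

```python
import math

def project(points, eye_angle, eye_dist):
    """Project a 3D cloud to 2D for one eye: yaw the cloud toward that
    eye, then apply a simple perspective divide."""
    out = []
    c, s = math.cos(eye_angle), math.sin(eye_angle)
    for x, y, z in points:
        # Rotate about the y (vertical) axis so the eyepoint vector
        # is perpendicular to the x-y display plane.
        xr, zr = c * x + s * z, -s * x + c * z
        scale = eye_dist / (eye_dist - zr)  # nearer points grow, farther shrink
        out.append((xr * scale, y * scale))
    return out

def stereo_pair(points, half_angle=0.05, eye_dist=10.0):
    # For cross-eyed viewing, display the right-eye image on the left
    # and vice versa.
    return (project(points, +half_angle, eye_dist),
            project(points, -half_angle, eye_dist))
```

With half_angle set to zero the two images coincide; a small nonzero angle produces the left/right disparity that the crossed eyes fuse into depth.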

I’ve been aware of quaternions since my math days, but have never looked closely at them. They can be viewed from the perspective of complex analysis, but are better viewed as linear algebra.

What’s cool about them (from my limited view) is that they provide an elegant and simple way of rotating vectors in 3-space. Here’s the reference that I used to learn how. And here’s the inevitable Wikipedia link ( and another ) to provide more of an introduction.
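A bare-bones sketch of that rotation trick ( plain Python, Hamilton convention; rotating a vector v by the unit quaternion q as q v q* ):

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(p, q):
    """Hamilton product of two quaternions."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(v, q):
    """Rotate vector v by unit quaternion q via q v q*."""
    qc = (q[0], -q[1], -q[2], -q[3])            # conjugate
    w = quat_mul(quat_mul(q, (0.0, *v)), qc)    # embed v as (0, x, y, z)
    return w[1:]
```

The elegance is that composing rotations is just multiplying quaternions, with none of the gimbal-lock trouble of Euler angles.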

Documentation!

Whew. I just read LadyAda’s post about using Doxygen to document a project, and setting GitHub up not only to host the files, but also to serve the documentation as HTML. I managed to do most of it!

I created a separate GitHub project according to her instructions, restructured my code as a library with an example (main) program, and pushed the code up to GitHub.

I read (some of) the Doxygen manual, and added a bunch of structured comments to my library code. I installed and ran Doxygen, fixed the documentation, many times, and got it all working.

I set up a Travis account and followed the instructions, but couldn’t get everything to work. It’s supposed to set up a virtual machine, configure it, compile your code, do some additional diagnostics, and then automatically generate the documentation using its own copy of Doxygen. Unfortunately, this particular code is written for the Teensy platform and won’t run on anything else. The Teensy setup for the Arduino IDE involves first installing the Arduino IDE, then running a Teensy installer on top of it, and I couldn’t figure out how to do all that with command-line scripts. (Probably I could, but didn’t want to take the time to try. Lazy!)

But I could simply upload the documents directly into the gh-pages branch of my project and ask GitHub to serve them. So I did.

They can be found here.

I like this. I think this will be how I document future open-source projects.

Doxygen seems pretty cool

I’ve just (belatedly) added Doxygen commentary to the firmware I’ve written for the OpenEffects Project hardware. It was half a day’s work, but it was reasonably painless to install Doxygen and Graphviz dot, even starting from scratch as I was.

I had blanched at the idea of manually diagramming the class hierarchy and arranging the comments so they made sense. Doxygen handles it well enough that I didn’t have to struggle (much).

The overloading of potentiometers

Creating complex firmware for existing hardware is often complicated by human-interface issues that were unforeseen ( or ignored ) by the hardware designer. One example of this situation comes up when a project requires more settings than there are physical controls. Modern cameras provide examples of the problem, and also of its solution.

Consider the case of a camera with a touchscreen. Problem solved! The touchscreen provides the capability of having a given location on the screen provide control for many different things, each made available with a different screen display.

Some modern cameras, however, have a few buttons which could be called multipurpose. In one mode, a given button might be mapped to a different function than it is in another mode.

I’m currently working with the OpenEffects Project hardware, which sports 4 potentiometers. I want to experiment with quite a few different effects and explore their interactions with my electric violin. The number of parameters for these functions in total is about 40. I broke up the settings so that each effects module had its own settings mode. Its settings are displayed and can be changed while its mode is active.

Since this design overloads the potentiometers, I had to think long and hard about what should happen when the mode is changed, since the pot can’t move to reflect the ( different ) setting of the ( different ) parameter of the mode’s ( different ) effect module.

I chose to ignore each pot until it is moved, at which time the related effect module setting will ( unfortunately ) jump to the current pot setting.  (Note – consider using a rotary encoder rather than a pot in your designs!)

The need to ignore a pot when the mode changes brings up a *different* gotcha. Reading a pot’s voltage several times will often yield slightly different values, thanks to ADC noise. This causes the pot value to be perceived as changed. Rats. So I had to add a guard band around the current setting, treating any reading within the guard band as unchanged.

I also added an EWMA ( exponentially weighted moving average ) smoothing algorithm to minimize the required size of the guard band. The larger the guard band, the more one has to move the pot before the change is noticed. The longer the averaging period, the more sluggishly the reaction to a change occurs and settles.
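The guard band and the EWMA combine roughly like this ( an illustrative Python model, not the actual firmware, which reads a real ADC; the alpha and guard values are arbitrary placeholders ):

```python
class PotReader:
    """Smooth raw ADC readings with an EWMA and ignore changes inside a
    guard band, so ADC noise doesn't masquerade as the user turning the pot."""

    def __init__(self, alpha=0.2, guard=8):
        self.alpha = alpha    # EWMA weight: higher = faster response, more noise
        self.guard = guard    # dead band (in ADC counts) around the accepted value
        self.smoothed = None
        self.accepted = None

    def update(self, raw):
        """Feed one raw reading; return the new value if the pot really
        moved, else None."""
        if self.smoothed is None:
            self.smoothed = float(raw)
            self.accepted = raw
            return None
        # EWMA: new estimate moves a fraction alpha toward the raw reading.
        self.smoothed += self.alpha * (raw - self.smoothed)
        # Guard band: only accept a change that escapes the dead band.
        if abs(self.smoothed - self.accepted) > self.guard:
            self.accepted = int(round(self.smoothed))
            return self.accepted
        return None
```

Tightening alpha widens the guard band needed and vice versa, which is exactly the rotational- versus time-sluggishness trade described above.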

I ended up splitting the difference to balance rotational and temporal sluggishness. But it works!