# We are exploring the Adafruit Metro M4 Express
Poking around, wanting to see if we can access the hardware timers, we set ourselves a problem:
We want to see how long it takes to update the NeoPixel’s color. Let’s start by exploring the time library:
```python
import time
print(dir(time))
```

```
['__name__', 'monotonic', 'sleep', 'struct_time', 'localtime', 'mktime', 'time']
```
```python
print(time.time())
```
So it looks like time.time() returns seconds since some fixed epoch, probably midnight, January 1, 1970, which is the standard epoch on Unix systems, but I haven't checked this…
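One way to check the epoch, at least in principle, is to hand 0 seconds to time.localtime(), which the dir() listing above shows is available. This is a sketch run on a host Python for illustration; CircuitPython ports may use a different epoch (some boards count from 2000-01-01), so the result on the Metro M4 could differ:

```python
import time

# Convert 0 seconds-since-epoch back to a calendar date.
# On host CPython this lands on the 1970 Unix epoch (give or take a day,
# depending on the local timezone); a CircuitPython board may report
# a different epoch entirely.
epoch_start = time.localtime(0)
print(epoch_start.tm_year, epoch_start.tm_mon, epoch_start.tm_mday)
```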
Next we'll set up the NeoPixel:
```python
import board
import neopixel

pixel = neopixel.NeoPixel(board.NEOPIXEL, 1)
```
And now we want to see how many updates we get in one second. We don’t expect to get a very consistent number, because we’re not being really accurate with our timing.
```python
t = time.time()
n = 0
while time.time() == t:
    # do nothing until we get a clock change
    pass
t = time.time()
while time.time() == t:
    n += 1
    pixel[0] = [2, 2, 2]  # set the one pixel to a dim white
print(n)
```
A better way to time would be to extend the time interval:
```python
durationOfTest_seconds = 5
secondsUsed = 0
numberOfIterationsCounted = 0
t_seconds = time.time()
while time.time() == t_seconds:
    # wait, doing nothing, until we get a clock change
    pass
t_seconds = time.time()
while secondsUsed < durationOfTest_seconds:
    pixel[0] = [2, 2, 2]
    numberOfIterationsCounted += 1
    if time.time() != t_seconds:
        secondsUsed += 1
        t_seconds += 1
print(numberOfIterationsCounted / durationOfTest_seconds)
```
Better yet would be to keep running the test until the standard deviation dropped below some limit (stdevUpperLimit in the code below). We'll calculate the variance as

$$v(x) = E(x^2) - E(x)^2$$

or, because this is a sample variance,

$$s^2 = \frac{n \sum x_i^2 - \left( \sum x_i \right)^2}{n\,(n-1)}$$
```python
import math

stdevUpperLimit = 20
stdev = stdevUpperLimit + 1

# statistical moments of inertia
n = 0   # number of events summarized
x = 0   # sum of values
x2 = 0  # sum of squares of values

while stdev > stdevUpperLimit:  # keep testing until criterion matched
    t_seconds = time.time()
    while time.time() == t_seconds:
        # wait, doing nothing, until we get a clock change
        pixel[0] = [0, 0, 0]
    numberOfIterationsCounted = 0
    t_seconds = time.time()
    while time.time() == t_seconds:
        pixel[0] = [[5, 0, 0], [0, 5, 0], [0, 0, 5]][n % 3]
        numberOfIterationsCounted += 1
    # completed 1 test event; summarize statistics
    n += 1
    x += numberOfIterationsCounted
    x2 += numberOfIterationsCounted * numberOfIterationsCounted
    # calculate the mean and standard deviation of all the tests so far
    if n > 0:
        mean = x / n
        if n > 1:
            # variance = e(x2) - e(x)**2
            variance = ((x2 * n) - (x * x)) / (n * (n - 1))
            stdev = math.sqrt(variance)
            print("Mean: ", mean, "; Variance: ", variance, "; St Dev: ", stdev)
        else:
            # print("Insufficient tests for variance to be calculated")
            pass
    else:
        print("No tests completed!")
```
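The running-sums variance used above (keeping only n, Σx, and Σx²) can be sanity-checked on a host Python against the statistics module. The sample values here are made up, standing in for per-second iteration counts:

```python
import statistics

# Made-up per-second iteration counts, standing in for real test results.
samples = [1012, 998, 1005, 990, 1001]
n = len(samples)
x = sum(samples)                   # sum of values
x2 = sum(v * v for v in samples)   # sum of squares of values

# Same running-sums formula as in the loop above.
variance = ((x2 * n) - (x * x)) / (n * (n - 1))

# Direct computation for comparison.
direct = statistics.variance(samples)
print(variance, direct)
```

The two results agree, which is reassuring: the running-sums form lets the loop keep statistics without storing every measurement.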
That one stinks: it runs forever if stdevUpperLimit is set too low. So the next iteration needs to reverse everything: run n iterations of updating the NeoPixel and see how long that takes. We can measure elapsed time with time.monotonic(), which gives us a tick count measured from some arbitrary starting point. Here we go:
```python
numberOfIterationsCounted = 0
t_ticks = time.monotonic()
while numberOfIterationsCounted < 10000:
    pixel[0] = [[5, 0, 0], [0, 5, 0], [0, 0, 5]][numberOfIterationsCounted % 3]
    numberOfIterationsCounted += 1
t_final_ticks = time.monotonic()
print(t_final_ticks - t_ticks)
```
It looks like time.monotonic() is a count in seconds, but with high precision. The number we get from the routine above, 8.8 or so for 10,000 updates, is consistent with the roughly 1 ms per update that the earlier programs guessed at. But the variation in the result across runs (from about 7.8 to 8.9 seconds), coupled with the visual impression of unexpected pauses in the flashing of the NeoPixel, suggests to me that there's some overhead in CircuitPython that makes it unwise to count on millisecond-scale timing accuracy.
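That run-to-run variation is worth quantifying rather than eyeballing. The measure-n-iterations approach combines naturally with the mean/stdev bookkeeping from the earlier experiment. Here's a host-Python sketch of that combination; time_loop is a made-up helper timing an empty loop rather than NeoPixel updates, since there's no board attached:

```python
import math
import time

def time_loop(iterations):
    """Time a tight counting loop with time.monotonic(); return elapsed seconds."""
    start = time.monotonic()
    count = 0
    while count < iterations:
        count += 1
    return time.monotonic() - start

# Repeat the measurement to expose run-to-run variation,
# just as the board runs varied between 7.8 and 8.9 seconds.
runs = [time_loop(100_000) for _ in range(5)]
mean = sum(runs) / len(runs)
variance = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)
print(mean, math.sqrt(variance))
```

On a desktop OS the scatter comes from the scheduler; on the Metro M4 it presumably comes from CircuitPython housekeeping such as garbage collection, but either way the standard deviation puts a number on how far a single run can be trusted.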