October 31, 2019

Motor Temperature Estimation Without a Sensor

I just finished building a bunch of Mini Cheetahs which the lab is loaning out to other research groups.  Since we're giving these to people who for the most part aren't hardware-oriented, these robots need to be even more robust than the original one.

One piece of that is preventing the motors from burning out.  During normal operation with a good locomotion controller, the motors barely even get above ambient (even going 3.7 m/s, the motors weren't warm to the touch afterwards).  However, it's really easy to write a controller that rapidly heats up the motors without actually doing anything - railing torque commands back-and-forth due to instability, joints mechanically stuck against something, machine-learning algorithm flailing around wildly, etc.  We haven't burned out any motors on the original Mini Cheetah yet, but I think our lab members all have a pretty good sense of what's good/bad for the robot, and know to shut it down quickly if something bad is happening.  But when the robot is in the hands of a bunch of software and machine learning engineers.....  So to protect the robots, I'm adding in winding temperature sensing and over-temperature protection, to (hopefully) make it impossible to burn out the motors.

Now, the smart way to do this would have probably been to just add a thermistor in the windings, and call it done.  Obviously, I didn't do that, so here's my observer-based workaround.

Overview

The temperature estimate uses an observer to combine two sources of information:  A thermal model of the motor, and a temperature "measurement" based on the resistance temperature coefficient of the copper in the windings. The resistance of the motor is estimated based on applied voltage, measured current, measured speed and known motor parameters, and compared against the resistance at a known temperature.  Sounds pretty simple, right?

Of course not.  If it was, it wouldn't be worth a blog post.  It's not terribly complicated either, but it took a bunch of little hacks to make it actually work.

Thermal Modeling

I'm using a 1st-order thermal model with just thermal mass \(C_{th}\) and thermal resistance to ambient \(R_{th}\).  With temperature rise over ambient \(\Delta T\), thermal power going in \(\dot{Q}_{in}\), and thermal power going out \(\dot{Q}_{out} = \Delta T / R_{th}\), the dynamics are
$$
\Delta \dot{T} = \frac{\dot{Q}_{in} - \dot{Q}_{out}}{C_{th}}
$$
To get \(\dot{Q}_{in}\), I just use \(i^{2} R\), but accounting for the variation from nominal resistance \(R_{0}\) due to \(\Delta T\), i.e. its temperature coefficient of resistance \(\alpha\) (.00393 per degree C for copper):
$$
\dot{Q}_{in} = (1 + \alpha\Delta T)R_{0}i^{2}
$$
In (slightly simplified) code, the model part of the observer is updated as follows, by Euler-integrating the thermal model.


delta_t = observer.temperature - T_AMBIENT;                                      // temperature rise over ambient
observer.qd_in = R_NOMINAL*(1.0f + .00393f*delta_t)*controller.i*controller.i;  // I^2*R, with temperature-corrected R
observer.qd_out = delta_t/R_TH;                                                  // heat flow out to ambient
observer.temperature += DT*(observer.qd_in - observer.qd_out)/C_TH;              // Euler integration

An important implementation detail: in the actual firmware, I do the last line of math in doubles, rather than floats like everything else.  My sample period DT is very small (my loops run at 20-40 kHz depending on the motor I'm driving), and \( \frac{\dot{Q}_{in} - \dot{Q}_{out}}{C_{th}} \) is also pretty small since the thermal dynamics are very slow.  As a result, the change in temperature over one loop cycle gets rounded down to zero when you use floats.  Since STM32F4's are crap-tastic at double math, this single line takes a substantial fraction of my 25 microsecond loop period when I'm running at 40 kHz.  I'm sure there's a way to do this avoiding doubles, but I have just enough computational headroom that I don't care.
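Concretely, the fix looks something like this - a sketch, not the exact firmware, and the struct layout is assumed:

// Keep the temperature state as a double; everything else stays float //
typedef struct {
    double temperature;   // per-cycle steps can be ~1e-6 C, below float
                          // resolution near 100 C, so a float state stalls
    float qd_in;
    float qd_out;
} ThermalObserver;

// The one expensive line, evaluated in double precision //
observer.temperature += (double)DT*((double)observer.qd_in - (double)observer.qd_out)/(double)C_TH;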

Measuring Temperature and Resistance

Assuming we can measure the resistance of the motor perfectly and we know nominal resistance \(R_{0}\)  at some temperature \(T_{0}\), and the temperature coefficient  \(\alpha\) , we can calculate temperature:

$$
T = T_{0} + \left(\frac{R}{R_{0}} -1 \right)\frac{1}{\alpha}
$$

To measure \(R\), use one of the synchronous motor voltage equations.  I use the Q-axis one, because for my surface PM robot motors there's usually not much current on the D axis.

$$
V_{q} = Ri_{q} + L_{q}\frac{di_{q}}{dt} + \omega L_{d}i_{d} + \omega\lambda
$$

We'll assume \( \frac{di_{q}}{dt} \) is zero, since the temperature observer is going to be very low-bandwidth compared to the current control dynamics.  Conveniently if you use the Q-axis voltage equation rather than the D-axis one, the \( \omega L_{d}i_{d} \) term is usually zero, since \(i_{d} \) is only non-zero at high-speeds during field weakening, and in that region you can't get enough q-current into the motor to burn it out anyways.

Since we know \(V_{q}\), \(i_{q}\), \(i_{d}\), \(\omega\), \(L_{d}\), and \(\lambda\), we can just solve the voltage equation for \(R\).  In reality I had to add a trick to get this to work, but I'll get into that later.
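Setting \( \frac{di_{q}}{dt} \) to zero and solving for \(R\):

$$
R = \frac{V_{q} - \omega L_{d}i_{d} - \omega\lambda}{i_{q}}
$$

which is exactly the observer.resistance line in the code below.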

Implementing the Observer

The basic steps in the observer are:
  • Integrate forward the dynamics of the quantity you are estimating
  • Take a sensor reading and calculate the error between the estimate and the sensor reading
  • Use a proportional controller to drive the estimate towards the sensor reading
In code the whole observer looks like this:

// Integrate the thermal model //
delta_t = observer.temperature - T_AMBIENT;
observer.qd_in = R_NOMINAL*(1.0f + .00393f*delta_t)*controller.i*controller.i;
observer.qd_out = delta_t/R_TH;
observer.temperature += DT*(observer.qd_in - observer.qd_out)/C_TH;

// Estimate resistance from the Q-axis voltage equation //
observer.resistance = (controller.v_q - controller.omega*(L_D*controller.i_d + WB))/(controller.i_q);

// Estimate temperature from the resistance temp-co //
// 254.5 ~= 1/.00393, the inverse temperature coefficient //
observer.t_measured = T_AMBIENT + ((observer.resistance/R_NOMINAL) - 1.0f)*254.5f;

// Update observer with the measured temperature //
e = (float)observer.temperature - observer.t_measured;
observer.temperature -= .0001f*e;
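As a sanity check on the gain: the proportional correction acts like a first-order filter dragging the model toward the measurement, with time constant roughly \( DT/k \).  With \(k = 10^{-4}\) and a 40 kHz loop, that's about 0.25 seconds - before the "trust" scaling below shrinks the effective gain further.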

Naively implemented, the above didn't really work - the resistance measurements are terrible, so either you use a very low observer gain and basically run open-loop on just the thermal model, or the temperature estimate varies wildly depending on the speed and torque the motor is operating at.  It took a couple more additions to get it to work reliably.

Voltage Linearization

The first problem I noticed was that the measured resistance changed dramatically as the current varied.  At low currents, the estimated resistance was much higher.  This was caused by nonlinearity in the voltage output of the motor drive due to deadtime.  I tried a few methods of compensating for the deadtime, but I got the best result by scaling my d and q axis voltages based on modulation depth.  I measured the nonlinearity by logging measured current vs commanded modulation depth over a range of currents, putting all the current on the d-axis so the rotor didn't move.


I generated a lookup table to scale the commanded voltages so that current vs commanded voltage is linear.  Around zero commanded voltage, I actually only get about .5 output volts per commanded volt, pre-linearization:
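In code, the linearization can be as simple as a gain interpolated out of the table at the present modulation depth - the following is a sketch under assumed names, table size, and bus voltage, not the actual firmware:

#include <math.h>

#define LUT_SIZE 64
#define V_BUS    24.0f              // hypothetical bus voltage

static float v_gain_lut[LUT_SIZE];  // filled offline from the measured data;
                                    // per the plot above, ~2.0 near zero volts

// Scale the commanded d and q voltages by the gain at the commanded //
// modulation depth, so output voltage vs commanded voltage is linear //
void linearize_dq(float *v_d, float *v_q) {
    float v_mag = sqrtf((*v_d)*(*v_d) + (*v_q)*(*v_q));
    float depth = fminf(v_mag/V_BUS, 1.0f);          // 0 to 1
    float idx_f = depth*(float)(LUT_SIZE - 1);
    int   idx   = (int)idx_f;
    float frac  = idx_f - (float)idx;
    float gain  = (idx < LUT_SIZE - 1)
        ? v_gain_lut[idx]*(1.0f - frac) + v_gain_lut[idx + 1]*frac
        : v_gain_lut[LUT_SIZE - 1];
    *v_d *= gain;
    *v_q *= gain;
}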



State-dependent Gain Scaling
i.e. lazy person's Kalman filter substitute.

Even with the voltage linearization, there are some operating conditions which make it hard to measure the resistance accurately.

  • At very low currents the measurements are bad.  You have a small voltage divided by a small current, so the result is super sensitive to noise or bias in either.
  • At high speeds the measurements are also bad, since the flux linkage term in the voltage equation dominates.  Slight error in the flux linkage constant causes temperature errors, and non-sinusoidal flux linkage and inductances mean there's lots of ripple in the d and q voltages and currents at high speed.
The least sketchy way to incorporate these effects might be to figure out how the covariance of the temperature "measurement" depends on the operating state, and use a Kalman filter to optimize the observer gain based on the changing covariance.

My sketchy alternative was to hand-tune a "trust" function which behaves similarly to inverse covariance, to make the gain really small in states where I don't trust the temperature measurement.  Basically, if the current is below some threshold, make the gain really small, and/or if the speed is above some threshold, make the gain really small.  Around each threshold I have a linear ramp, to smooth things out a bit.  In code, my "trust" function works like this:

// Calculate "trust" based on state //
// speed term ramps 1 -> 0 up to the speed threshold; current term ramps 0 -> 1 up to 10 A //
observer.trust = (1.0f - .004f*fminf(fabsf(controller.dtheta_elec), 250.0f)) * (.01f*fminf(controller.current*controller.current, 100.0f));

// Scale observer gain by "trust" //
// .0001 is the default observer gain //
observer.temperature -= observer.trust*.0001f*e;

When the resistance estimates are the best (current greater than 10 amps, speed near zero), "trust" is equal to 1 and the observer gain is unchanged; at really bad measurement points, the gain goes to zero and the estimate runs open-loop on the thermal model.  I'm sure this would horrify every controls professor I've had, but it works pretty well:

Here's a video of testing with one of the mini cheetah motors, with a thermocouple glued into the windings.  I'm railing the current command between -40A and 40A at 5 Hz, so the motor only spends a very brief period of time at low speed, and most of its time at high speed with low current.  In this test I initialize the temperature estimate at 25 degrees, even though the motor is still at 60 degrees from a previous experiment.  The observer takes about 16 seconds to converge to the actual temperature, but from there on it tracks to within ~5 degrees.  Once the temperature hits 120 degrees, the current limit is throttled back to 14 amps to keep the temperature below 120 C:
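For reference, the throttle itself is about as simple as protection logic gets - a sketch with hypothetical names (the 40 amp normal limit here is just the test's command amplitude):

#include <math.h>

#define T_MAX            120.0f  // winding temperature limit, C
#define I_MAX             40.0f  // normal current limit, A
#define I_MAX_THROTTLED   14.0f  // sustainable current at the limit, A

// Clamp the current command; once the temperature estimate reaches the //
// limit, cut the allowed current back to a thermally sustainable level //
float limit_current(float i_cmd, float temp_est) {
    float i_max = (temp_est < T_MAX) ? I_MAX : I_MAX_THROTTLED;
    return fmaxf(fminf(i_cmd, i_max), -i_max);
}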