intelliger 64 leaky softy naive
Posted: Fri Mar 20, 2020 6:20 pm
The attached .fsm implements a 64-tap Widrow-Hoff LMS adaptive filter. I am calling this an "intelliger".
Indeed, such a device is self-learning, capable of discriminating, categorizing, identifying, cloning, and anticipating.
Here it is configured to learn (clone, identify) the transfer function of a given device, any kind of device, a "plant" as you would say, whose input and output are accessible. The device here is a loudspeaker exhibiting highpass, bell-resonance, and lowpass behavior. Thus, the "plant" is a loudspeaker (modeled digitally for convenience).
The intelliger's "mu" scale factor, which governs the learning-speed / precision tradeoff, can be adjusted from -200 dB (slow learning) to -100 dB (fast learning).
The intelliger is leaky.
The 64 FIR filter weights(i) exhibit a tendency to slowly return to zero.
The leak can be adjusted from -120 dB (slow return to zero) to -80 dB (fast return to zero).
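The "mu" and "leak" settings can be combined into a single weight update. Here is a minimal Python sketch of one leaky LMS step; the function names and the assumption that a dB setting maps to a linear factor via 10^(dB/20) are mine, not taken from the .fsm:

```python
import numpy as np

def db_to_lin(db):
    # Assumption: a dB setting maps to a linear factor as 10^(dB/20).
    return 10.0 ** (db / 20.0)

def leaky_lms_step(w, x, d, mu_db=-100.0, leak_db=-120.0):
    # One Widrow-Hoff LMS update with leakage.
    #   w : current weights(i), a length-64 array
    #   x : the 64 most recent input samples, newest first
    #   d : the current "plant" output sample
    mu = db_to_lin(mu_db)
    leak = db_to_lin(leak_db)
    y = float(np.dot(w, x))      # intelliger output
    e = d - y                    # error signal: plant minus intelliger
    # The leak pulls every weight slowly back toward zero; the mu term
    # is one step of the time-integral of (error * input(i)).
    w = (1.0 - leak) * w + mu * e * x
    return w, y, e
```

At "mu" = -100 dB the per-step correction is 1e-5 times error * input(i), which matches the slow-but-steady learning behavior described here.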
The intelliger is softy.
Non-softy intelligers compute their weights(i) as the time-integral of "the error multiplied by the input(i)".
Softy intelligers compute their weights(i) as the time-integral of the signed square root of "the error multiplied by the input(i)". The idea behind this refinement is that non-softy intelligers generate harsh signals, since they multiply one signal (the error signal) by another signal (the input(i)). Conceptually, softy intelligers replace the error signal by its square root and the input(i) signal by its square root. This way, the multiplication result still has the "dimension" of a signal, and its spectrum is softer. Arithmetic allows a calculation simplification: signed square root(a) * signed square root(b) = signed square root(a * b).
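The softy refinement, including the signed-square-root identity, can be sketched as follows; the names `signed_sqrt` and `softy_lms_step` are hypothetical, my own labels for what the text describes:

```python
import numpy as np

def signed_sqrt(a):
    # Square root that keeps the sign: signed_sqrt(-4) == -2.
    return np.sign(a) * np.sqrt(np.abs(a))

def softy_lms_step(w, x, d, mu=1e-5, leak=0.0):
    # "Softy" variant of the LMS update: the weight correction follows
    # the signed square root of (error * input(i)) instead of the raw
    # product, softening the spectrum of the update signal.
    y = float(np.dot(w, x))
    e = d - y
    # signed_sqrt(e) * signed_sqrt(x) == signed_sqrt(e * x), so a single
    # signed square root of the product is enough.
    w = (1.0 - leak) * w + mu * signed_sqrt(e * x)
    return w, y, e
```

The identity holds because sign(a) * sign(b) = sign(a * b) and sqrt(|a|) * sqrt(|b|) = sqrt(|a * b|), which is why one signed square root of the product suffices in the implementation.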
There is no automatic gain and no filter whatsoever in the "error signal" path: a delayed or phase-corrupted "error signal" ruins the intelliger's learning capability.
The learning capability is real, and outstanding.
Learning is fast when "mu" = -100 dB.
Learning is still present (albeit slow) when "mu" = -200 dB.
The precision is on the order of 1% when progressively dialing "mu" from -100 dB (fast, coarse learning) to -200 dB (slow, fine learning).
Unfortunately, the intelliger appears to fool itself above Fs/8.
Even after reaching 1% precision below Fs/8, the intelliger is incapable of steering its gain correctly past Fs/4.
The 1% precision I am quoting corresponds to a measured 40 dB difference between the "plant" signal spectrum and the "error" signal spectrum.
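As a quick sanity check on the 1% to 40 dB correspondence:

```python
# A measured "error" spectrum sitting 40 dB below the "plant" spectrum
# corresponds to a 1% amplitude ratio, since 20 * log10(0.01) = -40.
ratio = 10.0 ** (-40.0 / 20.0)   # amplitude ratio, approximately 0.01
```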
Unfortunately, in the absence of a "leak" function, instead of converging to better than 1% precision, the intelliger goes unstable. Above Fs/4, the intelliger gain starts gradually increasing, as if plagued by resonance, despite the continuous learning process. The only way to stop the gradual gain increase above Fs/4 is to set "mu" to zero, which actually stops the learning. Thus, there is no way back. Such is the fatal flaw.
The "leak" function allows a way back, as a rescue.
Here is how to use the "leak" function:
1. Enable the "leak".
2. Disengage the learning for a few seconds (set "mu" = 0).
3. Wait until the intelliger gain above Fs/4 goes down.
4. Increase the "leak" if the intelliger gain above Fs/4 doesn't go down fast enough.
5. As soon as you see the intelliger gain above Fs/4 reaching a correct level, re-engage the learning.
Unfortunately, once the "leak" function is enabled, it generates a continuous internal perturbation.
Now the best precision you can hope for below Fs/8 is 5% instead of 1%.
Thus, the "unstable" system that is theoretically capable of 1% precision only achieves 5%.
The "leak" function, as an instability fix, is far from optimal.
Let us try improving the system.
This intelliger misbehavior above Fs/8 may be caused by the imperfect digital implementation of the required time-integral function. It may be the same instability that plagues the Agarwal-Burrus digital IIR filter when asked to produce cutoff frequencies higher than Fs/8.
The Agarwal-Burrus digital IIR filter can be described as a "naive Virtual Analog" filter, or "naive VA filter".
The Agarwal-Burrus digital IIR filter was popularized in Hal Chamberlin's book Musical Applications of Microprocessors. See it here: https://www.earlevel.com/main/2003/03/02/the-digital-state-variable-filter/
The Agarwal-Burrus digital IIR filter was significantly improved over time by replacing its two naive integrators with more sophisticated devices that emulate the time-integral more faithfully.
See Vadim Zavalishin, The Art of VA Filter Design, chapter 3.6 (Trapezoidal integration):
https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.0.0a.pdf
I thus formulate the conjecture that the 64-tap intelliger's stability and precision will be significantly improved by replacing its 64 naive integrators with 64 Vadim Zavalishin trapezoidal integrators.
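The difference between the two integrator flavors can be sketched as follows; the class names are my own labels, and `g` stands for the per-sample integration gain (cutoff-dependent, g = T/2 in the trapezoidal case before any frequency prewarping):

```python
class NaiveIntegrator:
    # Forward-Euler accumulator: the "naive" digital integrator used in
    # the Agarwal-Burrus / Chamberlin structure.
    def __init__(self):
        self.y = 0.0

    def step(self, x, g):
        self.y += g * x
        return self.y

class TrapezoidalIntegrator:
    # Transposed direct-form trapezoidal integrator, after Zavalishin,
    # The Art of VA Filter Design, chapter 3.6.
    def __init__(self):
        self.s = 0.0             # internal state

    def step(self, x, g):
        v = g * x
        y = v + self.s           # y[n] = g*x[n] + s[n-1]
        self.s = y + v           # s[n] = y[n] + g*x[n]
        return y                 # equals y[n-1] + g*(x[n] + x[n-1])
```

On a constant input, the trapezoidal integrator produces a ramp offset by half a sample (0.5, 1.5, 2.5, ...), matching the exact integral at mid-sample points, which is exactly the better time-integral emulation the conjecture relies on.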
Please let me know
Have a nice day