HIFix - open-source Hearing Impairment Fix

Postby steph_tsf » Wed Feb 19, 2020 7:10 pm

Say I've just won the lottery: I can afford to buy Signia overnight, and Knowles too. Then comes the question of how to enjoy life, and the more important question of how to gradually multiply such wealth and transfer it to people. I would orchestrate the business just as usual, with a slight addition. It may end up with ordering the design and mass production of a low-power 32-bit DSP-enabled silicon chip embedding Bluetooth 5.1 (AoA + AoD).

In case I am short of money (no doubt I will be), more money is welcome. All sorts of contributions are welcome. I know there is no free lunch: all contributors need to get rewarded, some way.

There are nearly 8 billion people on Earth, and on average 2% of them will wear some kind of hearing aid that is intelligent enough, and set up properly enough, to preserve their auditory capabilities as long as possible.

Considering that for hygienic and auditory reasons one needs to renew all hearing aids every five years, and that a good-quality stereo combination sells for, say, $75 (FOB China or Taiwan), we are talking about a $2.4 billion per year market (hearing aid bulk production). Possibly the EBIT is 20%, including the warranty service and the cost of returns.
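Spelled out, the arithmetic behind that figure: 8 billion × 2% = 160 million wearers; 160 million ÷ 5 years = 32 million stereo combinations per year; 32 million × $75 = $2.4 billion per year.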

It's an easy business. The demand is certain and sustained, and there is no need for government subsidies as far as the raw hardware is concerned.

The attached sketches will help you decide whether you want to be part of such a project. The project is indeed an open-source Hearing Impairment Fix, one that will be duplicated in low-voltage silicon, billions of times. It comes with a GUI and a fitting methodology for quickly and scientifically setting up all the required parameters. The device of course embeds a high-performance Acoustic Feedback Mitigation scheme. The aim is to ensure that the whole device remains open-source and top-notch. This is mandatory for becoming a de facto standard, decreasing the time spent in fitting, maximizing customer and fitter satisfaction, reaching a beautiful market share, and constantly reinvesting in R&D.

Please keep it simple, however not stupid. See the three introductory sketches. Click on them to enlarge them.
Attachments: HIFIX (650 pix).jpg, LMS (650 pix).jpg, AFM (650 pix).jpg

There is an evolution at the door: relying on a 3-wire TX/RX digital interface (ground, data, power) to replace the conventional balanced-armature speaker with an all-in-one combination of a low-latency digital MEMS mike and a digital Class-D amplified speaker. This way the sound hitting the eardrum gets measured instead of estimated. This way, there may also be more room left inside the hearing aid body to accommodate a larger rechargeable battery and an inductive charger, without any significant cost penalty.

My name is Stéphane Pierre Cnockaert, but everybody calls me Steph. Please give me a click, and a hearing aid so I can listen to it.

Re: HIFix - open-source Hearing Impairment Fix

Postby wlangfor@uoguelph.ca » Wed Feb 19, 2020 9:10 pm

You might also consider the Legendre filter for estimating peripheral depth perception. It's a very intuitive start for a more realistic form of pan.

For instance, it rolls off some treble while keeping most of the bass, so if the left channel has less Legendre filtering it sounds closer to the centre, and vice versa. It is only when items are further away that a true pan law really needs to be implemented.

In the case of system messages or phone hearing aid systems, it's certainly ideal.

Look at my distance project from the past to see a simple implementation of it.

Re: HIFix - open-source Hearing Impairment Fix

Postby steph_tsf » Thu Feb 20, 2020 3:39 am

While checking my post, I realized that I deserve criticism for the way I am representing the Widrow-Hoff LMS machinery (the green stuff). The green stuff should be considered as a whole, a kind of FlowStone component yet to be created.

The way I graphically represented the LMS machinery doesn't show that the green "LMS Algorithm" subsection requires the "X" signal (the plant input signal), just as the green "FIR Filter" subsection also requires the "X" signal.
The way I graphically represented the LMS machinery doesn't show that the green "LMS Algorithm" subsection contains a "true Y, less estimated Y" block.
My current preoccupation is to streamline the required LMS machinery, considered as a whole, so that it becomes an easy-to-use FlowStone component with a minimal number of inputs and outputs, and no confusion.

I remain under the impression that a 64-tap LMS machinery, hence a 64-tap FIR filter duly equipped with the required 64 accumulators, can make FlowStone even more interesting.

In case somebody is willing to see what a 64-tap LMS machinery is capable of, and its convergence speed, I graphically designed one in LTspice, as an analog schematic embedding analog delay lines. It works, exactly like your brain. The LTspice schematic shows all the details required to transpose this into an x86 SSE routine. Worth noting: IIR filters also work with analog delay lines. Let me know if you are interested.
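For readers who prefer code to schematics, here is a minimal scalar C sketch of that 64-tap machinery (the names and structure are mine, not an existing FlowStone component; an x86 SSE version would process four taps per instruction):

#define NTAPS 64

typedef struct {
    float w[NTAPS];   /* adaptive FIR coefficients (the 64 "accumulators") */
    float x[NTAPS];   /* delay line holding past X (plant input) samples   */
    float mu;         /* adaptation step size                              */
} lms_t;

/* One sample tick: returns the estimated Y and adapts toward the true Y. */
float lms_tick(lms_t *s, float x_in, float y_true)
{
    for (int i = NTAPS - 1; i > 0; i--)   /* shift the delay line */
        s->x[i] = s->x[i - 1];
    s->x[0] = x_in;

    float y_est = 0.0f;                   /* FIR filter subsection */
    for (int i = 0; i < NTAPS; i++)
        y_est += s->w[i] * s->x[i];

    float e = y_true - y_est;             /* the "true Y, less estimated Y" block */
    for (int i = 0; i < NTAPS; i++)       /* Widrow-Hoff coefficient update */
        s->w[i] += s->mu * e * s->x[i];

    return y_est;
}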

Later on, there can be an "LMS Machinery II" exposing the above details, allowing more signals to enter the machinery. This way, on condition that one feeds a 50 Hz or 60 Hz signal (power grid) into an "LMS Machinery II" component, comes the possibility to automatically reject that 50 Hz or 60 Hz signal, without any frequency, phase or selectivity adjustment, thus without any risk of drift, in order to reveal a useful signal buried 90 dB below the perturbation.
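As a usage sketch of that "LMS Machinery II" idea, reusing lms_tick() from the sketch above: feed the 50/60 Hz reference in as X and the polluted signal in as true Y; the error signal is then the de-hummed output.

/* Widrow-style adaptive noise canceller, built on the lms_t above. */
float dehum_tick(lms_t *s, float hum_ref, float noisy)
{
    float hum_est = lms_tick(s, hum_ref, noisy); /* clone of the hum    */
    return noisy - hum_est;                      /* buried signal left  */
}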

Possibly nobody has tried a Widrow-Hoff LMS machinery in FlowStone because of the physical audio input/output latency when interfacing with the real, analog world.

Let's start by explaining the basics. You are certainly aware that a dual-channel FFT analyzer, which simultaneously computes the FFT of two channels and then "divides" one FFT by the other to display a transfer function in gain and phase, experiences a difficulty when the two signals are heavily misaligned in the time domain. Of course the phase diagram will exhibit a steep drop, but more surprisingly, the gain curve will get fuzzy and noisy. This is of course because one signal gets less related to the other as soon as it escapes through time. A dual-channel FFT analyzer is not allowed to become a smart device, so smart that it would search for the best correlation and apply a delay to one of the two signals to restore a perfect time alignment: how could such an over-engineered "smart" FFT issue a phase diagram one can trust? Impossible. The best you can do is to hook up a correlator, telling by how much time one signal appears to be delayed. This allows you to apply a digital delay to the signal that appears "in advance". This way you get a super clean gain curve, and you are perfectly aware of the delay.
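As an illustration, here is a minimal brute-force sketch of such a correlator in C (my own naming; a real implementation would use an FFT-based cross-correlation over much longer blocks):

/* Hedged sketch: estimate by how many samples channel b lags channel a,
   by brute-force cross-correlation over a captured block of n samples. */
int estimate_delay(const float *a, const float *b, int n, int max_lag)
{
    int best_lag = 0;
    float best = -1e30f;
    for (int lag = 0; lag <= max_lag; lag++) {
        float acc = 0.0f;
        for (int i = 0; i + lag < n; i++)
            acc += a[i] * b[i + lag];       /* correlation at this lag */
        if (acc > best) { best = acc; best_lag = lag; }
    }
    return best_lag; /* delay the "in advance" channel a by this amount */
}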

The Widrow-Hoff LMS machinery is by nature immune to signals that are misaligned in the time domain. The LMS algorithm is robust enough, and its FIR filter can be made long enough, to cope with a "plant" that embeds a delay.

The Widrow-Hoff LMS machinery generates, in the digital domain, the "fair estimate of the parasitic acoustic leak signal".

Let's start with something easy and simple. The "parasitic acoustic leak" can be implemented in the digital domain, as a kind of simulation, using a 100 µs delay for a 3.4 cm distance, or a 200 µs delay for a 6.4 cm distance. One shall control the leak attenuation (say -40 dB to -10 dB). One shall control the leak frequency response, as a kind of variable bass and treble shelving: a kind of Baxandall tone control, or a 5-band equalizer, or both. The hearing aid also needs to be implemented in the digital domain, as a kind of simulation. All kinds of gains and frequency responses are allowed. The hearing aid can embed all kinds of frequency band splitters, including the many that cause gigantic phase distortion. The transfer function and phase distortion of the hearing aid cannot, in principle and by construction, ruin the LMS convergence, as the LMS machinery only monitors the parasitic acoustic leak transfer function, never the hearing aid transfer function. This means that, in principle and by construction, the LMS convergence can't get ruined by rapidly and constantly changing gains in the various frequency bands, speaking of a multichannel hearing aid. The sooner one can verify this in FlowStone, the better.
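A minimal sketch of such a simulated leak, assuming fs = 48 kHz so that 100 µs rounds to 5 samples; the shelving or 5-band EQ would follow as IIR biquads in series (not shown):

#define LEAK_DELAY 5   /* ~100 us at 48 kHz, i.e. ~3.4 cm of air travel */

/* One sample tick of the simulated parasitic acoustic leak:
   a pure delay followed by a flat attenuation. */
float leak_tick(float in, float leak_gain /* e.g. 0.01f for -40 dB */)
{
    static float dline[LEAK_DELAY];
    static int idx = 0;
    float out = dline[idx];     /* sample written LEAK_DELAY ticks ago */
    dline[idx] = in;
    idx = (idx + 1) % LEAK_DELAY;
    return leak_gain * out;
}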

Finally, let's determine whether a FlowStone LMS machinery can deal with the real world. I mean building some simple hardware consisting of a mike feeding an analog preamplifier, possibly a tone control, then a power amplifier and a speaker. Of course this is going to howl as soon as the amplifier gain is strong and one brings the mike close to the speaker. That's the idea. The PC running FlowStone and the LMS machinery needs to sense a) the analog signal driving the power amplifier, and b) the analog signal delivered by the mike. Internally, in the digital domain, the LMS machinery elaborates the Y "copy" signal, qualified as the "fair estimate of the parasitic acoustic leak signal". Everything is perfect up to here. The fatal issue comes now. For the Acoustic Feedback Mitigation to work, one must convert this digital signal back to analog. This causes a delay. The auxiliary "AFM" input (located after the mike) will get the anti-phase signal much too late. Unfortunately, the LMS machinery not being a time machine, it can't "advance" the elaboration of the "fair estimate of the parasitic acoustic leak signal". The 10 ms to 50 ms delay caused by ASIO, USB and the DAC will ruin the phase of the annihilating signal. In case you have a better idea, please let me know. In theory, the total latency should be no more than one audio sample. Are there workarounds?

By the way, I've made an error in the HIFIX block diagram: in the diagram dated 19/02/2020, the AFM input is placed at the output. I'll correct it later. You need to refer to the "AFM" block diagram; there you'll see how to enter the "fair estimate of the electronic version of the parasitic acoustic leak signal".

The "AFM" bloc diagram may look deterrent at first glance. The red elements are there, as helping hands for the LMS Machinery. There exist implementations, where one is counting on the LMS machinery, not only for learning the parasitic acoustic leak, but also for compensating the non-unity transfer functions of the speaker and the mike. You better don't encumber the LMS machinery with this. Try to stick to the original "contract" consisting of hiring a LMS Machinery, only for learning and cloning the parasitic acoustic leak. There is no free lunch. In case you encumber the LMS Machinery, you force it to dilute its capability, with what it is not intended for. It will behave poorly. Now that we are 100% digital, it costs almost nothing, to add the red elements. Using IIR Biquads, you can mimic or invert (reverse) the transfer function of the mike and the speakers you are relying on, far better than a 64-tap FIR filter (the LMS Machinery) that's incapable of handling resonant devices ... like mikes and speakers are ... that are featuring long impulse responses. Keep in mind that what I am writing here, only concerns a 100% digital simulation, that's not interfacing the real, analog world (major latency issue, see above).

Re: HIFix - open-source Hearing Impairment Fix

Postby wlangfor@uoguelph.ca » Thu Feb 20, 2020 3:18 pm

I like your writing and I follow what you are saying. Interesting that this method is similar to the FFT approach and seems maybe more efficient.

I'll get back to this post later today, busy now. You're quite a read :)

Re: HIFix - open-source Hearing Impairment Fix

Postby steph_tsf » Thu Feb 20, 2020 11:36 pm

Indeed wlang (shall I name you this way?), usually one hooks a 2-channel FFT analyzer onto some "unknown audio processing plant" (the parasitic acoustic leak in our case) that only gives access to its "in" signal and its "out" signal. One asks such a 2-channel FFT analyzer to compute FFT(signal out) "divided" by FFT(signal in) as fast as it can. Unfortunately, most of the time the CPU load is so heavy that one cannot recalculate both FFTs and the "divide" stuff every time a new audio sample arrives. I say "most of the time" because there exist tricks like "progressive FFTs" deserving more publicity.

Anyway, in case you want to generate a "fair estimate" of the "out" signal (in the time domain, thus), you need to compute the inverse FFT of the gain/phase. Please observe that this inverse FFT is, by nature, the "plant" impulse response. This is not finished yet. You also need to inject the "plant" impulse response into the FIR filter, not as a signal, but as "coefficients". Only then can you measure the difference, in the time domain, between the observed "signal out" delivered by the "plant" and the "most probable" signal delivered by the FIR filter aiming at "cloning" the "plant". You do this because you want the LMS machinery to "learn and clone" the "plant" behavior, aka its transfer function in gain and phase. Unfortunately, in case the "plant" transfer function is not static but varies over time, the FFT method is incapable of correctly "learning and cloning", because the two FFTs, the "divide" stuff, the inverse FFT, and the substitution of the FIR filter coefficients never happen each time a new audio sample arrives. The CPU load would be far too heavy.
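To make the sequence of steps concrete, here is a naive, self-contained C sketch of that FFT method, using a brute-force O(n²) DFT instead of a real FFT to keep it short (a real implementation would add windowing and averaging):

/* Hedged sketch: H(k) = DFT(out)/DFT(in); the inverse DFT of H is the
   "plant" impulse response, ready to be loaded as FIR coefficients. */
#include <complex.h>
#include <math.h>

void plant_impulse(const float *in, const float *out, float *h, int n)
{
    float complex H[n];                     /* C99 variable-length array */
    for (int k = 0; k < n; k++) {           /* forward DFTs, bin by bin  */
        float complex X = 0, Y = 0;
        for (int m = 0; m < n; m++) {
            float complex w = cexpf(-2.0f * I * (float)M_PI * k * m / n);
            X += in[m] * w;
            Y += out[m] * w;
        }
        H[k] = Y / (X + 1e-12f);            /* "divide" one FFT by the other */
    }
    for (int t = 0; t < n; t++) {           /* inverse DFT = impulse response */
        float complex acc = 0;
        for (int k = 0; k < n; k++)
            acc += H[k] * cexpf(2.0f * I * (float)M_PI * k * t / n);
        h[t] = crealf(acc) / n;
    }
}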

Now you realize the exact nature and importance of the LMS machinery: an adaptive machinery, a learning machinery, a cloning machinery, an anticipating (forward-looking) machinery, a most-probable-future-determining machinery, all demonstrated by Bernard Widrow and Ted Hoff in the early 1960s. Such an LMS machinery automatically updates itself each time a new piece of information (an audio sample, in our case) arrives. Quite unbelievable, thus. The more data, the better, one could say. The data can consist of audio (this is 1-D), arrays (2-D tables, images), 3-D, 4-D. This exceeds human capabilities. This is genuinely monstrous.

It all started with the "centrifugal governor" invented by Christiaan Huygens, which got mounted on many James Watt steam engines. This was the first man-made, simple and obvious manifestation of negative feedback.

Then came the Perceptron, recognized as the seminal, elementary building block giving birth to A.I. (Artificial Intelligence). There is no need for digital in a Perceptron, just as in negative feedback. The first Perceptron got implemented using variable resistors and analog comparators. Actually, this remains the best Perceptron: it is analog (however not linear), there is no sampling frequency, no brutal Nyquist frequency limit, no harsh quantization.

In case the next technology trend consists of removing complexity wherever possible, the Perceptron is a good candidate for becoming the ubiquitous building block all intelligent machines will massively rely on. One can be massive, remain conceptually simple, and feature robustness thanks to some adequate level of redundancy.

The first application of the Perceptron, still analog back then, triggered a silent revolution. It happened in the US postal service. A photo-sensor matrix costing almost nothing became able to instantly "read" the many U.S. postal codes handwritten on the many envelopes. Consequently, it became possible to automate the many gates managing the many conveyors in charge of routing the U.S. written correspondence. Consequently, in the mid-1960s came the hope that within a couple of years, man-made machines could understand speech, and learn better and faster than humans.

Only a few months later came the LMS machinery. The scientific community could not believe that within a couple of months, two radically different, radically new devices dealing with Artificial Intelligence had come into the world.

They are very different indeed.

The Perceptron is, by nature, analog and nonlinear. It is based on additions and subtractions, thresholds and activation.

The LMS machinery looks "digital-obliged", albeit linear. There is one subtraction (generating the error signal), many digital delays, many accumulators, many adders and many multipliers (multiplying by a constant, never by a signal). They are all linear. In reality, the LMS machinery is not "digital-obliged", because one can replace the digital delays with analog delay lines.

The Perceptron obscured, in some way, the LMS machinery.
The LMS machinery obscured, in some way, the Perceptron.

Cybernetics, a term coined by Norbert Wiener in 1948, became a mess, a Tower of Babel.
It ended up, disappointingly and silently, as a flop.

Consequently, 25 years later, around 1985, when home computers and personal computers appeared, they still embedded no Artificial Intelligence.

The only bit of Artificial Intelligence appeared deeply embedded inside the landline modems operating at 4,800 bit/s and above: the adaptive line equalizer. Possibly there was also a line equalizer inside the 10 Mbit/s Ethernet network cards.

And this remains roughly the same nowadays, in 2020, the only difference being that one can now find line equalizers everywhere: in Ethernet, in HDMI, everywhere it is required to reliably transport information over a transmission medium whose transfer function may vary from one day to another, or from one session to another.

There are some bits of Artificial Intelligence showing nowadays, in case a computer gets connected to a gigantic cloud-based infrastructure providing deceptively simplistic services like Amazon Alexa, Apple Siri, Microsoft Cortana (what a stupid name), and Google's "Hey Google" Voice Assistant (another stupid name). Why is it so? Well, prepare for a surprise. This is because they are all digital. Firstly, there is the sampling frequency, hence the Nyquist frequency limit, that is almost never considered as possibly becoming local and adaptive. Secondly, the imposed sampling frequency introduces a time-domain granularity, possibly preventing natural, massively parallel phenomena from occurring and emerging. And thirdly, with digital comes the temptation to only write sequential code, and never write the revolutionary massively parallel code required to exploit the various emerging properties, radical properties, specific and inherent to massively parallel systems.

I hope you will enjoy reading http://www-isl.stanford.edu/~widrow/papers/j2005thinkingabout.pdf
This is first-hand information elaborated at Stanford, in cooperation with Bernard Widrow himself.

The same goes for Ted Hoff. He soon abandoned the "parallel natural computing" discipline, and went on to design the Intel 4004 microprocessor. https://engineering.stanford.edu/news/ted-hoff-birth-microprocessor-and-beyond

Now, in exchange for this, can you please tell me whether you know a "caveman Hilbert pair" consisting of two IIR biquads in series in the "A" branch, and two slightly different IIR biquads in series in the "B" branch, featuring a user-settable (by green or by Ruby) "centre" frequency labelled Fc, where the phase difference is exactly 90 degrees? This is in the context of building a Hilbert-pair amplitude detector (sum of the squares, in the time domain) that is only required to maintain the 90-degree phase shift, within a 5-degree precision, in the frequency band going from 0.5 Fc to 2.0 Fc. My application consists of detecting the amplitudes seen at the 8 outputs of a bandsplitter array whose centre frequencies are: 63 Hz, 125 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 8 kHz.
I guess tunability improves when replacing the plain normal IIR biquads with "warped" IIR biquads embedding two 1st-order pure phase shifters instead of two delays. Please let me know.

I am naming this a "caveman Hilbert pair", knowing there are better Hilbert pairs out there, consuming four IIR biquads per branch, said to maintain the 90-degree phase difference from 30 Hz to 20 kHz. Clearly this is overkill for my application, which deals with an input signal that is severely bandpass-filtered. Clearly, the selectivity I am imposing inside the bandsplitter (8 frequency bands) must bring ease and simplicity to the eight Hilbert amplitude detectors that follow. Possibly this is the master recipe explaining why one can purchase inexpensive digital hearing aids featuring 12 channels and 6 compressors nowadays.

Speaking of more expensive digital hearing aids, I am positively impressed by the vast array of high-end features available, described in the following document dating back to 2013: https://media.sivantos.com/siemens-website/media/2014/05/Connexx7_tutorial_2013-03_en.pdf.
Please look at the frequency compression feature on page 38. I would like to experiment with this in FlowStone. It is about compressing the DC to 8 kHz spectrum picked up by the mike into a DC to 4 kHz spectrum sent to the miniaturized speaker, or possibly a DC to 2 kHz spectrum in case the hearing loss is severe. Of course this requires some kind of acclimation. Possibly it is worth the effort. Possibly one can push the gain somewhat higher before experiencing howling (Larsen effect), now that the "out" frequency differs from the "in" frequency. Possibly one needs to opt for a 1.618 (golden ratio) frequency compression ratio, to prevent harmonics from counting as feedback. Any idea is welcome. Possibly adding "warped" IIR filters or "warped" FIR filters into the process would enable the frequency compression to obey some special law, easing the acclimation.


Re: HIFix - open-source Hearing Impairment Fix

Postby martinvicanek » Fri Feb 21, 2020 10:56 am

From my experience, a Hilbert pair will not provide a better RMS meter. (To recap, the idea is: the Hilbert pair splits the signal into two branches with a 90-degree phase difference. The square root of the sum of squares yields the instantaneous magnitude.)

The notion of an instantaneous magnitude and phase is appealing but deceptive. First, phase cannot be determined instantaneously (neither can magnitude). You need to analyze a certain finite interval for that. Consequently, a Hilbert transformer will necessarily introduce some delay. There can be no cheating.

Second, the above squaring and summing yields a steady magnitude only for the case of a single sinusoid. In general, however, there will be more than one sinusoid, even if the signal is confined to one octave. As a result, there will be interference terms in the squared signals showing up as ripples in the so-computed magnitude.
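A one-line worked example of those interference terms, taking the analytic signal of two sinusoids within the band:

|A·e^(jω₁t) + B·e^(jω₂t)|² = A² + B² + 2AB·cos((ω₁ − ω₂)t)

So the squared magnitude ripples at the difference frequency, with a peak-to-peak ripple of 4AB.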

Seeing is believing, so why don't you try it out for yourself? Here is your caveman Hilbert pair:
one arm: a first order allpass at center frequency.
the other arm: second order allpass at center frequency and Q=0.244.
That will work within an octave around the center frequency.
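Translating that recipe into a minimal C sketch (my own naming; the allpass coefficients are the standard bilinear-transform ones, with K = tan(π·fc/fs) and Q = 0.244):

/* Caveman Hilbert pair: branch A = 1st-order allpass, branch B =
   2nd-order allpass with Q = 0.244; their outputs sit ~90 degrees
   apart near fc, so sqrt(A^2 + B^2) approximates the magnitude. */
#include <math.h>

typedef struct { float a, z; } ap1_t;            /* 1st-order allpass */
typedef struct { float c0, c1, z1, z2; } ap2_t;  /* 2nd-order allpass */

void pair_init(ap1_t *A, ap2_t *B, float fc, float fs)
{
    float K = tanf((float)M_PI * fc / fs), Q = 0.244f;
    float D = K * K * Q + K + Q;
    A->a = (K - 1.0f) / (K + 1.0f);
    A->z = 0.0f;
    B->c0 = (K * K * Q - K + Q) / D;
    B->c1 = 2.0f * Q * (K * K - 1.0f) / D;
    B->z1 = B->z2 = 0.0f;
}

float ap1_tick(ap1_t *f, float x)                /* phase -90 deg at fc  */
{
    float y = f->a * x + f->z;
    f->z = x - f->a * y;
    return y;
}

float ap2_tick(ap2_t *f, float x)                /* phase -180 deg at fc */
{
    float y = f->c0 * x + f->z1;
    f->z1 = f->c1 * x - f->c1 * y + f->z2;
    f->z2 = x - f->c0 * y;
    return y;
}

float magnitude_tick(ap1_t *A, ap2_t *B, float x)
{
    float u = ap1_tick(A, x), v = ap2_tick(B, x);
    return sqrtf(u * u + v * v);
}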

Re: HIFix - open-source Hearing Impairment Fix

Postby steph_tsf » Mon Feb 24, 2020 12:10 pm

martinvicanek wrote:Seeing is believing, so why don't you try it out for yourself? Here is your caveman Hilbert pair:
one arm: a first order allpass at center frequency. The other arm: second order allpass at center frequency and Q=0.244. That will work within an octave around the center frequency.

Indeed Martin, thanks for the suggestion. I've made a corresponding .fsm, along with two more featuring higher-order Hilbert pairs. Unfortunately, the "Sorry, the board attachment quota has been reached" annoyance forbids me from attaching them. In the meantime, I'll check the ripple of the instantaneous amplitude detector relying on these, in the context of octave-filtered input signals.

Re: HIFix - open-source Hearing Impairment Fix

Postby steph_tsf » Wed Feb 26, 2020 4:34 pm

I am attaching the .fsm of the audio signal processing that could be applied in parallel to the various frequency bands (say, 8 frequency bands).

Attachment: hearing aid RMS det comp gain lim.fsm

It embeds the RMS detector, dynamic range compressor, gain setting, and output limiter. Does this make sense? The lin2dB and dB2lin modules appear to consume most of the CPU%. Can the CPU% decrease significantly if one only requires a 1e-5 precision? That is -100 dBFS, which is enough for the targeted application. Have a nice day.

Re: HIFix - open-source Hearing Impairment Fix

Postby martinvicanek » Wed Feb 26, 2020 11:33 pm

steph_tsf wrote:The lin2dB and db2lin modules appear to consume most of the CPU%. Can the CPU% significantly decrease [...]?

Yes, there are two things you can do:
1. Pack the signal to Mono4. viewtopic.php?f=4&t=6372&p=29259
2. Hop the conversion. No need to update a control signal at every sample. If you want to be cautious, you could interpolate between the hops.
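A sketch of point 2, assuming a hop of 32 samples and the 1e-5 (-100 dBFS) floor mentioned earlier in the thread; the costly log10 runs once per hop and the control value is linearly interpolated in between:

#include <math.h>

#define HOP 32   /* assumed hop size; tune against audible artifacts */

/* Hedged sketch of a hopped lin2dB with linear interpolation. */
float lin2dB_hopped(float lin)
{
    static float cur = -100.0f, step = 0.0f;
    static int count = 0;
    if (count == 0) {
        float target = 20.0f * log10f(lin > 1e-5f ? lin : 1e-5f);
        step = (target - cur) / HOP;  /* ramp toward the new value */
        count = HOP;
    }
    count--;
    cur += step;
    return cur;
}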
