<h1>A More Efficient Digital Resonator</h1>
<p><em>Alexandre R.J. François, 2023-04-16</em></p>
<p>In previous posts, I described my digital resonator model: the main principles in <a href="/music/physics/oscillators/2022/08/08/Digital-Resonator.html">Digital Resonator</a>; a few more geeky details in <a href="/music/physics/oscillators/2022/08/28/Digital-Resonator-II.html">Digital Resonator (II)</a>.</p>
<p>I have now understood how the FFT algorithm’s handling of phase applies in the time domain. I knew it relies only on sines and cosines (via complex numbers), but I had never come across an explanation that helped me understand it well enough. In this post I attempt to give an intuitively comprehensible explanation and outline the implications for my resonator model.</p>
<p>In <a href="/music/physics/oscillators/2022/08/28/Digital-Resonator-II.html">Digital Resonator (II)</a>, I described the amplitude update computations for my resonator model as follows:</p>
<blockquote>
<p>The resonator’s amplitude is updated at each tick of the clock, i.e. for each input sample, from the resonator’s current amplitude value <em>a</em> (in [0,1]), its current position in the oscillation period (waveform value <em>w</em>, in [-1,1]), and the input sample value <em>s</em> (in [-1,1]):<br />
<em>a <- (1-k) * a + k * s * w</em></p>
</blockquote>
<blockquote>
<p>The instantaneous contribution of each input sample value to the amplitude is proportional to <em>s * w</em>, which intuitively will be maximal when peaks in the input signal and peaks in the resonator’s waveform are both equally spaced and aligned, i.e. when they have same frequency and are in phase.</p>
</blockquote>
<blockquote>
<p><strong>In order to account for phase offset, the above calculation is performed for various phases, and the resonator’s amplitude is set to the maximum value across all phases.</strong> (emphasis added)</p>
</blockquote>
<p>In essence, we are looking for the maximum contribution at the given frequency (dictated by the waveform), but we don’t know the phase offset. We take the brute force approach of computing the contribution for many offsets, i.e. sampling the resulting curve at as many points as we can, and then taking the maximum value.</p>
<p>This approach is less than satisfying, as the complexity is linear in the number of phases, i.e. the number of samples in one period, which for low frequencies in the audible spectrum typically reaches a few thousand. It turns out that it is also completely unnecessary!</p>
<p>Here is what made the penny drop for me: suppose the waveform <em>W</em> is a sine curve, then the collection of instantaneous contributions for a given sample value <em>s</em> at each phase is also a (scaled) sine curve (<em>s * W</em>), and so are the accumulated amplitude values, as they are linear combinations of sines, all of the same frequency. The sine shape of the phases plot is quite obvious in the app screenshot below.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;">
<a href="/Oscillators/assets/images/oscillators-resonance-silence.gif" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Oscillators/assets/images/oscillators-resonance-silence.gif" width="180" />
</a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The resonator's amplitudes (at all phases) increase when the generator produces a sinusoidal signal at the resonator's resonant frequency. The amplitude is maximal at the corresponding phase.</td></tr>
</tbody></table>
<p>We want to calculate the amplitude and phase offset of <em>s * W</em>, therefore we only need to compute and accumulate the signal’s contribution at 2 phase values (there are only 2 degrees of freedom!). For a sine waveform <em>sin(x)</em>, the natural candidates are phases 0 and 𝜋/2, i.e. <em>sin(x)</em> and <em>sin(x+𝜋/2) = cos(x)</em>.</p>
<p>This can be formulated and implemented neatly and compactly with complex numbers, but intuitively, instead of computing the amplitude at each phase, the resonator maintains two values, <em>ps</em> and <em>pc</em> (both in [-1,1]), updated at each tick of the clock, i.e. for each input sample, from their current values, the current position in the oscillation period (values <em>ws</em> for the sine waveform and <em>wc</em> for the cosine waveform, both in [-1,1]), and the input sample value <em>s</em> (in [-1,1]):<br />
<em>ps <- (1-k) * ps + k * s * ws</em><br />
<em>pc <- (1-k) * pc + k * s * wc</em></p>
<p>These two values are two sample points on the “all phases” curve. At any tick, the resonator’s amplitude is <em>sqrt(ps*ps + pc*pc)</em> and the phase offset is <em>atan2(ps, pc)</em> (the two-argument arctangent, which, unlike <em>arctan(ps/pc)</em>, resolves the correct quadrant).</p>
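<p>A minimal C++ sketch may make the two-accumulator update concrete. The names (<code>Resonator</code>, <code>update</code>, etc.) and the phase bookkeeping are illustrative, not the Oscillators package API:</p>

```cpp
#include <cmath>
#include <cassert>

// Illustrative two-accumulator resonator: ps and pc are low-pass filtered
// contributions of the input against the sine and cosine waveforms.
struct Resonator {
    double k;          // low-pass filter constant, in [0,1]
    double omega;      // phase increment per sample: 2*pi*frequency/sampleRate
    double phi = 0.0;  // current position in the oscillation period
    double ps = 0.0;   // accumulator against sin
    double pc = 0.0;   // accumulator against cos

    void update(double s) {
        ps = (1.0 - k) * ps + k * s * std::sin(phi);
        pc = (1.0 - k) * pc + k * s * std::cos(phi);
        phi += omega;  // advance by one sample
    }
    double amplitude() const { return std::sqrt(ps * ps + pc * pc); }
    double phase() const { return std::atan2(ps, pc); }
};

// Feed n samples of a pure sine at angular increment inputOmega and
// return the resulting amplitude.
double drive(Resonator& r, double inputOmega, int n) {
    for (int i = 0; i < n; ++i) r.update(std::sin(inputOmega * i));
    return r.amplitude();
}
```

Driving a 441Hz resonator (at 44100Hz sampling, with a small <em>k</em>) with a 441Hz sine for one second settles the amplitude near 0.5 (the mean of sin²), while a 1000Hz input leaves it close to 0.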
<p>This is of course a much more efficient way of implementing digital resonators: the number of computations required per update is much smaller than in the brute force approach, and is independent of the resonator’s frequency. This will certainly allow for resonator banks with more resonators. In addition, the approach of encoding a whole bank of oscillators as one array, with each update a single call to the Accelerate framework, is worth revisiting as it might prove more efficient than the “independent resonators” approach.</p>
<h1>Resonator Banks</h1>
<p><em>2022-11-20</em></p>
<p>In my last post, I wrote: “<em>the really exciting fun starts from something like this: a bank of oscillators tuned at a range of frequencies, that resonate in real-time with the input signal from the microphone, and possibly interact with each other.</em>” Exciting fun might very well be around the corner, as resonator banks are part of the latest version of the <a href="https://github.com/alexandrefrancois/Oscillators">Oscillators package</a> and version 2.0 of the <a href="/Oscillators/">Oscillators app</a>, now <a href="https://apps.apple.com/us/app/oscillators/id1641353759">available for download</a> on the Apple App Store, showcases their use for live audio signal frequency analysis.</p>
<p>In this post I outline my resonator bank designs, and some basic analysis applications showcased in the app.</p>
<h2 id="resonator-banks">Resonator banks</h2>
<p>For now, a resonator bank is just a set of resonators that receive the same input signal, which dictates each resonator’s amplitude independently. The resonators can be tuned to different frequencies and/or have different dynamic properties.</p>
<p>I have experimented with several designs and implementations.</p>
<p>The package offers a resonator bank where all the resonators are modeled in a single manually managed array (Swift only), so that per-sample update operations consist of single calls to the Accelerate framework for the whole bank of oscillators. This is actually quite efficient up to a certain size, as Accelerate operations are parallelized, but efficiency drops when the array exceeds a certain size (likely dictated by memory). Furthermore, this design does not allow leveraging concurrency.</p>
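<p>The flat-array idea can be sketched in plain C++ (without Accelerate): all per-resonator state lives in parallel arrays, so each per-sample update is a single dependency-free pass over contiguous memory, which is the shape of computation that vectorized vDSP-style calls accelerate. The state layout here is illustrative, not the package’s actual layout:</p>

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// Structure-of-arrays resonator bank sketch: one flat array per state
// component, updated for the whole bank in a single loop per input sample.
struct FlatBank {
    std::vector<float> k;          // per-resonator low-pass filter constant
    std::vector<float> waveValue;  // current waveform value of each resonator
    std::vector<float> amplitude;  // accumulated amplitude of each resonator

    void update(float s) {
        const std::size_t n = amplitude.size();
        // no cross-resonator dependency: trivially vectorizable
        for (std::size_t i = 0; i < n; ++i)
            amplitude[i] = (1.0f - k[i]) * amplitude[i] + k[i] * s * waveValue[i];
    }
};
```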
<p>For independent resonators, audio samples must be processed sequentially and in order, but updates can be applied concurrently across resonators. Therefore, an array of individual resonators (which leverage the Accelerate framework) seems to offer the best opportunity to leverage concurrency and therefore afford better scalability. The package features Swift and C++ implementations, for direct performance comparison. Furthermore, resonator banks offer three update functions:</p>
<ul>
<li>Sequential: calls the update function for each resonator sequentially</li>
<li>Concurrent: calls update for each resonator concurrently, with update calls grouped in a fixed number of concurrent tasks</li>
<li>Gradient Frequency heuristic: groups update calls from both ends of the bank, which should work well for Gradient Frequency banks as this should result in tasks of similar complexity (in a Gradient Frequency bank, the resonators are tuned to natural frequencies based on human auditory perception and organized from lowest to highest frequency, and the amount of computation required per update is higher for lower frequencies)</li>
</ul>
<p>Proper evaluation of these designs would require a more thorough and systematic study across various architectures, for which I don’t have the time or resources at the moment. By default the Oscillators app uses the C++ implementation and the Concurrent update function (if the device supports more than 2 concurrent threads). The implementation and update function can be changed on the Settings screens - see the <a href="/Oscillators/#setting-screens">User Guide</a>.</p>
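<p>As a sketch of the Gradient Frequency heuristic: pairing indices from both ends of a frequency-ordered bank (lowest with highest frequency) should yield tasks of roughly equal total cost, since low-frequency resonators cost more per update. This helper is illustrative, not code from the package:</p>

```cpp
#include <vector>
#include <utility>
#include <cstddef>
#include <cassert>

// Pair resonator indices from both ends of a frequency-ordered bank so that
// each pair (one cheap high-frequency + one expensive low-frequency update)
// has roughly the same cost. Illustrative helper, not the package API.
std::vector<std::pair<std::size_t, std::size_t>> pairFromBothEnds(std::size_t count) {
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    if (count == 0) return pairs;
    std::size_t lo = 0, hi = count - 1;
    while (lo < hi) {
        pairs.push_back({lo, hi});  // one task updates this pair of resonators
        ++lo;
        --hi;
    }
    if (lo == hi) pairs.push_back({lo, lo});  // odd count: middle one alone
    return pairs;
}
```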
<h2 id="frequency-analysis">Frequency Analysis</h2>
<p>For frequency analysis, the resonators in the bank are tuned to natural frequencies based on human auditory perception and organized from lowest to highest frequency (Gradient Frequency bank). Such a bank arguably captures relevant frequency information about the input signal, while affording flexibility on resonator frequency range and distribution. The app showcases frequency analysis in an instantaneous plot of live signal analysis, and a spectrogram plot.</p>
<p><strong>Instantaneous Plot</strong>: the amplitude graph plots the current amplitude of each resonator in the bank. The resonators are ordered by increasing frequency from left to right on the horizontal axis.</p>
<p><img src="/Oscillators/assets/images/frequency-analysis.png" alt="Frequency analysis" width="300" /></p>
<p><strong>Spectrogram</strong>: the spectrogram plots the amplitude levels of the resonators in a Gradient Frequency bank over time. In the plot, frequencies are represented on the vertical axis, lowest frequency at the bottom, highest at the top. Time flows along the horizontal axis, towards the left of the screen. Amplitude levels are color mapped, low to high, from green through yellow to red.</p>
<p><img src="/Oscillators/assets/images/spectrogram.png" alt="Spectrogram" width="300" /></p>
<h2 id="dynamics-analysis">Dynamics Analysis</h2>
<p>By using truly dynamic resonator models, we can investigate the effects of dynamics parameters on the analysis. For dynamics analysis, all resonators in the bank are tuned to the same frequency, and each resonator has a different time constant, set as a function of the frequency. The time constant regulates the dynamics of the low-pass filter through which individual contributions from each audio sample are accumulated over time in the resonators. The shorter the time constant the more reactive the resonator.</p>
<p><img src="/Oscillators/assets/images/dynamics-analysis.png" alt="Dynamics analysis" width="300" /></p>
<p>The amplitude graph plots the current amplitude of each resonator in the bank. The resonators are ordered by increasing time constant value from left to right on the horizontal axis.</p>
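<p>The exact mapping from a time constant to the low-pass filter constant <em>k</em> is not given here; a common convention (an assumption on my part, not taken from the post) is <em>k = 1 - exp(-1/(τ·sampleRate))</em>, which makes the filter’s step response reach about 63% of a constant input after τ seconds. A small sketch:</p>

```cpp
#include <cmath>
#include <cassert>

// Assumed mapping from a time constant tau (seconds) to the low-pass
// filter constant k, for a given sample rate. Not taken from the post.
double filterConstant(double tau, double sampleRate) {
    return 1.0 - std::exp(-1.0 / (tau * sampleRate));
}

// Step response: output of the filter v <- (1-k)*v + k*1 after n samples,
// starting from 0. A shorter time constant reacts faster (value closer to 1).
double stepResponse(double k, int n) {
    double v = 0.0;
    for (int i = 0; i < n; ++i) v = (1.0 - k) * v + k;
    return v;
}
```

After 0.1s of input, a resonator with τ = 10ms has essentially converged, while one with τ = 100ms has only reached about 63% of the target, illustrating “the shorter the time constant the more reactive the resonator.”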
<h2 id="demonstrations">Demonstrations</h2>
<p>All this is very preliminary, and I have many more explorations, experiments and applications in mind… which I will try and do as time and resources permit.</p>
<p>In the meantime, keep checking the Oscillators playlist on my YouTube channel for fun experiments with the Oscillators app.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/videoseries?list=PLVcB_ABiKC_djwV2PXnSCWkvXOXt8PRMC" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h1>Digital Resonator (II)</h1>
<p><em>2022-09-02</em></p>
<p>It’s been a productive week off! I published the first version of the <a href="/Oscillators/">Oscillators app</a>, <a href="https://apps.apple.com/us/app/oscillators/id1641353759">available for download</a> on the Apple App Store, and I released a first version of the code for my resonator implementation (and more) as a <a href="https://github.com/alexandrefrancois/Oscillators">Swift package</a>.</p>
<p>I outlined the main ideas underlying my digital resonator model in my last post: <a href="/music/physics/oscillators/2022/08/08/Digital-Resonator.html">Digital Resonator</a>. In this post I give a few more geeky details.</p>
<p>Just to be clear about the objectives, the really exciting fun starts from something like this: a bank of oscillators tuned at a range of frequencies, that resonate in real-time with the input signal from the microphone, and possibly interact with each other. Getting to this requires a model that captures the resonance behavior, and that can be implemented efficiently (a reference point for comparison would be the ubiquitous FFT).</p>
<h3 id="basic-concepts">Basic concepts</h3>
<p>Human perception is fundamentally dynamic: it’s about change over time. A quantity that varies over time is called a <em>signal</em>. Acoustic signals consist of pressure waves that cause the air to vibrate. A microphone converts an acoustic signal into an electric signal. Because digital computers can only manipulate discrete data, this analog electric signal must be converted into a <em>digital signal</em>. All it means is that the signal cannot be represented in the computer with infinite precision, so the quantity can only take discrete values (<em>quantization</em>) and we only know the values at certain discrete times (<em>sampling</em>). For sound, the samples are usually taken at constant intervals (expressed in seconds), whose inverse is the <em>sampling rate</em> (expressed in Hz, equivalent to “per second”), that is, the number of samples per unit of time. A common sampling rate for sound is 44100Hz, and we’ll see why in a bit.</p>
<p>A <em>periodic signal</em> repeats its sequence of values exactly after a fixed length of time, known as the <em>period</em>. The <em>frequency</em> (in Hz) is the inverse of the period duration (in s), i.e. the number of times the period is repeated per second. A periodic signal is entirely specified by one period, as the values repeat indefinitely.</p>
<p>Examples of periodic signals include sine, square, triangle and saw waves, and there are many apps and online sites that offer tools to generate sound from such simple periodic signals (tones), see for example <a href="https://www.gieson.com/">Michael Gieson</a>’s <a href="https://www.gieson.com/Library/projects/utilities/tonegen/">ToneGen</a>. Explore the richness of digital synthesis in electronic music at <a href="https://artsandculture.google.com/project/music-makers-and-machines">Music Makers Machines</a> (but please come back here I’m not done - even better, save it for later).</p>
<p>A few hundred years ago, long before electronic digital computers, <a href="https://en.wikipedia.org/wiki/Joseph_Fourier">Fourier</a> posited that any periodic signal can be expressed as an infinite sum of sines and cosines (<em>Fourier expansion</em>), a powerful concept that underlies <a href="https://en.wikipedia.org/wiki/Harmonic_analysis">harmonic analysis</a>, and relates to how humans perceive sound. Discrete versions of the associated mathematical tools, suitable for efficient implementation on early digital computers, reinforced the success of these techniques (for example the ubiquitous Fast Fourier Transform or FFT) and their wide adoption in a variety of fields. The FFT takes a portion of signal (values over time) and computes the values for each frequency band in the appropriate discrete truncated Fourier expansion (the frequency resolution is limited by the duration of the signal portion).</p>
<p>I am exploring a completely different approach, in which the signal is fed possibly in real-time to a resonator tuned to a specific frequency, and the resulting amplitude of oscillations captures the presence of the resonant frequency in the input signal. A bank of such resonators tuned at various frequencies arguably captures valuable information about the incoming signal, akin to (but in many ways fundamentally different from) what the FFT captures.</p>
<p>But before we move on to oscillators and resonators, one important constraint when digitizing a periodic signal is that the sampling rate must be at least twice the highest frequency component to be captured (the <em>Nyquist</em> criterion; half the sampling rate is known as the <em>Nyquist frequency</em>). The audible range of frequencies is roughly 20Hz to 20000Hz, which is why sampling an audio signal at 44100Hz ensures that all audible frequencies in the signal are captured in the resulting digital version.</p>
<p>For real-time audio signal processing, we’ll need to do some computations at the sampling rate, i.e. several tens of thousands of times per second. We can leverage GPUs to perform floating point operations efficiently, especially if we can parallelize some of the work.</p>
<h3 id="oscillator-design">Oscillator design</h3>
<p>An oscillator is entirely specified by its frequency (or period duration), waveform and amplitude.</p>
<p>The sampling rate (or sample duration) turns out to be an important parameter that is worth imposing as a fixed input parameter.
Indeed, introducing the constraint that <strong>the period duration be a multiple of the sample duration</strong> greatly simplifies the model and the computations. In this case, the components of the model are the amplitude (a scaling factor) and an array, whose length is the number of samples in the period, and which contains the waveform value at each sample time.</p>
<p>This means however that by design, such oscillators cannot be tuned to any arbitrary frequency, but only to frequencies that correspond to a period duration that is a multiple of the sample duration.</p>
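<p>A small sketch of the constraint above: the tunable frequencies are exactly those of the form <em>sampleRate / n</em> for an integer number of samples <em>n</em> per period. The helper below (hypothetical, not from the package) snaps a target frequency to the nearest admissible one:</p>

```cpp
#include <cmath>
#include <cassert>

// Round the period to a whole number of samples, then return the
// corresponding frequency. Illustrative helper.
double nearestAdmissibleFrequency(double target, double sampleRate) {
    long n = std::lround(sampleRate / target);  // samples per period
    if (n < 1) n = 1;
    return sampleRate / static_cast<double>(n);
}
```

For example, at 44100Hz, 441Hz (exactly 100 samples per period) is tunable, but standard concert pitch 440Hz is not: the nearest admissible frequency is 441Hz.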
<p>To generate a signal at the chosen sampling rate, all that is needed is a pointer that keeps track of the oscillator’s position in the period, in this case an index into the waveform.</p>
<p>At each tick of the clock (driven by the sampling rate of the output signal),</p>
<ul>
<li>advance the pointer to the next position
<ul>
<li>if past the end of the waveform, go back to first position (= repeat the period)</li>
</ul>
</li>
<li>take the waveform value at the current position,</li>
<li>output the value scaled by the amplitude</li>
</ul>
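<p>The steps above can be sketched as a minimal C++ oscillator (names are illustrative):</p>

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// One period of the waveform, an index into it, and an amplitude scale.
struct Oscillator {
    std::vector<float> waveform;  // waveform value at each sample of one period
    float amplitude = 1.0f;
    std::size_t index = 0;        // current position in the period

    float tick() {
        index += 1;                               // advance to the next position
        if (index >= waveform.size()) index = 0;  // past the end: repeat the period
        return amplitude * waveform[index];       // value scaled by the amplitude
    }
};
```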
<p>Again, this is extremely simplistic; there are much more sophisticated models for signal synthesis, which is not our main concern here.</p>
<h3 id="resonator-design">Resonator design</h3>
<p>A resonator is an oscillator which, when submitted to an input signal, oscillates with a larger amplitude when its resonant frequency is present in the input signal.</p>
<p>A resonator is characterized by its (resonant) frequency and the shape of its periodic signal, captured in the oscillator model as the waveform array.</p>
<p>The resonator’s amplitude is updated at each tick of the clock, i.e. for each input sample, from the resonator’s current amplitude value <em>a</em> (in [0,1]), its current position in the oscillation period (waveform value <em>w</em>, in [-1,1]), and the input sample value <em>s</em> (in [-1,1]):<br />
<em>a <- (1-k) * a + k * s * w</em></p>
<p>The pattern <em>v <- (1-k) * v + k * s</em>, where <em>k</em> is a constant in [0,1], is known as a low-pass filter, as it smooths out high frequency variations in the input signal. The constant <em>k</em> dictates the “smoothing”, in this case the dynamic behavior of the system, i.e. how quickly it adapts to variations in the input signal.</p>
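<p>A tiny numerical check of this behavior: under a zero input, the filtered value shrinks by a factor (1-k) per tick, i.e. it decays exponentially.</p>

```cpp
#include <cmath>
#include <cassert>

// With s = 0, the update v <- (1-k)*v + k*s reduces to v <- (1-k)*v,
// so after n ticks the value is (1-k)^n times the initial value.
double decay(double v0, double k, int n) {
    double v = v0;
    for (int i = 0; i < n; ++i) v = (1.0 - k) * v;
    return v;
}
```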
<p>The instantaneous contribution of each input sample value to the amplitude is proportional to <em>s * w</em>, which intuitively will be maximal when peaks in the input signal and peaks in the resonator’s waveform are both equally spaced and aligned, i.e. when they have same frequency and are in phase.</p>
<p>Note that this is where tradition would suggest essentially taking the signal offline and maybe doing a convolution with the waveform as the kernel (or just doing an FFT and being done with it!), but I would rather recognize and accept that we don’t know the future and we cannot always afford to wait for it to become the past, and thus favor a more dynamic approach.</p>
<p>In order to account for phase offset, the above calculation is performed for various phases, and the resonator’s amplitude is set to the maximum value across all phases. The phase offset resolution is also conveniently the sample duration so the resonator model adds to the oscillator model an array of same length as the resonator’s period, where each position stores the amplitude value for the corresponding phase offset.</p>
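<p>A brute-force sketch of this per-phase scheme (illustrative structure, not the package’s Accelerate-based implementation): one amplitude accumulator per phase offset, each updated against the waveform shifted by that offset, with the resonator’s amplitude taken as the maximum across phases.</p>

```cpp
#include <vector>
#include <algorithm>
#include <utility>
#include <cstddef>
#include <cmath>
#include <cassert>

struct PerPhaseResonator {
    std::vector<float> waveform;    // one period of the resonant waveform
    std::vector<float> amplitudes;  // one low-pass accumulator per phase offset
    float k;
    std::size_t index = 0;          // current position in the period

    PerPhaseResonator(std::vector<float> w, float filterConstant)
        : waveform(std::move(w)),
          amplitudes(waveform.size(), 0.0f),
          k(filterConstant) {}

    void update(float s) {
        const std::size_t n = waveform.size();
        for (std::size_t p = 0; p < n; ++p) {  // linear in the number of phases
            const float w = waveform[(index + p) % n];
            amplitudes[p] = (1.0f - k) * amplitudes[p] + k * s * w;
        }
        index = (index + 1) % n;
    }

    // the resonator's amplitude: the maximum across all phases
    float amplitude() const {
        return *std::max_element(amplitudes.begin(), amplitudes.end());
    }
};
```

Driving a 100-sample sine resonator with a phase-shifted sine at the same frequency makes the amplitude grow regardless of the offset, since some accumulator is (nearly) aligned with the input.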
<p>The floating-point calculations to carry out for each input sample can be performed in parallel across phases, making them well suited to GPU acceleration. The complexity is linear in the number of phases, i.e. the number of samples in one period, which for low frequencies in the audible spectrum typically reaches a few thousand.</p>
<p>This means a reasonable implementation on modern hardware should easily handle the computations per sample for any frequency. For real-time audio processing, these computations must typically be carried several tens of thousands of times per second. So while the computations themselves should be cheap, the setup overhead (for example copying data to/from GPU memory where required) could become a limiting factor, especially when running a number of resonators in the context of a music analysis app.</p>
<p>I implemented several versions of this model, all leveraging the Accelerate framework: the first proof of concept uses Swift arrays, a much more efficient version in Swift uses “unsafe pointers”, and I also made a C++ version, wrapped in Objective-C++ to bridge with Swift. The Swift arrays overhead proved prohibitively significant, so the app features the Swift “unsafe pointers” and the C++ implementations, in the exact same setting, so that the computation times can be directly compared. The C++ implementation seems to consistently outperform the Swift implementation - but this should of course be explored and confirmed more thoroughly and systematically.</p>
<table align="left" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: left;"><tbody>
<tr><td><strong>Frequency</strong></td><td><strong>Samples / period</strong></td><td><strong>ns / sample</strong></td></tr>
<tr><td>2205Hz</td><td>20</td><td>Swift: 395<br />C++: 210</td></tr>
<tr><td>441Hz</td><td>100</td><td>Swift: 550<br />C++: 350</td></tr>
<tr><td>110.25Hz</td><td>400</td><td>Swift: 990<br />C++: 830</td></tr>
<tr><td>20Hz</td><td>2,114</td><td>Swift: 1500<br />C++: 1200</td></tr>
<tr><td colspan="3" style="text-align: center;">Representative approximate processing time per sample for various frequencies observed on an iPhone 13 mini</td></tr>
</tbody></table>
<h3 id="demonstrations">Demonstrations</h3>
<p>Check out the Oscillators playlist on my YouTube channel for fun experiments with the Oscillators app.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/videoseries?list=PLVcB_ABiKC_djwV2PXnSCWkvXOXt8PRMC" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p><br />
Then go explore <a href="https://artsandculture.google.com/project/music-makers-and-machines">Music Makers Machines</a> :-)</p>
<h1>Digital Resonator</h1>
<p><em>2022-08-08</em></p>
<p>Over the past few months I have been exploring some ideas that started sprouting in my head over 10 years ago, to design and implement a real-time dynamic model of tonal perception, i.e. not using the ubiquitous FFT… but for now I will just share some fun results I have been able to achieve with a digital oscillator model.</p>
<p>The first milestone in my exploration is to build a simple <strong>digital resonator</strong>, a dynamic system that resonates with a specific frequency if present in an input signal (such as that captured by a microphone), i.e. that naturally oscillates with greater amplitude at a given frequency, than at other frequencies.</p>
<p>Note that there is a large body of work related to building real-time oscillators for <em>synthesizing</em> periodic signals (both in hardware and in software), but my objective is to <em>analyze</em> a signal (or a periodic component thereof).</p>
<p>I will not cover all the technical details here, but simply outline the main ideas behind my design.</p>
<ul>
<li>
<p>a resonator is characterized by its (resonant) frequency (inverse of its period duration) and the shape of its periodic signal (could be any periodic function, in this case a sine wave)</p>
</li>
<li>
<p>because we operate in the digital world, the sampling rate (inverse of sample duration) of the input signal is an important parameter: taking it into account affords some significant optimizations in the design of the system and in the calculations that compute the interaction between the input signal and the amplitude of the resonator. These optimizations also come with some limitations that are out of scope for this post.</p>
</li>
</ul>
<p>The digital resonator has a <em>persistent state</em> consisting of current amplitude and phase; its state gets updated at each tick of the clock, driven by the sampling rate of the input signal.</p>
<ul>
<li>
<p><strong>coupling</strong>: the contribution of each input sample to the oscillator’s amplitude is modulated by the waveform value at the current phase of the oscillator. As a result, contribution will be consistently maximal if the input signal contains a component that has the same frequency as the resonant frequency of the oscillator, provided the two are in phase. <em>It is therefore necessary to maintain an array of oscillation amplitudes, for all (or at least several) possible phases, so that the contribution of any component at (or near) the resonant frequency will be captured independently of its phase relative to the resonator’s phase.</em></p>
</li>
<li>
<p><strong>persistence</strong>: each input sample contributes to the amplitude of the system (for each phase) subject to a low-pass filter. As a result, under a 0 contribution, the system’s amplitude decays exponentially to 0 over time, and under a non-zero contribution the amplitude of the system increases following a sigmoid curve. Both aspects are modulated by a time constant which characterizes the responsiveness of the system. Over time, the contribution of any component at (or near) the resonant frequency will result in a higher accumulated amplitude, while the sporadic contributions of other frequencies will result in lower amplitudes.</p>
</li>
</ul>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;">
<a href="/Oscillators/assets/images/oscillators-resonance-silence.gif" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Oscillators/assets/images/oscillators-resonance-silence.gif" width="180" />
</a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The resonator's amplitudes (at all phases) increase when the generator produces a sinusoidal signal at the resonator's resonant frequency. The amplitude is maximal at the corresponding phase. All amplitudes decay to 0 over time when the generator produces silence.</td></tr>
</tbody></table>
<p>When the input signal has a frequency that is <em>near</em> (but not quite equal to) the resonator’s frequency, the resonator’s amplitudes do increase, less than when the signal’s frequency is spot on, and the maximal amplitude phase position is constantly shifting at a rate that is proportional to the difference in wavelength between the signal’s frequency and the resonant frequency. Computing the actual signal frequency from this shift is relatively straightforward.</p>
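<p>The post leaves the computation implicit; under the assumption that the maximal-amplitude phase position drifts linearly, the standard relation between detuning and phase drift rate would give the signal frequency directly:</p>

```latex
% The input's phase relative to the resonator advances at the detuning rate,
% so the observed drift of the maximal-amplitude position over a time
% interval yields the signal frequency:
\frac{d\varphi}{dt} = 2\pi \left( f_{\text{signal}} - f_{\text{resonator}} \right)
\quad \Longrightarrow \quad
f_{\text{signal}} \approx f_{\text{resonator}} + \frac{1}{2\pi} \, \frac{\Delta\varphi}{\Delta t}
```

Here \(\Delta\varphi\) is the observed shift (in radians) of the maximal-amplitude phase position over a duration \(\Delta t\); the sign of the drift tells whether the signal is above or below the resonant frequency.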
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;">
<a href="/Oscillators/assets/images/oscillators-near-resonance.gif" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Oscillators/assets/images/oscillators-near-resonance.gif" width="180" />
</a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Near resonant frequency estimation and Doppler velocity computation.</td></tr>
</tbody></table>
<p>Furthermore, if the signal source (emitting a fixed frequency) and the “observer” (microphone feeding the resonator) are moving with respect to each other, the frequency observed by the resonator shifts according to the Doppler effect. A simple computation gives the relative velocity from the frequency shift.</p>
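<p>The post does not spell the formula out; to first order in \(v/c\) (with \(c\) the speed of sound and \(v\) the closing speed between source and observer), the classical Doppler relation inverts to:</p>

```latex
% First-order classical Doppler shift, and the relative velocity
% recovered from the observed frequency shift:
f_{\text{obs}} \approx f_{\text{src}} \left( 1 + \frac{v}{c} \right)
\quad \Longrightarrow \quad
v \approx c \, \frac{f_{\text{obs}} - f_{\text{src}}}{f_{\text{src}}}
```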
<p>But all this is worth a real world demonstration, so here is a short video…</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/iQCPDJ8L_ao" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h1>Thoughts on MuSA_RT 2.0</h1>
<p><em>2021-01-31</em></p>
<p><a href="/MuSA_RT/assets/images/square.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="800" data-original-width="800" height="320" src="/MuSA_RT/assets/images/square.png" />
</a></p>
<p>The objective for <a href="/MuSA_RT">MuSA_RT 2.0</a> as a holiday project was to use cutting edge software development tools and frameworks/packages to put MuSA_RT in the hands of anyone with a phone, tablet or computer (limited to the Apple ecosystem because of resource and time constraints). The version currently <a href="https://apps.apple.com/app/musa-rt/id506866959">in the Apple App Store</a>, although quite crude in many respects, achieves this goal, and will serve, time permitting, as a starting point for exciting explorations.</p>
<h2 id="3d-graphics-and-augmented-reality">3D graphics and Augmented Reality</h2>
<p>Rendering 3D geometry that approximates the original MuSA_RT graphics was almost too easy, and as a result hasn’t yet received the attention it deserves. There is much to explore and improve in terms of geometry and appearance (materials and lighting), and then efficiency.</p>
<p>Similarly, the Augmented Reality (AR) mode is but a bare proof of concept, which only sets the stage for exciting explorations. The first question is of course: what, if anything, can an AR experience of MuSA_RT bring to the performer and to listeners? For example, MuSA_RT has been used as an educational tool to visually support explanations of tonality, both in private and in concert settings. For concerts, the graphics were typically projected on a large screen. What if the model was on stage instead?</p>
<h2 id="audio-processing-for-music-analysis">Audio processing for music analysis</h2>
<p>Efficient Fast Fourier Transform (FFT) computation, available on virtually any modern computing device, combined with the quasi-ubiquity of microphones in phones, tablets and laptops, made possible an implementation of MuSA_RT that does not require exotic or cumbersome equipment (such as MIDI devices) and can truly run out of anyone’s pocket (provided they have a reasonably recent Apple device). This was an unexpected but, in retrospect, predictable bonus, which was not part of the initial goals for MuSA_RT 2.0.</p>
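<p>As an illustration of the kind of FFT front-end this enables (a minimal sketch in Python/NumPy for brevity, not MuSA_RT’s actual Swift implementation), magnitude bins can be folded into 12 pitch classes before any tonal analysis:</p>

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed microphone sample rate

def chroma_from_buffer(samples: np.ndarray) -> np.ndarray:
    """Fold FFT magnitude bins into 12 pitch classes (C=0 ... B=11)."""
    windowed = samples * np.hanning(len(samples))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    chroma = np.zeros(12)
    for f, mag in zip(freqs, spectrum):
        if f < 27.5 or f > 4200.0:                         # keep roughly the piano range
            continue
        midi = 69.0 + 12.0 * np.log2(f / 440.0)            # frequency -> MIDI pitch number
        chroma[int(np.round(midi)) % 12] += mag            # accumulate per pitch class
    return chroma
```

Feeding this a pure A440 sine wave yields a chroma vector dominated by pitch class A; a real system would of course need octave weighting, noise handling, and temporal smoothing on top of this.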
<p>Processing unrestricted audio signal from a phone’s microphone to estimate tonal context allows some interesting experiments. In natural contexts, complete silence does not exist, and ambient sounds (light bulbs or refrigerators buzzing, rain falling, people talking, birds singing, dogs barking) all contribute to a tonal context, whether musical or not.</p>
<p>In a more focused musical context, the app passes the “Let It Be” test: start the app, put the device on the piano, play the chords - MuSA_RT <a href="https://www.youtube.com/embed/hZ2kJdeRo_Q">gets it right</a> (in the key of C major: C G Am F C G F C). MuSA_RT should work with most musical instruments and even with voice. It is worth noting here that the Spiral Array model only accounts for major and minor triads (it does not account for other chords such as 7th’s, etc.) so even though the model tracks the tonal context generated by such chords, it does not recognise/name them.</p>
<p>Playing recorded music from a speaker usually yields disappointing results in terms of chord tracking, for a number of reasons. Of course the quality of the speakers and the microphone will have an impact. The spectrum for music in which drums are a feature tends to be heavily dominated by those drums (they are the loudest). The audio engineers’ magic in modern professional music recording often results in a sound whose spectrum is quite different from that obtained directly from an acoustic musical instrument.</p>
<p>All this to lower expectations, but also to point to the fact that the FFT is probably not the best tool for this type of audio analysis (humans do hear chords over drums, even in modern recordings…). The FFT is undeniably a wonderful tool for audio signal processing, elegant and efficient, and because of that it is widely available and widely used… even when it is not the right tool. Unfortunately, for the moment the FFT remains the easiest (and only?) way to approximate human-like low-level audio signal analysis in an interactive system.</p>
<p>Alexandre R.J. François (alexandrefrancois@gmail.com)</p>
<h1 id="musa_rt-20">MuSA_RT 2.0</h1>
<p>2021-01-08 · <a href="https://alexandrefrancois.org/music/mathematics/visualization/software/2021/01/08/MuSA_RT-2.0">https://alexandrefrancois.org/music/mathematics/visualization/software/2021/01/08/MuSA_RT-2.0</a></p>
<p><a href="/MuSA_RT/assets/images/square.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="800" data-original-width="800" height="320" src="/MuSA_RT/assets/images/square.png" />
</a></p>
<p>The <a href="http://musa-rt.blogspot.com/">Music on the Spiral Array . Real Time (MuSA.RT)</a> project started almost 20 years ago. MuSA_RT, my first collaboration with <a href="https://en.wikipedia.org/wiki/Elaine_Chew">Elaine Chew</a>, applies music analysis algorithms rooted in her <a href="https://en.wikipedia.org/wiki/Spiral_array_model">Spiral Array model</a> of tonality, which also provides the 3D geometry for the interactive visualization space.</p>
<p>The MuSA.RT project lasted many years and produced numerous publications, and various versions of the system were featured in lectures and performances all around the world.</p>
<p>The software produced for this project was a constantly evolving research prototype (not something to put in the hands of a general public user), and subject to contemporary technical limitations. A Mac App released in 2012, intended as companion software for the book <a href="https://www.springer.com/gp/book/9781461494744"><em>Mathematical and Computational Modeling of Tonality: Theory and Applications</em>, Elaine Chew (2014)</a>, made the system accessible to general users.</p>
<p><a href="/MuSA_RT">MuSA_RT 2.0</a> is a universal iOS/iPadOS/macOS app that analyses the audio signal from a microphone, and offers an experimental Augmented Reality experience on devices that support it, <a href="https://apps.apple.com/app/musa-rt/id506866959">available for download on the Apple Store</a> for your favorite device (and it’s free).</p>
<div style="text-align: center;"><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/hZ2kJdeRo_Q" width="560"></iframe></div>
<p>Alexandre R.J. François (alexandrefrancois@gmail.com)</p>
<h1 id="priorities-app-20">Priorities App 2.0</h1>
<p>2021-01-07 · <a href="https://alexandrefrancois.org/apps/design/2021/01/07/Priorities-App-2.0">https://alexandrefrancois.org/apps/design/2021/01/07/Priorities-App-2.0</a></p>
<div class="separator" style="clear: both; text-align: center;">
<a href="/Priorities/assets/images/priorities-logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
<img border="0" data-original-height="360" data-original-width="360" height="200" src="/Priorities/assets/images/priorities-logo.png" width="200" />
</a>
</div>
<h3 id="simply-manage-lists-of-prioritized-items">Simply manage lists of prioritized items</h3>
<p>The second iteration of the Priorities App pushes further the minimalistic UX design and adds a new feature: lists of lists. I presented the motivation behind the app and the design of the first version in a <a href="/apps/design/2020/08/09/Priorities-App.html">previous post</a>. Priorities 2.0 is <a href="https://apps.apple.com/us/app/priorities-sorted/id1469567351">available for download on the App Store</a>.</p>
<p>The home screen looks exactly the same as in the first iteration. The model is completely backward compatible, and users who do not need the lists of lists feature will not even be aware of it. This was a strong design requirement for 2.0.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities2-prioritized-list.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-prioritized-list.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Prioritized list</td></tr></tbody></table>
<p>The new feature is the ability to define inclusion relationships between any one item and any number of other existing items.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities2-prioritized-sublist.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-prioritized-sublist.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Prioritized sublist</td></tr></tbody></table>
<p>From a user perspective, this mechanism provides a way to organize items hierarchically.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities2-hierarchical-organization.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-hierarchical-organization.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hierarchical organization</td></tr></tbody></table>
<p>Technically, it only amounts to a display/navigation convenience, as the underlying model is still a single master list of items. In practice an item can be a “subitem” of any existing item. This also means that the same item can appear in multiple other items.</p>
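<p>This “single master list plus inclusion links” design can be sketched as follows. This is an illustrative Python model with invented names, not the app’s actual Swift implementation:</p>

```python
class ItemStore:
    """Single master list of items; each item may list other items as
    sub-items, so the same item can appear under several parents."""

    def __init__(self):
        self.items = {}                      # name -> set of sub-item names

    def create(self, name):
        self.items.setdefault(name, set())

    def add_subitem(self, parent, child):
        self.create(parent)
        self.create(child)
        self.items[parent].add(child)        # inclusion link, not a copy

    def remove_subitem(self, parent, child):
        self.items[parent].discard(child)    # child stays in the master list

    def delete(self, name):
        """Destroy an item: it disappears from every sub-item list."""
        self.items.pop(name, None)
        for subs in self.items.values():
            subs.discard(name)
```

Because sub-items are links rather than copies, adding “Milk” under both “Groceries” and “Breakfast” still stores a single “Milk” item, and deleting it removes it everywhere at once.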
<h3 id="search-add-remove-create">Search, add remove, create</h3>
<p>While there is still a “create” button displayed when searching for an item from the root list, this is only to preserve the user flow from the previous version. The new streamlined model for adding items ties into searching.</p>
<p>When in search mode:</p>
<ul>
<li>
<p>the first section lists all the matching sub-items in alphabetical order,</p>
</li>
<li>
<p>the next section lists all the existing matching items that are not currently sub-items; each has an action button to add it as a sub-item,</p>
</li>
<li>
<p>the last section offers a “Create and Add Item” button automatically set to the search string</p>
</li>
</ul>
<p>The suggested flow is for the user to look for the item they want to add, and create it if it does not exist yet. The creation is done in context, as the item is automatically registered as a sub-item of the list in which the user searched for it.</p>
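<p>The sectioning above can be sketched as a pure function over the item pool. This is an illustrative Python reconstruction (the <code>search_sections</code> name and signature are invented, not taken from the app):</p>

```python
def search_sections(current_subitems, all_items, query):
    """Partition search results the way the search screen does:
    matching sub-items, matching other items (addable), and a
    'Create and Add Item' suggestion pre-filled with the query."""
    q = query.lower()
    matching_subs = sorted(n for n in current_subitems if q in n.lower())
    matching_others = sorted(n for n in all_items
                             if n not in current_subitems and q in n.lower())
    # Offer creation only when no existing item has exactly this name.
    create_suggestion = query if query and query not in all_items else None
    return matching_subs, matching_others, create_suggestion
```

For example, searching “milk” in a list that already contains “Milk” surfaces it first, offers “Almond milk” from the pool as addable, and still proposes creating a new “milk” item.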
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr>
<td style="text-align: center;">
<a href="/Priorities/assets/images/priorities2-search-create-1.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-1.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-search-create-2.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-2.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-search-create-3.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-3.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-search-create-4.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-4.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-search-create-5.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-5.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-search-create-6.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-search-create-6.png" width="180" />
</a>
</td></tr>
<tr><td class="tr-caption" style="text-align: center;">Search and create</td></tr></tbody></table>
<h3 id="edit">Edit</h3>
<p>The item view offers an edit mode for the user to update the item’s name and description, and to manage the sub-items:</p>
<ul>
<li>
<p>the first section lists all current sub-items, each with an action to remove it as a sub-item of the current item (this does not delete the item from the general pool or from other items where it is currently a sub-item),</p>
</li>
<li>
<p>the second section lists all existing items that are not currently sub-items, with an action to add as sub-item,</p>
</li>
<li>
<p>in search mode, the first two sections only display matching items in the relevant category, and the last section offers a “Create and Add Item” button automatically set to the search string</p>
</li>
</ul>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;">
<tbody><tr><td style="text-align: center;">
<a href="/Priorities/assets/images/priorities2-edit-search.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-edit-search.png" width="180" />
</a>
<a href="/Priorities/assets/images/priorities2-edit-create.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-edit-create.png" width="180" />
</a>
</td></tr><tr><td class="tr-caption" style="text-align: center;">Add, remove and create sub items</td></tr></tbody></table>
<p>Adding or removing an item from a list of sub-items does not destroy the item itself - this is done by tapping the bin icon in the top left of the item edit screen, and requires confirmation as this is a destructive operation. A deleted item will be removed from all sub-item lists in which it appears.</p>
<h3 id="simplify">Simplify!</h3>
<p>Updating the priority of an item is best done in place while browsing the list, by swiping left or right.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities2-change-priority.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="512" data-original-width="288" height="320" src="/Priorities/assets/images/priorities2-change-priority.png" width="180" />
</a>
</td></tr><tr><td class="tr-caption" style="text-align: center;">Update priority with a swipe</td></tr></tbody></table>
<p>The pretty(?) but redundant and space-filling custom control featured in the first version’s item view makes way for the sub-item list in this new version.</p>
<h3 id="summary">Summary</h3>
<p>This version introduces a new feature and some simplification. Since the app design relies so heavily on search, the main question remains whether those features are easily discoverable by a first-time user. As for the next steps, SwiftUI enters the scene and everything is up for rethinking.</p>
<p><a href="/priorities">Priorities</a> is <a href="https://apps.apple.com/us/app/priorities-sorted/id1469567351">available for download on the app store</a>.</p>Alexandre R.J. Françoisalexandrefrancois@gmail.comPriorities App2020-08-09T10:08:49+00:002020-08-09T10:08:49+00:00https://alexandrefrancois.org/apps/design/2020/08/09/Priorities-App<div class="separator" style="clear: both; text-align: center;">
<a href="/Priorities/assets/images/priorities-logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
<img border="0" data-original-height="360" data-original-width="360" height="200" src="/Priorities/assets/images/priorities-logo.png" width="200" />
</a>
</div>
<h2 id="simply-manage-a-list-of-prioritized-items">Simply manage a list of prioritized items</h2>
<p>With a little bit of free time on my hands, I decided to get up to date on how to make an iOS app from scratch in Swift and publish it in the App Store (as a paid app, which turned out to be an interesting experience in itself). This blog post is about the motivation behind the app and the design of the first version (well, technically version 1.2 - <a href="https://apps.apple.com/us/app/priorities-sorted/id1469567351">Download on the App Store!</a>).</p>
<h3 id="a-list-of-prioritized-items">A list of prioritized items?</h3>
<p>I have been thinking for a long time about a simple app to facilitate my grocery/essentials shopping: I always buy the same basic items (milk, bread, cheese, tomatoes, chicken, yogurt, etc.), all I need to know when I am shopping is which items I will need again soon (or urgently). When I realize I will need something soon, I need a simple way to find the item (if already in the list) and put it back in the list of things to get. If it’s a new item, I should be able to add it easily, and not have to add it again in the future.</p>
<p>Until now I have been using Apple’s Reminders app, but it does not let me automatically prioritize items or search, so it is not well suited for this approach. There is very likely some list app out there that can do what I need, but in this case I wanted to design my own.</p>
<h3 id="priorities">Priorities</h3>
<p>The app defines items characterized by a name (string) and optional details (other string), and a priority level. There are 4 priority levels: low, medium, high and critical. The prioritized list only shows items that are at priority critical, high and medium, sorted by decreasing level of priority (and alphabetically by name in each priority level). For my shopping I think of medium as “get some if on sale,” high as “we’re running out soon,” and critical as “we needed some yesterday.”</p>
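<p>The priority scheme described above is easy to state as code. Here is an illustrative Python sketch (the names are invented; the app itself is written in Swift):</p>

```python
from enum import IntEnum

class Priority(IntEnum):
    LOW = 0        # hidden from the prioritized list
    MEDIUM = 1     # "get some if on sale"
    HIGH = 2       # "we're running out soon"
    CRITICAL = 3   # "we needed some yesterday"

def prioritized(items):
    """Return (name, priority) pairs at medium priority or above,
    sorted by decreasing priority, then alphabetically by name."""
    visible = [(name, p) for name, p in items if p >= Priority.MEDIUM]
    return sorted(visible, key=lambda it: (-it[1], it[0]))
```

With this scheme a low-priority item like bread simply drops off the list, while critical items bubble to the top.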
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities1-prioritized-list.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="900" height="320" src="/Priorities/assets/images/priorities1-prioritized-list.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Prioritized list</td></tr></tbody></table>
<h3 id="lower-priority">Lower Priority</h3>
<p>When shopping, I look at the prioritized list. When I get an item on the list, I swipe left to lower its priority back to low (which removes it from the list).</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities1-change-priority.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="900" height="320" src="/Priorities/assets/images/priorities1-change-priority.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Change priority with a swipe</td></tr></tbody></table>
<h3 id="search-and-create-items">Search and Create Items</h3>
<p>When I need more of something, I search the list of existing items (pull down to reveal the Search bar and tap in it). I can see the list of all items sorted alphabetically.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities1-search.png" style="margin-left: auto; margin-right: auto;">
<img border="0" data-original-height="1600" data-original-width="900" height="320" src="/Priorities/assets/images/priorities1-search.png" width="180" />
</a></td></tr><tr><td class="tr-caption" style="text-align: center;">Search existing items or browse alphabetically</td></tr></tbody></table>
<p>If I type something in the search, the list gets restricted to items whose name contains the search string. I am also presented with a “create item” button that uses the search string as the initial name for the new item. If the item I am looking for already exists, I can simply swipe right and select a higher priority level for the item. I can also adjust the priority by tapping on the item and changing the priority in the item view.</p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="/Priorities/assets/images/priorities1-create-edit.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="900" height="320" src="/Priorities/assets/images/priorities1-create-edit.png" width="180" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Create or edit an item</td></tr></tbody></table>
<p>If I tap the button to create a new item, I get to the item view where I can edit the name, details and set the item’s priority. The item view also offers the option to delete the item entirely but that would be a rare occurrence for me since I don’t want to have to create it again next time I need it.</p>
<h3 id="and-thats-it">And that’s it!</h3>
<p>These are the basic operations - all that’s needed to capture the requirements stated at the beginning.</p>
<p><a href="/priorities">Priorities</a> is <a href="https://apps.apple.com/us/app/priorities-sorted/id1469567351">available for download on the app store</a>.</p>Alexandre R.J. Françoisalexandrefrancois@gmail.com