Signal processing tips from Hackaday

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals such as sound, images, and biological measurements. Electronic signal processing was first revolutionized by the MOSFET and later by the single-chip digital signal processor (DSP). Digital signal processing is the processing of digitized, discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays, or specialized digital signal processors (DSP chips).

Hackaday has published an interesting series of articles on signal processing; here are some picks from it:

RTFM: ADCs And DACs
https://hackaday.com/2019/10/16/rtfm-adcs-and-dacs/

DSP Spreadsheet: IQ Diagrams
https://hackaday.com/2019/11/15/dsp-spreadsheet-iq-diagrams/

Sensor Filters For Coders
https://hackaday.com/2019/09/06/sensor-filters-for-coders/

DSP Spreadsheet: FIR Filtering
https://hackaday.com/2019/10/03/dsp-spreadsheet-fir-filtering/

Fourier Explained: [3Blue1Brown] Style!
https://hackaday.com/2019/07/13/fourier-explained-3blue1brown-style/

DSP Spreadsheet: Frequency Mixing
https://hackaday.com/2019/11/01/dsp-spreadsheet-frequency-mixing/

Spice With A Sound Card
https://hackaday.com/2019/07/03/spice-with-a-sound-card/
- see also RTspice, a real-time netlist-based audio circuit plugin, at https://github.com/thadeuluiz/RTspice

Reverse Engineering The Sound Blaster
https://hackaday.com/2019/06/19/reverse-engineering-the-sound-blaster/

FM Signal Detection The Pulse-Counting Way
https://hackaday.com/2019/08/28/fm-signal-detection-the-pulse-counting-way/

Here is an extra that is not from Hackaday: an interesting online signal-processing tool for generating sounds
https://z.musictools.live/#95

164 Comments

  1. Tomi Engdahl says:

    Virtual Oscilloscope
    This online virtual oscilloscope allows you to visualise live sound input and get to grips with how to adjust the display.
    https://academo.org/demos/virtual-oscilloscope/

  2. Tomi Engdahl says:

    Parametric Press Unravels The JPEG Format
    https://hackaday.com/2023/02/14/parametric-press-unravels-the-jpeg-format/

    This is the first we’ve heard of Parametric Press — a digital magazine with some deep dives into a variety of subjects (such as particle physics, “big data” and such) that have interactive elements or simulations of various types embedded within each story.

    The first one that sprung up in our news feed is a piece by [Omar Shehata] on the humble JPEG image format. In it, he explains the how and why of the JPEG encoding process, allowing the reader to play with the various concepts along the way, in real time, within the browser.

    https://parametric.press/issue-01/unraveling-the-jpeg/

  3. Tomi Engdahl says:

    A Guide to Choosing the Right Signal-Processing Technique
    March 29, 2023
    From audio beamforming to blind source separation, this article discusses the pros and cons of the different techniques for signal processing in your device design.
    https://www.electronicdesign.com/technologies/analog/article/21262990/audiotelligence-a-guide-to-choosing-the-right-signalprocessing-technique

    What you’ll learn:

    What signal-processing techniques are available?
    How do the different signal-processing techniques work?
    Tips on choosing the right signal-processing technique for your application.

    Noise is all around us—at work and at home—making it difficult to pick out and clearly hear one voice amid the cacophony, especially as we reach middle age. Electronic devices have the same issue: Audio signals picked up by their microphones are often contaminated with interference, noise, and reverberation. Signal-processing techniques, such as beamforming and blind source separation, can come to the rescue. But what’s the best option, and for which applications?

    Intelligible speech is crucial for a wide variety of electronic devices, ranging from phones, computers, hearing-assistance devices, and conferencing systems to transcription services, car infotainment, and home assistants. But a one-size-fits-all approach isn’t the way to get the best performance out of such widely different devices.

    Variations in factors such as the number of microphones and the size of the microphone array will have an effect on which signal-processing technique is the most appropriate. The choice requires consideration not just of the performance you need, but the situation in which you need the application to work, as well as the physical constraints of the product you have in mind.

    Audio Beamforming

    Audio beamforming is one of the most versatile multi-microphone methods for emphasizing a particular source in an acoustic scene. Beamformers can be divided into two types, depending on how they work: data-independent or adaptive.

    One of the simplest forms of data-independent beamformers is a delay-and-sum beamformer, where the microphone signals are delayed to compensate for the different path lengths between a target source and the different microphones. This means that when the signals are summed, the target source coming from a certain direction will experience coherent combining, and it’s expected that signals arriving from other directions will suffer, to some extent, from destructive combining.
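
    As a rough illustration (not from the article), a delay-and-sum beamformer can be sketched in a few lines of Python; this toy version assumes a uniform line array, a far-field source, and rounds the delays to whole samples:

        import numpy as np

        def delay_and_sum(mics, spacing_m, angle_deg, fs, c=343.0):
            """mics: (n_mics, n_samples) array of simultaneous recordings."""
            n_mics, n = mics.shape
            # Far-field path difference between adjacent mics for the target direction.
            tau = spacing_m * np.sin(np.deg2rad(angle_deg)) / c
            out = np.zeros(n)
            for m in range(n_mics):
                shift = int(round(m * tau * fs))  # per-mic delay in whole samples
                out += np.roll(mics[m], -shift)   # advance to align the target wavefront
                # (np.roll wraps around at the ends; acceptable only for a sketch)
            return out / n_mics                   # target adds coherently, noise does not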

    However, in many audio consumer applications, these types of beamformers will be of little benefit because they need the wavelength of the signal to be small compared with the size of the microphone array. They work well in top-of-the-range conferencing systems with microphone arrays 1 m in diameter containing hundreds of microphones to cover the wide dynamic range of wavelengths. But such systems are expensive to produce and therefore only suitable for the business conferencing market.

    Consumer devices, on the other hand, usually contain just a few microphones in a small array. Consequently, delay-and-sum beamformers struggle because the wavelengths of speech are large relative to the small microphone array.

    Another problem is the fact that sound doesn’t move in straight lines—a given source has multiple different paths to the microphones, each with differing amounts of reflection and diffraction. This means that simple delay-and-sum beamformers aren’t very effective at extracting a source of interest from an acoustic scene.

    Adaptive Beamformers

    Adaptive beamformers are a more advanced beamforming technique. One example is the minimum variance distortionless response (MVDR) beamformer. It tries to pass the signal arriving from the target direction in a distortionless way, while attempting to minimize the power at the output of the beamformer. This has the effect of trying to preserve the target source while attenuating the noise and interference.
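
    In textbook notation (standard MVDR theory, not quoted from the article), the weights minimize output power subject to a distortionless response toward the steering vector:

        \min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
        \quad \text{subject to} \quad \mathbf{w}^{H}\mathbf{d} = 1,
        \qquad
        \mathbf{w}_{\mathrm{MVDR}} = \frac{\mathbf{R}^{-1}\mathbf{d}}{\mathbf{d}^{H}\mathbf{R}^{-1}\mathbf{d}}

    Here R is the microphone covariance matrix and d models the propagation from the target to the array; errors in d, caused by mic mismatch or reverberation, are exactly what leads to the target cancellation described next.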

    Such a technique can work well in ideal laboratory conditions, but in the real world, microphone mismatch and reverberation can lead to inaccuracy in modeling the effect of the source location relative to the array. The result is that these beamformers often perform poorly because they will start cancelling parts of the target source.

    A voice activity detector could be added to address the target cancellation problem, and the adaptation of the beamformer can be turned off when the target source is active. This typically works well when there’s just one target source. However, if there are multiple competing speakers, this technique has limited effectiveness.

    Many modern devices use another beamforming technique called adaptive sidelobe cancellation, which tries to null out the sources that aren’t from the direction of interest. These are state-of-the-art in modern hearing aids, allowing the user to concentrate on sources directly in front of them.

    Blind Source Separation

    An alternative approach to improving speech intelligibility in noisy environments is to use blind source separation (BSS) (see video below). Time-frequency masking BSS estimates the time-frequency envelope of each source and then attenuates the time-frequency points that are dominated by interference and noise.

    Another type of BSS uses linear multichannel filters. The acoustic scene is separated into its constituent parts using statistical models of how sources generally behave. BSS then calculates a multichannel filter whose output best fits these statistical models. In doing so, it intrinsically extracts all of the sources in the scene, not just one.

    The multichannel filter method can handle microphone mismatch and will deal well with reverberation and multiple competing speakers. It doesn’t need any prior knowledge of the sources, the microphone array, or the acoustic scene, since all of these variables are absorbed into the design of the multichannel filter. Changing a microphone, or a calibration error, simply changes the optimal multichannel filter.

    Because BSS works from the audio data rather than the microphone geometry, it’s a very robust approach that’s insensitive to calibration issues and can generally achieve much higher separation of sources in real-world situations than any beamformer. And, because it separates all sources irrespective of direction, it can be used to automatically follow a multi-way conversation.

    BSS Drawbacks

    However, BSS is not without its problems. For most BSS algorithms, the number of sources that can be separated depends on the number of microphones in the array. In addition, because it works from the data, BSS needs a consistent frame of reference.

    As a result, the technique is limited to devices that have a stationary microphone array. Examples include a tabletop hearing device, a microphone array for fixed conferencing systems, or video calling from a phone or tablet that’s being held steady in your hands or on a table.

    When there’s background chatter, BSS will generally separate the most dominant sources in the mix, which may include the annoyingly loud person at the next table. So, to work effectively, BSS needs to be combined with an ancillary algorithm to determine which of the sources are the sources of interest.

    On its own, BSS separates sources very well, but it doesn’t reduce the background noise by more than about 9 dB. To obtain really good performance, it must be paired with a noise-reduction technique.

    Many solutions for noise reduction use artificial intelligence (AI)—it’s utilized by Zoom and other conferencing systems, for example—to analyze the signal in the time-frequency domain, trying to identify which components are due to the signal and which are due to noise. This can work well with just a single microphone. The big problem with this technique, though, is that it extracts the signal by dynamically gating the time-frequency content, which can lead to unpleasant artifacts at poor signal-to-noise ratios (SNRs), and it may introduce considerable latency.
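
    A crude single-microphone version of this time-frequency gating can be sketched as follows (a hypothetical illustration; the text names no specific algorithm, and real AI suppressors learn the mask rather than thresholding it):

        import numpy as np
        from scipy.signal import stft, istft

        def spectral_gate(x, fs, noise_secs=0.5, threshold_db=6.0, floor=0.1):
            # Assumes the first noise_secs of x contain noise only.
            _, _, X = stft(x, fs, nperseg=512)             # hop = 256 samples
            noise_frames = max(1, int(noise_secs * fs / 256))
            noise_mag = np.abs(X[:, :noise_frames]).mean(axis=1, keepdims=True)
            # Keep bins that rise above the noise floor by the threshold; duck the rest.
            keep = np.abs(X) > noise_mag * 10 ** (threshold_db / 20)
            mask = np.where(keep, 1.0, floor)              # soft floor to limit artifacts
            _, y = istft(X * mask, fs, nperseg=512)
            return y

    The on/off gating of individual bins is precisely what produces the artifacts (often called "musical noise") mentioned above when the SNR is poor.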

    A low-latency noise-suppression algorithm combined with BSS, on the other hand, gives up to 26 dB of noise suppression and makes products suitable for real-time use—with a latency of just 5 ms and a more natural sound with fewer distortions than AI solutions. Hearing devices, in particular, need ultra-low latency to keep lip sync; it’s extremely off-putting for users if the sound they hear lags behind the mouth movements of the person they are talking to.

    The number of electronic devices that need to receive clear audio to work effectively is rising every year.

    Ask the Audio Experts: Separating Sounds with Blind Source Separation
    https://www.youtube.com/watch?v=qd7G-Xlktdw

  4. Tomi Engdahl says:

    Hackaday Prize 2023: Learn DSP With The Portable All-in-One Workstation
    https://hackaday.com/2023/05/16/hackaday-prize-2023-learn-dsp-with-the-portable-all-in-one-workstation/

    Learning Digital Signal Processing (DSP) techniques traditionally involves working through a good bit of mathematics and signal theory. To promote a hands-on approach, [Clyne] developed the DSP PAW (Portable All-in-one Workstation). DSP PAW hardware and software provide a complete learning environment for any computer where DSP algorithms can be entered as C++ code through an Arduino-like IDE.

    The DSP PAW demonstrating attenuation controlled by a potentiometer.

    The DSP PAW hardware comprises a custom board that plugs onto an STM32 NUCLEO Development Board from STMicroelectronics.

    DSP PAW
    Design, study, and analyze DSP algorithms from anywhere.
    https://hackaday.io/project/190725-dsp-paw

  5. Tomi Engdahl says:

    DIY Programmable Guitar Pedal Rocks The Studio & Stage
    https://hackaday.com/2023/05/16/diy-programmable-guitar-pedal-rocks-the-studio-stage/

    Ever wondered how to approach making your own digital guitar effects pedal? [Steven Hazel] and a friend have done exactly that, using an Adafruit Feather M4 Express board and a Teensy Audio Adapter board together to create a DIY programmable digital unit that looks ready to drop into an enclosure and get put right to work in the studio or on the stage.
    The bulk of the work is done with two parts, and can be prototyped easily on a breadboard.

    [Steven] also made a custom PCB to mount everything, including all the right connectors, but the device can be up and running with not much more than the two main parts and a breadboard.

    Building a Guitar Pedal Prototype with a Feather
    https://blog.blacklightunicorn.com/building-a-guitar-pedal-with-the-adafruit-feather-m4/

  6. Tomi Engdahl says:

    Modelling Neuronal Spike Codes

    Using principles of sigma-delta modulation techniques to carry out the mathematical operations that are associated with a neuronal topology

    https://hackaday.io/project/190891-modelling-neuronal-spike-codes

  7. Tomi Engdahl says:

    Designing a simple analog kick drum from scratch
    https://www.youtube.com/watch?v=yz37Yz315eU

    If you look at my backlog of videos, you’ll notice that I never tackled any percussion circuits before. This is mainly because percussion circuits are quite complex and dense. They mash a ton of different functional blocks – oscillators, envelopes, VCAs, filters etc. – into super efficient little packages.

    And they achieve that by taking shortcuts left and right, in sometimes surprising and unintuitive ways. Which makes them even less approachable.

    So I decided to cut my teeth on simpler single-purpose circuits first. Now that I’ve covered all of the essentials, though, I felt it’s time to give percussion a proper go. So in this video, we’ll try our hand at a classic, Roland-inspired analog kick drum. If you want to build along…

  8. Tomi Engdahl says:

    https://www.facebook.com/groups/AudioEngineerShitPost/permalink/3597395130495079/

    Using a parametric equalizer and a delay, it’s possible to approximate a variety of audio effects. Reverb simulation can be achieved through delayed and EQ’d reflections, while a chorus effect involves duplicating audio, applying pitch variation, and using a short delay with EQ. The flanger effect is attainable by combining original and delayed audio with modulated delay times. Similarly, a phaser-like effect can be created by altering delayed signals’ EQ’d frequencies. Rhythmic amplitude changes reminiscent of tremolo can be done by automating volume with delays. Sweeping filter effects are possible by modulating EQ settings over time. Pitch shifting, doubling, echo, and comb filter effects can also be simulated through similar manipulation of EQ and delay parameters. While specialized tools offer more precision, this technique provides a way to experiment and achieve distinctive audio outcomes.
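
    To make one of these concrete, here is a hedged Python sketch of the flanger case: the dry signal mixed with a copy whose delay time is swept by an LFO (with a fixed delay it degenerates to a static comb filter):

        import numpy as np

        def flanger(x, fs, max_delay_ms=3.0, rate_hz=0.25, depth=0.7):
            """x: 1-D float array; returns x mixed with an LFO-modulated delayed copy."""
            n = np.arange(len(x))
            # LFO sweeps the delay between 0 and max_delay_ms.
            delay = (max_delay_ms / 1000 * fs) * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / fs))
            idx = n - delay
            lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)  # crude edge handling
            hi = np.clip(lo + 1, 0, len(x) - 1)
            frac = idx - np.floor(idx)
            delayed = (1 - frac) * x[lo] + frac * x[hi]  # linear interpolation between samples
            return x + depth * delayed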

  9. Tomi Engdahl says:

    Fucking Fourier transforms: how do they work?

  10. Tomi Engdahl says:

    The Math Behind Music and Sound Synthesis
    https://www.youtube.com/watch?v=Y7TesKMSE74

    We hear sound because our ears can detect vibrations in the air, which come from sources like everyday objects, speakers and other people talking. Why do certain musical intervals sound the way they do? How are electronic music instruments made? This video will be all about sound frequencies, wave shapes and the math behind it all.

    Timestamps
    00:00 Intro
    00:37 Pitch vs Frequency
    01:27 Chromatic Scale, Consonance & Dissonance
    05:18 Harmonic Series, Tonality & Instrument Timbre
    07:37 Wave Shapes & Sound Design

  11. Tomi Engdahl says:

    The Geometry of Music
    https://www.youtube.com/watch?v=ZWzwb4BumIk

    What does a rectangle sound like? A square, a circle, a pentagon? This short video introduces the geometry of music.

  12. Tomi Engdahl says:

    The Mathematical Problem with Music, and How to Solve It
    https://www.youtube.com/watch?v=nK2jYk37Rlg

    There is a serious mathematical problem with the tuning of musical instruments. A problem that even Galileo, Newton, and Euler tried to solve. This video is about this problem and about some of the ways to tackle it. It starts from the basic physics of sound, proves mathematically why some musical instruments can never be perfectly in tune, and then introduces the main solutions that were proposed to solve this problem, along with their upsides and downsides: Pythagorean tuning, Just intonation, the Meantone temperament, and finally – the equal temperament, which is the tuning system almost everybody uses today in the West.
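
    The core of the problem fits in one line: no stack of pure 3:2 fifths ever lands exactly on a power of 2 (the leftover gap is the Pythagorean comma), and equal temperament resolves it by replacing the pure fifth with seven equal semitones:

        \left(\tfrac{3}{2}\right)^{12} = \tfrac{531441}{4096} \approx 129.746 \;\neq\; 2^{7} = 128,
        \qquad
        2^{7/12} \approx 1.4983 \approx \tfrac{3}{2}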

  13. Tomi Engdahl says:

    Guitar Distortion With Diodes In Code, Not Hardware

    https://hackaday.com/2023/08/23/guitar-distortion-with-diodes-in-code-not-hardware/

    Guitarists will do just about anything to get just the right sound out of their setup, including purposely introducing all manner of distortion into the signal. It seems counter-intuitive, but it works, at least when it’s done right. But what exactly is going on with the signal? And is there a way to simulate it? Of course there is, and all it takes is a little math and some Arduino code.

    https://baltic-lab.com/2023/08/dsp-diode-clipping-algorithm-for-overdrive-and-distortion-effects/
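
    For flavor, here is one common diode-style soft clipper in Python (hedged: the baltic-lab article linked above derives its curve from the Shockley diode equation; this exponential waveshaper is only a similar-sounding approximation):

        import numpy as np

        def diode_clip(x, drive=5.0):
            """Soft-clip a buffer normalized to [-1, 1]; higher drive = harder clipping."""
            x = x * drive
            # Smooth exponential knee; output magnitude stays below 1.
            return np.sign(x) * (1.0 - np.exp(-np.abs(x)))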

  14. Tomi Engdahl says:

    The Fourier Series and Fourier Transform Demystified
    https://www.youtube.com/watch?v=mgXSevZmjPc

    But what is the Fourier Transform? A visual introduction.
    https://www.youtube.com/watch?v=spUNpyF58BY

  15. Tomi Engdahl says:

    Mechanical circuits: electronics without electricity
    https://www.youtube.com/watch?v=QrkiJZKJfpY

    Spintronics has mechanical resistors, inductors, transistors, diodes, batteries, and capacitors. When you connect them together with chains, they give a really good intuition for how circuits work.

  16. Tomi Engdahl says:

    3D Audio Producer
    https://hackaday.io/project/178760-3d-audio-producer

    An application that produces 3D audio from 2D mono audio samples, with a graphical user interface.

    It is a fork of the binaural-audio-editor project, with the goals of an interactive interface, accessibility, and being lightweight.

    Gitlab Link:
    https://gitlab.com/pab44/3d-audio-producer

    Video Demo:
    https://www.youtube.com/watch?v=byJa1Za55PY

  17. Tomi Engdahl says:

    ADAU1401/ADAU1701 DSPmini Learning Board
    https://www.diyaudio.com/community/threads/adau1401-adau1701-dspmini-learning-board.360238/

    Hi, I found this on Aliexpress and was wondering if it is a board that can be used to make active crossover speakers. If so, that would be pretty great rather than buying a MiniDSP for each speaker.

    Description:

    Features:
    The ADAU1401 is a complete single-chip audio system with built-in 28/56-bit audio DSP, ADC, DAC and microcontroller-like control interface. Signal processing techniques including equalization, crossover, bass boost, multi-band dynamic processing, delay compensation, speaker compensation, and stereo image widening can be used to compensate for the practical limitations of speakers, amplifiers, and listening environments, dramatically improving the sound quality experience.
    The ADAU1401 program can be loaded from the serial EEPROM through its self-booting mechanism at power-up or from an external microcontroller. When turned off, the current state of the parameter can be written back to the EEPROM from the ADAU1401 to be recalled the next time the program is run.
    Two ADCs and four DACs provide 98.5 dB of analog input to analog output dynamic range. Seamless connection to other ADCs and DACs is possible with digital input and output ports. The ADAU1401 communicates over an I2C bus or a four-wire SPI port.
    This edition is suitable for ADAU1401/1701/1702 learning and product use. The ADAU1401 is more stable than the ADAU1701 in harsh working environments, being rated for -40°C to +105°C (the ADAU1701 is typically rated 0°C to 70°C); programs can be used interchangeably and the other specifications are the same. It self-boots after power-on, so it can run entirely without an external microcontroller. All function ports are brought out, allowing expansion: digital I2S input/output, key inputs, LED drive, auxiliary ADC (for potentiometers), and encoder volume adjustment.
    Application:
    Multimedia speaker system
    MP3 player speaker
    Car audio host
    Mini stereo system
    Digital Television
    Studio monitoring system
    Speaker divider
    Instrument sound processor
    Seat sound system (aircraft / coach)

    Specification:
    Dimensions: 3.5x5cm/1.38×1.97inch
    Color: Green
    Quantity: 1 Pc

    Package includes:
    1 x ADAU1401/ADAU1701 DSPmini Learning Board (without retail package)

    People have used similar low cost DSP chips to make speaker crossovers. Probably doable with the one you mention. However, a DSP chip is only one part of a digital crossover system. There also needs to be a source of digital audio to input into the DSP chip and a DAC channel output to drive the power amp for each speaker driver. Those digital audio source and destination devices have to be able to communicate with the DSP chip, which typically occurs at the I2S bus level. Making a whole digital speaker crossover system built around a particular DSP chip can be a fairly complicated undertaking for a beginner.

    Since you are looking to work on an audio application, I would use one of these boards: https://freedsp.github.io/index.html

    I built a freeDSP Classic SMD a few years back, when I needed a low cost DSP to do some crossover simulation and development. It worked like a charm.

    The Sigma Studio software is relatively easy to learn and IIRC it is relatively easy to use it to implement what you are looking for.

    Hi, glide 1-San,
    If you build two (Stereo) 3-way powered speakers, you can try FreeDSP Catamaran boards.
    It was designed in a dual-mono architecture with four embedded pots. In particular, the extremely high-performance differential ADC will be suitable for your case.
    https://github.com/freeDSP/FreeDSP_Catamaran_AB/wiki

    Additionally, you need an ADI USBi, a FreeUSBi, or a connection conversion adaptor for the DB-DP11219 Wondom Programmer.
    https://github.com/freeDSP/freeUSBi

    This might help. The add-on board is on Amazon:
    https://daumemo.com/how-to-program-an-analog-devices-dsp/

    You can get an idea of the performance of the boards the OP linked to from the first few lines of this:
    https://www.analog.com/media/en/technical-documentation/data-sheets/ADAU1701.pdf

    Top ADAU1701 DSPmini learning board
    https://www.youtube.com/watch?v=0R5UnQydwBE

    low cost ADAU1452 China board…
    https://www.diyaudio.com/community/threads/low-cost-adau1452-china-board.309680/page-3

    Anybody played with this board?
    ADAU1401/ADAU1701 DSPmini learning board (upgrading to ADAU1401) – AliExpress

    I got Sigma Studio talking to the board, but it doesn’t seem to save the program to the EEPROM.

    It says it is written, with no comms failure. However, when I reset the power it goes back to the default program, flashing the LED.

    I right click on the ADAU1401 IC then select ‘write latest compilation to e2prom’.

    I’m using the Cypress programmer with drivers from FreeDSP. I have had no problems using it with the 1452.

    I think the problem might be the eeprom ‘WP’ write protect pin. What did you do with it? Do I need to ground it, or pull off a resistor or something? There are no jumpers!

    I got it working!!

    Yes, the ‘WP’ pin stands for write protect (which I’m supposed to just know?) and it must be manually grounded when you hit ‘write to eeprom’.

    I’m unsure why the ADAU1401 doesn’t automatically pull the pin low when doing a write command… the datasheet says that’s how it should work.

    Do not ground the WP pin before turning it on, or the ADAU1401 cannot boot. Also, remove the jumper to ground as soon as the data is written, before you remove power. When the power is removed it can write crap to the EEPROM and corrupt the data.

    In summary:

    1) Boot the board and ‘link, compile, upload’ your program in Sigma Studio.
    2) When you want to save it to EEPROM, add the WP-to-GND jumper. Write the EEPROM.
    3) Remove the jumper.
    4) Power off the board.

    Glad you have it working!

    That little board seems very good, and also has the multipurpose pins exposed, so it’s possible to expand it a little more. I have been asking how to do that, like adding a potentiometer for volume and using external DACs for additional outputs.

    I’d like to ask what exact method you guys have used to install the Cypress USB driver for use with Sigma Studio?

    I wanted to document it for an Instructables, so I removed the driver and then tried to do it again, but I’ve found the process very variable.

    It used to be installed as ‘Analog Devices USBi (Programmed)’.

    Now when I install it, it calls itself ‘Cypress Boot Loader’, but it does still work. Is this something to do with having the EEPROM jumper on or off during install?

    Also, if I tell Windows to search in the driver folder it installs it as EZ-USB, but it says the device doesn’t work. I need to use ‘Have Disk’ and manually install it.

    I find it all rather confusing.

    Thanks regnet for the files, it helped a lot to get the DSP up and running. Everything works wonderfully.
    To write the EEPROM from the DSP I had to use an SPI connection.

    I did some research and found out how the EZ-USB programmer with the freeUSBi driver communicates directly with the DSP board via SPI. The connection is very stable! See the graphic for pin assignment. It also works with the 2-jumper board.

    I’ve already customized the original Sigma Studio file for my purposes. The DSP offers a solid basis to control my DIY active speakers. Can the STM32 microcontroller on the DSP board be used directly to connect an LCD display, an IR remote, and a rotary encoder for control, or is it only used for communication with an Arduino?

    Check out the user guide for the Dayton Audio DSP board based on ADAU1401. It has some basic examples.

    SigmaStudio®
    Graphical development tool for programming, development, and tuning software for ADI DSP audio processors and A2B® transceivers.
    https://www.analog.com/en/design-center/evaluation-hardware-and-software/software/ss_sigst_02.html

    .NET based integrated development environment (IDE).
    Supports all SigmaDSP processors.
    Supports SHARC processors when SigmaStudio for SHARC extension is installed.
    Supports A2B transceivers when A2B Software for Windows/Baremetal extension is installed.
    Allows engineers with little or no DSP coding experience to add quality digital signal processing to their designs.
    Offers a wide variety of signal processing algorithms integrated into an intuitive graphical user interface (GUI), allowing the creation of complicated audio signal flows.
    The tool can help users lower their costs by reducing development time without sacrificing quality or performance.

    ADAU1401/ADAU1701 DSPmini Learning Board Update To ADAU1401 Single Chip Sy Dropship
    https://www.aliexpress.com/item/1005005346777483.html

  18. Tomi Engdahl says:

    The Reverse Oscilloscope
    https://hackaday.com/2023/09/25/the-reverse-oscilloscope/

    Usually, an oscilloscope lets you visualize what a signal looks like. [Mitxela]’s reverse oscilloscope lets you set what you want an audio waveform to look like, and it will produce it. You can see the box in the video below.

    According to [Mitxela], part of the difficulty in building something like this is making the controls manageable for mere mortals. We really like the slider approach, which seems pretty obvious, but some other controls are a bit more subtle. For example, the interpolation control can create a squarish wave or a smooth waveform, or anything in between.

    This is sort of an artistic take on an arbitrary waveform generator but with a discrete-panel user interface. The device contains a Teensy, a Raspberry Pi Pico, a 16-bit ADC, and an external DAC.

    https://mitxela.com/projects/rscope2

  19. Tomi Engdahl says:

    Robotic Mic Swarm Helps Pull Voices Out Of Crowded Room Of Multiple Speakers
    https://hackaday.com/2023/10/04/robotic-mic-swarm-helps-pull-voices-out-of-crowded-room-of-multiple-speakers/

    One of the persistent challenges in audio technology has been distinguishing individual voices in a room full of chatter. In virtual meeting settings, the moderator can simply hit the mute button to focus on a single speaker. When there’s multiple people making noise in the same room, though, there’s no easy way to isolate a desired voice from the rest. But what if we ‘mute’ out these other boisterous talkers with technology?

    Enter the University of Washington’s research team, who have developed a groundbreaking method to address this very challenge. Their innovation? A smart speaker equipped with self-deploying microphones that can zone in on individual speech patterns and locations, thanks to some clever algorithms.

    Robotic ‘Acoustic Swarms’

    The devices can readily isolate speech from different parts of the room. (Image: YouTube/Paul G. Allen School)

    The system of microphones is reminiscent of a swarm of pint-sized Roombas, which spring into action by deploying to specific zones in a room. Picture this: during a board meeting, instead of the usual central microphone setup, these roving mics would take its place, enhancing the control over the room’s audio dynamics. This robotic “acoustic swarm” can not only differentiate voices and their precise locations in a room, but it achieves this monumental task purely based on sound, ditching the need for cameras or visual cues. The microphones, each roughly an inch in diameter, are designed to roll back to their charging station after usage, making the system easily transportable between different environments.

    The prototype comprises seven miniature robots, functioning autonomously and in sync. Using high-frequency sound, much like bats, these robots navigate their way around tables, avoiding dropoffs and positioning themselves to ensure maximum audio accuracy. The goal is to maintain a significant distance between each individual robotic unit. This spacing increases the system’s ability to mute and create specific audio zones effectively.

    “If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” explained paper co-author Tuochao Chen. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room,” said Chen.
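
    The intuition in that quote, sound arriving at nearer microphones first, is the classic time-difference-of-arrival cue. A bare-bones cross-correlation estimate of that cue looks like this (the researchers use neural networks; this sketch shows only the underlying idea):

        import numpy as np

        def arrival_lag(mic_a, mic_b):
            """Samples by which mic_a lags mic_b (positive: sound reached mic_b first)."""
            corr = np.correlate(mic_a, mic_b, mode="full")
            return int(np.argmax(corr)) - (len(mic_b) - 1)

        # lag / fs seconds corresponds to roughly 343 m/s * lag / fs of extra path length.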

    Tested across kitchens, offices, and living rooms, the system is capable of differentiating voices situated within 1.6 feet of each other 90% of the time, without any prior information about how many speakers are in the room. Currently, it takes roughly 1.82 seconds to process 3 seconds of audio. This delay is fine for livestreaming, but the additional processing time makes it undesirable for use on live calls at this stage.

  20. Tomi Engdahl says:

    https://en.cppreference.com/w/c/numeric/math/fma

    Return value

    If successful, returns the value of (x*y) + z as if calculated to infinite precision and rounded once to fit the result type (or, alternatively, calculated as a single ternary floating-point operation).

    If a range error due to overflow occurs, ±HUGE_VAL, ±HUGE_VALF, or ±HUGE_VALL is returned.

    If a range error due to underflow occurs, the correct value (after rounding) is returned.
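
    The single-rounding guarantee is easy to demonstrate (a minimal sketch, assuming Python 3.13+, which exposes C’s fma as math.fma):

        import math

        x = 2.0**27 + 1                    # x*x needs 55 significant bits; a double keeps 53
        naive = x * x - x * x              # both products round identically -> 0.0
        fused = math.fma(x, x, -(x * x))   # exact x*y + z, rounded only once
        print(naive, fused)                # 0.0 1.0 -- fma recovers the rounding error of x*x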

  21. Tomi Engdahl says:

    EQ Doesn’t Cause Phase Shift…
    https://m.youtube.com/watch?v=1ormfTMYfv0

    These videos are very nice because they talk about actual DSP concepts from an audio engineering standpoint.

  22. Tomi Engdahl says:

    It may be possible to recover, with a certain degree of accuracy, the original signal after a filter (convolution) by using a deconvolution method. When the measurement error is very low (the ideal case), deconvolution reduces to simply inverting the filter.
    https://www.dspguide.com/ch17/2.htm
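
    A minimal frequency-domain sketch of the idea, assuming the filter kernel h is known; the small eps term is the regularization that keeps near-zero frequency bins from blowing up, which is exactly where real measurement error bites:

        import numpy as np

        def deconvolve(y, h, eps=1e-6):
            """Recover x from y = x conv h, where y is the full linear convolution."""
            n = len(y)
            H = np.fft.rfft(h, n)
            Y = np.fft.rfft(y, n)
            X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
            return np.fft.irfft(X, n)[: n - len(h) + 1]   # trim back to the input length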

  23. Tomi Engdahl says:

    Extract phone numbers from an audio recording of the dial tones.
    https://github.com/ribt/dtmf-decoder
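
    The classic way to detect these tones is the Goertzel algorithm, which measures signal power at just the eight DTMF frequencies (a hedged sketch; the linked project’s internals may differ):

        import numpy as np

        DTMF_ROWS = [697, 770, 852, 941]       # Hz
        DTMF_COLS = [1209, 1336, 1477, 1633]   # Hz
        KEYS = [["1", "2", "3", "A"], ["4", "5", "6", "B"],
                ["7", "8", "9", "C"], ["*", "0", "#", "D"]]

        def goertzel_power(x, f, fs):
            """Squared magnitude of x at frequency f via the Goertzel recurrence."""
            coeff = 2.0 * np.cos(2.0 * np.pi * f / fs)
            s0 = s1 = 0.0
            for sample in x:
                s0, s1 = sample + coeff * s0 - s1, s0
            return s0 * s0 + s1 * s1 - coeff * s0 * s1

        def detect_key(frame, fs):
            """Classify one tone burst by its strongest row and column frequency."""
            row = max(range(4), key=lambda i: goertzel_power(frame, DTMF_ROWS[i], fs))
            col = max(range(4), key=lambda i: goertzel_power(frame, DTMF_COLS[i], fs))
            return KEYS[row][col]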

  24. Tomi Engdahl says:

    DTMF Decoder
    https://dtmf.netlify.app/

    DTMF Encoder/Decoder
    This tool allows you to encode or decode DTMF (dual-tone multi-frequency) signals.
    https://nhollmann.github.io/DTMF-Tool/

  25. Tomi Engdahl says:

    https://patents.google.com/patent/US4951054A/en
    Floating-point digital-to-analog converting system

