Submitted by Leo_D517 t3_11xd1iz in MachineLearning

audioflux is a library for deep-learning-oriented audio and music analysis and feature extraction. It supports dozens of time-frequency analysis transforms and hundreds of corresponding time-domain and frequency-domain feature combinations. These features can be fed to deep learning networks for training and used to study a variety of tasks in the audio field, such as classification, separation, music information retrieval (MIR), and ASR.

Source Code: https://github.com/libAudioFlux/audioFlux
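
For a quick sense of the API, here is a minimal sketch along the lines of the README's Mel spectrogram example (the file path is a placeholder, and parameter names may differ slightly between versions):

    import numpy as np
    import audioflux as af
    from audioflux.type import SpectralFilterBankScaleType

    # Read an audio file (placeholder path)
    audio_arr, sr = af.read('sample.wav')

    # BFT (based fourier transform) object producing a 128-band Mel-scale spectrogram
    bft_obj = af.BFT(num=128, radix2_exp=12, samplate=sr,
                     scale_type=SpectralFilterBankScaleType.MEL)

    spec_arr = np.abs(bft_obj.bft(audio_arr))  # magnitude spectrogram, shape (128, time_frames)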

246

Comments


xbcslzy t1_jd2eyo7 wrote

Nice, hope it helps me in my work

3

Leo_D517 OP t1_jd2g6pg wrote

First, librosa is a very good audio feature library.

The main differences between audioflux and librosa are:

  • Systematic, multi-dimensional feature extraction and combination, which can be used flexibly in a wide range of task research and analysis.
  • High performance: the core is implemented in C, with FFT hardware acceleration on different platforms, making large-scale feature extraction convenient.
  • Mobile support: it meets the requirements of real-time computation on audio streams on mobile devices.

Our team wants to do audio MIR-related business on mobile, so all feature extraction operations must be fast and have cross-platform support for mobile devices.

For training, we used librosa at the time to extract CQT-related features. It took about 3 hours for 10,000 samples, which was really slow.

Here is a simple performance comparison

Server hardware:

- CPU: AMD Ryzen Threadripper 3970X 32-Core Processor
- Memory: 128GB

Each sample is 128 ms of audio (sample rate: 32000 Hz, data length: 4096).

The table below shows the total time taken to extract each feature from 1000 samples.

Feature   audioFlux   librosa   pyAudioAnalysis   python_speech_features
Mel       0.777s      2.967s    --                --
MFCC      0.797s      2.963s    0.805s            2.150s
CQT       5.743s      21.477s   --                --
Chroma    0.155s      2.174s    1.287s            --
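
For anyone who wants to reproduce a comparison like this, here is a rough benchmark sketch (synthetic data; exact numbers depend on hardware and library versions, and the parameters here are only an approximation of our setup):

    import time
    import numpy as np
    import audioflux as af
    import librosa
    from audioflux.type import SpectralFilterBankScaleType

    SR, N = 32000, 4096  # 128 ms per sample, as above
    samples = [np.random.uniform(-1, 1, N).astype(np.float32) for _ in range(1000)]

    # audioFlux: create the BFT object once and reuse it across samples
    bft_obj = af.BFT(num=128, radix2_exp=12, samplate=SR,
                     scale_type=SpectralFilterBankScaleType.MEL)
    t0 = time.time()
    for y in samples:
        np.abs(bft_obj.bft(y))
    print(f'audioFlux Mel: {time.time() - t0:.3f}s')

    # librosa equivalent
    t0 = time.time()
    for y in samples:
        librosa.feature.melspectrogram(y=y, sr=SR, n_mels=128, n_fft=4096)
    print(f'librosa Mel:   {time.time() - t0:.3f}s')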

Finally, audioflux has been in development for about half a year and has been open source for just over two months, so there are certainly deficiencies and room for improvement. The team will keep working hard and listening to community opinions and feedback.

Thank you for your participation and support. We hope the project will keep getting better and better.

46

fanjink t1_jd2ghpk wrote

This library looks great, but I get this:
OSError: dlopen(/Users/***/opt/anaconda3/envs/audio/lib/python3.9/site-packages/audioflux/lib/libaudioflux.dylib, 0x0006): tried: '/Users/***/opt/anaconda3/envs/audio/lib/python3.9/site-packages/audioflux/lib/libaudioflux.dylib' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))

3

Leo_D517 OP t1_jd2hhov wrote

We have noticed this issue, and it will be resolved in the next version. For now, you can install the package by compiling the source code.

Please follow the steps in the Document to compile the source code.

The steps are as follows:

  1. Installing dependencies on macOS
    Install Command Line Tools for Xcode. Even if you install Xcode from the App Store, you must configure command-line compilation by running:
    xcode-select --install
  2. Python setup:
    $ python setup.py build
    $ python setup.py install
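
After installing, a quick sanity check that the native library loads on your architecture (the import itself triggers the dlopen that failed above):

    import platform
    import audioflux as af  # dlopen happens here; success means the build matches your arch

    print(platform.machine())  # expect 'arm64' on Apple silicon
    print(af.__file__)         # confirms which installation was imported
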
7

rising_pho3nix t1_jd2jjsz wrote

This is nice. I'm doing MIR as part of my thesis work. Will definitely use this.

16

JJtheSucculent t1_jd3axwm wrote

This is cool. I’m curious to try it out for an audio side project.

3

r4and0muser9482 t1_jd4e8qn wrote

Looks neat. How does it compare to OpenSMILE? The license sure makes it an attractive alternative.

2

Oswald_Hydrabot t1_jd5d0h0 wrote

Very cool! I have been looking for a better toolkit for audio analysis, this looks great!

2

ShowerVagina t1_jd5dnri wrote

Yes. We have image AI and NLP text AI, and video is on the way, probably later this year. I've been waiting for music AI. Jukebox was pretty meh; I know it can be way better.

1

itsnotlupus t1_jd5td54 wrote

It's not a user-facing product; it's a building block that would be useful for training music-oriented neural networks, be they diffusers or other types of models.

It's probably going to take a little while before we see new models that leverage this library.

If you're looking for "stable diffusion but for music" right now, you could look at Riffusion (https://huggingface.co/riffusion/riffusion-model-v1)

2

Oceanboi t1_jd6g49h wrote

How do these handmade features compare to features identified by CNNs? Only reason I ask is that I'm finishing up some thesis work on sound event detection using different spectral representations as inputs to CNNs (Cochleagram, Linear Gammachirp, Logarithmic Gammachirp, Approximate Gammatone filters, etc). Wondering how these features perform in comparison on similar tasks (UrbanSound8K) and where it fits in the larger scheme of things.

2

gootecks t1_jd6o1uo wrote

Really interesting project! Do you think it could be used to detect sound effects in games? For example, you press a button in the game which triggers an attack that makes a sound when it connects.

1

Leo_D517 OP t1_jd7sq7l wrote

OpenSMILE is mainly used for emotion analysis and classification of audio, while audioFlux focuses on broad audio feature extraction and is used to study a variety of tasks in the audio field, such as classification, separation, music information retrieval (MIR), and ASR.

1

Leo_D517 OP t1_jd7vszp wrote

Of course. You can use audioFlux to extract features from the sound-effect audio you want to detect, and then build and train a model on them.

Then extract features in real time from the audio stream captured by the microphone, and use the trained model for prediction.
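
As a very rough illustration of that pipeline (not production code: sounddevice is just one possible capture library, and model.predict is a placeholder for whatever classifier you train):

    import numpy as np
    import sounddevice as sd  # assumption: any audio-capture library works here
    import audioflux as af
    from audioflux.type import SpectralFilterBankScaleType

    SR, FRAME = 32000, 4096  # ~128 ms analysis window
    bft_obj = af.BFT(num=128, radix2_exp=12, samplate=SR,
                     scale_type=SpectralFilterBankScaleType.MEL)

    def callback(indata, frames, time_info, status):
        y = indata[:, 0].astype(np.float32)
        feat = np.abs(bft_obj.bft(y))   # Mel features for this frame
        # label = model.predict(feat)   # placeholder: your trained classifier
        # if label == 'attack_hit': handle the detection

    with sd.InputStream(channels=1, samplerate=SR, blocksize=FRAME,
                        callback=callback):
        sd.sleep(10_000)  # listen for 10 seconds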

1