Commit 876732e

Update README
1 parent 44b5e81 commit 876732e

1 file changed (+13, −25 lines)

README.md

Lines changed: 13 additions & 25 deletions
````diff
@@ -1,57 +1,45 @@
-CRNN for Live Music Genre Recognition
-=====================================
+CNN for Live Music Genre Recognition
+====================================
 
-Convolutional-Recurrent Neural Networks for Live Music Genre Recognition is a project aimed at creating a neural network recognizing music genre and providing a user-friendly visualization for the network's current belief of the genre of a song. The project was created for the 24-hour Braincode Hackathon in Warsaw by Piotr Kozakowski, Jakub Królak, Łukasz Margas and Bartosz Michalak.
+Convolutional Neural Networks for Live Music Genre Recognition is a project aimed at creating a neural network that recognizes music genres and provides a user-friendly visualization of the network's current belief about the genre of a song. The project was created for the 24-hour Braincode Hackathon in Warsaw by Piotr Kozakowski, Jakub Królak, Łukasz Margas and Bartosz Michalak.
 
-This project uses Keras for the neural network and Tornado for serving requests.
+The model has since been significantly improved and rewritten in TensorFlow.js, so it no longer requires a backend: the network can run inside the user's browser.
 
 
 Demo
 ----
 
-You can see a demo for a few selected songs here: [Demo](http://deepsound.io/genres/).
+You can see a demo at [http://deepsound.io/genres/](http://deepsound.io/genres/). You can upload a song using the big (and only) button and see the results for yourself. All mp3 files should work fine.
 
 
 Usage
 -----
 
-In a fresh virtualenv type:
+It's easiest to run using Docker:
 
 ```shell
-pip install -r requirements.txt
-```
-
-to install all the prerequisites. Run:
-
-```shell
-python server.py
+docker build -t genre-recognition . && docker run -d -p 8080:80 genre-recognition
 ```
 
-to launch the server.
+The demo will be accessible at http://0.0.0.0:8080/.
 
-You can also use Docker Compose:
-
-```shell
-docker-compose up
-```
-
-The demo will be accessible at http://0.0.0.0:8080/. You can upload a song using the big (and only) button and see the results for yourself. All mp3 files should work fine.
-
-Running server.py without additional parameters launches the server using a default model provided in the package. You can provide your own model, as long as it matches the input and output architecture of the provided model. You can train your own model by modifying and running train\_model.py. If you wish to train a model by yourself, download the [GTZAN dataset](http://opihi.cs.uvic.ca/sound/genres.tar.gz) (or provide analogous) to the data/ directory, extract it, run create\_data\_pickle.py to preprocess the data and then run train\_model.py to train the model:
+By default, it will use a model pretrained by us, achieving 82% accuracy on the GTZAN dataset. You can also provide your own model, as long as it matches the input and output architecture of the provided model. If you wish to train a model yourself, download the [GTZAN dataset](http://opihi.cs.uvic.ca/sound/genres.tar.gz) (or an analogous one) to the data/ directory, extract it, run `create_data_pickle.py` to preprocess the data, and then run `train_model.py` to train the model. Afterwards, run `model_to_tfjs.py` to convert the model to TensorFlow.js so it can be served.
 
 ```shell
 cd data
 wget http://opihi.cs.uvic.ca/sound/genres.tar.gz
 tar zxvf genres.tar.gz
 cd ..
+pip install -r requirements.txt
 python create_data_pickle.py
 python train_model.py
+python model_to_tfjs.py
 ```
 
-You can "visualize" the filters learned by the convolutional layers using extract\_filters.py. This script for each convolutional neuron extracts and concatenates a few chunks resulting in maximum activation of this neuron from the tracks from the dataset. By default, it will put the visualizations in the filters/ directory. It requires the GTZAN dataset and its pickled version in the data/ directory. Run the commands above to obtain them. You can control the number of extracted chunks using the --count argument. Extracting a higher number of chunks will be slower.
+You can "visualize" the filters learned by the convolutional layers using `extract_filters.py`. For every convolutional neuron, this script extracts and concatenates several chunks of tracks from the dataset that maximally activate that neuron. By default, it will put the visualizations in the filters/ directory. It requires the GTZAN dataset and its pickled version in the data/ directory. Run the commands above to obtain them. You can control the number of extracted chunks using the `--count` argument. Extracting more chunks will be slower.
 
 
 Background
 ----------
 
-The rationale for this particular model is based on several works, primarily [Grzegorz Gwardys and Daniel Grzywczak, Deep Image Features in Music Information Retrieval](http://ijet.pl/index.php/ijet/article/view/10.2478-eletel-2014-0042/53) and [Recommending music on Spotify with Deep Learning](http://benanne.github.io/2014/08/05/spotify-cnns.html). The whole idea is extensively described in our blog post [Convolutional-Recurrent Neural Network for Live Music Genre Recognition](http://deepsound.io/music_genre_recognition.html).
+This model has been inspired by several works, primarily [Grzegorz Gwardys and Daniel Grzywczak, Deep Image Features in Music Information Retrieval](http://ijet.pl/index.php/ijet/article/view/10.2478-eletel-2014-0042/53) and [Recommending music on Spotify with Deep Learning](http://benanne.github.io/2014/08/05/spotify-cnns.html). The old version of the model is described in our blog post [Convolutional-Recurrent Neural Network for Live Music Genre Recognition](http://deepsound.io/music_genre_recognition.html). The new one is similar; the key differences are the removal of the recurrent layers, the Adam optimizer instead of RMSprop, and batch normalization. These changes boosted accuracy from 67% to 82% while retaining the model's ability to efficiently output predictions for every point in time.
````
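The updated Background section notes that the model outputs genre predictions for every point in time, which a live visualization can display as a song plays; a single song-level label then comes from aggregating those per-frame outputs. A minimal sketch of that aggregation step, with hypothetical frame and class counts and simulated softmax outputs (not the project's actual code):

```python
import numpy as np

# Hypothetical per-frame outputs: 128 time frames x 10 GTZAN genres,
# each row a probability distribution (as a softmax layer would emit).
rng = np.random.default_rng(0)
logits = rng.normal(size=(128, 10))
frame_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Song-level estimate: average the per-frame distributions over time,
# then pick the most probable genre index.
song_probs = frame_probs.mean(axis=0)
predicted_genre = int(np.argmax(song_probs))

# The average of valid distributions is itself a valid distribution.
assert abs(float(song_probs.sum()) - 1.0) < 1e-9
```

Time averaging is only the simplest aggregation choice; the per-frame distributions themselves are what a live demo can show as the network's evolving belief.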
