As I discussed in my previous blog post (https://ciprianaa30.wixsite.com/aurismartwatch/post/voice-recognition-final-version), I covered the tests and results of the different models trained to increase accuracy.
I have added an extra voice line that outputs information about the battery:
the battery level from the BQ25 charging IC
the battery percentage from the voltage divider
battery faults
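The three readings above can be combined into the spoken battery report. Here is a minimal sketch of that composition step; the function name, argument types, and wording are my own assumptions for illustration, not the actual firmware interface:

```python
# Hypothetical sketch: assembling the spoken battery report from the three
# readings listed above. Field names and phrasing are assumptions, not the
# real device API.

def battery_report(charge_state: str, percentage: float, faults: list) -> str:
    """Compose the text for the battery voice line."""
    parts = [f"Battery is {charge_state}", f"at {percentage:.0f} percent"]
    if faults:
        parts.append("faults detected: " + ", ".join(faults))
    else:
        parts.append("no faults detected")
    return ", ".join(parts)

print(battery_report("charging", 87.4, []))
# → Battery is charging, at 87 percent, no faults detected
```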
After adding this voice line, the feature space looks as follows:

Once passed through the MFCC processing block, we get the following cepstral coefficients:

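For readers unfamiliar with the MFCC step, here is a compact NumPy sketch of the standard computation (framing, power spectrum, mel filterbank, then a DCT to get the cepstral coefficients). The sample rate, frame sizes, and coefficient counts are illustrative assumptions, not the exact settings of the deployed pipeline:

```python
import numpy as np

def mfcc(signal, sr=16_000, n_fft=512, hop=256, n_mels=32, n_mfcc=13):
    """Standard MFCC pipeline; parameter values are assumptions for illustration."""
    # Frame the signal and apply a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the mel axis yields the cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T

# One second of stand-in audio at 16 kHz
audio = np.random.default_rng(0).standard_normal(16_000)
coeffs = mfcc(audio)
print(coeffs.shape)  # (61, 13): 61 frames x 13 coefficients
```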
By testing multiple models and comparing their accuracy and loss, I improved the architecture of the convolutional neural network, changing the number of one-dimensional convolutional layers so the model can learn more intricate details from the samples.
Here is the architecture:

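To make the stacked-1D-convolution idea concrete, here is a minimal NumPy forward-pass sketch of that kind of feature extractor over the MFCC frames. The layer widths, kernel sizes, and class count are illustrative assumptions, not the exact deployed architecture:

```python
# Sketch of a stacked 1-D convolutional classifier over MFCC frames.
# Layer sizes and the 5-class head are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """x: (time, in_ch); kernels: (k, in_ch, out_ch). 'Valid' padding + ReLU."""
    k, _, out_ch = kernels.shape
    steps = (x.shape[0] - k) // stride + 1
    out = np.empty((steps, out_ch))
    for t in range(steps):
        window = x[t * stride : t * stride + k]          # (k, in_ch)
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)                          # ReLU

def max_pool1d(x, size=2):
    trimmed = x[: (x.shape[0] // size) * size]
    return trimmed.reshape(-1, size, x.shape[1]).max(axis=1)

# Input: 61 MFCC frames x 13 coefficients (shape assumed from the pipeline)
x = rng.standard_normal((61, 13))
x = max_pool1d(conv1d(x, rng.standard_normal((3, 13, 16))))  # Conv1D(16) + pool
x = max_pool1d(conv1d(x, rng.standard_normal((3, 16, 32))))  # Conv1D(32) + pool
logits = x.mean(axis=0) @ rng.standard_normal((32, 5))       # dense head, 5 classes
print(logits.shape)  # (5,)
```

Adding a second (or third) convolutional layer lets later layers combine the short-range patterns found by earlier ones into longer, more intricate ones, which matches the motivation described above.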
And here is the confusion matrix:

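As a reminder of how a confusion matrix like the one above is built from model predictions, here is a small sketch; the labels and values are made up for illustration, not taken from my test set:

```python
# Build a confusion matrix: rows are true classes, columns are predictions.
# The example labels below are illustrative, not real test results.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2]   # one sample of class 0 misclassified as class 1
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
accuracy = np.trace(cm) / cm.sum()   # correct predictions lie on the diagonal
print(round(accuracy, 3))  # 0.833
```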
The system can not only recognise an extra voice command, but it can also do so with more noise, different types of noise, and different volumes of the spoken command.
The system was then optimised for memory consumption, in terms of both flash and RAM, by converting all the weights from floats to integers.
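The float-to-integer conversion can be sketched as symmetric int8 quantisation, which stores each weight in one byte instead of four plus a single scale factor per tensor. This is a minimal sketch of the idea; the actual converter used by the toolchain may use a different scheme (e.g. per-channel scales or zero points):

```python
# Sketch of symmetric per-tensor int8 weight quantisation: 4x smaller storage
# than float32 at the cost of a small rounding error. The scheme here is an
# assumption; the real toolchain may differ.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 values plus one float scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(w.nbytes, q.nbytes)  # 4000 1000: a 4x reduction in weight storage
```

The maximum reconstruction error is at most half a quantisation step (`scale / 2`), which is why accuracy usually drops only slightly after this optimisation.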
Here is the new table with all the versions and their specific comments:
