Ifrim Ciprian

System Flowchart

As the development of my system, both hardware and software, is coming to an end, I have created a high-level flowchart to demonstrate its functionality.


MAIN SYSTEM

The following image shows the main blocks of the process:


The system starts by initialising and reading the different sensors to check for 2 main situations (a minimal sketch of this check follows the list):

  1. The gesture event has been triggered and the system should move to the computation and voice inference.

  2. One of the sensors has detected a value in the environment that is higher than its threshold and could therefore cause serious health issues, so the system needs to inform the user by playing a specific voice line.
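
A minimal Arduino-style sketch of this main-loop check could look like the following; the types, thresholds and helper functions are placeholders I have assumed for the example, not the actual firmware code:

```cpp
#include <Arduino.h>

// Placeholder types, thresholds and stubs -- illustrative only.
struct SensorReadings { float co2; float uvIndex; };

const float CO2_LIMIT = 1000.0f;  // example threshold (ppm), assumed value
const float UV_LIMIT  = 6.0f;     // example threshold, assumed value

SensorReadings readAllSensors()                    { return {450.0f, 2.0f}; }  // stub
bool gestureDetected()                             { return false; }           // stub
void runAssistantFlow(const SensorReadings &r)     { (void)r; }  // stub: LED cue, voice, ML hand-off
void playWarningVoiceLine(const SensorReadings &r) { (void)r; }  // stub

void setup() {}

void loop() {
  SensorReadings r = readAllSensors();

  if (gestureDetected()) {
    // Case 1: gesture event -> computation and voice inference.
    runAssistantFlow(r);
  } else if (r.co2 > CO2_LIMIT || r.uvIndex > UV_LIMIT) {
    // Case 2: a reading crossed a health threshold -> warn the user.
    playWarningVoiceLine(r);
  }
}
```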

I believe the second case is straightforward. For the first case, however, once the gesture is detected, the system informs the user with a visual cue by turning the LED on and by playing a "How can I help you today?" voice line. During this voice line, the system performs all the readings, filtering and computation. All the data needed by the machine learning models is then packed into an array and sent over I2C to the Xiao Sense.
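
As a minimal sketch of the packing-and-sending step, assuming the Nicla acts as the I2C controller and the Xiao Sense listens at an example address (the address, feature count and feature order are placeholders, not the real ones):

```cpp
#include <Arduino.h>
#include <Wire.h>

const uint8_t XIAO_ADDR    = 0x08;  // example address, not necessarily the real one
const int     NUM_FEATURES = 6;     // assumed feature count for the example

// Pack the filtered readings into a float array and ship the raw bytes
// over I2C to the Xiao Sense.
void sendFeatures(const float features[NUM_FEATURES]) {
  Wire.beginTransmission(XIAO_ADDR);
  Wire.write(reinterpret_cast<const uint8_t *>(features),
             NUM_FEATURES * sizeof(float));  // 24 bytes of payload
  Wire.endTransmission();
}

void setup() {
  Wire.begin();  // Nicla Sense ME as the I2C controller
}

void loop() {
  float features[NUM_FEATURES] = {0};  // filled from the filtered sensor data
  sendFeatures(features);
  delay(2000);
}
```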


As soon as the voice line ends, the user can say a voice command. The recording of the voice command lasts 1650ms and starts circa 50ms after all the computation has finished.


After 1900ms, another voice line is played in the form of "computing" (which lasts between 700 and 900ms).

After circa 2700ms in total, the recording, voice classification, machine learning weather classification (4 models) and machine learning rainfall regression (1 model) have all completed, and everything has been packed and sent back to the main system, the Nicla Sense ME.
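
A rough, single-board sketch of this timeline is shown below; the helper functions are stubs, and in the real system the work is split between the Nicla Sense ME and the Xiao Sense:

```cpp
#include <Arduino.h>

// Timing constants taken from the description above.
const unsigned long RECORD_DELAY_MS  = 50;    // gap after the computation ends
const unsigned long RECORD_WINDOW_MS = 1650;  // voice-command recording window
const unsigned long COMPUTING_CUE_MS = 1900;  // when the "computing" line starts

void startRecording()                {}            // stub: microphone capture on the Xiao Sense
void playVoiceLine(const char *name) { (void)name; }  // stub

void setup() {}

void loop() {
  unsigned long t0 = millis();

  delay(RECORD_DELAY_MS);
  startRecording();                            // records for RECORD_WINDOW_MS

  while (millis() - t0 < COMPUTING_CUE_MS) {}  // wait until the 1900 ms mark
  playVoiceLine("computing");                  // lasts roughly 700-900 ms

  // By circa 2700 ms the voice classification, the 4 weather classifiers and
  // the rainfall regressor have finished and the results have come back.
}
```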


The Nicla then spends circa 6ms unpacking the received array.
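
A sketch of that unpacking step could look like this; the packet layout (one voice label, one weather label, one rainfall value) and the address are assumptions for the example, not the documented format:

```cpp
#include <Arduino.h>
#include <Wire.h>
#include <string.h>

const uint8_t XIAO_ADDR = 0x08;  // example address

// Assumed result layout, packed so both boards agree on the byte layout.
struct __attribute__((packed)) MlResults {
  uint8_t voiceLabel;    // classified voice command
  uint8_t weatherLabel;  // majority-vote weather class
  float   rainfallMm;    // regression output
};

bool readResults(MlResults &out) {
  uint8_t buf[sizeof(MlResults)];
  uint8_t len = sizeof(MlResults);

  if (Wire.requestFrom(XIAO_ADDR, len) != len) return false;
  for (uint8_t i = 0; i < len; i++) buf[i] = Wire.read();
  memcpy(&out, buf, len);  // unpack the raw bytes back into the struct
  return true;
}

void setup() { Wire.begin(); }

void loop() {
  MlResults res;
  if (readResults(res)) {
    // hand res.voiceLabel / res.weatherLabel / res.rainfallMm to the playback logic
  }
  delay(2000);
}
```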


Once the system reaches this point, a switch-case structure selects the output based on the received label representing the voice command, and the corresponding voice line is played with the specific data from the sensors.
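
A sketch of that dispatch, with made-up label values and playback helpers (the real mapping differs in the actual firmware):

```cpp
#include <Arduino.h>

// Placeholder sensor data and playback helpers -- illustrative only.
struct SensorData { float temperature; float humidity; float uvIndex; };

void playTemperatureLine(float t) { (void)t; }   // stub
void playHumidityLine(float h)    { (void)h; }   // stub
void playUvLine(float uv)         { (void)uv; }  // stub
void playVoiceLine(const char *s) { (void)s; }   // stub

void handleVoiceLabel(uint8_t label, const SensorData &d) {
  switch (label) {
    case 0:  playTemperatureLine(d.temperature); break;
    case 1:  playHumidityLine(d.humidity);       break;
    case 2:  playUvLine(d.uvIndex);              break;
    default: playVoiceLine("Sorry, I did not understand that."); break;
  }
}
```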


At the end of all this, it returns to the main loop, where it checks the sensors for any adverse health effects.


MACHINE LEARNING

This flowchart, however, demonstrates the ML functionality.

Once the sensors have read all the data (with filtering and computation), it is put into an array, which is used by the 5 ML models (a sketch of this step follows the list):

  1. XGBoost Classifier & Decision Tree Regressor trained on the 2000-2019 dataset with no oversampling and 5 features, circa 7300 samples.

  2. Gaussian Naive Bayes and Decision Tree Classifier are based on the same 5 features and 2000-2019 data, but with the classes oversampled to have an equal amount of samples per class, resulting in circa 33 000 samples.

  3. Support Vector Machine model trained on a completely different dataset, from 2020 to 2021, with 6 features, this time including the UV index as well. This results in a highly accurate rain/fair model, but with very low accuracy on the other 3 classes.
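
As a hypothetical sketch of how the single feature array might feed the five models, the predict* helpers below are stubs standing in for the ported inference code, and the feature order is an assumption:

```cpp
#include <Arduino.h>

// Stub inference functions standing in for the exported/ported models.
int   predictXGBoost(const float *f)         { (void)f; return 0; }     // 5 features
int   predictGaussianNB(const float *f)      { (void)f; return 0; }     // 5 features, oversampled data
int   predictDecisionTreeClf(const float *f) { (void)f; return 0; }     // 5 features, oversampled data
int   predictSVM(const float *f)             { (void)f; return 0; }     // 6 features, incl. UV index
float predictDecisionTreeReg(const float *f) { (void)f; return 0.0f; }  // rainfall regression

void runModels(const float features[6], int labels[4], float &rainfallMm) {
  labels[0] = predictXGBoost(features);
  labels[1] = predictGaussianNB(features);
  labels[2] = predictDecisionTreeClf(features);
  labels[3] = predictSVM(features);
  rainfallMm = predictDecisionTreeReg(features);
}
```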

All these models are combined, and with a loop function I check which label repeats the most, resulting in 3 outcomes (a sketch of this voting logic follows the examples below):

  1. All labels repeat once; in that case, use the XGBoost model label, as it is the most accurate single model.

  2. A label repeats 2 or more times; it is the most frequent label, so output that specific label.

  3. 2 labels repeat equally (2 times each, across the 4 models). In that case, output the first label that repeats twice, then proceed with the second label that repeats twice, in the following manner:

"The machine learning classifier has identified the weather of your surrounding as clear" -> 1st label is 0

"There is also the chance of light showers" -> 2nd label is 1

