Ifrim Ciprian

General Plan - Supervisor's Meeting

Updated: Mar 23, 2022

Today I had a roughly two-hour Zoom meeting with my supervisor, Denis Tsvetkov, to discuss the planning of the project, its goals and objectives, and the best approach to creating a high-quality project that fully meets the grading criteria.


We discussed how the code will be implemented in the Arduino IDE, and how features like taking multiple readings and averaging them can increase accuracy, as well as using a TinyML Arduino library to reject odd values that do not match what is expected. The plan is to use many sensors: a BMP280/Nicla Sense ME (gases, temperature, humidity, pressure, etc.), a microphone for noise analysis, an IMU for fall detection or step tracking (this is a big maybe), electromagnetic field sensors (helpful for knowing if there are broken cables or other hazardous areas around the user, and an extremely important precaution for people with a pacemaker), a heart rate sensor, an oximeter, and so on. The final list of sensors will depend on their importance, the space available in the shell, the implementation effort, and the time available for development.
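
To give a rough idea of the multiple-readings approach, below is a minimal Arduino sketch assuming a BMP280 connected over I2C and read through the Adafruit_BMP280 library; the sensor choice, I2C address and thresholds are placeholders, and the outlier rejection here is a simple deviation-from-the-mean filter rather than the TinyML library mentioned above.

// Minimal sketch: average several BMP280 temperature readings and drop outliers.
// The sensor, I2C address and thresholds below are placeholders.
#include <Wire.h>
#include <Adafruit_BMP280.h>

Adafruit_BMP280 bmp;               // I2C pressure/temperature sensor
const int N_SAMPLES = 10;          // readings per averaged output
const float MAX_DEVIATION = 2.0;   // degrees C allowed around the mean

void setup() {
  Serial.begin(9600);
  if (!bmp.begin(0x76)) {          // common BMP280 address; 0x77 on some boards
    Serial.println("BMP280 not found");
    while (true) {}
  }
}

float averagedTemperature() {
  float samples[N_SAMPLES];
  float sum = 0;
  for (int i = 0; i < N_SAMPLES; i++) {
    samples[i] = bmp.readTemperature();
    sum += samples[i];
    delay(50);                     // short gap between readings
  }
  float mean = sum / N_SAMPLES;

  // Second pass: keep only readings close to the mean, then re-average.
  float filteredSum = 0;
  int kept = 0;
  for (int i = 0; i < N_SAMPLES; i++) {
    if (fabs(samples[i] - mean) <= MAX_DEVIATION) {
      filteredSum += samples[i];
      kept++;
    }
  }
  return kept > 0 ? filteredSum / kept : mean;
}

void loop() {
  Serial.print("Temperature: ");
  Serial.print(averagedTemperature());
  Serial.println(" C");
  delay(2000);
}

The same pattern would apply to the other sensors: take a burst of samples, discard the ones that clearly do not belong, and report the average of the rest.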


A good suggestion from Denis was to borrow some medical-grade sensors/devices from the Bioengineering course, test my readings against them, and try to improve my accuracy as much as possible to increase the "medical" capability of the device.


For the voice output, I am planning to use a DFPlayer Mini to play certain MP3 files based on the code. The idea is to divide each phrase into multiple words/fragments that can be interchanged. Example: "The temperature is 24 degrees Celsius" would be built from "the temperature is" as one MP3 file, "24" as a second MP3 file, and "degrees Celsius" as a third (a string concatenation process, but in MP3 form). This approach helps when a long phrase, or multiple phrases, need to be spoken together.
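
To make the MP3 concatenation idea concrete, here is a minimal sketch assuming the DFRobotDFPlayerMini library, the module wired over SoftwareSerial on pins 10/11, and clips stored on the SD card as 0001.mp3 ("the temperature is"), 0002.mp3 ("24") and 0003.mp3 ("degrees Celsius"); the pins and track numbers are placeholders.

// Minimal sketch of MP3 "concatenation": play numbered clips back to back
// on a DFPlayer Mini, waiting for each clip to finish before starting the next.
// Pins and track numbers are placeholders.
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>

SoftwareSerial dfSerial(10, 11);   // RX, TX to the DFPlayer Mini
DFRobotDFPlayerMini player;

// Block until the DFPlayer reports that the current track has finished.
void waitForTrackEnd() {
  while (true) {
    if (player.available() && player.readType() == DFPlayerPlayFinished) {
      return;
    }
    delay(10);
  }
}

// Play a list of track numbers in order, e.g. {1, 2, 3} for
// "the temperature is" + "24" + "degrees Celsius".
void playSequence(const int tracks[], int count) {
  for (int i = 0; i < count; i++) {
    player.play(tracks[i]);        // track N corresponds to 000N.mp3 on the SD card
    waitForTrackEnd();
  }
}

void setup() {
  dfSerial.begin(9600);
  Serial.begin(9600);
  if (!player.begin(dfSerial)) {
    Serial.println("DFPlayer Mini not found");
    while (true) {}
  }
  player.volume(20);               // volume range 0-30

  int phrase[] = {1, 2, 3};        // "the temperature is" + "24" + "degrees Celsius"
  playSequence(phrase, 3);
}

void loop() {}

In practice, the number clip would be picked at runtime from the sensor reading (one clip per number, or digit by digit), which is what makes the fragment-based approach so flexible.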

This type of processing was used in the early 2000s for voice answering machines, so the idea is to build a more advanced version of that with smoother transitions.

Initially the plan is to use a neural-network TTS to create the voice lines, and then move on to actual voice actors to make the voice assistant more human. The plan is also to add some "quirky lines" that would make the voice assistant seem more like a human, which in turn would make users happier to interact with it.


For the voice recognition, I am planning to use a Voice Recognition Module V3 and train it with different commands, up to 79 (the maximum of the module). Towards the end, I plan to switch to a higher-quality microphone so the module is trained on a better-quality audio dataset.
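
As a rough sketch of how recognised commands might be handled, the example below assumes the Elechouse VoiceRecognitionV3 library, the module wired to pins 2/3, and two records (0 and 1) already trained through the library's training sketch; the record numbers, pins and command names are placeholders.

// Minimal sketch: load two trained records on the Voice Recognition V3 module
// and react when one of them is recognised.
// Record numbers, pins and command names are placeholders.
#include <SoftwareSerial.h>
#include "VoiceRecognitionV3.h"

VR myVR(2, 3);                     // module RX/TX pins
uint8_t buf[64];

const uint8_t REC_TEMPERATURE = 0; // placeholder record numbers
const uint8_t REC_HEART_RATE  = 1;

void setup() {
  Serial.begin(115200);
  myVR.begin(9600);

  // Load the trained records so the module listens for them.
  myVR.load(REC_TEMPERATURE);
  myVR.load(REC_HEART_RATE);
}

void loop() {
  int ret = myVR.recognize(buf, 50);   // wait up to 50 ms for a recognised command
  if (ret > 0) {
    switch (buf[1]) {                  // buf[1] holds the recognised record number
      case REC_TEMPERATURE:
        Serial.println("Command: read temperature");
        break;
      case REC_HEART_RATE:
        Serial.println("Command: read heart rate");
        break;
    }
  }
}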


The plan is also to create an elegant shell CAD model in SolidWorks, based on a specific design. I will manufacture the shell with an SLA 3D printer (which I have at home) and, once the test fit is complete, use a CNC machine to create it out of brass (I have already spoken to Spike and Cecil about using the CNC to machine a brass block and the waterjet to cut steel sheets/blocks).

If time is on my side, I will try different materials and choose whichever looks best/most elegant. Examples: CNC-machined aluminium, CNC-machined brass that is then brushed with wet-and-dry paper, CNC-machined brass that is sandblasted to give it a specific look, etc.


I am also planning to use KeyShot to create industrial-grade renders and 360°/exploded-view animations, plus DaVinci Resolve for video editing.


There was also a 30-minute meeting with Eris Chinellato regarding the monthly meetings, along with general questions about the project structure and what is expected from us.
