Hi All,
We are exploring multiple approaches to developing adaptive screen experiences across platforms. Our project is called breathing.ai (www.breathing.ai). We detect breathing and heart rate via web and mobile cameras using machine learning, and adapt the screen's colors, fonts, etc. to support deeper breaths and slower heart rates.
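For anyone curious how camera-based heart-rate detection can work in principle: a common technique is remote photoplethysmography (rPPG), where the mean green-channel brightness of the face region fluctuates slightly with each pulse. Below is a minimal sketch of that idea under simplifying assumptions (a clean signal, a fixed frame rate); it is illustrative only and not breathing.ai's actual pipeline, and the function name is hypothetical.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (bpm) from per-frame mean green-channel values.

    The dominant frequency in the 0.7-3.0 Hz band (42-180 bpm)
    is taken as the pulse frequency.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)     # plausible heart-rate band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                    # Hz -> beats per minute

# Synthetic check: a 1.2 Hz oscillation corresponds to 72 bpm.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
fake_signal = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)  # baseline + pulse
bpm = estimate_heart_rate(fake_signal, fps)
```

Real-world systems additionally need face tracking, motion compensation, and filtering against lighting changes, which is where the machine learning comes in.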
We take roughly 20,000 breaths per day, and many of them are shallow and contribute to stress. Stress in turn contributes to cardiovascular disease, which according to the WHO is the number one cause of disease and death globally.
It takes more than a village to develop this, and we would be grateful to work with you or receive your feedback.
Our method is patent-protected, since we do not want it used to, for example, adapt ads to users' nervous systems.
Our prototypes have shown that heart rates can change by up to ±10 beats per minute with different colors and fonts, and that this personalization can support deeper breaths.
Currently, screen designs are one-size-fits-all, with no customization to the user's nervous system.
Hope to hear from you!
Thanks so much for reading, and thanks to this community!
Hannes