I’m taking a graduate-level deep learning course at a reputable school and have the opportunity to do a quarter-long project with a team of up to three people. Do you guys have any good ideas about what my project could be about? I want to do something that benefits society and is also related to mindfulness / emotional intelligence, and maybe pair programming as well.
After the project is done, might even present it at conferences!
I should also add that my background includes 10 years in software development, 14 years in meditation / yoga / martial arts, and involvement in some governmental social-good initiatives, and I’m currently studying Zen.
The good news is that for my group project I can probably get help from pretty skilled people, either from my classmates or from companies in the area.
Since May 2018 I have worked on several different projects (as a founder and CEO) that are close to the HumaneTech core idea (helping people make the Internet useful, clear, and personalized). But my problem is that I’m not an expert in software development at all. That’s why it’s good for me to connect with IT specialists who are ready to create useful projects for people.
In January I’m going to do customer development in some European countries, just to test some ideas and get feedback from European customers. Maybe in February the deep learning software development task will become relevant.
Hello there! I am reading this a bit late, perhaps… I have developed an algorithm for a Social Learning Network based on HOW the human brain is actually wired. Too bad my domain registration is still being renewed… but if I get a reply to this I will send you links. The time is ripe to do something - NOW… Cheers!
How about starting an open-source (AGPL-licensed) machine learning project for the development of humane social media algorithms? Develop them as libraries or as a self-hosted service that anyone can integrate into their own social media, forum, or bulletin-board software.
The big social media platforms, FB, Insta, Snap, TikTok, YouTube, etc. all optimize their algorithms for maximum engagement, i.e. retaining eyeballs on the platform for as long as possible. The algorithms learn that outrage and divisiveness work best in this regard, and the platforms become ‘outrage machines’ that drive people ‘to the bottom of their brainstems’, as Tristan Harris likes to put it. There is no incentive to change this, as it rakes in the most advertisement dollars.
Your algorithms could do the opposite. Use text and sentiment analysis and optimize for insight, value, factfulness, non-divisiveness, and reason. Recommend content someone might like to read and react to based on their previous contributions, but also recommend material that avoids leading them into a narrow filter bubble.
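To make that concrete, here is a minimal toy sketch of the re-ranking idea: score posts by a crude "constructiveness" heuristic and surface the highest-scoring ones, instead of ranking by raw engagement. The word lists, weights, and function names are all illustrative placeholders I made up; a real system would use trained sentiment and stance models, not keyword matching.

```python
# Toy sketch: rank posts by "constructiveness" instead of engagement.
# OUTRAGE_WORDS / REASON_WORDS and the 2x outrage weight are invented
# placeholders -- a real system would use trained sentiment models.

OUTRAGE_WORDS = {"outrageous", "disgusting", "idiots", "destroy"}
REASON_WORDS = {"because", "evidence", "source", "however", "consider"}

def constructiveness(text: str) -> float:
    """Score a post: reward reasoning markers, penalize outrage cues."""
    words = [w.strip(".,!?:") for w in text.lower().split()]
    if not words:
        return 0.0
    reason = sum(w in REASON_WORDS for w in words)
    outrage = sum(w in OUTRAGE_WORDS for w in words)
    return (reason - 2 * outrage) / len(words)  # outrage weighted heavier

def recommend(posts: list[str], k: int = 2) -> list[str]:
    """Return the k most constructive posts, highest score first."""
    return sorted(posts, key=constructiveness, reverse=True)[:k]

posts = [
    "Those idiots are outrageous and will destroy everything!",
    "Consider the evidence: the source suggests otherwise, however.",
    "Nice weather today.",
]
print(recommend(posts))  # measured, reasoned posts rank first
```

The same scoring hook could be swapped for a proper classifier later; the point is only that the ranking objective is an explicit, replaceable function rather than engagement by default.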
A basic, feasible variant of such an algorithm could be applied to recommend interesting people to follow on the Fediverse, so you could attach a practical implementation to your study. (There are many Fediverse applications, and many more on the rise.)