Closing gaps in Responsible AI: collectively turning principles into practice

A member of our LinkedIn Group just posted a very interesting tool that lets people crowdsource ideas for making Responsible Artificial Intelligence a reality.

Here’s the tool:

It lets you identify gaps where you think current AI is lacking and score the gaps that other people have identified. It then shows you gaps from others and asks for your ideas on how to close them. A really creative and effective way to crowdsource, I think. A rough sketch of that loop follows below.
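To make the workflow concrete, here is a minimal sketch of that submit/score/propose loop in Python. All names here (Gap, GapBoard, and so on) are hypothetical and invented for illustration; this is not the tool's actual code or API, just a toy model of the three steps it walks you through.

```python
from dataclasses import dataclass, field


@dataclass
class Gap:
    """A gap in current AI practice, as submitted by one participant."""
    author: str
    description: str
    scores: list[int] = field(default_factory=list)   # ratings from other participants
    ideas: list[str] = field(default_factory=list)    # proposed ways to close the gap

    def average_score(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0


class GapBoard:
    """Toy model of the crowdsourcing loop: submit a gap, score others', propose ideas."""

    def __init__(self) -> None:
        self.gaps: list[Gap] = []

    def submit(self, author: str, description: str) -> Gap:
        gap = Gap(author, description)
        self.gaps.append(gap)
        return gap

    def score(self, gap: Gap, rating: int) -> None:
        gap.scores.append(rating)

    def propose_idea(self, gap: Gap, idea: str) -> None:
        gap.ideas.append(idea)

    def top_gaps(self, n: int = 3) -> list[Gap]:
        # Surface the gaps the crowd rates as most pressing.
        return sorted(self.gaps, key=lambda g: g.average_score(), reverse=True)[:n]


if __name__ == "__main__":
    board = GapBoard()
    gap = board.submit("alice", "No audit trail for high-stakes model decisions")
    board.score(gap, 5)
    board.propose_idea(gap, "Require logged explanations for high-stakes outputs")
    print(board.top_gaps(1))
```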

The tool was created by the Partnership on AI, which brings together many partners to make AI more human-friendly and humane:

One way to think about this is to conceptualize corporations as a form of AI that runs on a cluster computer made of human nodes, using states as their OS. What sort of principles do people think need to be made inherent to the legal software corporations run on, to make them behave more ethically? Those lists of principles would probably be applicable to software AI.

That’s a nice way to look at it… a good perspective for filling in a bunch of gaps!

By the way, interestingly, the Research Program Lead behind the initiative is a very active Mastodon member and responded to my toot about it.