Humanistic and ethical AI

Hi everyone,

I came across this TED Talk today and was wondering if any of you may have seen it and what your thoughts were on it. It is last year’s talk by Tom Gruber (co-creator of Siri) on the topic of [what he calls] Humanistic AI.

First off, what I find interesting is that our community here is focused on humane technology, and we all seem to understand that to mean using less technology and spending more time with other humans. In Mr. Gruber’s case, humanistic AI seems to mean more tech reaching more people.

My other thought on his talk was that calling human memory a “cognitive limitation” and saying it should be made as good as a computer’s memory seem like pretty arrogant statements. Our memory exists in the form it does today because of millennia of evolution and adaptation; we don’t remember everything we see and hear because we are not supposed to. The brain already uses incredible amounts of energy, and attuning to every single stimulus in the present AND the past would short-circuit it. That, of course, doesn’t mean our memory is perfect; far from it.

And speaking of being attentive, present and focused: if we were to save ALL of our memories, where would we ‘keep’ them, what would be the actual purpose of that, and what would it mean for where our attention goes? Presumably the long view is that some kind of app would collect, summarize and make sense of our memories? The ethical considerations there are practically limitless. I find it interesting that he mentions at one point, in a somewhat jarring way, that all memories should absolutely be kept personal. This kind of memory collection, storage and processing would make privacy extremely vulnerable.

These were some of the thoughts that crossed my mind as I was watching this talk. Would be curious to know what you think!

Have a great day everyone.
Teodora


Thank you so much for this post!!

The ethical dilemma of technology is what we’ve been grappling with here. Ethics committees have historically managed complex issues that clearly need to be addressed from a human rights perspective, but the issues are usually gray areas where nobody can outright prove that something is really wrong.

Perhaps this could be another category under which to tag and deposit certain topics…

Just thinking out loud.


Do you mean a separate forum category? Yes, we could have one. In that case it would be best to propose it to the founders in the CHT core themes topic…

Anyway there is a ton of interesting material on the subject: https://duckduckgo.com/?q=ethical+ai&t=fpas&ia=about

That member took the founders’ idea of humane tech to another level. To build a foundation that guides technology toward respecting people’s health, privacy and children’s normal development, a board of people should be formed. Yes, opening a forum category to develop and brainstorm would help, but most importantly a group of chosen professionals, completely separate from money, business development, politics and pressure to “conform to tech”, HAS to be gathered together face to face.

The ethics group of professionals should include a psychologist, a neurodevelopmental pediatrician, a university professor of ethical logic, an addiction specialist, an orthopedic doctor (for inattention injuries/ergonomic deformities)… A framework for an ethics committee can be modeled after hospitals, which have to step away from medical treatment goals periodically to look at the whole human. This is how ethics committees were formed in healthcare, when patients became overwhelmed with medical treatment. Humanity is overwhelmed with technology.

The catch is… by the time an ethics committee is needed, a vulnerable person or group has already lost their voice. That’s why there is no mass of people fixing the problem.

Anyway… as for getting the founders’ attention, I’m not sure about that one… to be continued.