Knightscope: Is it Humane?

I remember doing a project on this company called Knightscope, which makes security robots, for my AP CSP class. Recently I heard a report that they are increasing their number of contracts and expanding. How do you all feel about this? It keeps security personnel out of harm's way, but it also severely automates a very human job.


Probably the most notable stories about it include the following:

I first encountered them when I worked on the NASA Ames campus, as Carnegie Mellon University’s extension there in the Research Park used them to patrol the parking lot grounds back in 2016-2017. They never seemed to quite activate right or do much, and suddenly they all disappeared without notice.

Most of the coverage has been kind of on the comedic side. (Though you can argue that the SPCA case was dehumanizing and showed a complete outsourcing of human empathy.) I think we've been so inundated with headlines about AIs / machines beating humans at this or that, and with the way these stories are framed, we are merely left to feel less useful and thus lousy about ourselves and our future. So there's a kind of robotic schadenfreude at work here.

The other stories regarding human-robot interaction – say, autonomous vehicles or wheeled delivery drones on sidewalks – seem to incite a sort of human anger at social encroachment without permission. Kind of like scabs crossing picket lines during a strike. So we hear stories of vandalism and human assaults on robots and vehicles. That said, we see some of that too with e-scooters, so a good part of that reaction could also just be resentment that someone is creating a new Tragedy of the Commons by exploiting public space to make some disembodied entrepreneur into a billionaire.

None of that says anything about your core question. I never felt threatened by them because they are largely an extension of an already large surveillance and intervention state (e.g., Obama as the “Drone President”, anyone?). And few have really rebelled against robotic vacuum cleaners taking the jobs of domestic help.

So some automation of a human job is both necessary and probably inevitable. But it again comes down to algorithms and trusting what's in the hearts, ethics, and system awareness of their creators … which is where I have much greater doubts. It comes down to my not trusting the people behind them, regardless even of their intentions, due to a lack of foresight, a lazy approach to systems analysis and thinking, a governance model driven by profits over people, etc.