Hall of Shame: AI-based Job Interview software

Another example of AI dystopia that is almost fully upon us: the use of AI in the job market!

(I first wanted to post this to Examples of what is coming to us in the near future, but this dystopian software is in use today.)

Multiple AI-based tools are already being used to analyse people during their job interviews. And - as some very basic research clearly demonstrates - they do not work as they should. The research also highlights how easily people place a great deal of trust in automated systems.

Among other things, the research showed that:

  • Your choice of background greatly influences your chance of landing the job (hint: show a bookcase behind you)
  • How you are dressed is similarly important (hint: do not wear your glasses during the interview)
  • Even the lighting/brightness of the video matters (hint: brighter is better)

I think the best thing you can do with AI software like this is flush it down the drain and - importantly - strongly discourage anyone who is considering using it!


(I love the Hall of Shame idea by the way :smiley: )

The pandemic has exposed that this is a much deeper problem than we realize. Automation is already used extensively not just at the interview stage, but also to filter out resumes at the application stage.

If an applicant’s work history has a gap of more than six months, the resume is automatically screened out by the employer’s recruitment management system (RMS) or applicant tracking system (ATS), based on that consideration alone.

IT systems are intended to be set up and then run autonomously from there, so when a major event like the pandemic happens, it’s possible these systems will not be reconfigured - IF that setting is even configurable.
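To make the problem concrete, here is a minimal sketch (in Python, with made-up field names and a hypothetical six-month threshold - not the logic of any real ATS product) of the kind of hard-coded screening rule being described:

```python
from datetime import date

# Hypothetical illustration of a hard-coded ATS screening rule,
# not the logic of any real product.
MAX_GAP_MONTHS = 6  # often baked in rather than exposed as a setting

def months_between(start: date, end: date) -> int:
    """Approximate number of whole months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def passes_gap_screen(work_history: list[tuple[date, date]]) -> bool:
    """Reject any resume whose employment gap exceeds the threshold.

    work_history: list of (start, end) periods, sorted by start date.
    """
    for (_, prev_end), (next_start, _) in zip(work_history, work_history[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return False  # screened out on this consideration alone
    return True

# A pandemic-sized gap gets the resume dropped before a human ever sees it:
history = [(date(2015, 1, 1), date(2020, 3, 1)),
           (date(2021, 2, 1), date(2022, 5, 1))]
print(passes_gap_screen(history))  # False: 11-month gap > 6
```

With the threshold baked into the code like this, relaxing the rule after an event like the pandemic means a code change and redeployment rather than flipping a setting.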


Something related. Not sure if I’ve mentioned Joy Buolamwini’s work before or how well known she is. (I imagine folks have seen Coded Bias: Coded Bias (Full-length PBS Film 2020) - YouTube)

What I loved about her research project into racial bias in facial recognition software is that, having identified the bias and a potential solution, the project went back to the companies rather than simply publishing the research, saying: this is what we found, and we will measure again in a year to see if this bias is less evident. MS pulled their software immediately and worked with her to correct these problems. That may have something to do with it having started as an arts project.

There is a more recent research paper from a few months ago:
“Importantly, considering the potentially harmful consequences of FR, future work should ask in what situations FR technology is inappropriate to use and the question of how misuse of these systems can be prevented, e.g., through policies. Only once these questions have been addressed can advocacy for less-biased FR be beneficial to marginalized communities.
As discussed, the easiest and least problematic way to improve error rates seems to be to improve the data that are used to train the FR models. This means ensuring that sensitive attributes and their intersections are more equally represented. Respective research would then need to confirm that this is actually sufficient. Existing work by, e.g., Buolamwini and Gebru [16], already created a dataset that is diverse in terms of skin color and gender. Future work could continue on this path and ensure diversity along other axes.”
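For anyone curious what “measuring again” across intersections looks like in practice, here is a minimal, hypothetical sketch (invented data and field names, not the paper’s actual evaluation code) of reporting error rates disaggregated by skin type and gender rather than as one overall accuracy number:

```python
from collections import defaultdict

# Hypothetical sketch (invented data) of a disaggregated audit in the
# spirit of Buolamwini and Gebru: report error rates per intersection of
# sensitive attributes instead of a single overall accuracy figure.

# Each record: (skin_type, true_gender, predicted_gender)
results = [
    ("darker",  "female", "male"),    # misclassified
    ("darker",  "female", "female"),
    ("darker",  "male",   "male"),
    ("lighter", "female", "female"),
    ("lighter", "male",   "male"),
    ("lighter", "male",   "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)

for skin_type, true_gender, predicted in results:
    group = (skin_type, true_gender)   # the intersection being audited
    totals[group] += 1
    if predicted != true_gender:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} (n={totals[group]})")
```

A single overall accuracy figure would hide the fact that the errors concentrate in one group, which is exactly the pattern the Gender Shades work exposed.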

I’d love to know about any similar iterative design/discussion in regard to healthcare research and co-design. So many of the problems of pandemic disinfo stem from people not feeling respected or heard. I have an interesting three-dimensional personal experience of this dynamic, and I see it as a fundamentally important aspect of modern society and humane tech. An open-source, patient-led healthcare research model would be very timely right now. I know of a couple, but they are privately run. Is anyone aware of any fediverse version of this type of thing?