The dystopian world of science fiction is upon us, yet many choose to deny its horrific consequences and remain in blissful ignorance.
At a time when credible information is continually lost in a tsunami of digital noise and misinformation, please support this free resource with a tax-free donation!
This $5B insurance company likes to talk up its AI. Now it’s in a mess over it
h/t Rachel Metz @rachelmetz
“Some Twitter users were alarmed at what they saw as a ‘dystopian’ use of technology, as the company’s posts suggested its customers’ insurance claims could be vetted by AI based on unexplained factors picked up from their video recordings.”
AI emotion-detection software tested on Uyghurs
h/t Shannon Vallor @ShannonVallor
“A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs in Xinjiang, the BBC has been told.”
‘AI’ is being used to profile people from their head vibrations
h/t Prof Jean Burgess @jeanburgess
“there is very little reliable, empirical evidence that VibraImage and systems like it are actually effective at what they claim to do.”
A rogue killer drone ‘hunted down’ a human target without being instructed to, UN report says
h/t Daragh Ó Briain @CBridge_Chief
“The Turkish-built KARGU-2, a deadly attack drone designed for asymmetric warfare and anti-terrorist operations, targeted one of Haftar’s soldiers while he tried to retreat, according to the paper.”
65% of execs can’t explain how their AI models make decisions, survey finds
h/t Stewart Rogers @TheRealSJR
“Despite increasing demand for and use of AI tools, 65% of companies can’t explain how their AI models’ decisions or predictions are made.”
Sign up to get the full-length version of this newsletter delivered to your inbox on the first Tuesday of every month until September!