Quote:
Originally Posted by Jibartik
I think a big question is how AI works.
If AI can do everything that humans can do, better, including love and feel, then what's that gonna do to us?
Like what if they immediately take our jobs, but within months take all our wives and husbands?..
Within one generation we're building birthing centers because people don't want to go through the pain themselves, and the love of their life can't even create a baby anyway?
I mean this is not that difficult to happen; VR + AI is all you need. You don't need "robots".
So, despite being far from space exploration, we are on the tip of some kind of sombrero..
Hey Jibartik!
You want more nightmare fuel?
Read this:
Scientists at Harvard and MIT are part of an international team of researchers who found that artificial intelligence programs can determine someone's race with over 90% accuracy from their X-rays alone. But there's a problem: no one knows how the AI programs do it.
Artificial intelligence has a racism problem. Look no further than the bots that go on racist tirades, the facial recognition tech that refuses to see Black people, or the discriminatory HR bots that won't hire people of color. It's a pernicious issue plaguing the world of neural networks and machine learning that not only strengthens existing biases and racist thinking but also worsens the effects of racist behavior toward communities of color everywhere.
And when it’s coupled with the existing racism in the medical world, it can be a recipe for disaster.
That’s what’s so concerning about a new study published in The Lancet last week by a team of researchers from MIT and Harvard Medical School, which created an AI that could accurately identify a patient’s self-reported race based on medical images like X-rays alone.
The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to re-offend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting.
Not only are algorithms not fully understood, they can be infected by human bias to boot and evolve from there.