[New post] If AI Is Predicting Your Future, Are You Still Free?
Janice Flahiff posted:
This is a subscription-based magazine, but you might be able to see one free article (as I did).
I realize I might be pushing this when it comes to health issues. But I think overly relying on predictions and algorithms, as described in this article, does not reflect reality.
This has big implications when it comes to public health. Putting people in groups (race, ethnicity, socioeconomic status) can be helpful for campaigns and other broad efforts. But treating individuals based more on select descriptors than on their individual medical history can be harmful, imho.
Thoughts, anyone? I'd like to hear from you; please feel free to comment.
Part of being human is being able to defy the odds. Algorithmic prophecies undermine that.
Everyone wants to know what's to come—right? But our obsession with predictions reveals more about ourselves.
Excerpts
As you read these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.
These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven't thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.
Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI's forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score).
One major ethical problem is that by making forecasts about human behavior just like we make forecasts about the weather, we are treating people like things.
A second, related ethical problem with predicting human behavior is that by treating people like things, we are creating self-fulfilling prophecies.