
If the past few years have taught us anything, it's that algorithms should not be blindly trusted.
The latest math-induced headache comes from Australia, where an automated compliance system appears to be issuing incorrect notices to some of Australia's most vulnerable people, asking them to prove they were entitled to past welfare benefits.
Politicians and community advocates have cried foul over the system, rolled out by Australia's social services provider, Centrelink.
Launched in July, the system was intended to streamline the detection of overpayments made to welfare recipients and automatically issue notices of any discrepancies.

The media and Reddit threads have since been inundated with complaints from people who say they are being accused of being "welfare cheats" without cause, thanks to faulty data.
The trouble lies with the algorithm's apparent difficulty in accurately matching tax office data with Centrelink records, according to the Guardian, although department spokesperson Hank Jongen told Mashable it remains "confident" in the system.
"People have 21 days from the date of their letter to go online and update their information," he said. "The department is determined to ensure that people get what they are entitled to, nothing more, nothing less."
Independent politician Andrew Wilkie accused the "heavy-handed" system of terrifying the community.
"My office is still being inundated with calls and emails from all around the country telling stories of how people have been deemed guilty until proven innocent and sent to the debt collectors immediately," he said in a statement in early December.
The situation is upsetting, albeit unsurprising. The siren call of big data has proved irresistible to governments globally, provoking a rush to automate and digitise.
What these politicians seem to like, above all, is that such algorithms promise speed and fewer man-hours.
Alan Tudge, the minister for human services, proudly announced that Centrelink's system was issuing 20,000 "compliance interventions" a week in December, up from a previous 20,000 per year when the process was manual. Such a jump seems incredible, and perhaps dangerous.
As data scientist Cathy O'Neil lays out in her recent book Weapons of Math Destruction, the judgments made by algorithms governing everything from our credit scores to our pension payments can easily be wrong -- they were created by humans, after all.
The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists.
These murky systems can inflict the greatest punishment on the most vulnerable.
Take, for example, a ProPublica report that found an algorithm used in American criminal sentencing to predict a defendant's likelihood of committing a future crime was biased against black people. The corporation that produced the program, Northpointe, disputed the finding.
O'Neil also details in her book how predictive policing software can create "a pernicious feedback loop" in low-income neighbourhoods. These programs may recommend that areas be patrolled to counter low-impact crimes like vagrancy, generating more arrests and thereby creating the very data that gets those neighbourhoods patrolled still more.
Even Google doesn't get it right. Troublingly, in 2015, a web developer spotted the company's algorithms automatically tagging photos of two black people as "gorillas."
Former Kickstarter data scientist Fred Benenson has come up with a good term for this rose-coloured glasses view of what numbers can do: "Mathwashing."
"Mathwashing can be thought of using math terms (algorithm, model, etc.) to paper over a more subjective reality," he told Technical.lyin an interview. As he goes on to to describe, we often believe computer programs are able to achieve an objective truth out of reach for us humans -- we are wrong.
"Algorithm and data driven products will always reflect the design choices of the humans who built them, and it's irresponsible to assume otherwise," he said.
The point is, algorithms are only as good as we are. And we're not that good.