On a related note, see ‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace (NYT)
"In all of this sociopolitical criticism of ML, however, what has gone unchallenged is the idea that the technology itself is technically sound – in other words that any problematic outcomes it produces are, ultimately, down to flaws in the input data. But now it turns out that this comforting assumption may also be questionable. At the most recent Nips (Neural Information Processing Systems) conference – the huge annual gathering of ML experts – Ali Rahimi, one of the field's acknowledged stars, lobbed an intellectual grenade into the audience. In a remarkable lecture he likened ML to medieval alchemy. Both fields worked to a certain extent – alchemists discovered metallurgy and glass-making; ML researchers have built machines that can beat human Go champions and identify objects from pictures. But just as alchemy lacked a scientific basis, so, argued Rahimi, does ML. Researchers, he claimed, often can't explain the inner workings of their mathematical models: they lack rigorous theoretical understandings of their tools and in that sense are currently operating in alchemical rather than scientific mode.
Does this matter? Emphatically yes. As Rahimi puts it: 'We are building systems that govern healthcare and mediate our civic dialogue. We would influence elections. I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy.'"
From: Magical thinking about machine learning won't bring the reality of AI any closer (The Guardian)