"And yet it is beginning to seem likely that some small number of smart people will one day roll these dice. And the temptation will be understandable. We confront problems—Alzheimer’s disease, climate change, economic instability—for which superhuman intelligence could offer a solution. In fact, the only thing nearly as scary as building an AGI is the prospect of not building one. Nevertheless, those who are closest to doing this work have the greatest responsibility to anticipate its dangers. Yes, other fields pose extraordinary risks—but the difference between AGI and something like synthetic biology is that, in the latter, the most dangerous innovations (such as germline mutation) are not the most tempting, commercially or ethically. With AGI the most powerful methods (such as recursive self-improvement) are precisely those that entail the most risk.Can We Avoid a Digital Apocalypse? : A Response to the 2015 Edge Question : Sam Harris
We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one."
Saturday, January 17, 2015
Can We Avoid a Digital Apocalypse? : A Response to the 2015 Edge Question : Sam Harris
Final paragraphs of a Sam Harris artificial general intelligence (AGI) reality check