|Submitted on :||Wed, 14th of Feb 2018 - 00:35:56 AM|
|Post ID :||7xambd|
|Post Name :||t3_7xambd|
|Post Type :||link|
|Subreddit Type :||public|
|Subreddit ID :||t5_2qh1s|
Think about a future Manhattan as a pedestrian-friendly oasis, with major parts of the city repurposed for foot and (electric) bike traffic framed by shiny clean skyscrapers (no emissions to dirty them up), 30-minute on-demand Amazon delivery of millions of items (a supply benefit of urban density), and layers of autonomous transportation systems to support the supply chain and move the public around with extreme efficiency (some combination of subway, autonomous cars, hyperloop, and drones). Maybe AR telepresence becomes sufficiently immersive that people feel less need to co-locate and finally spread out around the country - but even that, I don't think, stops the total domination of society by a handful of densely packed elite cities.
I'd be curious to know what r/economics thinks of this - it strikes me as straw-man reasoning.
It sets out to pose and answer a question about the capabilities of Artificial Intelligence in the future, yet it relies on past economic data.
No 1 - why is the question of whether different types of future AI may or may not master specialized tasks as quickly as more general ones a question for an economist? Surely the experts here are AI developers?
No 2 - what relevance does past economic data have to answering this technical question?
I get that this argument could work if you conflate automation with Artificial Intelligence, but they are two separate things. The AI of the future is not something that has existed before, so why would data about automation in the past answer questions about it?
TIL a Harvard professor who had worked with Bill Gates spoke with him shortly after Gates left to start Microsoft. The professor recalled, "He had moved to Albuquerque... to run a small company writing code for microprocessors, of all things. I remember thinking: 'Such a brilliant kid. What a waste.'"