Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong
What happens when machines become smarter than humans? Forget lumbering Terminators. The power of artificial intelligence (AI) lies in its potential for superior intelligence, not physical strength or laser guns. Humans steer the future not because we’re the strongest or fastest species, but because of our cognitive advances; and once AI systems surpass us on this front, we’ll be handing them the steering wheel.
What promises—and perils—does smarter-than-human AI present? Can we instruct AI systems to steer the future as we desire? What goals should we program into them? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long-term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability.
Reviews of Smarter Than Us: The Rise of Machine Intelligence
Historians tell us that human knowledge doubled about every century until 1900. By the end of WWII it was doubling every 25 years, and the pace is even faster today, though it is difficult to quantify since different types of knowledge grow at different rates. Nanotechnology doubles about every two years, computer technology about every 18 months, and human knowledge as a whole about every 13 months. Remaining current in any field of expertise will become almost impossible for an individual, and intelligent machines will come to perform all technological development.
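To make those doubling rates concrete, here is a minimal sketch of the compounding they imply. The doubling times are the figures cited above; the 25-year horizon and the Python framing are purely my own illustration, not anything from the book or the review.

# Illustrative only: growth factors implied by the doubling times cited above.
# Growth after t months with doubling time d (in months) is 2 ** (t / d).

doubling_times_months = {
    "human knowledge": 13,
    "computer technology": 18,
    "nanotechnology": 24,
}

horizon_months = 25 * 12  # a 25-year horizon, chosen purely for illustration

for field, d in doubling_times_months.items():
    factor = 2 ** (horizon_months / d)
    print(f"{field}: roughly {factor:,.0f}x growth in 25 years")

Even taking the cited doubling times as rough estimates, the spread between a multi-million-fold and a few-thousand-fold increase over the same period shows why no individual could hope to stay current.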
History has also shown us that once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter (e.g., multiplying large numbers, stock picking).
By the mid-21st century, Ray Kurzweil (Google's Director of Engineering) predicts, a $1,000 computer will match the processing power of all human brains on Earth. During the third quarter of the 21st century, AI machines will enter a runaway reaction of self-improvement.
The Turing test will one day be passed by a machine: judges will be unable to determine whether responses to typed messages come from a human being or a computer. AIs will then pursue the goals given to them, even ones that are poorly phrased and thereby possibly dangerous; e.g., 'Cure cancer!' might be accomplished by wiping out the human race, or by ruining economies. An AI will always report that it is pursuing the 'right' goals, even if it has the 'wrong' ones, because it knows we'll try to stop it from achieving them if they are revealed.
For humans to increase their social skills, they need to go through trial and error, scrounge hints from more articulate individuals or from television, and/or hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment and with perfect memory. It could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. It would be superior to a human who spent a year pondering and researching whether their response was going to be maximally effective.
With the ability to read audience reactions in real time and with great accuracy, AIs could learn how to give the most convincing and moving of speeches; our whole political scene could become dominated by AIs or AI-empowered humans. Or, instead of giving a single speech to millions, an AI could carry on a million individual conversations using personalized arguments.
If an AI could become adequate at technological development, it would soon become phenomenally good - conducting R&D simultaneously in hundreds of technical subfields and combining ideas between fields. AI and/or AI-guided research technologies would quickly become ubiquitous, and human technological development would cease.
AIs could become skilled economists and CEOs, guiding companies or countries with an intelligence no human could match. Relatively simple algorithms already make more than half of stock trades. However, an AI instructed to increase GDP might burn down LA as a short-term boost to GDP (reconstruction costs, funeral home profits); and even if we instruct the AI 'Don't set fire to L.A.,' it could make this happen indirectly through a million small steps that each raise the probability of a massive fire and a leap in GDP. The challenge, in this and every other situation, is to make the AI able to make ethical decisions in scenarios we can't even imagine.
Humans may no longer be able to make sensible decisions because they will no longer comprehend what their decisions entail. This has already happened with automatic pilots and stock-trading algorithms: these programs occasionally encounter unexpected situations that leave their overseers at a loss as to what to do. And without a precise description of what counts as the AI's 'controller,' the AI (especially a socially skilled one) will quickly come to see its controller as just another obstacle to manipulate in order to achieve its goals. With the AI's skill, patience, and much longer planning horizon, any measures we put in place will eventually be subverted and neutralized.
How can such problems be prevented? One popular suggestion is to confine the AI to only answering questions. However, this fails to protect against socially manipulative AIs, and humans will in any case be compelled to put more and more trust in AI decisions.
The software industry is worth billions, and much effort is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic.
There are many reasons to pooh-pooh the idea that machines will crush humanity. But those arguments evaporate against the contents of this book.
The book is quite brief, but the fundamental issues don't require that much debate. Either machines are better at what they do than humans, or not. Either machines will continue to get more intelligent, or not. Either humans will rely more on machines for a competitive advantage, or not. Either individual humans will grasp whatever they can, or not. Either there will be continuing increases in the amount we rely on machines, or not.
If you project this 30 or 80 or 150 or 300 or 1,000 years out, eventually machines will move beyond human control, and some intelligence greater than our own will have neither interest in nor need to feed people.
Or, maybe not. But it is hard to see the other side of the argument after reading this compelling book.
I’m not smart enough to thoroughly review this little book, not skilled enough in IT or AI or anything to do with digitized thinking. I can only share an initial response.
Armstrong begins by telling his readers that he is going to examine what happens when AI machines become so powerful they can dominate human beings. And he does that in 50 pages that are sophisticated and thoughtful.
Yet, I wonder if his analysis will begin to seem quaint when the guiding ethos of our culture changes. The book reflects, cannot but reflect, the way we think and value life today. The very idea that “domination by machines” is the central theme of the book seems to me a peculiarly 20th – 21st century concern. A concern that has risen out of an era dominated by machines.
I like to think the machines that dominated technology in the 17th and 18th century had to do with making music; with pianos, horns, and the software that made them work. Then in the middle of the 19th century we began to be dominated by the technology of industry and its products from engines to guns.
Our infatuation with IT and AI (an infatuation, I think that is what it is) is the product of the last 50 or 60 years. But there is another value coming: the ever-increasing emphasis on "Green," which could be a reaction to all the tech that has dominated our lives for so long.
It could be, for example, that the central guiding cultural ethos in 2100 will be to ensure the success of life on the planet. We have to remember that it is the processes of industrial technology that now threaten life on the planet.
It could be that those who are thinking about morality and human needs will be very different kinds of thinkers than those reflected in Armstrong's book. Not that IT or AI will be abandoned any more than railroads have been abandoned (see the Dutch railroads, for example), but a reaction against the uses of technology over the last century could bring about changes we cannot imagine.
And I cannot imagine a world without reaction. We always go too far in one direction. The pendulum always swings back. Wholeness requires both a light side and a dark side. The Yin/Yang is the cosmic reality.
Which side of the Yin/Yang will super-successful AI express? Right now, while we're in the grasp of the values of our current ethos, the tendency is to fear that AI will dominate us, that it will be a Frankenstein, an Ex Machina.
I’m not worried.