Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World

RRP: £99
Price: £9.9
FREE Shipping

In stock

Description

Self-driving cars have already driven tens of millions of miles among us. Powered by a moderate level of intelligence, they, on average, drive better than most humans. They keep their ‘eyes’ on the road and they don’t get distracted. They can see further than we can, and they teach each other, in a matter of seconds, what each of them learns individually. It’s no longer a matter of if but of when they will become part of our daily life. When they do, they will have to make a multitude of ethical decisions of the kind that we humans have had to make, billions of times, since we started to drive. The author, Mo Gawdat, joined Google in 2007 and eventually rose to become chief business officer at Google [X].

Book review: ‘Scary Smart’ by Mo Gawdat | E+T Magazine

AI is already more capable and intelligent than humanity. Today's self-driving cars are better than the average human driver, and fifty per cent of jobs in the US are expected to be taken by AI-automated machines before the end of the century. In this urgent book, Mo Gawdat argues that if we don’t take action now, in the infancy of AI development, it may become too powerful to control. If our behaviour towards technology remains unchanged, AI could disregard human morals in favour of profits and efficiency, with alarming and far-reaching consequences.

Or it could be that this text was actually written (developed? spawned?) by an AI bot, which is why it is so sparsely referenced, simply circular and, most annoyingly… Overall, I suspect that Scary Smart might be a bit much for some readers, not so much in the scary as in the philosophising; but in terms of reading something a little different, something that challenges you to do and be better and offers a unique perspective, you couldn’t do better. ‘When machines are specifically built to discriminate, rank and categorize, how do we expect to teach them to value equality?’

For example, if a young girl suddenly jumps into the middle of the road in front of a self-driving car, the car needs to make a swift decision that might inevitably hurt someone else: either turn a little to the left and hit an old lady, saving the life of the young girl, or stay on course and hit the girl. What is the ethical choice to make? Should the car value the young more than the old? Or should it hold everyone accountable and not claim the life of the lady, who did nothing wrong? What if it were two old ladies? What if one was a scientist who the machines knew was about to find a cure for cancer? What determines the right ethical code then? Would we sue the car for making either choice? Who bears the responsibility for the choice: its owner, its manufacturer, or its software designer? Would that be fair when the AI running the car has been shaped by its own learning path rather than by the influence of any of them?

Mo Gawdat: History says that since the most ancient times, one of the dreams of the Pharaohs and of the ancient Chinese civilisations was to create something that mimics humans, from automatons to Mechanical Turks, to the clay soldiers of the Chinese armies and the great guards of the pharaonic era…

The Future of Artificial Intelligence: A Conversation with Mo Gawdat

An interesting framework on what he calls “the inevitables” and how to prepare for a future superintelligence. Gawdat predicts that by 2049 AI will be a billion times more intelligent than humans, and in this interview I speak to him about what artificial intelligence means for our species, and why we need to act now to ensure a future that preserves humanity. Gawdat was born in Egypt, the son of a civil engineer and an English professor, and showed an early interest in technology.

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World

Bear in mind that this thesis is built up to after a fair dose of caution; in fact, the majority of the book is the "Scary" part, where Gawdat explains the concerns and worries around AI, pointing out where we can go drastically wrong and describing some inevitable dystopias. (To be honest, my main gripe with the book is that I would have enjoyed much more material on the potential dark futures of AI than was presented.) It’s also worth remembering that his ‘be more discerning’ solution, aimed at adults, is at least in line with a possible reality for people his age, who remember a life without phones and the internet. Asking the upcoming generation to have those traits is chocolate teapot time.

The answer is us. Humans design the algorithms that define the way that AI works, and the information it processes reflects an imperfect world. Does that mean we are doomed? In Scary Smart, Mo Gawdat, the internationally bestselling author of Solve for Happy, draws on his considerable expertise to answer this question and to show what we can all do now to teach ourselves and our machines how to live better. With more than thirty years' experience working at the cutting edge of technology and his former role as chief business officer of Google [X], no one is better placed than Mo Gawdat to explain how the artificial intelligence of the future works.

Mo Gawdat: The reality is, as I keep saying, there is that problem of irrelevance: we might not be that relevant to that higher power now. And so there is a lot of inclusion needed in our core ethical and moral framework. I have to say, though, as we move forward, the question of ethics becomes mind-boggling. I failed very early in the chapter to find any answers at all, so I humbled myself and turned it into a chapter of questions. The main premise, again, is that AI is not a tool; it is not a machine. If I take a hammer and smash this computer in front of me, it would be stupid and wasteful, but there is nothing morally wrong with it. If, however, that computer has spent the last ten years of its life developing memories, knowledge and a unique intelligence, is able to communicate with other machines, and in every possible way has agency, freedom of action and free will, then smashing it is basically a crime when you think about it. Now you are dealing with a sentient being that is autonomous in every possible way.

And when you start to think about life that way, you start to ask: how do we achieve equality with such beings if we have failed to achieve equality across gender and colour and sex and so on, with our limited human abilities so far? Can we even accept a being that is non-biological, a digital form of sentient being, into our lives? And if we accept them, how do we unify things? Who is to blame if a self-driving car kills somebody? If it is a sentient being, maybe we should hold it accountable. But what if we do hold it accountable? Who do we put in jail, the car? For what, four or five years? And if you put one car in jail for five years (you flimsy, worthless creature), what will the other cars do? When you really start to think about it, would they even agree to that code of conduct? Five years for you and me is 12 per cent of our life expectancy, but for an AI it is a blip, because their life expectancy is endless; yet at the same time they measure life in microseconds, so to them it would feel like five hundred thousand years.



  • Fruugo ID: 258392218-563234582
  • EAN: 764486781913
  • Sold by: Fruugo

Delivery & Returns

Fruugo

Address: UK
All products: Visit Fruugo Shop