Thursday, August 21, 2014

Extinction: A terrible inconvenience

Extinction is the rule. Survival is the exception.
(Carl Sagan)

Worth reading. Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
(Tweet by Elon Musk, CEO of Tesla)

The ultimate risk

Several months ago, The Atlantic published an interview with Nick Bostrom, the director of Oxford's Future of Humanity Institute. Bostrom's contention is that human extinction risks are not well understood. (The complete article, We're Underestimating the Risk of Human Extinction, is linked below.)

While Bostrom talks about present threats to humanity, he is clearly more concerned with longer-term potential existential threats, which he thinks are where our focus ought to be. At the top of his list is artificial intelligence; in the nearer term, the danger lies in what we might do in the areas of biotechnology and synthetic biology.

As well, it's Bostrom's contention that one of the very worst things humanity could do is to slow down or halt technological evolution because that in and of itself would “constitute an existential risk.”

In short, according to Bostrom, our “avoidable” misery will only worsen if we can't improve the quality of life, and for him technology is the key, or at least a central priority we can't afford to reduce. Bostrom goes on to say, however, that it would be difficult to slow technology down very much, let alone bring it to a halt, because of the constituencies pushing scientific and technological priorities, along with economic interests and assorted individual and institutional agendas. Well, possibly, yet....

Considering the finite then and now

In his wonderful book How the Irish Saved Civilization, Thomas Cahill recounts a winter day in 406 A.D. Roman soldiers stood on one side of the frozen river Rhine; on the other side were the barbari, thousands upon thousands of hungry barbarians from assorted Germanic tribes, determined to cross over to where the Roman legions stood as the defenders of the “civilized” world.

Yet virtually no one on the side of the Rhine where Roman soldiers guarded the culture and the glory of antiquity, a civilization that even at this late date possessed the most advanced technology in the world, could have imagined that the Eternal City of Rome and its legacy would crumble and vanish. It would have been incomprehensible.

In his book Civilization, the late Kenneth Clark asks why Greek and Roman civilization collapsed. His answer: it was exhausted. Antiquity had run out of steam. As Cahill points out, it had become a static world.

Doing the expected had become the highest value. Fear of war, fear of invasion, fear of plague—fear of everything was the norm. Why bother to do anything? Late antiquity had become a world of empty rituals, obscure religions and the wholesale disappearance of self-confidence.

The barbarians? The disappearance of self-confidence? The Islamic State currently ravaging parts of Iraq and Syria, a determined collection of murderers and psychopaths longing for the Middle Ages, would have been a familiar sight to the citizens of the Roman Empire by the second decade of the fifth century.

The latest discoveries regarding our cousins the Neandertals, no longer thought to be ignorant brutes, suggest that the interactions between them and humans may have lasted considerably longer than we once thought. There is also speculation now that the Neandertals may have gone extinct because, unlike Homo sapiens, they were unable to adapt quickly enough to climate change. But of course we can't draw exact comparisons and we don't know what the future will be.

Will technology be our salvation, as some believe, or is something else required? Will the elephant and the rhino survive the slaughter driven by demand from China? Will humankind somehow stop doing the expected and rediscover a new self-confidence?

Saturday, August 02, 2014

The petabyte children

Several years ago, Chris Anderson, the former editor of Wired magazine, published an article entitled The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. He believed that the age of “big” data would be “the end of science as we know it.”

We've had kilobytes, megabytes and terabytes: floppy disks, hard disks and disk arrays. In the Petabyte Age the “stuff” is stored in the cloud. Knowledge now begins with massive amounts of data. And can we deny in 2014 that those under thirty years of age are constantly gazing into the virtual cloud?
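The jump in scale is easy to understate. A quick back-of-the-envelope calculation (a sketch only, using the conventional 1024-based units and a hypothetical 1.44 MB floppy and 500 GB hard disk for comparison) makes it concrete:

```python
# Back-of-the-envelope: the scale of a petabyte versus older storage media.
KB = 1024
MB = KB * 1024
GB = MB * 1024
TB = GB * 1024
PB = TB * 1024

floppy = 1_440 * KB              # a 1.44 MB floppy disk holds 1,440 KB

print(PB // floppy)              # floppy disks needed to hold one petabyte
print(PB // (500 * GB))          # 500 GB hard disks needed for one petabyte
```

One petabyte works out to hundreds of millions of floppy disks, which is one way of seeing why the cloud, not any single machine, is where the Petabyte Age lives.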

Anderson said back in 2008 that data would be viewed mathematically first, with a context for it established later. Correlations are the future, young man. Causal analysis be damned! He ended his article by asking, “What can science learn from Google?”
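The worry about correlation-first thinking is easy to demonstrate. Below is a minimal sketch using two invented yearly series that both merely trend upward (the data are hypothetical, made up purely for illustration): the Pearson correlation between them comes out very high even though neither remotely causes the other.

```python
# Two made-up series that share only an upward trend.
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

smartphones = [10, 25, 60, 120, 200, 310]   # units sold per year (hypothetical)
coffee      = [50, 55, 63, 70, 81, 90]      # cups per capita (hypothetical)

r = pearson(smartphones, coffee)
print(round(r, 2))   # a correlation close to 1.0, with no causal link at all
```

A petabyte of data will surface millions of such correlations; deciding which ones mean anything is exactly the causal work Anderson proposed to skip.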

Predicting the future

Regarding the future, I think a major challenge of the 21st century, if any future is to become a reality, is acquiring a better understanding of the brain, and the mind. In the spirit of speculation, I shall shamelessly promote my own ebook novel A Genetic Abnormality, coming out on September 1, 2014. For more information go to

The modern scientific process began in the 17th century, and viewed against the thousands of years of human existence, science developed only yesterday. It's also “difficult” in that it goes against how we humans traditionally think.

If scientific thinking can be called analytic and objective, traditional thinking is more subjective and associative: we humans tend to see causal relationships between unrelated actions and events, and sometimes indulge in quasi-magical thinking. Of course, at the same time, we appear to be the only species that can meditate on itself in ways other animals cannot.

The history of science has focused on ideas rather than methods. Theories are constructed from observation and measurement of natural phenomena. It is a world of both inductive and deductive reasoning, of hypotheses that require testing and, in the best case, can be repeated and replicated. Science likely has a lot more to teach all of us. The real question is what Google can learn from science.