Apocalyptic thinking is frequently found in certain future scenarios, especially when those scenarios are created by people concerned with military conflict, climate change, artificial intelligence, disease outbreaks, or other scary possibilities. CRN itself has participated in creating such scenarios involving nano-weaponry, and our co-founders wrote a chapter for a book on global catastrophic risk.
So, for anyone who's read this blog for a while, or who has kept up with general trends in futurist thinking, the projected "end of civilization" is not an unfamiliar theme.
A recent article in the New Scientist suggests that "the demise of civilization may be inevitable."
Some say we have already reached this point, and that it is time to start thinking about how we might manage collapse. Others insist it is not yet too late, and that we can -- we must -- act now to keep disaster at bay.
The upshot is that a certain level of complexity is unsustainable, and that we have reached or are near that point in numerous areas, including energy production, environmental management, finance and credit, etc. Assuming you accept the premise of the article, our choice -- that is, the collective choice of our modern industrialized society -- is to either adapt or collapse.
Adaptation will mean huge changes in the way we function. This is not a new idea, of course. Since "The Population Bomb" (1968) and "The Limits to Growth" (1972), we've been hearing increasingly dire warnings about being on the wrong path and what we must do to correct it. But today, in the face of massive evidence that global warming will dramatically change our world no matter what we do, adapting in order to survive seems more urgent than ever.
Is total collapse actually possible? Well, obviously, ours would not be the first civilization ever to perish or to crumble under the weight of its own unchecked expansion. So certainly it's possible.
As pointed out in the New Scientist article:
If industrialised civilisation does fall, the urban masses -- half the world's population -- will be most vulnerable. Much of our hard-won knowledge could be lost, too.
On the other hand, we now know a great deal more about the mechanics and dynamics of collapse than any people before us. We know much more about sustainability and resilience. It is possible, then, if not likely, that we can avoid collapse by making just enough of the right kinds of changes in the nick of time.
But if our civilization is to change as much as some people say is necessary, how will that affect current institutions, such as the corporation, the nation-state, or even democracy itself?
George Dvorsky, who writes the excellent Sentient Developments blog and who serves on the Board of Directors of the Institute for Ethics and Emerging Technologies (IEET), has some serious thoughts on the subject in a piece titled "Future Risks and the Challenge to Democracy."
This restructuring is already underway. We live in the post 9/11 world—a world in which we have legitimate cause to be fearful of superterrorism and hyperterrorism. We will also have to reap what we sowed in regards to our environmental neglect. Consequently, our political leaders and institutions will be increasingly called-upon to address the compounding problems of unchecked WMD proliferation, terrorism, civil unrest, pandemics, the environmental impacts of climate change (like super-storms, flooding, etc.), fleets of refugees, devastating food shortages, and so on. It will become very necessary for the world’s militaries to anticipate these crises and adapt so that they can meet these demands.
More challenging, however, will be avoiding outright human extinction . . .
Catastrophic and existential risks will put democratic institutions in danger given an unprecedented need for social control, surveillance and compliance. Liberal democracies will likely regress to de facto authoritarianism under the intense strain; tools that will allow democratic governments to do so include invoking emergency measures, eliminating dissent and protest, censorship, suspending elections and constitutions, and trampling on civil liberties (illegal arrests, surveillance, limiting mobility, etc).
Looking further ahead, extreme threats may even rekindle the totalitarian urge; this option will appeal to those leaders looking to exert absolute control over their citizens. What’s particularly frightening is that future technologies will allow for a more intensive and invasive totalitarianism than was ever thought possible in the 20th Century – including ubiquitous surveillance (and the monitoring of so-called ‘thought crimes’), absolute control over information, and the redesign of humanity itself, namely using genetics and cybernetics to create a more traceable and controllable citizenry. Consequently, as a political mode that utterly undermines humanistic values and the preservation of the autonomous individual, totalitarianism represents an existential risk unto itself.
George has eloquently restated some of the concerns that CRN has written about for years, many of them made possible or exacerbated by advanced nanotechnology. I urge you to read his whole article.
Clearly, then, the question to ask is not only whether our civilization can survive the challenges of this century, but if it can, what kind of civilization will it be?
That is what we must actively plan for and work toward if we hope to live in a better tomorrow.
Nice image. Where's it from?
Posted by: Nato Welch | March 04, 2009 at 08:44 PM
Google images, from io9.com
Posted by: Chris Phoenix | March 05, 2009 at 06:44 AM
Following up a little bit...
In George's article that I quoted above, he wrote:
Does that sound ridiculous? Well, as you may be aware, it has already taken place in the good ol' USA.
From Newsweek:
As Sinclair Lewis warned us, never say it can't happen here.
Posted by: Chris Phoenix | March 05, 2009 at 10:52 AM
Much as we muse about the future, you can't predict it. Can you?
It is possible to project individual trends with decent certainty, some distance into the future. And to carefully evaluate how specific trends might interact - accelerating or suppressing, amplifying the impact or negating it.
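The interacting-trend idea above could be sketched as a toy coupled model: each trend has a baseline growth rate, and every other trend's current level nudges that rate up or down. All trend names, growth rates, and coupling values below are invented placeholders, not real projections.

```python
# Toy "interacting trends" projection: each trend grows at a base rate,
# modified each year by the current levels of the other trends.
# Every number and name here is a hypothetical placeholder.

trends = ["energy_cost", "climate_stress", "tech_capability"]

# Baseline fractional growth per year for each trend.
base_growth = {"energy_cost": 0.02, "climate_stress": 0.03, "tech_capability": 0.05}

# coupling[a][b]: how much trend b's level amplifies (+) or suppresses (-)
# trend a's annual growth.
coupling = {
    "energy_cost":     {"energy_cost": 0.0,   "climate_stress": 0.01, "tech_capability": -0.02},
    "climate_stress":  {"energy_cost": 0.005, "climate_stress": 0.0,  "tech_capability": -0.01},
    "tech_capability": {"energy_cost": -0.01, "climate_stress": 0.0,  "tech_capability": 0.0},
}

def project(levels, years):
    """Project trend levels forward year by year; returns the full history."""
    history = [dict(levels)]
    for _ in range(years):
        nxt = {}
        for t in trends:
            rate = base_growth[t] + sum(
                coupling[t][other] * levels[other] for other in trends
            )
            nxt[t] = levels[t] * (1.0 + rate)
        levels = nxt
        history.append(dict(levels))
    return history

if __name__ == "__main__":
    start = {t: 1.0 for t in trends}
    for name, level in project(start, 20)[-1].items():
        print(f"{name}: {level:.2f}")
```

Even a toy like this shows the hard part Tom raises: the coupling coefficients are exactly the contested assumptions an open, wiki-style model would let contributors argue over and edit.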
Could we hope to build a model with any predictive power? Maybe it is far too big and complex to contemplate? But not too long ago the idea of a large, self-generating on-line encyclopedia would have seemed impossibly, naively ambitious... Perhaps a "Simiki" is equally insane and equally possible?
So how about creating an open source model of the future? Anyone could criticize the assumptions or elements, but that leaves them open to "So - think you know better? Go ahead, make your edits!"
Besides - this site needs 'A Project' - something beyond blogging, to draw attention to it. Remember "The Club of Rome" - and the attention it got?
:::Simiki Project:::
Phase ZERO: Specify project phases :-)
Phase I: Brainstorm directions and methods and goals. Is a spreadsheet adequate to get started on experiments? Is special purpose software available - or how could it get built? How might one leverage the internet? Open source? *Could* a wiki model apply, when each contributor might want their own variation on the model?
Phase II: Build small models to figure out requirements - aim to build in model power, flexibility, self-documentation, clear display of results, easy versioning/splitting/merging, etc. Expect to be frustrated, but expect to learn a lot.
Phase III: Identify key trends/events and potential relationships. Find or develop Simiki v0.1 through v1.0. In parallel, build trial models of modest size. Study stability versus predictive range.
Phase IV: Model version 1.0 - the Big Model Build, debugging, simulation and synthesis.
Phase V: Rinse, Repeat, Validate against - and update with - new information.
Scary big project. And I suppose one might object that it's not a core focus here. But so much of this site does focus on projecting the future, though of course always centered on nanotech.
So do that with Simiki - keep it focused on nanotechnology and its potential effects. Simiki-Nano. If others take it and run off in other more ambitious directions, so much the better.
OK - crazy idea mode shut-down... :-)
Posted by: Tom Craver | March 11, 2009 at 08:10 PM
All civilisations surely evolve technological scenarios that threaten their safety. Surely they don't all wipe themselves out? Maybe the technology itself causes the "awakening" of society that allows the safe transition?
http://superconcepts.blogspot.com/2008/11/technology-will-it-set-us-free.html
Posted by: Stu | March 16, 2009 at 11:46 PM