The big blues marble

I first heard about global warming in the early ’80s. It was reported on by a local newscast, probably WPIX. I remember the jolly, post-story wrap-up:

So New York might be as warm as Florida one day? Wow, that sounds great!

To recap our top story: beloved Yankee, Reggie Jackson, was traded to the California Angels today.

And that’s the news for tonight. Take care everybody!

I was freaked-the-fuck out. As a Cold War child, I was incredibly phobic about the end of the world. I went to bed every night wondering if there was going to be a tomorrow. To compound matters, my brother reminded me that regardless of the Soviets’ first-strike strategy we were doomed: we lived within ten-to-fifteen miles of major population, industrial, and communications centers. We’d be among the first to go.

This new global warming thing was icing on my apocalyptic cake. I can’t even tell you how many nights of sleep I lost after seeing the Orson Welles documentary-style film, The Man Who Saw Tomorrow, about Nostradamus’ life and predictions. According to the movie, the shit was really going to start hitting the fan in about 1986. No wonder I spent little time doing homework – in the words of Jim Morrison, “I [was trying] to get my kicks before this whole shithouse [went] up in flames.”

With Gorbachev’s glasnost and perestroika reforms, my fear of nuclear annihilation ebbed, but my mild obsession with global warming/climate change continues to shadow me, partially because the predictive models keep becoming more and more dire – what with global dimming, ocean acidification, and the giant methane fields hidden under the Arctic, it’s a wonder we’re still here.

So I started doing what any sentient being would: I refocused my energy on self-preservation, and I’ve celebrated our decline by finding joy and humor in both misanthropy and entropy. I know it’s not entirely related, but Bobby Bacala’s confusion between Nostradamus and Notre Dame cracks me up every time I think about it:

[youtube]http://www.youtube.com/watch?v=2pxyVgnwMsY[/youtube]

And George Carlin’s take on our impact on the environment is maybe my favorite comedy bit of all time:

[youtube]http://www.youtube.com/watch?v=dyxuVFzKypU&feature=related[/youtube]

This isn’t to say that I agree with every point that Carlin makes; I do think our impact on the Earth’s biosphere, especially when you take ocean acidification into account, is potentially dire. And I do think that there’s social (if not a great deal of environmental) merit in making the world safer for bicyclists. But I ultimately agree with his most important point: “The planet is fine; the people are fucked.”

Along these lines, I’ve taken much away from reading Alan Weisman’s The World Without Us. For those of you who haven’t yet had a chance, I’d put it on your summertime reading list, especially those of you who live in NYC. (The chapter on your city crumbling after humans are no longer there to pump the water out of the subway system is super compelling.)

The other side of my belief coin that’s been evolving somewhat is the idea of geoengineering as the last likely option to save us from ourselves. This isn’t to say that it is the best option, but at this point, I have less-than-no faith that we’ll reduce greenhouse emissions enough to make much of a dent in the problem.

And now that the U.S. has passed the world’s-greatest-polluter torch to China (and soon to India), I see little in the way of justice when it comes to us — who have benefited economically for so long — telling emerging economies that they don’t have the same right to destroy the planet for profit as we do. (Does this make me unpatriotic?)

For those not familiar with geoengineering, here’s a link to an article that was published in Foreign Affairs a couple of months ago. The basic idea is that we’ll potentially be able to save ourselves through relatively simple projects like releasing trillions of reflective particles into the atmosphere, which would bounce some of the sun’s energy back into space. (I do think that if we are to save ourselves, it will likely be through a hybrid of both conservation and technology.)

The other important part to understand when it comes to my belief that a technological solution may be possible is my recent drinking of the Ray Kurzweil Kool-Aid:

[youtube]http://www.youtube.com/watch?v=cc5gIj3jz44[/youtube]

So according to Kurzweil, we are at the “knee of the curve” of exponential technological growth. And if this is true, at our current rate of technological growth we will experience the equivalent of twenty thousand years of progress through the 21st century. If that’s the case, preserving humanity for the next thousand years or so should take about ten minutes for our computers to figure out. Whether or not the self-replicating computers will want us around in the future is another story entirely.
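
Kurzweil’s number is just compound growth. Here’s a back-of-the-envelope sketch (my own arithmetic, not Kurzweil’s actual model): assume the rate of progress doubles every decade, measured in “year-2000 years” of progress, and add up a century.

```python
# Back-of-the-envelope take on Kurzweil's "knee of the curve" claim.
# Assumption (mine, not Kurzweil's published model): the rate of
# progress doubles once per decade, measured against the year-2000 rate.

def equivalent_years(decades=10):
    """Progress packed into `decades` decades, in year-2000 years.

    Decade d runs at 2**d times the year-2000 rate, so its 10
    calendar years contain 10 * 2**d "year-2000 years" of progress.
    """
    return sum(10 * 2 ** d for d in range(decades))

print(equivalent_years())  # 10230
```

Ten-thousand-plus “years” out of one century; a slightly faster doubling (or Kurzweil’s continuous version of the curve) lands right around his twenty-thousand-year figure. The point is just how absurdly fast exponentials compound.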

With all of this in mind, I shall drive my 20-mpg car on May 21st to sit in a wonderfully air-conditioned movie theater and gleefully watch the new Terminator movie. My new biggest fear: living long enough to be hunted down by killer robots.

[youtube]http://www.youtube.com/watch?v=zg9ooaozu-8&feature=related[/youtube]

Anyway, happy Earth Day everybody.

24 responses to “The big blues marble”

  1. (I had the same phobic experience of environmental catastrophe, and nuclear holocaust, as a kid. I don’t remember if global warming played into it though, I don’t think I really understood what was up with that until well into high school — at which point I was still pretty phobic but in a much more personal way.)

  2. Rogan says:

    At least we didn’t live through ‘Duck and Cover.’ (apologies in advance to our older readers).

  3. Natasha says:

    Scotty, I love your post :) This is the kind of post I wait for the whole week: global issues, Sopranos, and George Carlin, many of my favorite things all in one place. I’m running out now, but will go over it today and watch all of the videos and comment more. Just wanted to tell you how good your post was.

  4. Tim Wager says:

    Notre Dame has a hunchback *and* a halfback. Now that’s good writing, people.

    Also, can’t wait to drink more of the Kurzweil Kool-Aid! Also, can’t wait to inject myself with some microscopic processing chips that will go to my brain and fix it.

    Thanks, Scotty!

  5. Dave says:

    That Kurzweil stuff sounds awfully fishy to me. On the other hand, it may be true that our only semi-realistic option of saving humanity’s ass is to develop some kind of fancy geoengineering.

    I liked The World Without Us — I used to daydream when I was a teenager about everyone suddenly vanishing and what would happen to all the cities and such.

  6. farrell says:

    Scotty,

    I’m so glad you’ve opened up more of a discussion about Kurzweil. He’s been on my mind in a big way lately since I bought his book. I’m definitely drinking the kool-aid with you, you big geek. What I find so compelling about his ideas is his vision of immortality. As someone who gave up on the idea of “an afterlife” some years ago, I still hunger for this option. I love the idea that at some point in the future we will be able to upload a version of ourselves and exist for some indefinite infinite time interacting with other uploaded personalities. And that those versions of ourselves will be able to exist in a highly evolved world of virtual reality with other loved ones. That seems so insane and exciting. Sounds like the promise of the great religions. Bring it Kurzy!

    But I also have many other problems with Kurzweil. His hucksterism. His company that sells vitamin supplements that purport to keep us alive longer, and how he swallows over 200 pills a day in his project of staying alive long enough to experience the singularity and see himself uploaded into a non-biologic version. That’s pretty annoying. And his book, oh boy, don’t get me started: the first 300 pages are an agonizing defense of why it will only take 40 more years for us to reach a stage of exponential computer/technological advancement. Painful. My advice to anyone approaching The Singularity Is Near: skip those first 300 pages. And those four youtube video clips just don’t do him or his ideas any real justice. He comes off really condescending and arrogant and wrong. Skip them too.

    But those 100 or so pages from page 300 onward are really exciting, with all his imagining of the implications of a vastly more evolved world of AI computers, robots, and nanobots. That part is super fun. It makes me want to make it to 2045 – or whenever – when that is actually a possibility. And the concept that our own planet and solar system and galaxy will become a sort of unimaginably powerful and intelligent being that goes forth to do stuff. Pretty cool. The book really has had a big impact on the way I’m lately conceptualizing the future and its possibilities. It’s worth reading (even in the aisles of a bookstore for a half hour).

    I love the way he’s shifted the way I think–and hope.

    Oh, and the new Terminator movie. Can’t wait. Those trailers look so awesome. Thanks for that. What a drag that AI might lead to such a dystopia. Sure hope it’s not the case. But what great movies it makes for. I’ll text you, Scott, May 21 while I’m waiting in line for popcorn. Thank you, Hollywood! And Ray!

  7. Natasha says:

    Again, what a wonderful post!
    Phobias are, in many ways, cultural. I too am a child of the Cold War and Perestroika, except that I was brought up by a different kind of media and in a different society. We knew about the nukes and the world destruction. We had mandatory military and nuke-explosion survival classes in school and, essentially, were raised to save ourselves. It seems to be a big part of American cinema, but (strangely) not reality. I’d attribute that to Yellow Journalism, maybe…maybe not.

    Not to get into the same argument about greenness with you, I tend to take George Carlin’s position on this issue and not just because I am a huge fan of his, but because I am a big fan of geology. The earth will be around with us living on it or not. The magnetic pole reversal, theoretically, already wiped out the biological matter off its face at least twice. We know that there were other intelligent (well, evolving) beings here before us, which did not make it. So, the ocean acidification: we only look the way we do, because our bodies are perfectly suited for this environment, so are the fish, and the animals. If the oceans acidify, the fish will eventually mutate and, once again, it’ll be about the survival of the fittest. The oceans will rise, the earthquakes will strike, the air will be toxic, and we’ll all perish into eternity (as we would have anyway, btw.) Come, say, another million years and, look, something else will crawl out of the ocean for another exciting evolution. Don’t know why you worry. We’ll go – the earth will stay — nothing to worry about. Don’t worry :)
    But on the bright side, as far as Kurzweil goes (besides his bad taste in furniture and that he is a bit self-indulgent), I think he makes a lot of sense. I’ve been interested in the subject for the past ten years and have been researching other topics on scientific progress, biology, and quantum physics. Such scientific advancements are not only possible but, in fact, very real. I also think that if A.I. is feasible, it will not be vicious or vindictive: according to my research — the more intelligent, the more humane.

  8. Dave says:

    Doesn’t this singularity thing sound too much like MAGIC! to be believed?

    Also: This was never my field, but there’s a very strong case to be made that having human-like intelligence depends on, among other things, having a human-like body. Your brain is far from the only part of you involved in this thing we call “thinking.” This seems like a very good reason to think that human-like AI is virtually impossible. (See Bert Dreyfus for the full argument, based on Heidegger and Merleau-Ponty.)

    So maybe there can be some other kind of AI, sure — some kind of super-awesome computer + robots thing that figures out how to, well, um, how to what? I don’t even know how we’d determine that some electronic system was a full-fledged “artificial intelligence.” I can imagine something like Skynet coming into being, I guess, some kind of system that is “intelligent” enough and capable enough to launch a bunch of nuclear missiles. But I don’t see how that system is anything we’d be able to “upload” ourselves to. In fact, if embodiment is essential to human being (Dasein, if you will), “uploading” would be impossible.

    So how does this singularity/AI stuff allow humanity to continue in any sense? Defend yourselves, people.

    (I also strongly disagree with “the more intelligent, the more humane” as unsupported by human history, personal experience, or abstract theorizing.)

  9. Godfree says:

    Besides the idea of AI, and self-improving and replicating machines, the Singularity is marked by the convergence of humans and our technologies — we become indistinguishable. This is partially because the nanobots that will pump through our veins will use the basic molecules that already exist in our bodies to construct better versions of us from within. Need a new heart? One that’s a thousand times more durable will be constructed out of what’s left of your old one. Or, nanobots will be so much more efficient at circulating blood and/or carrying nutrients and oxygen to different parts of your body that the idea of a heart at all will be as quaint as the telegraph is to us.

    Back to the subject of AI, I was thinking about my relationship with my dog the other day, and wondering, as I often do, about how she might comprehend our relationship. And I started thinking about how if machines become as intelligent as Kurzweil predicts they will, we will never understand how they “feel” about us. It was kind of a (non-stoned) stoner moment, but I was all like, “if we were to become their pets, we will never know.”

    As far as it sounding too much like MAGIC! I have no problem with this. I’m still amazed every time I become airborne in a jet. Maybe I’m just a dumb oaf, but the Internet, cell phones, all the stuff of contemporary life seems magical (not to be confused with good) to me.

    I do fall short of buying into all of Kurzweil’s predictions. His most important point to me is his predictive model when it comes to tech-growth itself, not necessarily what we (humans) will do with it.

    I agree that more intelligent does not = more humane. Maybe I’d agree that more intelligence gives one the capacity for more compassion, but whether that pans out is another question.

  10. Godfree says:

    And Natasha, I agree with you about the Earth and its power to foster new life forms. In fact, I take a great degree of heart in the idea that we’ll destroy our ability to exist on the planet before we prevent all other life forms from doing the same. We may be smarter than the other animals, but we also seem much more fragile. Good thing, I guess. I just want to make sure that we don’t kill off all the really cute animals, like elephants and potbelly pigs.

  11. farrell Fawcett says:

    Dave: Human can only be human if it has a body? Do I have this right? Are you confessing to being an essentialist? What do we call that version of bigotry? A bodiest? No, I prefer humanist. Yes, Dave, you are a sick prejudiced humanist. You and Bert Dreyfus. Humanists. You make my future cyborg Farrell Fawcett self sick to its bio-engineered 2.0 upgraded stomach. I’m vomiting nanobots.

  12. Natasha says:

    I don’t think that “embodiment is essential to a human being.” Our biggest asset is our consciousness; the rest is just an engine and some tires. Humanity will not continue the way we know it today, but then again, what we know today is very different from what it was two thousand years ago; that’s the beauty of it. I envision that Singularity will only be a small part of the process, along with the genetic modification, and quanta, but the key to immortality (to me) still lies in cell rejuvenation. Human cells have a “Hayflick limit” (a certain number of times the cells will divide and rejuvenate); the “Hayflick limit” is determined by caspases (enzymes which decide when a cell should be destroyed). If a cell continues dividing, an organism will continue living. So cool!

    Intelligent = humane does not mean intelligent = humanistic, just as much as it does not mean intelligent = evil.

    I liked the “stoner moment.” That’s a deep question. I thought about it for a while. “…we will never understand how they “feel” about us.” I think they will – we will become them.
    Preserving the cute bears? What about the cute dinosaurs, we weren’t there to preserve? We are arguing the liability issue, right, the bears are our fault and the dinosaurs are not? Does it then boil down to our guilty conscience? Our egos? We screwed up; we messed up our planet kind of thing? We will be remembered by the eternity as the biggest losers? What if rats could be genetically altered to a different species, the one that looks cute and does not poop, for example? Would we care to preserve the old kind? I always think about these things (well, not exactly the rats, that just came to my mind)

    There is a lot of information out there; we should probably gather the facts and separate the reasonable from the unreasonable. There is a lot of research, forecasts, and “Oh my gosh, we are dying!” theories, predictions, phobias, models, a lot of guessing. While knowledge and awareness are essential for proper preparation and prevention, I doubt panic is. I’m honestly more concerned about super humans killing other super humans for the good old power and greed.

  13. LP says:

    “You make my future cyborg Farrell Fawcett self sick to its bio-engineered 2.0 upgraded stomach. I’m vomiting nanobots.”

    The 2009 Whatsies “comment of the year” competition is now closed. Thank you for your participation.

  14. Dave says:

    Our biggest asset is our consciousness

    The point of the Dreyfus argument (and a whole bunch of emerging work in neuroscience and related fields, from what I understand) is that our consciousness is thoroughly embodied — that it’s a kind of confusion to talk about human consciousness apart from human bodies. (This episode of the excellent Radiolab has some good stuff on this idea.)

    I don’t know enough to make the argument in a convincing way, but I’m pretty convinced by it.

    We have this idea that’s come down to us from Descartes that the “I” is a thinking thing that exists in some way apart from the body — that each of us is “really” a consciousness that coolly receives the inputs given by our bodily senses, thinks about them, and then outputs commands to various body parts to take various actions. The Radiolab guys noted that this picture is very much the one in that famous Woody Allen scene where the guy is making out with his date and you’ve got mission control up in the head sending messages down to the crotch to raise an erection, get the sperm ready to go, etc.

    It’s easy to see how this basic picture is what lies behind the dream of artificial intelligence: you can have a really powerful CPU running some amazing software, and that’s “consciousness” or Woody Allen’s mission control or Descartes’ res cogitans (“thing that thinks” — the ego of “cogito, ergo sum”) or post-upload cyborg Fawcett. The fancy CPU gets input from various sensors, including infrared vision if it’s a Terminator; it controls servomotors and other mechanisms to make the robot move or to launch nuclear warheads to destroy humanity.

    The thing is, this picture is not accurate. This is where you have to get into Heidegger, but basically he has a really deep critique of the idea of our primary engagement with the world being one of detached thinking about what our senses present to us. According to Heidegger, our most basic way of engaging with the world is doing stuff in a way that we don’t really think about. He gives the example of a carpenter using a hammer: the carpenter isn’t thinking about the qualities of the hammer, or about how her senses tell her it weighs two pounds and feels like smooth, varnished wood. The carpenter is just using the hammer to pound nails or pull nails or whatever. You can ask her to talk about the hammer, and she might be able to do that if she’s an articulate carpenter, but that’s a different and secondary kind of engagement with the hammer. Merleau-Ponty takes Heidegger’s ideas in this area and adds an emphasis on embodiment. And what makes their arguments particularly convincing is that a lot of work in neuroscience has been supporting these ideas, and AI has been failing as a project for basically the reasons you can predict based on Heidegger and Merleau-Ponty.

    Short version: Fawcett 2.0 is just Descartes 1.0, a spec that was deprecated in the middle of the last century.

  15. Dave says:

    Watch this video and the rest of the clips of that interview to hear Dreyfus giving an excellent introduction to all this stuff.

  16. Natasha says:

    Dave, I watched the video. Dreyfus is really analyzing the two different philosophies and explains the difference. Both schools are right; however, they only provide a partial explanation of our sensory reactions. It’s true that our perception is “intentional”; however, it is also true that once we learn how to do something, our perception becomes automatic. It has to do with the ruts in the brain, which require “intentionality” to establish and, thereafter, become automatic (kind of like learning how to ride a bike). It’s the reason why a brain automatically responds to similar situations in a similar way as long as it’s producing a positive result, and re-trains itself if the results are not desirable (which requires that “intentionality” Phenomenology describes).

    Ayn Rand’s Objectivism, for example, talks about the existence of matter being conditional upon perception (if I don’t see it, it does not exist). It argues the non-existence of God based on that idea. I think it’s a bit shallow, because we already know that a human eye can only see objects that vibrate at a certain rate and cannot see what moves or vibrates faster or slower. There are also other things that come into play: collective consciousness, universal consciousness, premonition, intuition, sixth sense, which do not quite require the bodily input (well, somewhat).

    We know that we are only using 10% of our brain capacity, mostly for the perception of stimuli, emotional interpretation, automatic function, problem solving, and other similar stuff. Our brain emits vibrations based on our thinking. It emits waves of various lengths (alpha, beta, etc.). It also receives vibrations from the outside. Essentially, the brain is kind of like a convoluted antenna — a tool which processes consciousness in the form of energy. It is certainly a biological and mortal instrument; however, the output of its energy is not (if you are interested, I can explain in terms of physics and brain function). The waves it emits are pure energy, which is immortal. It is that particular function that we are interested in: to receive and emit energy and to process it from energy form to make neurological sense and to log it into the neurosynaptic pathways in the form of thought and back, which requires chemical processes. (To get off the subject for a moment: the computers will not be able to “feel” without the proper chemical input, and I hope that when we do put chemicals into our computers, we know what we are doing.)

    A lot easier and safer way to go about immortality is to stop the “Hayflick limit” by either getting rid of the caspases or genetically reprogramming them to buy ourselves time. When we can completely map the brain and write a chemical equation for each process, we can start uploading computer programming for optimum operation. Mathematically speaking, if you FOIL an equation of variables, it will produce the same result within a few different variable variations. That’s where Singularity comes in. If an equation is written for every bodily process, it can then be optimized. Our eyes, for example, will see higher rates of vibrations; who knows what we’ll find out.
    But to get back to humanity: to process consciousness, we will need a brain and a vehicle suitable for this planet — neither of which will need to be fully biological as long as the equations are the same. You know those moments when you stare at one thing and cannot take your eyes off it? These are the moments when the brain reboots. It experiences nothingness, yet in nothingness there is a potential. Nothing is potentially everything. That’s the basis of energy and consciousness, which mirror one another, as well as quantum physics. The only question we’ll have left then is how we define humanity: perspectives, cultures, history, and our physical appearance? Also, what we’d want to change and what we’d want to leave as is.

  17. Dave says:

    Ten percent of our brains.

    How close are we to writing all the equations?

  18. Natasha says:

    :) Nice, Dave, thanks for calling me an “ignorant paranormal pusher.” :) I’ve thoroughly studied brain function and have seen many arguments for and against the ten percent. The reason why I tend to believe it is not inveterate ignorance.

    The way I understand it: the brain is not a pie, which we can slice into ten equal parts and hold only one part responsible for the fundamental functions, which are somewhat known to us. We use it all, but for the rudimentary things: we all know how to use our muscles, how to remember things, how to think, yet there are other real abilities that are possible but that not everyone engages their brain in. For example, the ability to see motion in slow motion in a Kung Fu fight, which requires extensive training. It’s a technique which clearly entails the use of our brain but is not readily available to anyone at any given moment. It’s not a psychic thing, but a very real mental skill. I suppose it’s more of a conceptual statement that while we breathe, eat, think, and, in general, direct our brain to do the necessary things, there are a lot more possibilities of brain use, which lie beneath the unknown. Einstein believed it too.

    “How close are we to writing all the equations?”
    We have not even begun. I don’t think anyone is even thinking about it. It’s simply my indefinable understanding of the subject matter.

  19. Natasha says:

    But you are right about the whole “ten percent” statement; obviously, we don’t know the ratios.

  20. Dave says:

    I agree with you, Natasha, that there are a lot of things we can train our minds to do that they don’t normally do. For me, though, the hidden and surprising parts of human being, including “strange” mental skills, are another reason to think that we’re never going to be able to write a big equation that does everything that we as embodied beings can do.

  21. Scotty says:

    You have both brightened my day by at least 10 percent.

  22. Scotty says:

    In all seriousness, Dave, I really enjoyed the videos that you linked to. I know that you’re a Heidegger man, and I love that you opened my eyes to his work. I don’t know that I agree with the whole carpenter’s-relationship-to-her-hammer premise as an accurate illustration of the way my mind (and ultimately our own experience is all each of us has to work with here) works with extensive tools.

    I tend to think of this extension (for me it’s primarily with a guitar) as being more akin to a transcendental moment — or something like that. Not to essentialize, but I think that it’s likely that Eastern philosophies tend to get the mind-body connection a little better than Western philosophies. I do think the Enlightenment was a bit of a wrong turn when it comes to explaining how the mind works. And whether or not Heidegger sees himself as an extension of this, he’s still walking down the Cartesian path, i.e.: using quasi-scientific reason to explain the mind.

    Am I completely off base here? Please tell me that you’re not rolling your eyes at me — it’s okay to lie about it.

  23. Dave says:

    No eye rolling at all, Scotty. In fact, I haven’t really read much of it (I’m not and never have been a Heidegger scholar), but after he wrote Being and Time, Heidegger came to the idea that not just the Cartesian turn to dualism was wrong, but that basically all of Western thought went fundamentally wrong with Socrates and Plato. I’ve heard some people say that the later Heidegger has a lot in common with certain strains in Eastern philosophy, although I can’t tell you anything beyond that.