Sunday, May 20, 2012

The Path to Augmentation

There are two major problems that I see people having with the concept of technological augmentation.  The first, while easy enough to extrapolate an answer to, is still very difficult for people to comprehend (even for those of us interested in these things, or working in the same fields... heck, there is debate among scientists and engineers about whether the Singularity is actually something that could happen): the problem of "how".  People see big clumsy robots that can mimic (poorly) the natural ability to walk or climb stairs, or the lump of immobile plastic that often replaces a lost, flexible, functional limb, and wonder how we could ever make the artificial version even close to as good as the original.  How do we make artificial limbs with sensory feedback, much less natural control, movement, and appearance?  And even if we can do that, how in the world can we use technology to improve something as complicated as the brain?  Unfortunately, the answers to those questions lie in trends and theory and conjecture (even if it is well-documented conjecture).  We aren't there yet.  Not even close.

The second problem, which is the one I want to discuss here, is that of "by what path" will these augments come into the social consciousness.  How will we come to accept the artificial replacements of our natural functions?  I believe that it will happen through the medical field.  Included below are some videos that show some of the remarkable advances in function replacement through technology that have happened in the last few years.

There are real hurdles to having technology improve our natural performance.  As Dean Kamen mentions in the fourth video in this post: "(Our arm) is way, way, way better than a plastic stick with a hook on it, but there's nobody here who would rather have it than the one they got."  Our technology is still trying to catch up to natural function, and lags behind in many crucial aspects.

However, we have been able to take some amazing first steps.  Remember, in most of our lifetimes we have gone from computers that took up entire rooms to ones that can fit in your hand and out-perform those early models by several orders of magnitude.  Just recently, in 2010, doctors in Britain and China independently produced photosensitive chips that can restore sight to people with retinitis pigmentosa, a disease that causes the light-sensitive rods and cones in the retina to deteriorate.  Below is a video of a Finnish man, who had been completely blind, now able to recognize table settings and even read!  Imagine, in 40 years, what the smartphone version of this technology could do.



Also, in breaking news, brought to us by the team behind BrainGate, which allowed people to control a cursor on a computer screen by thought alone, comes BrainGate2, which allows people to move a robotic arm much as they would their own limb.  In the video below, a paralyzed woman who hasn't moved her limbs of her own volition in 15 years is able to pick up a thermos and take a sip of coffee.  Technology like this could revolutionize treatment for paralysis, and is likely a first step toward full-on cybernetic bodies, especially when paired with the work done by Dean Kamen and DARPA (see the next videos).  The researchers also hope to use the decoding software that translates the brain's impulses to bridge spinal cord faults, possibly allowing paralyzed people to use their own limbs again... a rough approximation of a technological nervous system.
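To give a rough sense of what that decoding software does, here is a minimal sketch in Python, using made-up data.  The general idea behind such decoders is to learn a mapping from recorded neural firing rates to intended movement during a calibration session, then use that mapping to translate fresh brain activity into velocity commands for the cursor or arm.  The simple linear least-squares fit below is my own illustrative stand-in; the real BrainGate decoders are considerably more sophisticated.

```python
import numpy as np

# Illustrative only: decode intended 2-D movement from neural firing rates.
# Real systems record from implanted electrode arrays; here the data is faked.

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 96            # e.g. a 96-channel array

true_mapping = rng.normal(size=(n_neurons, 2))    # the unknown "neural code"
firing_rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
velocities = firing_rates @ true_mapping + rng.normal(scale=2.0, size=(n_samples, 2))

# Calibration: fit a linear decoder from firing rates to intended velocity.
decoder, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

# Online use: turn a fresh window of neural activity into a movement command.
new_rates = rng.poisson(5, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ decoder)[0]
print(f"move the arm with velocity ({vx:.2f}, {vy:.2f})")
```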



In this report, the capabilities of DARPA's newest prosthetic arm are showcased.  These arms are controlled by nerve impulses rather than directly by the brain.  This results in a more precise and natural level of control than that of the BrainGate system, because there is one less set of logistical hurdles to overcome.  It works well for people who have lost limbs, but would not work for people suffering from various forms of paralysis.  The researchers are even working on a rough approximation of haptic feedback: nerves that would normally transmit touch sensation are moved to places on the shoulder and side, where mechanisms in the prosthesis can press on them to let the wearer gauge the amount of force applied by the arm, or even gain improved fine motor coordination through more delicate control of individual fingers.  Imagine the eventual combination of these two technologies, and the eventual use of similar techniques for leg prostheses.
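As a rough illustration of the control-and-feedback loop described above, here is a small Python sketch.  Everything in it (the signal names, thresholds, and the tactor mechanism) is my own simplification for illustration, not DARPA's actual design: the envelope of a muscle signal is mapped to a grip command, and the grip force measured at the fingertips is mapped back to pressure on a patch of reinnervated skin.

```python
# A toy closed loop: muscle signal in, grip command out, force fed back as
# pressure on the skin.  All numbers and names here are illustrative assumptions.

def grip_command(emg_envelope, threshold=0.2, gain=1.5):
    """Map the smoothed muscle-signal amplitude to a grip effort in [0, 1]."""
    effort = max(0.0, emg_envelope - threshold) * gain
    return min(effort, 1.0)

def tactor_pressure(fingertip_force_newtons, max_force=40.0):
    """Map measured grip force back to pressure on the reinnervated skin patch."""
    return min(fingertip_force_newtons / max_force, 1.0)

# One pass through the loop with made-up sensor readings:
emg = 0.55                       # smoothed EMG amplitude from a residual muscle
effort = grip_command(emg)       # how hard the hand should squeeze
measured_force = effort * 35.0   # pretend the hand reports the force it applied
feedback = tactor_pressure(measured_force)

print(f"grip effort: {effort:.2f}, skin feedback: {feedback:.2f}")
```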




In this video, which is quite long but worth watching in full (if you just want the technical stuff, the first 7 minutes cover the meat of it), Dean Kamen, the creator of the DARPA arm, talks about the challenges, triumphs, and reasons he is so passionate about this work.  The desire to replace what was lost, especially for those who have served in the military, will, I believe, propel this research into the realm of augmentation.  As Dr. Kamen said: "I'll stop when your buddies are envious of your LUKE arm."



And in a similar vein, here is a TED talk given by Aimee Mullins, a Paralympic athlete (and the main guinea pig for the sprinting prosthetic legs that Oscar Pistorius uses, whom I'll talk about next) and model.  She talks about the ability of prostheses to be not only functional, but beautiful as well.  She describes how, through the imagination of children and the envy of friends ("You're so tall!  That's not fair!"), prosthetics can possibly take us from merely human into the superhuman.


Finally, Oscar Pistorius is a Paralympic athlete from South Africa, dubbed the Blade Runner and the Fastest Man on No Legs.  He was born without fibulas (similar to Aimee Mullins above) and had his legs amputated below the knee when he was less than a year old.  He is currently one qualifying race away from making the 2012 London Olympics in the 400m.  Yes, the Olympics, not the Paralympics (he'll be competing there, too), but the real thing, against the most able of able-bodied men.  A man with no legs may, with the use of technology, soon be competing in the most prestigious athletic competition in the world.  He was actually banned for a short time from the Beijing Olympics because the IAAF thought his prosthetic legs gave him an unfair advantage.  Here is an excellent interview with Pistorius in which he talks about all of the things that have brought him to a place where he can make such spectacular history.

We are verging, in many ways, on the ability to match or exceed human ability with our technology, and it is my firm belief that through our attempts to give back to people what they have lost, we will slide, almost without noticing, into augmentation.  When the first person chooses to replace a natural limb with an artificial one, or the first person decides to implant a chip that lets him surf the web or use an external computer, we will have truly entered the time of the cyborg.  Once that happens, I don't think there's going to be much that can slow it down.

There are many hurdles, however.  Next time I will be talking about those hurdles, and what we may be able to do to clear them.

Friday, February 3, 2012

The Big Bad Wolf: The Singularity and Humanity

This post was meant to be up many moons ago, but life got in the way (as it often does).  I promise to keep a much more regular schedule from here on out.  Anyway, here is my take on what is likely one of the most visceral possible problems of the Singularity.

Much of this blog will be focused on technologies that will help to bring about what is called the Singularity.  This is a point in time when we finally create a computer system that is more powerful than the human brain, or are able to enhance the human brain technologically, past which point the intelligence of the human race will increase exponentially.  There is a host of possible dangers here, from destruction at the hands of our Robot Overlords to the Gray Goo Scenario.  However, there is another problem that we speak about much less often, at least in specifics; it almost always runs as a subtext through every discussion.

Many have called the Singularity the “end of humanity as we know it,” or the invention of a computer more powerful (more intelligent) than a human the last invention that mankind need ever make.  The fear is that we will either lose out in the evolutionary battle against the superior AI that we create, or that by merging with our technology we will become less “human.”

The idea of the Singularity (though the term was coined much later) first arose in 1965, when I.J. Good wrote of an “intelligence explosion,” suggesting that if machines could ever surpass humans in intelligence, they could then improve their own intelligence in ways unforeseeable by their now-outmatched human counterparts.  In 1993 Vernor Vinge wrote, in what may be the most famous piece about the Singularity, that “Within 30 years we will have the technological means to create superhuman intelligence.  Shortly after, the human era will be ended.”  Vinge also posits four possible ways that this superhuman intelligence may come about:

1) A computer that is sufficiently advanced as to be “awake,” or a singular AI.
2) A network of computers may “wake up” as a superhumanly intelligent entity.
3) Computer/human interfaces may become so intimate that the users may be considered to be superhumanly intelligent.
4) Biological science may advance to increase human intelligence directly.

The first three depend on the advancement of computer technology, based in large part upon Moore’s Law, which posits (extrapolated; the original observation was merely about transistors on a circuit) that the power of computers will increase exponentially, doubling every 18 months or so.  Ray Kurzweil, another important figure in the realm of Singularity study, has studied the history of information systems, from DNA to computers, and has shown that this exponential growth is fairly consistent through nearly every paradigm.  Basically, every generation of computer benefits from the previous generation’s power, and so can reach the next generation in a shorter amount of time.

“But!” you may exclaim, “We certainly must be reaching the limits of our current technology.  They can only make silicon so thin, and features on current chips are already measured at the nano-scale; there must be a limit to how powerful they can make our computers!”

The answer is: sort of.  Yes, we are reaching the limit of our current computer designs, but there is already a wealth of research into new technologies that will allow us to build computer chips in three dimensions instead of the flat plane that our current chips use.  Improvements in nanotechnology and the use of carbon nanotubes and graphene are progressing rapidly, and will likely be able to take over where silicon leaves off.  And we’re even beginning to poke at the edges of quantum computing, which uses the wackiness of entanglement and multi-state particle physics to attack certain problems that would take conventional machines impractically long.  The point is that, even if the exponent slows, the advance in technology will not.  Barring an extinction event, we should have computers more powerful than the human mind by 2045-ish.  We may also have neural prosthetics that enhance human intelligence.
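To make that timeline arithmetic concrete, here is a back-of-the-envelope sketch in Python.  The baseline figures are assumptions on my part: roughly 10^12 operations per second for a high-end 2012 processor, and Kurzweil's oft-cited figure of around 10^16 operations per second as a stand-in for the human brain.

```python
# Back-of-the-envelope extrapolation of "doubling every 18 months".
# The baseline numbers below are rough assumptions, not measurements.

BASELINE_YEAR = 2012
BASELINE_OPS = 1e12          # ~1 teraflop: ballpark for a high-end 2012 chip
BRAIN_OPS = 1e16             # Kurzweil-style estimate of the brain's "ops per second"
DOUBLING_PERIOD_YEARS = 1.5  # Moore's Law, loosely extrapolated

def projected_ops(year):
    """Projected operations per second if the doubling trend simply continues."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_OPS * 2 ** doublings

for year in (2020, 2030, 2045):
    ops = projected_ops(year)
    status = "above" if ops >= BRAIN_OPS else "below"
    print(f"{year}: ~{ops:.1e} ops/s ({status} the brain estimate)")
```

On those assumptions, the projection crosses the brain estimate in the early 2030s and is comfortably past it by 2045, which is the rough shape of the argument Kurzweil makes.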

But with the discussion of artificially enhancing a portion of ourselves comes the inevitable worry: will making artificial “improvements” create an artificiality in a person’s being/personality/soul?  Will the additional computing power of the new brains push morality to the wayside in favor of cold, robotic logic?  What use will emotion, community, family, and human connection have in a world where we are all supercomputers?  Where will heroism and altruism fit in a world where probabilities and best-case scenarios can be calculated by anyone in an instant?  What will inspire us when we can do anything we want?

Thing is… I don’t know how to answer that problem.  There is research into parts of the brain containing neurons thought to be the source of emotional intelligence; if we can (or want to) enhance those areas, we may be able to improve our ability to connect with and care about other people, but honestly I don’t know that that sufficiently answers the problem.  There is also the utilitarian answer that through enhanced intelligence and logic a greater morality will emerge, as we can better calculate the greatest possible good to come out of our moral decisions.  This doesn’t really comfort those afraid that we’ll all become unfeeling robots, however.  Quite the opposite, I would expect.

My only real advice in this regard is similar to the advice I would give people who worry about rising governmental or corporate control: active, diligent attention.  Question the motives and effects of technology, especially tech that could improve or enhance abilities.  There is a very real danger of enhancement technology creeping into modern ubiquity without much attention paid to the repercussions.  We can only have a moral Singularity if we pay attention to the world and how we change it.

Monday, January 17, 2011

Theory of Relational Ontology

Here in my second post, I thought I’d explain a significant underpinning of my philosophy of how things “be.”  That is: how do we look at things, and describe how they exist in the world?  This study of the existence of things is called Ontology.  This discussion will necessarily be far more philosophical and conceptual than likely anything else I post here, but I feel it is necessary in order to avoid going over it multiple times when a related issue emerges in the future.  It will also be considerably longer than what you may be accustomed to.  This is because I need to first set out the principles and history of four different ideas in order for mine to make any sense in context.  And as context and nuance are a huge part of my own philosophy, this is something that I feel is necessary.  Fair warning: I will reference books you've likely never heard of and that are beasts to read… but very worthwhile.  I will attempt to give you as much CliffsNotes-style information about their content as I can, but a lot will be lost in the shortening… so I would urge anyone who is interested in these topics to go out and read them.

So, in order to explain what a Theory of Relational Ontology is (would that technically be considered a pun or a tautology?), we first need to look at what other ideas exist out there that it may both deconstruct and be informed by.  Three significant ideas have been widely accepted in the mainstream over the last century: Modernism, Postmodernism, and Actor-Network Theory.  My own theory is most closely related to actor-network theory, but differs in a significant way.  But first, let me give you a rundown of the history and precepts of the first two: modernism and postmodernism.

Modernism was a philosophy that grew out of the Enlightenment, coming to the fore in the mid-to-late 19th century after some serious, large-scale conflicts threw the ideals of the Enlightenment into stark contrast with what was becoming the reality of an increasingly fractured world.  Modernism began to try to separate Nature from Culture, seeking to reduce that which was “natural” to a single objective (quantitative, rather than qualitative) view, and that which was “cultural” to a hierarchical, linear series of (again, quantitative) steps from the barbaric to the civilized.  This philosophy was tied deeply into the colonial practices of Western Europe, and contributed greatly to advances in the physical sciences… while at the same time creating a cultural separation between the civilized colonizers and the "ignorant" and "savage" colonized tribes.  It created a hierarchy of cultural value, as one would create a hierarchy of property value.  Ironically, though this was intended to give “savage” cultures a path to “civilization” (as opposed to the static notion of culture from the Romantic period), it instead created an “Us vs. Them,” “We vs. Other” dichotomy that only harmed other cultures’ evolution.  There are books and books and papers and papers about the effects of this on cultures around the world (most notably in Africa, where national lines and tribal areas are in conflict and have likely been the cause of much of the region's instability over the last 200 years), so I can’t get too far into it.  But the point is that Modernism tried to separate, to purify, things into their constituent parts.  This was definably an English cultural trait; that was definably an African cultural trait.  This was definably a natural fruit; that was definably a cultural juice.  Kant, Locke, Freud, T.S. Eliot, Schoenberg, Picasso and Nietzsche are staple examples of Modernist philosophers and artists: all of whom extol a fundamental structure, an objective truth, for everything, including music, morality and consciousness.

But problems arose when we began to realize that, in order to purify, in order to rigidly define the nature of a thing, we then needed some tricky translations to put everything back together.  If fruit was natural, but juice was a cultural creation (after all, we don’t see cran-raspberry juice hanging from trees in the wild!), then how do we get from one to the other?  Post-modernism came in to save the day.  It tried to deconstruct all of these established paradigms of what is what, and said that everything modernism held to be true was really just language games, made to satisfy human curiosity and our brains' need to classify.  It put forth the idea that everything is semantics, that only an individual can ever truly understand what it is he or she sees, and that perspective is the driving force behind reality.  The basic tenet of postmodern philosophy is summed up, oddly enough, by Descartes’ famous Cogito Ergo Sum thought experiment, in which he found that the only thing he could not doubt was his own doubt; following from that, he must also think, and in order to think, he must exist.  Beyond that, everything could be illusion.  This movement away from objective truth into subjective truth led to abstraction and minimalism in art and music, and to a sort of cultural relativism that sought to establish all cultures as equal.  Michel Foucault, Philip Glass, John Cage, Kurt Vonnegut and Ernest Hemingway are some influential figures in the realm of post-modern philosophy and art.

The problem there, as may be evident, is that when everything is relative to the observer, how can we reach any consensus on, well… anything?  How can we conclude that empiricism, the foundation of scientific discovery, is reliable?  Not only that, but it doesn't solve the problem of the hierarchical nature of modernism, other than painting it in a new light of semantics.  So we are left with nature and culture as semantic arguments instead of material arguments… but we still need to do all of the work necessary to separate and put back together all of the things in the world that are blends of nature and culture.  So, how do we effectively describe an object’s complex reality?  Enter actor-network theory.

As a quick aside, both of these movements are obviously FAR larger and far more complex than the brief paragraphs I have given them here, and include a wealth of context and knowledge about the… epistemology (the study of the growth of knowledge) of social theory over the last 200 years.  I am still not anything close to an expert on any of these movements, so any issues with over-generalization or missed critical points can be attributed both to lack of space and to possible lack of information on my part.  However, on to the meat of my point, rather than just the background.

Starting in 1979, a French scientist and philosopher by the name of Bruno Latour (a student of Michel Foucault, whose name you may recognize from earlier) began to look at the work of science as a social construction.  A couple of his most seminal works are Science in Action and We Have Never Been Modern.  In Science in Action, he introduces the concept of studying scientific endeavor as a social object that has to be studied, as it were, in action.  To take only the final result would be to miss a lot of the important contextual factors that bring about the answer.  He uses the example of the discovery of the double helix as evidence that social and political forces are instrumental in how a scientific discovery is brought into being.  Without Francis Crick’s political maneuverings, without the constant fight for funds, without a face in the political arena that gathers support, scientific discovery is nearly impossible to do.  This is the first blending of the natural and the social, of bringing context back into the equation of empirical, objective reality and letting the methodology color the result.  In We Have Never Been Modern, he breaks down the modernist (and post-modernist) nature/society dichotomy by introducing the concept of hybrid objects: objects that blend the social and the natural.  He uses the idea of networks to hold up the construction of things in a way that bypasses the need for purification and translation.

The main idea behind actor-network theory is that every “thing” is a hybrid of the social and the natural, and we can consider the ideas of nature and society as opposite poles of a scale.  Where religion may sit strongly toward the social end of the scale, a rock in the wilderness that no one has ever seen sits heavily toward the natural end.  And these objects, both human and non-human, are held together by specific networks such that, without any one part of the network, the object could not exist.  The only real difference between any two objects (be they something as small as a campfire, or as large as a nuclear reactor, or as complex as a society) is the length of the network necessary to hold them up.  For example, if we compare a cooking fire and a nuclear reactor, we can see that effectively they do the same thing: provide heat and light to people.  However, in order for the fire to exist we need very few human and non-human actors to make it possible: the wood, the spark, the person who made the spark and gathered the wood, and the teachings that let the person know how to build a fire… aaaand that’s about it.  For the nuclear reactor, you need hundreds of construction workers, instruments and machines, thousands of hours of schooling, infrastructure, etc., etc… a much, much larger network is necessary to make it possible.  Two objects which do similar things… whose only difference is the scale of the networks necessary to bring them into being.  We can look at cultures the same way.  Instead of the modernist hierarchy of societies, we can see that the networks a small, tribal, “savage” society needs in order to exist are simply much, much smaller than the networks necessary for a “civilized” society such as that of Europe or the US.

There is one more author whose work I need to reference before we come to the end, and that is Annemarie Mol.  You may remember I referenced her work last week.  In her book The Body Multiple: Ontology in Medical Practice, she puts forth the idea that, basically, context determines reality.  Only through interaction can anything be shown to exist.  She uses the example of atherosclerosis (and let me tell you, when you have a class that is a 3-hour discussion and you need to say atherosclerosis dozens of times a day… eeeesh) to show that no thing is only ever one thing.  To a patient prior to diagnosis, atherosclerosis is pain in the legs, inability to walk and swelling in the extremities.  To the doctor who diagnoses the condition, it is a weakening of the pulse through a stethoscope.  To the surgeon it is crunchy arteries and the plaque he just used a tool to scrape out of those arteries.  To the pathologist examining amputated limbs, it is a thickening of the arterial wall under a microscope.  The idea is that in each of these contexts, atherosclerosis is a different object… because how it is being enacted is different.  To the person in the lab looking at slices of sclerotic arteries under a microscope… without the microscope, the amputated leg and all of the relevant tools, that thickening of the arterial wall doesn’t exist.  Depending on who is enacting an object and with what, the object has a different ontological reality.  There is no practical single objective reality to anything.  It all necessitates interaction.  Things literally cannot exist in a vacuum.

My own philosophy is sort of a blending of Mol and Latour’s ideas.  Latour’s networks are too static; they assume a network for each object, and that if any part of the network breaks down, the object it holds up will fail to come to be.  If we look at Mol’s theory in comparison, we see that everything is in constant interaction; it must be continually upheld by enacting other things and being enacted by them.  This gives us a much more dynamic, shifting environment than Latour's.  But Latour is still correct in that the network is necessary for an object to exist.  Without the concrete, we could never build the nuclear reactor.  So how do we get through this stalemate?  With one very simple change: there is only one network, and the difference between objects is not how long their networks are, it is how many first-order enactments are necessary to uphold them.

So what do I mean by all of that?  Well, if we follow the networks that are required to hold up any object in Latour’s model, we can eventually follow them to include, well... everything.  For instance, I mentioned the comparison between the fire and the reactor earlier, and I said that there was very little necessary to make that fire possible: wood, spark, person, a culture that taught fire-making.  Well, that was a bit of a lie.  In order to get wood, you need trees, which need an ecosystem in order to grow, mature, and then die to provide wood.  In order for there to be a society that can teach fire-making, there need to be several people, all of whom do many things, including hunt, gather, farm and trade with other societies, which expands their network to include many other societies, which can then expand into more societies, and then the entire world.  But if we can connect a simple cooking fire with the entire world, then how is that different from the reactor, which must be able to do so as well?  Well, that’s where the first-order connections come into play.  My original list of network connections for the fire was only kind of a lie… all of the things I mentioned are necessary first-order connections; they are the things immediately necessary for an object to exist at a specific time.  All of the other parts, the greater society and the connections with a large ecosystem and trade with other societies, are only necessary to uphold the objects that are the first-order connections.  They take two or more steps to reach from the “fire” object.  But how does this improve over Latour and Mol?

Well, when connections break, things don’t magically disappear; they are merely re-shuffled into the network and bring about a new reality.  If a concrete truck breaks down and cannot deliver a necessary load to the construction site, the construction site doesn’t go poof; the contractor shifts his network to include a new truck and schedules repairs.  If a person cannot get enough funding for a project, he may instead join a team doing similar work in order to become better established (and do related work that could improve his intended project) and find that funding in the future.  If a trade agreement with another tribe breaks down, that does not immediately affect the ability of the fire-making tribe to make fire now… but it may shift the balance of power and eventually drive the society to a place where it disperses and then cannot make fire (by virtue of not existing any more).  Through enactment and a shifting network, the realities and contexts become much more dynamic, much more stable, and much closer to the reality we see every day than a stripped-down, stylized approximation.  Mol’s theory of enactment also falls a little short in that, by her estimation, human actors are required to enact objects and determine their reality, veering a little close to perspectivalism and the pitfalls of postmodernism and subjective relativism.  If we allow non-human objects, through the networks, to enact each other via the rules of the universe, we can expect a systematic understanding and an empirical consistency that neither theory allows for, without sacrificing the dynamics and variability that we see in the world.  We still get context-sensitive reality while maintaining a foundation in reproducibility, logic, and reason.
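Since the whole argument hinges on networks and first-order connections, here is a toy Python sketch of the idea, entirely my own illustration rather than anyone's formal model: the world is a single shared graph, an object's first-order connections are just its immediate neighbours, and a broken connection re-routes the network rather than erasing the object.

```python
# Toy model: one network, where objects differ only in their first-order connections.
from collections import defaultdict

network = defaultdict(set)

def connect(a, b):
    network[a].add(b)
    network[b].add(a)

def first_order(obj):
    """The connections immediately necessary for an object to be upheld."""
    return set(network[obj])

def reshuffle(obj, broken, replacement):
    """A broken connection does not erase the object; the network re-routes."""
    network[obj].discard(broken)
    network[broken].discard(obj)
    connect(obj, replacement)

# The cooking-fire example: a handful of first-order connections...
for part in ("wood", "spark", "fire-maker", "fire-making know-how"):
    connect("cooking fire", part)
# ...each of which is upheld by longer chains (second order and beyond).
connect("wood", "forest ecosystem")
connect("fire-making know-how", "tribe")
connect("tribe", "trade with neighbours")

print(first_order("cooking fire"))   # four things, not the whole world
reshuffle("tribe", "trade with neighbours", "new trade partner")
print(first_order("cooking fire"))   # unchanged: the fire still burns
```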

The study of science as a social endeavor (usually called Science and Technology Studies, or STS) I tend to think of more in terms of anthropology: the study of human behavior and culture.  That grounds the study in an empiricism that a purely sociological approach can skirt in favor of a certain level of relativism.  In anthropology, when you study a culture, you must necessarily place artificial bounds on what you study, because if you tried to study a culture including all of the ways it interacts with other cultures, then you would also have to study those other cultures in order to understand the dynamics, and then it’s turtles all the way down.  The same is true here: we need to realize that the context for anything we look at is, in truth, infinite.  To fully understand, we would have to include all aspects of the world in regard to everything.  This is, naturally, impossible… so we must artificially end our study at some point.  My attempt with my own relational theory is to better understand where the limits of effective study actually are.  If we can find significant tangential effects of a discovery or of a scientific practice, we should explore them.  If we can enumerate all of the first-order connections that make a thing be, then we can better trace lines of effect throughout the greater society, and I think that will give us a greater understanding of science and technology as they relate to society.

Thank you to all those who finished all 3000 words of that monster, and discussion as usual is welcomed below.  If there are any questions or comments, please do not hesitate to ask.

Sunday, January 9, 2011

What Is a Cyborg, and Why Am I One?

As my blog is titled “The Cyborg Apologist” I thought I should first discuss what, exactly, that meant.  The immediate meaning is twofold: first, that I am an apologist for the idea of cyberization, for the idea of the cyborg as a good thing, a positive advancement of the human condition; and second that I am a cyborg, and an apologist for my own existence and the relevance of science and technological advancement as social forces.  So we then have to answer the question of what a cyborg is.  It will be difficult to be an apologist for a concept if we’re not on the same page as to what that concept implies.

Popular media has characterized the cyborg as a mostly-machine villain, human perhaps only in appearance, as a disguise to carry out some nefarious robotic program.  We need only look to movies such as the Terminator franchise (or the cheesier 80s B-movies of the Cyborg franchise… yes, franchise) to see this.  Darth Vader is another good example.  But in other fiction, cyborgs are not inherently evil.  Frankenstein’s monster could be considered a cyborg, as a person built of flesh and science… but he, though flawed and scarred, was not evil (ignoring the popular monster movies, of course).  The Replicants of Blade Runner may have been antagonists, but they were not evil by nature.  In the Ghost in the Shell series, nearly everyone is a cyborg to some degree, both hero and villain.

But what IS a cyborg?  The dictionary defines it as a person whose physiological functioning is aided by or dependent upon a mechanical or electronic device.  Others define it as a being with both biological and artificial parts.  But that raises the question of where “being” and “physiological functioning” begin and end.  And those are more difficult questions.  Do we only include internal physiological functions via the dictionary definition, or do things like sight, hearing and communication qualify?  And where, by the second definition, do the parts of a person end?  Do we include a person’s wardrobe, their location, job, and tools?  Can we distill a “person” to their flesh-and-blood body and brain, sans everything else that they may use to interact with the world?

No, I do not believe we can.  I do not believe that we can only include our internal organs and body parts as our only physiological functions.  I do not believe that we can separate our actions, our creations, or our tools from what we consider to be our own being.

Georges Canguilhem, in a lecture given in 1947 called Machine and Organism, laid the groundwork for the idea that the human mind considers the tools it uses to be extensions of the body.  A hammer, once grasped, is no longer a hammer, but rather an extension of the arm that can now drive nails.  Considering the ease with which anyone can use simple tools with little to no training, this idea is not so revolutionary as one might think.  The idea of tools as organs becomes even more striking when we look at items such as eyeglasses and contact lenses, telescopes and microscopes, microphones and hearing aids, perfumes and deodorants, or more strikingly: artificial limbs, heart valves and joints.

If we consider our “selves” to be only that which we need to survive, and nothing more, then we would still need to include clothing and shelter.  Remove those and the human being cannot live except in very small zones near the equator (that we originated in such places is thus no surprise, but I digress).  But if we also consider that human society is a necessary part of the human condition, that it is inseparable from what makes us human, then we must include the trappings thereof: religion, philosophy, churches, schools, our books and our stories, our governments and our buildings, our homes and our transportation, our phones and our computers.  All of these are necessary parts of our current human society.  We could get rid of a lot of it and survive, as a minimal requirement, but many would not survive the removal of modern “convenience,” and the very idea of human society would be irrevocably changed.  How well do you think your average CPA can hunt, gather or farm for food?  How necessary would his skills be in an agrarian society?

Another author whose work has informed my view is Annemarie Mol, through her book The Body Multiple.  In it she posits a theory of ontology (the study of how things “be”, or how they can be said to exist) that requires interaction, or as she puts it: enactment.  The idea is that we cannot separate what something is from the context in which it exists.  We cannot purify it down to some conceptual “natural” objective reality, because such a state doesn’t exist.  We cannot separate the man from all of the things that he enacts each day: his family, his clothes, his job, car, house, etc.  To do so would be to strip him of what makes him… him.

So, if we cannot separate the person from the items he uses – and we accept that the mind considers what we use as parts of ourselves – then where are we with regard to the original question?  What is a cyborg?  Well, we are cyborgs.  We all of us have artificial parts that perform physiological functions.  In fact, ever since the first proto-human picked up a stick to club his lunch we have been cyborgs.  We have been taught to consider the blending of human and machine as other, as unnatural, as scary, as human hubris gone too far.  But if we just look around at the amount of machine that we require to live our daily lives, even if those machines may not reside within our flesh, we can see that it is not so scary, it is not unnatural (our machines have grown along with our understanding, our society, and ourselves as humans, in a completely organic way), and it is not other.  It is us. 

We are.



Comments, constructive criticism and all things related are welcomed below.  Next week I will delve more deeply into the idea that I quickly passed over here: existence as interaction.  Join me for a look at the Theory of Relational Ontology.  Then for an examination of why we fear the idea of cyberization in The Big Bad Wolf: Losing our Humanity.

Monday, January 3, 2011

Welcome

Welcome to The Cyborg Apologist!

Here we will discuss new and exciting forays into the realms of science and technology, look at the shifting cultural paradigms that they inform (and vice versa), and probably get a bit into politics and pop culture as well.

As for the name, I am a cyborg... sort of.  And so are you... sort of.  My goal here is to make sure you're cool with that fact.  I aim to expose the nature of cyberization as the natural process that it is, to remove the fear and uncertainty, and instead to explore the really cool possibilities that our technology has in store for us.  This isn't to say everything I write will be all roses and optimism.  Dangers do lurk and we need to be aware of them.

As the new year begins, so do I.  Return next week for the opening salvo in what is sure to be an interesting journey.

Next topic: What is a Cyborg, and why am I one?