By definition, a singularity is something utterly peculiar unto itself, a species of being unmatched in its "this-ness." The term has found use in a number of domains, most significantly in physics, where a singularity denotes a condition of matter whose density approaches infinity as its volume approaches zero. Cases of singularities or near-singularities include black holes and the initial singularity from which the Big Bang proceeded.
The singularity is the topic of a recent book on Marxism by Luca Basso, Marx and Singularity (2008), which attempts to understand Marx's thought from the early writings through the Grundrisse in terms of the search for individual realization. Others, such as Bruno Gullì (Labor of Fire, 2005), have likewise worked in part to correct the errant notion that Marxism is predicated on an undifferentiated mass subject, rather than a fully articulated, fully realized (social) individual, a singularity.
But I am using "singularity" in yet another sense, to refer to the technological singularity: the hypothetical, near-future point at which machine intelligence will presumably supersede human intelligence, setting off an intelligence explosion. Inventor and futurist Raymond Kurzweil, whose books include The Age of Spiritual Machines (1999), The Singularity Is Near (2005), and How to Create a Mind (2012), heralds the singularity in this technological sense.
In this singularity, a prospect predicted and also advocated by "Singulartarians" like Kurzweil, the future is as fabulous as science fiction might have it. In the short term, regular genetic check-ups to scan for "programming errors" in gene sequences, along with gene therapy, would be common, as would the merging of human brains with computer prostheses. Soon thereafter, nanorobots would clean up the environment, removing excess CO2 from the atmosphere, recreating a green planet, and reversing global warming. Micro-robots would also course through the human bloodstream, removing waste (a distasteful process), killing pathogens, eliminating cancer cells, repairing genetic codes, and reversing aging. Computer chips implanted in the brain would increase memory a million-fold. By 2029, technologists will have successfully reverse-engineered the brain and replicated human intelligence in (strong) artificial intelligence (AI), while vastly increasing the processing speeds of "thought." Having mapped the neuronal components of a human brain, or discovered the algorithms for thought, or some combination thereof, technologists would convert the same into a computer program, personality and all, and upload it to a computer host, thus grasping the holy grail of immortality. Finally, as the intelligence explosion expands from the singularity, all matter would be permeated with intelligence; the entire universe would "wake up" and become alive, and "about as close to God as I can imagine," says Kurzweil. Thus, in the Singulartarian vision, the universe begins with one singularity and becomes God-like through another. This second singularity, Kurzweil suggests, involves the creative intelligence of the universe becoming self-aware through its informational, technological agent. In the technological singularity, then, the technological and the mystical converge, and Kurzweil comes to resemble a techno-cosmic Hegelian. Incidentally, according to Kurzweil, our post-human successors will bear the marks of their human provenance; the future intelligence will remain "human" in some sense. Human beings are the technological carriers of universal intelligence, and human culture is its fragile substratum.
Kurzweil justifies his belief in the inevitability of a technological singularity with the law of accelerating returns, a posited tendency toward the exponential increase of information processing, which he traces from the beginning of matter through the computer age and beyond. Kurzweil's trick is to place material, biological, and technological evolution on a single graphic continuum, a method that reduces matter and its development, in whatever form, to information, thereby allowing its eventual subsumption into computing processes. Human (and post-human) intelligence is measured primarily, if not solely, in terms of processing speed. While recognizing that such a measure of subtle human intelligence is insufficiently qualitative, Kurzweil nevertheless accepts it as adequate for his purposes.
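To make the structure of that extrapolation concrete, here is a minimal sketch in Python of the sort of reasoning the law of accelerating returns trades on. All of the figures (the baseline machine capacity, the doubling period, the "brain-equivalent" threshold) are illustrative assumptions of my own, not Kurzweil's published estimates; the point is only to show how, once intelligence is reduced to operations per second, a crossover date falls out of simple arithmetic.

```python
import math

# Illustrative assumptions only -- none of these figures are Kurzweil's.
BASE_YEAR = 2015
BASE_OPS_PER_SEC = 1e16          # assumed machine capability in the base year
DOUBLING_PERIOD_YEARS = 2.0      # assumed doubling period for price-performance
BRAIN_EQUIVALENT_OPS = 1e18      # assumed stand-in for a "human brain equivalent"

def projected_ops(year: float) -> float:
    """Exponential projection of machine operations per second in a given year."""
    elapsed = year - BASE_YEAR
    return BASE_OPS_PER_SEC * 2 ** (elapsed / DOUBLING_PERIOD_YEARS)

def crossover_year() -> float:
    """Year at which the projection first exceeds the assumed brain equivalent."""
    doublings_needed = math.log2(BRAIN_EQUIVALENT_OPS / BASE_OPS_PER_SEC)
    return BASE_YEAR + doublings_needed * DOUBLING_PERIOD_YEARS

print(f"Projected ops/sec in 2029: {projected_ops(2029):.2e}")
print(f"Crossover year under these assumptions: {crossover_year():.1f}")
```

Whatever figures one chooses, the arithmetic guarantees a crossover; this is precisely the reduction of intelligence to processing speed at issue here.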
If all of this sounds rather otherworldly, Kurzweil wields his predictions with aplomb, and powerful institutions serve to validate his self-assurance, at least for now. With the backing of Google, Genentech, Nokia, Cisco and other sponsors, he co-founded (with Peter Diamandis) Singularity University in Silicon Valley, which has as its stated mission “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.” Since 2009, Singularity University has attracted business leaders, technologists, graduate students, and others, and has been training them in the Singulartarian framework for solving what they deem the world’s most pressing problems. Far from being relegated to the role of guru or hack, Kurzweil has since become Google’s Director of Engineering, where he has been given the reins to convert his futuristic notions into fungible consumer products. Whether or not the singularity itself is actually near, or even possible, the singularity as an idea is already considered a credible rubric for significant technology investment.
The singularity paradigm, so its advocates assert, amounts to a second Scientific Revolution. Trans-humanists (those who call for the supersession of humanity by a new “species” with a higher intelligence and extended lifespan) such as Simon Young (in Designer Evolution: A Trans-humanist Manifesto, 2005), oppose this new optimistic orientation to the postmodern pessimism of the neo-Luddite left, among other groups. The anti-Enlightenment, anti-technology left, represented by such figures as Bill McKibben, blames science and technology as such for nuclear bombs, environmental degradation, GMOs, the overuse of psychotropic drugs, robotic war, etc. McKibben has written against the Trans-humanist singularity (Enough: Staying Human in an Engineered Age, 2004), arguing for the halting of such research programs through legal and other means. Following Carl Sagan, the New Enlightenment thinkers counter-argue that science and technology have vastly improved human life and must be pursued apace so as to overcome the hurdle of humanity’s scientific and technological infancy.
While Singulartarians and Trans-humanists surely have their enemies on the left, they have also earned the ire of both the religious and the humanist right. Members of the latter group have recently sounded off against the movement in both the National Review and The Weekly Standard. Both articles use Trans-humanism or Singulartarianism to casually slur Marxism, which they refer to, along with Trans-humanism and Singulartarianism, as a form of "scientism," or a "one-dimensional explanatory scheme of all that exists." The bogey for the humanist right is the reduction of humanity to an object of instrumental mastery, whether technical (Singulartarianism) or social-scientific (Marxism). Gone is the fully elaborated and irreducible individual, and along with it the humanities, the traditions for preserving the kinds of knowledge that cannot be codified or encapsulated in formulas for prediction and control.
Kurzweil and others would respond by suggesting that the “human” is not eradicated per se with the singularity, but expanded exponentially. The humanities, meanwhile, far from being lost, might become a packet for easy download in a matter of minutes or seconds. The questions of moral and social agency would be transcended, Kurzweil suggests, as new philosophical problems emerge, unforeseen at present.
But what might a Marxist perspective on Singulartarianism or Trans-humanism look like? Contrary to the National Review and The Weekly Standard, such a perspective, while perhaps singular, is by no means scientistic.
Marxism embraces science, but not scientism. That is, it does not cede social desiderata to scientific authority, as in Comtean positivism. Nor is Marxism techno-determinist: the character and direction of the social order are not determined by science and technology per se, nor do science and technology develop autonomously in a particular direction of their own accord. Rather, as Marx noted, technology, like any other form of capital, must be valorized in relation to the cost of reproducing labor, that is, in relation to the cost of labor itself. Singulartarianism may be regarded as another means of just such self-valorization of machine capital. Contrary to their own assertions, meanwhile, Singulartarians, including Kurzweil, tacitly admit as much; in their Open Letter to UN Secretary-General Ban Ki-moon they imply that there is nothing inevitable about the technological developments they advocate, since they petition the UN to take their solutions to real-world problems seriously.
On the one hand, it is not sufficient to merely suggest that Singulartarian science, to the extent that it is science, is bourgeois in character. The same might be said for all science and technology developed under capitalism. Marxism does not call for the abolition or eschewal of the products of bourgeois science and technology as such, but rather aims to preserve and improve upon the real gains in scientific knowledge and technological development, while eliminating those obviously developed for sheer domination or destruction. On the other hand, Marxism does not embrace the ideological character of particular scientific or pseudoscientific movements.
Even leaving aside its cosmic pretensions, the Singulartarian worldview is in fact techno-determinist, scientistic, and mystified. The movement is the carrier of an ideology that would leave the social order in place, ignoring the antagonisms of class society in order to focus on technological management. It obscures the social basis of real-world crises by promising a panacea of technological fixes in place of social change. It ignores the fact that the social relations of capitalist production are culpable for the environmental crises that beset us (see John Bellamy Foster, et al., The Ecological Rift: Capitalism's War on the Earth, 2010). At the same time, the Singulartarian worldview fetishizes biotechnical processes, promising to place highly advanced technologies within the reach of the wealthy few, notwithstanding the weak claim that Moore's Law closes the technological breach by exponentially increasing the price-performance of computing (and thus halving its cost per unit of computation) every two years or less. These fetishized biotechnologies, fetishized because they are given undue attention and importance, include avatars controlled by brain-computer interfaces, life-support systems for disembodied human brains, fully artificial equivalents of the human brain, and the uploading of the entirety of human brain processes to computers, all of which reveal the bourgeois ideological character of Singulartarian priorities. Though this goes unstated, the life-extension and immortality of the few take precedence over the merely mortal existence of the many. Further, under this rubric, knowledge and consciousness are understood in reified, individualistic terms, as if they could be abstracted from their material and social substratum, isolated, and uploaded in a neatly contrived package onto silicon or other non-biological platforms. Such a conception is idealist and instrumentalist at base, and it utterly misses the reality of the social relations that make cognition and consciousness not only meaningful but possible.
Further, Singulartarian technologies are aimed at subjecting human beings to a kind of technological management that would make the surveillance of the NSA look like child's play. Even if the literal connection of brains to the Internet (and thus the direct "data mining" of thought and memory) remains impossible, the kinds of technological mastery over experience contemplated, and supposedly being developed, under the direction of Kurzweil and other Singulartarians pose threats to the integrity of decision-making capabilities. With humans capable of controlling avatars, some of which may bear an uncanny verisimilitude to their masters, the possibility of reversing the direction of control seems equally plausible. The use of avatars, meanwhile, presents moral dilemmas that the current social order is unprepared to deal with. Even with Google Glass, a crude development compared to what might follow according to Kurzweil et al., the technological constraints on experience, billed as enhancements, are already apparent. All technologies bear the marks of their makers' priorities for users. All constrain and construct experience in particular ways. But when such technologies enter into direct interface with the brain, the possibilities for eliminating particular kinds of experience and motivation become plausible. The desideratum to record, label, and informationalize experience, rather than to understand it, let alone critically engage or theorize it, may take exclusive priority, given the possibilities for controlling neuronal switching patterns. Given the instrumentalism of the Singulartarians, decisive, action-oriented algorithms may dominate these brain interfaces, compromising faculties for the critical evaluation of activity. Naturally, Aldous Huxley's Brave New World comes to mind, but as suggested, Singulartarianism bills its prostheses, extensions, and substitutes for human brains as vast improvements over standard human intelligence. Such technologies would thus have an ideological appeal not at all imaginable for soma. Nevertheless, they will have been based on an intelligence defined in a particular way, one that puts considerable emphasis on the speed and volume of "data processing," construed as "knowledge."
Kurzweil claims that it is nearly impossible to imagine what would take place after the singularity. We think that the possibilities can be extrapolated from existing conditions. Under class society, a decentralized, open-access info-sphere of exploding intelligence is unfathomable. Developed in connection with the state apparatuses already in place in the United States and elsewhere, such as the vast data storage and processing centers in Utah and the programs, such as PRISM, set up to fill them, Singulartarian technologies would become part of the arsenal for class domination and enhanced imperialism. The supersession of human intelligence by machine intelligence would likely involve the use of such data and data-processing capabilities to further predict and control the social behavioral patterns of domestic and other populations. Meanwhile, warrior avatars and other advanced weaponry would supplement the existing arsenal of robotic weapons, such as drones and demining robots, for expanded imperialist and less bloody (for the imperialists) adventures. The "intelligence" of all of these weaponized systems would be dramatically enhanced, making them ever more effective and dangerous. In addition, the biotechnical enhancement of the few would serve to exacerbate an already widening class gulf, while the "superiority" of the enhanced would function ideologically to rationalize the differences permitted by class division. The movie Gattaca (1997) comes to mind here. That is, if developments proceed as Kurzweil predicts, this vastly accelerated information-collecting and -processing sphere will not constitute real knowledge for the enlightenment of the vast majority. Rather, it will be instrumentalist and reductionist in the extreme, facilitating the domination of human beings on a global scale while rendering opposition ever more difficult.
In short, whether or not the vaticinations of Raymond Kurzweil have the slightest chance of becoming reality, it is clear that the Singulartarian movement provides ideological cover for the objectives of an increasingly technocratic ruling elite. Promises of vastly increased "intelligence" during an "information explosion," or of immortality amid the increasing precariousness of life on Earth, serve to allay fears and provide (false) hope, while obscuring real social and material conditions. The singularity notion proffers a resolution to existing conditions while sidestepping the confrontation that a real resolution requires. If it were mere ideology, Singulartarianism would not merit much concern. But make no mistake: the singularity paradigm is a Trojan horse that smuggles in a desideratum for the singular mastery of space and time by a few under the cover of a techno-utopia for the totality. If indeed the ruling elite is hastening us toward a singularity, as corporate and state projects would seem to suggest, then we must be cognizant of the actuality it signifies and work to oppose it, not on anti-technology grounds, as the Singulartarians would have it, but from the standpoint of the use of science and technology for the purposes of human emancipation. Whether anything may be salvaged from the technological apparatuses left over from Singulartarianism is a question for the future, the future of socialism-communism, not, if we have our way, of Singulartarianism.
Comments
Your essay seems to cast transhumanism and Singulartarianism as ideological tools for furthering the goals and projects of a corporatist state. You use Kurzweil and his corporate relationships to help further his techno mystical message as a heavy example of transhumanist ideas as nothing more than camouflaged corporatist propaganda.
But consider the perspective of someone like myself, who has engaged in a lifetime of discussions with others about the subject of transhumanism and technological change. Among us there is hardly a unified political ideology or even a political movement. We like to forecast that technological change is happening at an increasingly faster rate, but we still argue about whether and when technological change might cause a shift in humanity radical enough to warrant the label of a so-called singularity. Some of us are extremely optimistic that technological change will lead to a sudden shift in the human condition.
Your article threatens to alienate those of us who feel that technological change is necessary for social change; that technological change undermines the power structure of the corporatist state; that it is actually in the ruling elite's interest to maintain the status quo, to keep all technological innovation controlled and monetized, and to discard any innovation that threatens or disrupts the marketplace, even if this so-called disruptive technology has a great benefit to the welfare of society and the environment.
It is some of our hopes that in embracing rapid technological change and support of science and technology we can see a disruptive technology spin out from control of the corporatist state and instead lead to a more equitable society. The current corporate and government policy structures are already ill-prepared to maintain control over and regulate novel uses of emerging technologies. The novel uses of the next technological breakthrough may very well cause the collapse of the current capitalist paradigm. And when that happens, then instead of a revolution, we will call it a singularity: a point of no return when the whole system has radically and irrevocably changed.
Dear Coldplasma,
Thanks for your very considered reply to the piece. Allow me to say, however, that I think your response represents a few misreadings of my argument, and perhaps of the premises. First, you say that “You use Kurzweil and his corporate relationships to help further his techno mystical message as a heavy example of transhumanist ideas as nothing more than camouflaged corporatist propaganda.” Allow me to correct you here. I believe that I had made it clear that the technological movement that Kurzweil aims to advance is not merely ideological. It also represents real technical and technological mastery. The question then is just what this technological mastery might amount to.
You go on to note that no ideological unity exists among transhumanists (especially in connection with the singularity). This is a good point. I don't think that I represent all of the transhumanist movement as Singulartarian. Nor do I think that Singulartarianism is all Kurzweil. Nevertheless, Kurzweil's brand, if you will, represents the dominant one. And as such it is the one under analysis.
Your next point is that "[the] article threatens to alienate those of us who feel that technological change is necessary for social change." I also believe that technological change brings social change. I do not, however, think that technological change necessarily brings progressive social change. Surely the technological mastery of the Nazis in the death camps brought about social change, but it did not bring about progressive social change. As Walter Benjamin pointed out with reference to this very question, "[t]his vulgar-Marxist conception of the nature of labor bypasses the question of how its products might benefit the workers while still not being at their disposal. It recognizes only the progress in the mastery of nature, not the [possible] retrogression of society." That is the crucial question I am posing in this article, the question of cui bono, especially under existing class society. But you suggest that technology itself can bring about revolution. Technology, of itself, does not of necessity bring about positive social change for the vast majority, the working class. In fact, it can and has brought about the contrary in many cases. The point of the article is basically the following: what if the singularity happens before socialism? In that case, our problems have been compounded, not solved, and for the reasons I give. Under class relations as they currently exist, there is absolutely no reason to believe that such technologies will be at the disposal of the vast majority. There is plenty of cause for concern that they will be mobilized against us, however.
You, and apparently some of your associates, subscribe to a techno-determinist fallacy when you suggest that "It is some of our hopes that in embracing rapid technological change and support of science and technology we can see a disruptive technology spin out from control of the corporatist state and instead lead to a more equitable society." Again, the social change that we need will not happen by virtue of technology alone. My critique of this view is like the one Trotsky made in connection with the use of bombs (by anarchists) in an attempt to hasten the revolution: "If a thimbleful of gunpowder and a little chunk of lead is enough to shoot the enemy through the neck, what need is there for a class organisation?"
That is, this notion of technology as the panacea undercuts and attempts to bypass the *social* confrontation, the class confrontation, that is necessary in order to bring about socialism. Even in the case of "positive" technology, it is disabling to the working class to promise that some set of technologies will do their revolutionary work for them. I urge you to take a look at this piece, as it applies analogously to the question of "positive" technologies as well: http://www.marxists.org/archive/trotsky/1911/11/tia09.htm
Finally, there is nothing about the article that is anti-technology. I made it clear that technology has the potential to serve in the cause of human emancipation. However, it cannot, by itself, emancipate us. We must do that ourselves. Yes, the technological conditions for socialism need to be met historically before we can have socialism. But we have long reached that point. And yes, some of the technologies developed under this rubric may well be emancipatory and contribute greatly to human flourishing. But they won’t do our social revolutionary work for us.
Thanks again for the comment.
The problem with your essay is that you point to the risks of the control of artificial intelligence technology by capitalist elites, but simply assume that control by communist elites would somehow be better. There is no good reason to believe that. Control by Google, Microsoft, Amazon, et al., has not proven worse than control by the Chinese Communist Party. Or, if you think China isn't really communist, how about North Korea? The only way to stop artificial general intelligence technology is to set up a system like that in North Korea. That's not going to happen in any of the developed countries, fortunately. So the best thing is to accept that artificial general intelligence is coming and work for the decentralized, open-access infosphere you denigrate. That is a lot more feasible than a communist utopia.
“So the best thing is to accept that artificial general intelligence is coming and work for the decentralized, open-access infosphere you denigrate.”
I don’t at all denigrate it. I say that under existing conditions, it will not be actualized.
Instead of the speculative singularity, why not simply look at weaker AI/automation and how it will clearly lead to technological unemployment?
When everything is automated, huge numbers of unemployed people have no income, and the cost of manufacturing goods essentially equates to the cost of the raw materials, there won't be the consumer pool the capitalists require, just large numbers of jobless people wanting basic necessities. The system collapses in on itself.
While vast labor displacement is on the docket, I don’t think that labor will or can be completely outmoded. Capitalists recognize that labor inputs are necessary for value outputs. Thus they will reconstitute value outputs in one manner or another. And again, technology will not do the revolutionary work for us. Rather, it may in fact stall it. Further, system “collapse” is not the same as ending capitalism.
I work around a lot of libertarian tech types like Jaron Lanier and the rest of the gang, and one day it struck me: we already have artificial intelligence of the “Skynet” type: a global system of robotic machine learning that goes by its own principles, such as they are, to its own ends, denigrating human decisionmaking with Godlike whimsy. I think you know what I’m talking about!
I think my book The Singularity and Socialism (no relation to your talk or essay) places all of this in historical context and offers an understanding of the coming economic singularity that is being missed.
http://www.amazon.com/Singularity-Socialism-Complexity-Techno-Optimism-Abundance/dp/1503034739
Dear James,
Thanks for pointing me to your book. I am not sure how I have missed it to date, except that it is not available through my university library system, probably because it was published with the CreateSpace Independent Publishing Platform. I read several of the extremely enthusiastic reader reviews, and also, of course, the description from the back of the book. I will download the Kindle version now. I just have to ask why you didn't seek a reputable publisher and instead resorted to Amazon's self-publishing service.
But I would guess that this isn't going to be my major concern with your book. Given the description, I take it that the book argues, generally, that the abundance promised by the web of things, 3-D printing, and other such materialization of commodities via incipient and to some degree existing technology will result in a dialectical synthesis that will collapse the socialism-capitalism dichotomy and result in a third term, in truly Hegelian/Marxian fashion.
My immediate problem with this argument is that capitalists have relied not only on surplus value extracted through the exploitation of labor, but also on *rents*, and this is especially the case with regard to technology, above all software. Microsoft did not become a behemoth on the basis of exploitation but rather through the rents charged on the basis of intellectual property protection. What makes you think that the software designs produced for 3-D printing will not also be protected through intellectual property regimes?
I may be jumping the gun here, and I will read the book, but these are my first thoughts, and I wanted to record them now, before I deal with the book.
One last thing: have you considered the question of the theoretical possibility and implications of the replacement of labor by androids and the possibility that they may nevertheless continue to add value to commodities on the basis of behaving like humans? Here’s a paper by a friend of mine that addresses the question. I also have a student working on the problem.
https://www.academia.edu/2455476/Do_Androids_Dream_of_Surplus_Value
Cheers,
Michael
There is already a view from the technological side that tempers Kurzweil's enthusiastic optimism, and that is provided by Nick Bostrom, not to mention more recent (since this article was posted) warnings from no less than Stephen Hawking, Bill Gates, and Elon Musk.
Kurzweil himself has been quoted as saying, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”
THE REAL question is, *who* is this "US"?
You referenced the movie "Gattaca", but even that would seem an optimistic view because IMO it's more likely to be the scenario in another movie, "Elysium", except there won't be a happy ending with some hero (Matt Damon) representing the liberties of the masses.
It is my view that the Technological Singularity is an inevitable event, and it is pointless to warn against letting the Trojan Horse in; it is "already inside." What needs to be done is to ensure that it reflects the needs (and wants) of humanity as a whole, and is not directed by the elite few.
Taking up the strategies of the Luddites will fail; instead, we have to embrace those *within the techno-sphere* who are sounding the alarm bells about unchecked technological progress. The problem is, who knows what true agendas lie behind them, especially with the likes of Bill Gates and Elon Musk.
If politicians are not quickly apprised of this situation, it will be too late. The only things that can stop capitalist-directed Artificial Intelligence (AI) are openly democratic governments (ideally directed by an informed electorate), but going by their current track record, governments move way too slowly (against Google on tax issues, for example).
Microsoft Windows ("capitalist") conquered the computer operating-system market over Linux (open-source/"socialist"), while Apple commanded the smartphone market; although the (relatively) open-source Android systems finally caught up, that took time, a luxury the "second discoverer" of AI will not have. It will be a case of who's first wins, so we had better start now in ensuring that the rules of the race make the winner truly one that will benefit the world as a whole.