Wednesday, January 25, 2012

Auto-autos - a little less biology in the loop


What I hear at the water cooler

I recently visited the Silicon Valley Open Source Automotive meetup at Hacker Dojo. Required reading ahead of the meeting was, ostensibly, the Wired article on auto-autos. You kind of have to read that one as a backgrounder, but the summary: Google already has dozens of robotic cars on the road driving around with you, and they are better drivers than humans. It's been that way for a while. Yeah. I was surprised too. But wake up and smell the robotic equivalent of coffee, whatever that is. (Or just watch these robots make breakfast instead.)

Anyway, I had a nice chat with some knowledgeable people, and heard opinions similar to those of the electric car junkies at CES. Consensus at the meeting was in line with the Wired article concerning the timeframe for commercial availability of auto-autos: 10 years or so.

What seemed to be totally unknown was the business model. Even the little baby steps of car automation - auto-parking, nicer computing systems with heads-up displays, etc. - are out of reach of the average consumer due to price. Actually, I can't immediately think of any future car technology that would lower the initial purchase price of an automobile. So although everyone seems to accept that self-driving electric-hybrid robotic cars are the future, no one has a clear grasp on the business model that will enable that future. FWIW, from what I hear, that is the consensus at all the clued-in major auto makers.

What we are on the cusp of is a form of transportation with a 10X advantage over what exists (at a higher initial price). You can park it anywhere, and you can let it drive you automatically. It's safer, cheaper on energy, and frees up your time. It's like having a driver pull up when you want him and chauffeur you around. With luck, an auto-auto can drop you off and park itself somewhere convenient (seriously, who wants to spend their time parking, or to pay someone else to drive). Rich people will just buy these things outright, but the rest of us need another model.

A cursory amount of research reveals that big rental car companies are already renting electric cars. Local government intervention and purchase of advanced vehicle fleets occasionally pushes clean car tech as well. Without conflating electric cars with auto-autos too much, I believe that the markets and challenges are similar for both technologies.


Spin we're likely to hear

Briefly, I'll mention psychology. There is, of course, fear - range anxiety, and any other anxiety you can whip up in people about new car technology - and this will definitely kill some individual purchases. Certain segments of the American market which are driven primarily by fear are ... unretrievable. Those people will buy only what they are told to. But with a 10X value advantage, the rest of us will get over that fear. Interesting thought: corporations are motivated to uncover the facts behind the FUD to improve the bottom line - corporations are FUD creators, not FUD consumers! Corporations are not easily scared away from value.

The energy industry has to be considering which side to weigh in on. Oil demand destruction is not in line with oil company interests - interests that can afford the best congresspeople money can buy. Exxon predicts that by 2040, 60% of cars will be gas-only and 40% hybrid, with a negligible sprinkling of fully electric vehicles. That's not in line with what I've been hearing elsewhere. Interestingly, though, they expect all oil demand increases to come from the commercial sector. On the other hand, it seems at least some public utilities might stand to gain power through charging and other infrastructure. They are also fleet customers.

Finally, there is no legal framework for robots driving cars, particularly without a human in the loop. However, everyone seems to be looking the other way right now. This is where the opponents will hit hard and fast, buying congressmen like it's Black Friday.

Smarmy smarty-pants ill-informed theory

It seems that fleets are the current wedge by which electric vehicles and auto-autos are entering the market. And since even electric car nuts won't pop for the first auto-autos, renting from fleets makes sense, à la the carsharing models of Zipcar, U Car Share, PhillyCarShare, Hertz On Demand, etc. Auto-autos actually make perfect sense for these fleets from an insurance perspective as well as an efficiency perspective, especially if the law will look the other way for driverless auto-autos.

Cab companies will love them as well. Cabbies will not. If there is any middle class left in the US in 10 or 20 years, they will probably be more likely to call an auto-auto than a cabbie to get to work.


First we replace the horse, then the rider.

We used to ride horses around. That lasted for, what, 1,000 years? It's still cool to own a horse and ride it around - for fun, I mean - but it's getting rare. Farming is done by robots, and self-flying planes are almost there. Automatic cars are so close we can taste it. Tastes like robot.

Also see Brad Templeton's stuff:

talk slides

website

Monday, January 23, 2012

We are all Agnostics now.



David Brin recently spoke at the Singularity Summit, a place where the attendees are, for the most part, transhumanists. They are there to discuss the creation of "small" gods - things just a little better than humans in some way, that will improve themselves.

I know that Mr. Brin recognizes that even these small gods, once self-improving, will be beyond detailed human understanding. He counsels the would-be creators of small gods to better understand those who worship "large" gods - gods that create universes like ours, such as the one described in the Bible. To that end, his primary weapon is interpretation of the Bible.

This creates a room full of agnostics.

No one who believes we can make a small god should believe we can understand a large god. As long as these people interpret the Bible, they represent a belief in their interpretation, and therefore create non-overlapping magisteria. Basically, Brin removed the conflict between science and religion for those who will take his advice.

Brin is saying we need to rationalize in order to associate in order to survive. As it turns out, any suitable rationalization makes us nicer, more accepting people. I am proud that David Brin is one of us.

His final point frames science as appreciation of God. He proposes that when children perform scientific experiments, their wonderment is a finer, more true appreciation of God's creation than a rote prayer or words extolling His greatness without really *appreciating* His great works. Scientific revelation is like someone saying to God, "I truly understand you, and I am truly impressed," and then proving it.

On the topic of appreciation, I have often thought that the universe is a self-organizing system that learns to appreciate itself.
But only for a moment. We are that spark of insane emotion burning brightly.
Humans are the love and the insanity in the universe, quickly extinguished.
Love oneself, respect others, a natural state of affairs, the best that can be hoped for, independent of scale, among parts of a whole.
The dream of a cohesive, thinking universe isn't a bad one. Better to be a cooperating part of any vastly superior force. No? Why not think of ourselves as the love in the universe?

What can I say? Humans rationalize compulsively.


Thursday, January 19, 2012

Robot Arms Race



Continuing to catch up on Robot Futurism - I re-read Bill Joy's article in Wired from over 10 years ago, Why the Future Doesn't Need Us, along with numerous rebuttals, references, and relevant discussions.

The central premise of Joy's article is that we are creating dangerous stuff, and maybe we want to think about regulating ourselves. As GNR (Genetics, Nanotechnology, Robotics) tech advances, it gets easier and easier to destroy all human life. We need to consider slowing development of GNR tech until we can properly defend against its destructive potential.

Most of the rebuttals to his paper suggest that:

A) It's not that easy to create world-destroying stuff with GNR.
B) If the good guys don't figure out GNR tech, the bad guys will, so we need to go full steam ahead with fundamental research.
C) GNR applications are a moral imperative for quality of life reasons and the advantages outweigh the risks.
D) I don't like Bill Joy because he quoted the Unabomber.
E) Anyway, you're right, we need to figure out how to regulate ourselves.

Most of the GNR threat currently sits in the biotech camp: development costs will continue to shrink for weapons that could kill humans and spread quickly, with total disrespect for borders.

My conclusion is that labs that study these technologies need to be reasonably careful. The threat isn't huge now, but it will be someday. As with Brain's article on Robot Economics (see my last post), the threat is vastly more serious if we are looking a dislocation in the face, rather than a very gradual ramping up of world-changing technologies. No one seems to know which we are looking at. A "Center for Nanotech Control" should probably be funded at a level commensurate with the threat. Depressingly, that's not happening (I hope I'm wrong, I hate being depressed, so please feel free to correct me).

In fact, relatively little has been written on the topic of nanotech defense in the last 5 years. There was a flurry of activity around Joy's article, and that was that. It may not have helped that most of the predictions made publicly by prominent futurists around that time were significantly off. People just seem to have lost interest.

I guess we're just going to have to be happy with the amazing benefits of GNR and not think too hard about the dangers for a while. We'll live forever until we're dead. Wait...

Tuesday, January 17, 2012

Robot Economics

For my one reader, a quick summary of the last of Marshall Brain's articles in his robot series.


Marshall is speculating 30 years out from trends present today.

Robots = Greedy Bastards

He concludes that the centralization of wealth will become extreme as automation (which he calls robots) puts people out of work. So he comes up with a number of taxation schemes to generate a welfare state when this happens, without wanting to call it a welfare state. Well enough. He doesn't take it further than that.

He is probably right about the unemployment issue, because unlike other technologies, robots will create robots. We are making things that are as good as or better than we are at most tasks. We are no longer making tools that require lots of humans to operate. That's really the fundamental point - most of us will not be needed in this industrial revolution - and why this is worth reading and thinking about.

Orwell is looking smart right about now

It is an interesting read, because it puts you in the mindset of considering 50+ percent unemployment. It is very hard to imagine a way to transition from 10 percent unemployment to that level without generating much more tax revenue. Any other state is a war zone. A fiscally conservative state with few social services would be immediately crushed by internal forces. We have to consider either a very high level of taxation (aka collective control, aka socialism) in a future that rapidly adopts a high level of automation (aka a robotic workforce), or a move to an Orwellian model to repress our citizenry (which is where we are going).

It is, of course, implied that the same or higher level of productivity will be obtained with only half the human workforce, and that our governments are not configured to act in advance of rapidly decreasing employment. We will be in a real pickle. It sounds reasonable as you read it. Over fifty percent unemployment never ends well.

So Brain throws out some plans for socialism (although he picks a narrow definition of socialism to escape this much maligned word). He doesn't take it too far, just covers some possible methods for collecting taxes.

There may also be principles involved when constructing a capitalist-socialist-robot utopia.

For one thing, you want to keep incentives aligned toward innovation. Take only the amount required from the captains of industry to provide a reasonable quality of life for the unemployed. Arguably, you want the unemployed to be innovators, free to create jobs for themselves and others, but you also want them somewhat hungry - their lives not full of stress, but lacking highly desirable comforts. So you start with the need and work backwards: the captains of industry must raise enough money to support the unemployed at a reasonable quality of life; beyond that point, they can make their lives as ridiculously good as they like with any extra earnings. Taxation methods can then be derived from this principle.
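The "start with the need, work backwards" idea can be sketched with some toy arithmetic. All the numbers below are hypothetical placeholders of my own, not Brain's figures - the point is only the direction of the calculation:

```python
# Toy back-of-the-envelope model of the "start with the need" principle.
# All figures are hypothetical placeholders, for illustration only.

def required_tax_rate(workforce, unemployment_rate, stipend, gdp):
    """Fraction of total output that must be taxed to fund a basic
    stipend for everyone displaced by automation."""
    unemployed = workforce * unemployment_rate
    need = unemployed * stipend  # total support required per year
    return need / gdp            # fraction of annual output taxed

# Hypothetical numbers: 150M workforce, 50% unemployment,
# $25k/year stipend, $20T annual output.
rate = required_tax_rate(150e6, 0.50, 25_000, 20e12)
print(f"{rate:.1%} of output")  # prints "9.4% of output"
```

Even at 50 percent unemployment, the need-first framing turns the question into a single knob (the stipend level) rather than an open-ended fight about tax schemes - which is roughly what Brain is after.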

Taking that to the extreme, you have to consider what happens when this morphs into a society in which all work required to maintain the basic necessities of human life is done by robots, including maintenance of the robots, generation of the energy required, etc. This will raise the bar of social welfare to a fairly high level. It's a long way out, and quite a speculative story, but possible at some point. A lot of fundamental industries will run themselves, and the captains of industry themselves will be outmoded. Every human will be focused on innovation and new industries, as the old ones become, basically, part of nature. OK, well, maybe I'm an optimist.

Overall, I found Brain's robot articles a good way to encourage a geek like myself to think about the future of economics. What would you do for a living if you had your basic needs met for life, but still had the opportunity to make anything of yourself if you worked hard at it? The answer is, ultimately, that you would seek your potential (metamotivation, as per Maslow). I know I am metamotivated, and I find others in the same state more fun to hang out with than those who are not. What model makes that world possible? Not the one we have when unemployment hits 50 percent.

And what about those whose metamotivation tends toward evil? The more things change, the more they stay the same. Hopefully our robot overlords won't have time for that shit, and I'm definitely not risking my exoskeleton for their greedy asses.


Marshall also wrote the following two articles on the same topic:

http://marshallbrain.com/robotic-faq.htm
http://roboticnation.blogspot.com/