Tuesday, December 13, 2005

<opinion>Star Trek badges are almost here...


ucAsterisk is a project to breach the last barriers that stand before transparent telephony designs. It will combine Asterisk, ucLinux, and Open Hardware.

Once ucAsterisk's milestones have been met, it will effectively complete a transparent embedded telephony design, from the hardware designs through firmware, DSP code, and applications - all freely available under the GPL.

David Rowe has ported Asterisk to ucLinux and is now working on the open telephony hardware. This shouldn't be surprising, as David started the first company to build multi-port voice telephony cards with open source Linux drivers, and he has contributed heavily to projects like Bayonne, Speex, Asterisk, and Ctserver.

One of the goals of the ucAsterisk project is to build a PBX running on the BlackFin 533 target platform. This is a small, inexpensive (5USD in quantity), low-power (sub-watt), 500MHz CPU with some DSP functionality. To buy a development board and tools for a few hundred bucks, look here. The BlackFin PBX will be an important milestone.


ucAsterisk is a proof of principle and an omen. Telephony design is now open end-to-end.

ucLinux/OpenCores have finally brought together open hardware design and the dominant platform for open-source software design. For the next couple of years, this will be the most transparent platform for end-to-end open designs. Porting important applications to this platform is the best way to plant the seeds of completely open end-to-end computing systems (hardware and software) with practical, commercial applications.

Since Asterisk is the dominant Open-Source telephony application, it is now possible to build a dominant platform that is completely transparent, end-to-end. This is as near a foolproof recipe for creating an industry standard reference design as can be imagined.

This is another data point suggesting that there is nothing left to hide in this field, and no beneficial reason to do so (beneficial to consumers, that is). Another data point showing that the free exchange of ideas trumps any system of patents. Designs will move more quickly, they will be more secure and more innovative.

To me, it is just as important that this project feels right. It's a huge relief after being in this industry as long as I have to see proper reference designs developed in an open manner.

NOTE: If you haven't seen the OpenCores project you probably don't follow me. You should check it out.</opinion>

Sub 100USD Linux Boxen

This new tiny platform for Asterisk has me excited about toying with tiny platforms for Linux. So I'm taking another look at small, Linux-compatible boxen. Asterisk runs on all of the following, as they all run standard Linux.

Gumstix still looks like the most bang for the buck, although a little underpowered. 99 bucks gets you a very small, relatively slow little linux board with some super-cool features like bluetooth modules. Good deal.

If you want something turnkey and don't need lots of mips or a tiny package, get yourself a Linksys WRT54G series wireless router. OpenWRT has a lot of ported applications. At 80 bucks retail, including the wireless router, this little item is the low cost leader.

If you want something powerful and don't need it too small, used 500MHz laptops (or even 1GHz laptops with a broken screen) can be found on ebay for around 100USD.

Linux Devices has a small list of fairly turnkey stuff. Compulab has some sub-100USD boards that look good as well.


Let's face it though, the most attractive thing to you, geek, is the most Open system - the system you can actually engineer. To get there you need to geek out a little more, and, unfortunately, spend more money. The single most interesting hardware project out there today is probably OpenCores.org.

My goal is to find out how cheaply I can get FPGAs, and the hardware to program them, that will run ucLinux/Asterisk/Ethernet. There are OpenCores platforms that will run ucLinux and emulate RISC, Motorola, SPARC, ARM, etc.

I really haven't got a solid understanding as to which OpenCores modules will fit on which FPGAs. You have to really dig to get this kind of information from the opencores site (the forums are the most useful repository). A matrix of this type will have to wait for another article.

One problem that is recognized but unsolved by the OpenCores community is that the tools and components required to build an embedded system are not cheap. It looks like most people prove they have a design and then humbly request help from commercial vendors.

AFAICT, the cheapest FPGA development kit that will definitely run ucLinux on OpenCores is the Cyclone II from Altera, which supposedly will be available for 270USD (according to the Altera website). I don't know enough yet to evaluate that, but here is one evaluation.

The best way to get started might be to order a prototype board kit from a designer.

Ah, well, that was fun. Back to work on Rails.

Monday, December 12, 2005

Saving the Remote Environment


So I was at Superhappydevhouse when my thinkpad unexpectedly went into hibernation. It took me a little while to figure out that someone across the room had inadvertently flipped the switch that controlled the outlet I was plugged into, and my battery had died. The din in the room was loud enough that I could not hear its last, plaintive beeps.

My battery was resurrected at the flip of a switch, but I lost some time setting up application sessions, so I asked around as to how people save session data in their application of choice.

While there is no way to restore the complete state of my Linux system after a crash (a car-battery-to-thinkpad power adapter is on my list of things to do), there are quite a few things I can do to make things less painful. Here are the best of the state-saving helpers I looked at:


First, Emacs has the commands desktop-save and desktop-read.

This feature is directory-dependent; that is, Emacs will ask you which directory your desktop config file is in.

I have one emacs desktop file in each of my project directories now. The desktop feature doesn't re-tile your windows, but it at least brings up all the files you had open and, of course, emacs tells you which ones have auto-save data.

The single-desktop-per-directory limitation encourages subdirectories for different work modes.


I don't run most programs in crappy emacs shells, I use real terminals. I need something to save the state of all my terminals. I have finally been getting to know GNU screen. Good tutorial here.

Using screen's keybindings is painful. However, unlike gnome-terminal's bindings, they don't collide with the readline namespace. Alt-B really takes me back a word, instead of popping up some useless menu. (Update: I have since been enlightened by Eterm, and now run screen in an Eterm window.)

Screen runs as a daemon and manages consoles - sort of like a text-only window manager. Obviously, it can't restore running applications after a crash. However, run over an ssh session, it will happily run whether you are attached or not, so it's a nice way to work remotely. Evidently, it can also be configured to tee I/O for multiple clients, aiding in collaborative development.

Screen encourages one to work remotely on a more reliable system, which effectively means it encourages one to have a non-production machine that is internet-accessible.


Third, Firefox has SessionSaver.


SessionSaver is useful, but not exactly feature-complete. A few things I noticed:

One, to overwrite a saved session, you have to open the session, configure it as desired, right-click on the saved session to delete it, and re-save the current browser state under the same name.

Two, SessionSaver actually uses a good bit of CPU.

Three, SessionSaver explicitly suggests that you save your data via WebDAV and has some utilities to make it easier to do.

More encouragement for remote work. I'm starting to get the message...

The overall hint is that I should give in and store everything on a remote server. I could use screen, emacs, and webdav/sessionsaver to store everything there.


I had experimented with WebDAV long ago, so I re-installed it on a server I have access to. My primary goal is to get this going with Firefox SessionSaver. However, my secondary, more exciting goal is to get this working with the actively developed and nearly usable DavFS2 for the Linux kernel. WebDAV installation and usage is tangential to the omnibus remote-coding review of this article, but I *will* handle it in a subsequent article.
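Under the hood, storing a file on a WebDAV server is just an HTTP PUT, so Ruby's stock Net::HTTP is enough to script it. Here is a minimal sketch - the server URL, credentials, and file name are all made up for illustration, and the request is built but not actually sent:

```ruby
require 'net/http'
require 'uri'

# Hypothetical WebDAV URL - substitute your own server and path
uri = URI.parse('https://dav.example.com/sessions/firefox-session.xml')

# Use the real session file if present, else a stand-in body
session_data = File.exist?('firefox-session.xml') ?
                 File.read('firefox-session.xml') : '<session/>'

# WebDAV stores a resource with a plain HTTP PUT
req = Net::HTTP::Put.new(uri.path)
req.body = session_data
req.basic_auth('user', 'secret')        # made-up credentials
req['Content-Type'] = 'text/xml'

# Uncomment to actually send it:
# res = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) { |h| h.request(req) }
puts "#{req.method} #{uri.path} (#{req.body.length} bytes)"
```

Full DAV clients add locking (LOCK/UNLOCK) and collection listing (PROPFIND), but plain session-file storage needs nothing beyond PUT and GET.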


As long as we're on a roll, let's slap on another convenience utility for setting up our sessions that I have ignored for a long time: ssh-agent. Ssh-agent helps you manage passwordless ssh logins - here is a good explanation. If you run Linux, you should already have this tool, and every other tool I'm covering (with the exception of a quick SessionSaver download for Firefox). This particular tool is a better friend to system administrators than developers, but it *might* save me time in the long run.

Am I really better off?

So now I can quickly log in with ssh-agent, reconnect to my screen session, fire up desktops with emacs and sessions with firefox. If my laptop dies, very little time is lost. (And thinkpads are cheap, so I always have two of them running - very, very little time lost.) I figure in a week or so I'll actually be used to using them.

The answer is yes; I am better off. Philosophically, I have come to the same place I came to years ago when starting my last business. I recognized that I wanted every business application I touched on a daily basis to be a web-application. Personal mobility and data integrity are worth spending time and money on.

So what happens when my server dies?

Well, my server has RAID 1 and is probably in the most reliable colo my friends can afford. Unless it gets maliciously cracked, I think it's going to stay up and hold on to my data.

However, these tools have made me a little more dependent on one particular server and an internet connection. They have also made me a little more security conscious.

Hmmm...I guess I don't know if I'm better off after all. This clearly and definitively proves that you need to read very bad haiku to put your pain in perspective:

Switch to remote work
Trading worries for worries
Emotional wash

Deposit days now?
Why tilt at crazy windmills?
silent dividends...

Synchronize, Backup
Endless server maintenance
Gag me with - - - a spoon.

No! not the next blog!
I have a better idea?!?!?.;
It hurts, I know-Yiiiiiii!

(Update: I should mention ELinks somewhere, which is my ideal reader for primarily text-based web sites. You'll want to read its FAQ. Try it with Gmail vs. any other browser and I think you will agree. Blazing fast, and no advertisements. It supports JavaScript as well. Just the ticket for a text-based environment.)

Monday, November 28, 2005

<rambling> Web Ontology and the man.

RPM idea

Resale Price Maintenance (RPM) commonly refers to the action of forcing resellers to keep their prices above a certain level. It's sometimes called Retail Price Maintenance and it's illegal in most places. It's also so easy to do that only a vegetable could get caught (that's an actual quote from an experienced law enforcement official).

Don't be a vegetable. Stay in the government's "Meat" group by only Suggesting MSRP in writing.

The easiest way to fix prices among resellers is to tell all your vendors that you really like it when prices for your product are maintained at a certain level, and, nod-nod wink-wink, you are thinking of dropping some vendors that you do not like.

Nod and wink too much and someone *might* catch you doing it on tape. Not to worry, we'll discuss the alternatives. But first, why would a manufacturer want to do this dastardly deed?

RPM is a very handy tool for manufacturers. RPM means consistent profits for vendors. RPM maintains profits for small vendors and vendors who offer excellent service. RPM maintains the perceived value of the product, increases the likelihood that vendors will spend time marketing your high-profit products, and generally contributes to a quality reseller network and a quality end-user experience.

RPM is illegal because it definitely doesn't contribute to the consumer's pocket-book (directly). Controlling RPM is of questionable value if competition exists at the manufacturer level. It gets more complicated to evaluate when you consider intellectual property law and other factors affecting competition. Anyway, it's an interesting problem to think about, so here we go.

Let's play the "bad guy". Here's a way I came up with to maintain prices among your resellers without nod-nod wink-wink agreements. After going cross-eyed reading research papers I haven't been able to figure out what this method is called, so for now we'll call it the bodo method.

Here's the situation: You have a long-distance telephone service network. You want all your resellers to buy at 50 bucks a minute and sell at 100 bucks a minute.

Fix your sell prices to everyone, no matter who they are. In your reseller contract, tell your resellers that you will sell only to certified resellers in all cases but one: you reserve the right to sell direct to customers who present a receipt or quote below MSRP. Obviously resellers can't match your 50 dollar price. RPM is not prosecutable if you are offering the customer a *lower* price than someone else. You gave absolutely no instructions to anyone to sell at a specific price, or to keep prices high. Your resellers do not have to be students of game theory to figure this out, and everyone will sell at "suggested" MSRP.
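To make the incentive concrete, here's a toy Ruby sketch of the reseller's payoff under this scheme. The 50/100 numbers come from the example above; the payoff rules are my own simplification, not anything from an actual case:

```ruby
# Toy payoff model for the "bodo method": wholesale 50, MSRP 100.
WHOLESALE = 50
MSRP      = 100

# Profit per sale for a reseller quoting my_price:
# - above MSRP, the customer buys elsewhere
# - below MSRP, the manufacturer reserves the right to sell direct
#   at 50, so the undercutter loses the sale anyway
# - at exactly MSRP, the reseller keeps the full margin
def reseller_profit(my_price)
  return 0 if my_price > MSRP
  return 0 if my_price < MSRP
  MSRP - WHOLESALE
end

[40, 75, 100, 120].each do |price|
  puts "quote #{price} -> profit #{reseller_profit(price)}"
end
```

Every reseller's best response is exactly MSRP, and no price-fixing instruction ever gets written down.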

All RPM schemes work better if you have some power in directing customers. For instance, let's say you have a website that refers customers to vendors. Someone is at the top of that list, someone is at the bottom, and some vendors don't make the list. How do you know who to put at the top?...say no more.

Of course, you need to keep track of what your vendors are doing. It's just good business to give customers a perk if they register a product with you, and to have them show their receipt to do so. This way you don't *have* to demand reports from your vendors. Anything to keep 'em happy ;)

Not knowing much about RPM investigations, I can still hazard a guess as to why most economic safeguards are ineffective. First, if even I, a lowly hu-man on caffeine, can think of unmeasurable ways to bypass economic safeguards, then anyone can. Second, the govt. can't effectively police something like nod-and-wink RPM; there are just not enough investigators. The only way to find this stuff and put countermeasures in place is by doing some statistical analysis and investigating the right people. However, the government doesn't have the data.

But what if they did?

Web Services Nirvana for you, me, and Uncle VAX

What if you purchased your long distance services as web services? Today, web services are trolled for by humans and compared to one another in spreadsheets and code. However, in science-fiction land, you just tell a bot to do all that stuff - cruise the markets, discover services, and run the standardized tests on standardized APIs. There is no human intervention. Heck, there are no humans! We turned most of them into green pellets and made the rest into slaves! No, today is another bright, beautiful day in the land of robots. So the end user gets phone service that has audio quality tested between the endpoints we care about, and great prices. The vendors change all the time, subject to the bot's search results. Kind of like a super-duper Least Cost Routing with web services. Same goes for every service the end user ever buys. Nice.

But wait, the manufacturer robot can't cheat anymore! RPM is impossible; not because we are probably cutting out any possible reason for a reseller network (a good point, but it has some holes), but because we have standardized! Web Ontology has made the transactions easy to measure. RFIDs are going into cash, and you have to assume that this is going to put lightbulbs over a few heads.

The web, after all, is where almost all our commerce is going to occur, so Uncle VAX's nefarious surveillance tactics will be web-based.

Cash is officially deprecated. Who cares about cash? Privacy advocates care, for one. Those who accept cash have RFIDs to bridge the gap to the web, and Uncle VAX will soon require that they report such transactions electronically.

As soon as we step into that brave new world of web services, Uncle VAX is going to recognize the opportunity. A standard! He will demand electronic reporting and create some bloated XML specs for it. (Just when we were crawling out of XML soup, they drag us back in!) And Uncle VAX will not only be able to track every sale of every service ever made, but he will know exactly how to compare each one to the other, and he will recognize every trend. The data will not only show up RPM, but most tax evasion and other things he cares about.

When he's got regulation of services in place, he will regulate products the same way. Cash is dead. Transactions will report themselves ontologically via the wireless web server in your implant to...THE MAN. Yes, THE MAN likes WebOnt.

I don't see any immediate danger. The US government does have XML reporting initiatives, but in their current state they are for voluntary filing, not real-time reporting. Real-time ontological reporting will require oversight. What happens when a government no longer asks for information of any kind, but requires access to it at will? Well, clearly, any competent government with that much power will never be peacefully replaced. </rambling>

Thursday, November 17, 2005

XSLT development with ruby - picking text nodes

Setting up for XSLT development

If you plan to use XSLT in your ruby program, read on:

Let's assume the command line is a more agreeable code viewer than your graphical web browser and write a minimal XSLT processor. Make sure you have an up-to-date libxml2, libxslt, and ruby-xslt, then write the following into min.rb:

require 'xml/xslt'               # ruby-xslt bindings for libxml2/libxslt
xslt = XML::XSLT.new()
xslt.xsl = IO.read("test.xsl")   # the transform (stylesheet)
xslt.xml = IO.read("test.xml")   # the source document
out = xslt.serve()               # apply the transform, returning a string
print out

Setting Emacs up for three windows (one for test.xsl, one for test.xml, and one for output) and typing M-! ruby min.rb (or C-x ESC ESC to repeat it) made for a fine XSLT development environment for me. Emacs has a nice xsl mode, too. If you look around you'll see that this program is equally trivial in any language.

The best introductory article I've found on ruby-xslt is Alex Netkachev's, and the best mailing list I have found for XSLT help is xsl-list@lists.mulberrytech.com.

I'm currently reading Inside XSLT by Steven Holzner, and I'm glad I found it. XSLT is one of those markup languages that takes some real time to master (it's Turing complete!). A major goal of the W3C working group is to make XSLT2 easier to learn and use. Ah well, a bit late for me.


In most cases I can think of, writing an XSLT transform (stylesheet) is probably easier on the programmer than interfacing to a tree-parser, reading, transforming manually, and writing out the result. However, you have to weigh that against the fact that it takes a couple days to learn XSLT and implement your first practical, non-trivial transforms.

Learning XSLT does expose you to lots of XML standards (namespaces, XPath, XBase, etc.), so if you are behind on learning them, you might just consider an XSLT project a practical means of learning XML standards. I'm glad I did.

And the number one reason to learn XSLT is that I now know a good bit of it ;). For the usual multitude of reasons, the more people who know a language, the more useful code written in it can be for all of us.

While studying I wrote a well-documented example of using XSLT to pick text out of XML, including some of the tricky parts of the easy parts. In particular, this little example explores how to deal with text in nested tags.

The XML file:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!-- test.xml - Source XML for self-explanatory XSLT exercise -->
<Document> <!-- root tag; the original name was lost in posting, so this one is assumed -->
This output is a result of applying the transformation test.xsl to
test.xml. Default XSLT processing rules will handle anything we don't
specifically handle ourselves in test.xsl (i.e. this paragraph).

<SomeGroup>
<Radio crud="blah">First, we would like all <foo>"Radio" tags
contained within tag set "SomeGroup"</foo> to be processed
identically, copying the text within the tags to the output.
</Radio>
<Radio crud="hooha">The text of any inner <foo>tags will be
copied as well. As you might expect, text in the .XML file
that is not specifically handled is</foo> copied to the output by
default.</Radio> However, "SomeGroup" *is* specifically handled.
</SomeGroup>

<Radio crud="watoozie"><foo>Second, when we encounter any "Radio"
tags outside of "SomeGroup" tags - we will print only the content of
the "foo" tags within those "Radio" tags .</foo>So don't print
</Radio>

<YetAnotherTag blah="and don't print this," yadayada="Third, we will
print this attribute. ">...but not this text.</YetAnotherTag>

<YetAnotherTag who="no printola.">Fourth, <foo>don't print this
inner foo tag,</foo> print this text followed by the tag
</YetAnotherTag>

<YetAnotherTag>Auugg! <foo>And fifth, print only this foo tag
text.</foo> Muhuhuhahaha!</YetAnotherTag>

<ForgottenTag>Unfortunately, we will forget to handle
this...<YetAnotherTag> But fortunately there is a catchall handler
for YetAnotherTags in our xsl file, so the inside bit won't print.
</YetAnotherTag></ForgottenTag>

Bye! <!-- Some literal text to test -->
</Document>


The XSL file:

<?xml version="1.0" encoding="utf-8"?>
<!-- test.xsl - Transform for self-explanatory XSLT exercise -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<!-- Handle the root tag -->
<xsl:template match="/"> <!-- Plain text in an XSLT sheet gets sent to the output -->
Hello! I'm covering all the basic XSLT I needed to get most
simple things done. Note that newlines after tags in the .xsl
file count. Whitespace processing is left for another
<xsl:apply-templates/> <!-- Apply all of the templates below to continue processing this section -->
</xsl:template>

<!-- Handle "SomeGroup" tags -->
<xsl:template match="SomeGroup">
  <!-- Handle Radio tags that lie within SomeGroup tags -->
  <xsl:for-each select="Radio"> <!-- loop through the Radio tags and print everything they contain -->
    <xsl:value-of select="."/>
  </xsl:for-each>
</xsl:template>

<!-- Handle the YetAnotherTags -->
<xsl:template match="YetAnotherTag[1]"> <!-- match only the first YetAnotherTag occurrence -->
  <!-- print the yadayada attribute -->
  <xsl:value-of select="@yadayada"/>
</xsl:template>

<xsl:template match="YetAnotherTag[2]"> <!-- print only the text nodes in the parent -->
  <!-- value-of only gets the first match, and we want them all -->
  <xsl:for-each select="text()">
    <xsl:value-of select="."/>
  </xsl:for-each>
</xsl:template>

<xsl:template match="YetAnotherTag"> <!-- print only the text in the child elements -->
  <xsl:value-of select="*"/>
</xsl:template>

<!-- Handle any "Radio" tags that are unhandled thus far -->
<xsl:template match="Radio">
  <xsl:value-of select="foo"/>
</xsl:template>

<!-- That's it. If there is anything I didn't handle, I might not like the results -->
</xsl:stylesheet>

You can also pick text nodes that start with specific text by using the XPath starts-with() function in a predicate, e.g. select="text()[starts-with(., 'Fourth')]".

The Output:

Hello! I'm covering all the basic XSLT I needed to get most
simple things done. Note that newlines after tags in the .xsl
file count. Whitespace processing is left for another

This output is a result of applying the transformation test.xsl to
test.xml. Default XSLT processing rules will handle anything we don't
specifically handle ourselves in test.xsl (i.e. this paragraph).

First, we would like all "Radio" tags
contained within tag set "SomeGroup" to be processed
identically, copying the text within the tags to the output.
The text of any inner tags will be
copied as well. As you might expect, text in the .XML file
that is not specifically handled is copied to the output by
default.

Second, when we encounter any "Radio"
tags outside of "SomeGroup" tags - we will print only the content of
the "foo" tags within those "Radio" tags .

Third, we will print this attribute.

Fourth, print this text followed by the tag

And fifth, print only this foo tag
text.

Unfortunately, we will forget to handle
this...

Bye!

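As a footnote to the example above: the same text-node picking can be done from plain Ruby with the stdlib REXML parser, since XPath's text() node test and starts-with() function work there too. A quick sketch (tag names borrowed from the example; nothing here depends on ruby-xslt):

```ruby
require 'rexml/document'

# A toy document shaped like the example above
doc = REXML::Document.new(<<XML)
<SomeGroup>
  <Radio crud="blah">First bit of text <foo>inner foo text</foo> trailing text</Radio>
</SomeGroup>
XML

# All text nodes directly under Radio - excludes the foo element's text,
# just like select="text()" did in the stylesheet
radio_text = REXML::XPath.match(doc, "//Radio/text()").map { |t| t.to_s }

# A text node that starts with specific text, via an XPath predicate
first = REXML::XPath.first(doc, "//Radio/text()[starts-with(., 'First')]")

puts radio_text.inspect
puts first
```

This is the manual tree-parsing alternative mentioned earlier - fine for picking out a node or two, while the stylesheet wins once the whole document needs transforming.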

Wednesday, November 16, 2005

A humble personal blog

Friends, here I'll log all the techie commentary/ideas that have no rightful place anywhere else. This blog should keep a lot of that crap out of your chat windows, mailing lists, and newsgroups.

WARNING - The average visitor will find this blog about as interesting as a Klingon would find proofreading the Geneva Convention. (As an occasional Klingon diplomat at sci-fi conventions, I can attest that that's ghay'cha' petaQ.) Seriously, for anyone who has been misdirected here by chance, you are welcome, but I won't be posting anything significant.

MVC Frameworks

I intend to write a web app some day soon. I'm continuing my retraining for that purpose, and I'm beginning to discover exactly how much of a newbie I really am at the coding game.

Unfortunately, one can build just about any telephony server with minimal scripting (only partly my fault). Years of minimal scripting, it turns out, are devastating to real coding proficiency. Still, things are familiar enough that I need only survive the usual newbie ridicule on mailing lists/IRC for a month or two at most.

I've begun evaluating three MVC frameworks: Maypole, Rails, and TurboGears.

Maypole is super-basic, and would work for me if I had not rediscovered my dislike for Perl. So I gave it up shortly after getting it running and documenting it a bit. If you like Perl and want a simple MVC framework, Maypole is a good bet.

Rails required a LOT of ramping up because Ruby is not quite like Perl or any other language that I am very familiar with. So I've got both the Programming Ruby and the Dev with Rails books and I'm chugging through them. The Pragmatic books are well written. After learning more about XSLT, I'm a little surprised that this and other frameworks would rather employ templates that are half code (or a non-standard transformation language) and half HTML rather than separating out code and XSLT.

I'm also experimenting with TurboGears on the side, but I can't take on both of these at the same time. So far it looks great. The turbogears.org site is an example of the awesome web presentation that even the youngest Open Source projects have these days. Commercial competitors would be jealous.


I've just discovered libxslt and started writing some transformation scripts. If anyone has done a lot with XSL transformations I would love to get a pointer or two.


I'm also starting a VoIP Technical Interest group in the bay area with a couple other people - BAVoIP.org. We will probably finish that website up and have our first meeting before the end of the month.


I'm still super interested in the same types of analysis I have always been interested in, so I'll gladly fritter time away with you discussing any of it.

And no, my human side does not consider the Geneva Convention to be weak and worthless. In fact, it's one of the few glimmers of hope the hu-mans have shown us - qapla!