- Personal Software Process
- Mongrel Book
- PeepCode Screencast on webserver benchmarking, with Zed Shaw
Interviewed by Geoffrey Grosenbach
Geoffrey Grosenbach: It is the Ruby on Rails podcast. I’m Geoffrey Grosenbach. Interview today with Zed Shaw. Disclaimer: there’s some slight profanity, and male body parts are mentioned. However, I think it is one of the best interviews I have been a part of. I hope you enjoy it.
So I got an email a few months ago from someone that said PeepCode is great but the Rails podcast is getting a little stale. That hurt. Who is the most interesting, intelligent, and provocative individual I could talk to? And of course Zed Shaw was the first person I thought of. Right here in New York City, in the offices of East Media, where I believe most of the original code of Mongrel was written. Is that correct?
Zed Shaw: Yes, actually, well, no. Most of the code was written in my very expensive, very small New York apartment. Then I came here and blessed East Media with all of my wonderful genius so they could make some cash. Yeah, whatever, guys. But no, I did most of the testing here. It was on a project that I had to do. Can I tell them what the project is? Yeah, so it was the same project for OpenID, so it had to be rock solid, secure and everything. So this is where Mongrel became a viable server and grew up very quickly. It got dogfooded just about every night and refactored during the evenings. Four a.m. with Matt, eating Chinese.
Geoffrey: Now was the plan always to make it open source? Or did you just eventually feel like, hey, we have a good thing, and VeriSign was willing to make it open source? How did that happen that it got given to the community?
Zed: Well, I actually started it as LGPL code right off the bat. Most of that was because I was actually working at the New York City Department of Correction. They said, yeah, you can release it LGPL, and there you go. You know, because I kind of had to confirm with them real quick. But really I was doing the coding on my own. You know, so it was kind of my own thing. You know, that was the way it went. It was LGPL. Then when I started working with VeriSign and then East Media and Matt and all of them, we just kept it LGPL. Also, I kind of did not want to sell it or do anything with it. I just wanted to give it out.
It was more fun building it up and destroying all competitors with my little pinky finger. It was fun. Then later on I changed the license to Ruby’s default license so that Apple could include it, because they were definitely afraid of the GPLv3. I’m not sure, maybe they imagined Stallman coming in in a Godzilla costume and destroying Cupertino. I am not sure. So that is why it is under the Ruby license right now.
Geoffrey: Now, with all the New York connections, I am surprised it was not called Sewer Rat or something. How did you come up with the name Mongrel?
Zed: I guess Sewer Rat actually would have been a better name. No, so, Mongrel. I have this problem where I will work on projects, but I can’t really work on them until they have a name. Then when the name hits me, that is when I actually work on it. I actually have about 50 projects sitting in a directory that I don’t touch because I can’t find a name. One of them is like Zed CRM. That is not a good name. With Mongrel it was, I hated Tomcat.
I like dogs better than cats. Cats hate me. I have been bitten twice by cats. Dogs love me. So it was going to have a dog name. It was written in part Ruby, part C. So I said, it is a Mongrel. It just fit; it was perfect. You search Flickr for pictures of Mongrel and you get great pictures like the one that is on the main website.
Geoffrey: I am also interested in the marketing of open source projects. For a lot of people, if something becomes popular, maybe personally they will get consulting gigs or whatever. Yet for a lot of open source projects people just like the fact that other people are using it, and it’s being contributed to or whatever. But even now a lot of people think, “Well, Mongrel, it’s super fast. It’s secure. And it’s reliable.”
In reality, yes, that’s true. The speed part is really on par with FastCGI. Was that something you intentionally tried to market? Or did that just kind of happen, and people ended up believing it on a very large scale?
Zed: Well, as you know, I’m a complete and total liar about everything. So, the way to destroy FastCGI was to tell people that Mongrel was faster. That’s only partially true. What ended up happening was, originally it was faster. When it was just a very small web server it was a lot faster than FastCGI, but once you add all the gear that you really need to run a web protocol on a web server, it gets a little slower.
And also, just the way FastCGI does its I/O gives it an advantage. So I tell people nowadays that Mongrel’s a little slower than FastCGI. Sometimes it’s faster. But the main thing is that you can’t extend FastCGI. Mongrel you can extend. You can add your own handlers. You can beef that thing up. You can push that stuff out the door, all those things. You can’t do that with FastCGI. It’s not about speed, it’s about the potential speed. So FastCGI can go faster, but it’s not as extensible as Mongrel.
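The handler extensibility Zed is pointing at looks roughly like this in Mongrel’s Ruby API (a sketch: the stub response object stands in for Mongrel’s real one so the snippet runs without the gem; the real wiring is shown in comments):

```ruby
require 'stringio'

# Stand-in for Mongrel's response object so this sketch runs without
# the gem: the real one also yields (headers, out) from #start.
class StubResponse
  attr_reader :status, :headers, :out
  def initialize
    @headers = {}
    @out = StringIO.new
  end
  def start(status)
    @status = status
    yield @headers, @out
  end
end

# With the real gem this would be `class HelloHandler < Mongrel::HttpHandler`.
class HelloHandler
  def process(request, response)
    response.start(200) do |head, out|
      head["Content-Type"] = "text/plain"
      out.write("hello from a custom handler\n")
    end
  end
end

# Real wiring would look roughly like:
#   server = Mongrel::HttpServer.new("0.0.0.0", "3000")
#   server.register("/hello", HelloHandler.new)
#   server.run.join

resp = StubResponse.new
HelloHandler.new.process(nil, resp)
puts resp.out.string
```

The point of the design is that anything responding to `process(request, response)` can be mounted at a URI, which is what FastCGI never gave you.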
Now, the marketing of it was… I had Mongrel up, and we were getting traction from it at East Media. It was a lot of fun. I put it out, and then the problem is that people won’t really use tech unless there’s some kind of marketing thing that sucks them in. It’s kind of sad. I mean, it really should be that tech kind of wins out, but no, you’ve got to have the gimmicks and the marketing and all that stuff. So my marketing plan was more to be semi-subversive, anti-marketing marketing.
So if you notice, if you go to the Mongrel website and look up how to become certified, you’re not really getting a certification recommendation. It’s called the MUDCRAP-CE certificate, and it basically tells companies to go screw themselves. So it’s marketed, but it’s marketed in a really fun way.
What I did, the really funny part was, I grabbed the template from openwebdesign.org or something like that and downloaded it. I switched out a bunch of the photos with some from Flickr. Gave the dudes credit for the photos. Gave the one who did the template credit. It took me a whole seven hours. I used WebJam to write up the content and the copy that’s in it. And I put it up online. And if you go look at the graph of downloads, you can see that month when I put it up, and it’s like a ten-times increase in the number of people going to it. I’m probably exaggerating, but it’s a huge increase.
So the sad side of it is that you do have to have some sort of marketing. I think a lot of open source projects are all viral marketing. They’re all just people who are in the know kind of talking about it and stuff. But for the most part, if you have some kind of very simple, straightforward marketing plan, it really helps.
And I think the number one thing that helps is really good documentation. If you don’t have good docs, people can’t figure out your stuff. And actually, if you can’t write good docs, your stuff probably sucks; it’s probably not usable. So if you’re able to write about it, then you’re probably able to use it.
So if you look at Mongrel, and you look at the code, and you look at the documentation, I think there’s twice as much documentation as there is code in the Mongrel project – just the comments alone. And then we’ve got docs in the directory, and you can download all the raw documentation for it. And we’ve got a book. Just the documentation for it dwarfs the actual code for Mongrel, and I think that’s really what gives it its advantage.
Geoffrey: Well, like you said, not only can you extend Mongrel itself, but it works well with other servers, Apache and Nginx. On Saturday you were mentioning something briefly about how you’re using a Rails app page cached within Mongrel, and then server-side includes within Nginx to really make that a lot faster. How does that work?
Zed: So right now what I’m playing with – and we’re still trying to see if it can work completely as well as I think, but so far the tests are showing it’s really good – you have this problem where you have page caching, and you have a little part that has to be dynamic. So 90 percent of the page can be static, and you just need this one little part to be re-updated or refreshed or something. Maybe even that little part just has a different timing on its own page caching or its own partials, so what people use now is partial caching.
What I want to do, and what I’ve been playing with, is you can use SSI – Server Side Includes – and Nginx has the ability to actually make HTTP requests on its includes. It also has the ability to do block replacement, so you can put in stub content, and you can tell Nginx to replace that block with this include from that background server.
So then you page cache that main page, it does a background request to your copy controller or whatever and pulls up whatever you asked it to pull up, and then serves up the whole page. If then on your copy controller you start doing page caching from that – “page caching”, quote-unquote – it’s a whole controller with a little bit of copy. But Nginx parses it, throws it into your main page and sends it off; you get basically the best of both worlds. You get the page-cached components in your page at different intervals and different times, so it’s a huge speed boost.
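The setup Zed describes can be sketched as an Nginx config along these lines (a hypothetical fragment: the paths, the backend port, and the `/copy` location name are all made up for illustration):

```nginx
# Serve the Rails page cache directly, with SSI processing turned on.
location / {
    ssi on;                      # enable Server Side Includes
    root /var/www/app/cache;     # page-cached HTML written by Rails
}

# A cached page carries a stub like:
#   <!--# include virtual="/copy" -->
# which Nginx fills by making a subrequest to the backing Mongrel:
location /copy {
    proxy_pass http://127.0.0.1:8000;
}
```

The cached shell and the included fragment can then expire on independent schedules, which is the "best of both worlds" he mentions.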
Geoffrey: That’s cool! It seems like we have to go through this process of learning all the different pieces and then using them together. It can definitely be ten times more powerful with all that.
One thing I’ve always been fascinated by whenever I’ve talked to you is just your personal process of development. You’re definitely someone who tries to pick the best tools and customize them to make them work as well as you can. You also have something where you keep track of your bug rate – how many bugs you’re writing, and tests that fail – and then you adjust your process. How does that work?
Zed: That’s not recommended for everyone. You have to basically be really, really disciplined. I’m actually not really, really disciplined; I’m doing it on one project, my Utu project, and I’m trying out basically kind of a quality control process – like physical quality control. All I do is I track a bunch of metrics that don’t necessarily say how many bugs there are exactly, but they’re indicators of the bugs. I track them over time, and then I use statistics to tell me if I’m starting to suck or if I’m improving.
I’m doing mostly C coding on that project, so a lot of this is I’m running my program under Valgrind with heavy testing. Then I track what my test coverage is, and then basically it’s just a series of numbers streaming across my screen as I code. It’s kind of like autotest – when I change the code it compiles and runs it – and then about every 300 samples I take a break, go in and crunch the numbers, and I see if I did better than last month.
A lot of times what I’ll do is I’ll try a new technique; I’ll try a technique for a while and then I’ll go crunch the numbers and see if I actually had a statistical improvement or not. That’s the biggest thing; I don’t waste my time on stuff that doesn’t actually improve the bug rate – the defect rate.
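The kind of number crunching Zed describes can be sketched in a few lines of Ruby (the sample data is invented; a two-sample t statistic stands in for whatever statistics he actually runs):

```ruby
# Track a defect indicator per batch of builds, then check whether a
# new technique actually shifted the mean, or whether it's just noise.

def mean(xs)
  xs.sum.to_f / xs.size
end

def variance(xs)
  m = mean(xs)
  xs.sum { |x| (x - m)**2 } / (xs.size - 1).to_f
end

# Welch's two-sample t statistic: large |t| means the difference in
# means is unlikely to be chance.
def t_stat(a, b)
  (mean(a) - mean(b)) / Math.sqrt(variance(a) / a.size + variance(b) / b.size)
end

before = [5, 7, 6, 8, 7, 6, 9, 7]   # defects per batch, old technique
after_ = [4, 5, 6, 5, 4, 6, 5, 5]   # same metric after the change

t = t_stat(before, after_)
puts format("t = %.2f", t)   # |t| well above ~2 suggests a real improvement
```

This is the whole discipline in miniature: keep the samples, and only keep a technique when the statistic says it moved the needle.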
For example, at first I wasn’t doing code coverage. I wanted to see if code coverage improved your testing – coverage of your test code, or your test code having coverage. I wanted to see if that improved quality. So I didn’t do any code coverage. I measured all of my defect rates and figured out what my average defect rate was. I did maybe about 700 or 800 samples.
Then I started doing code coverage and beefing up my code coverage. I spent maybe about a month improving my code coverage. In C code it’s real hard to get really good coverage because so many lines do so much stuff. But I got it up to about 60 percent. Then I went and crunched the numbers again to see if increasing the code coverage in test improved my defect rate.
What happened was it didn’t improve my defect rate; my defect rate was still about the same. What it did improve was when I made changes – like if I had to do refactoring – it reduced the amount of time to get my defects back down. So you make a change, you do your refactoring, your defects go up, and then you have to spend time fixing all that.
With heavier test coverage it made it go down quicker, but it didn’t really improve my defect rate much. There’s some complexity in that. When you have more coverage, you are seeing more of your defects, so that’s part of it, but I found that test coverage doesn’t really give you an improvement in quality initially. It mostly just improves your time to fix later.
But anyways, that’s some weird stats crunching. The process actually comes from the Capability Maturity Model guy’s – Watts Humphrey’s – Personal Software Process. So all you’ve got to do is go get his book and go through what he recommends. The key is, as you code, keep metrics and then crunch numbers to see if that’s improving things for you, and that’s really all there is.
Geoffrey: Another thing you’ve talked about often and mentioned it in an enlightening talk on Saturday was that HTTP generally as a protocol… You’ve even had a whole plan for if you were going to rewrite Internet protocols from scratch. Why do you think HTTP is such a bad protocol?
Zed: The thing I tell people is the problem with HTTP is that it was created in the dark ages of the Internet, when people still were using line-ended protocols or streamable protocols. That works great when you are doing text, like SMTP or IRC or anything like that. But when we started transmitting digital stuff – images and things like that, binaries – having line endings like MIME boundaries and stuff just doesn’t work.
The problem with HTTP is that it has a framing issue. If you go and look at the spec, it’s got four different ways to frame the size of a request. It can do MIME boundaries, it can do chunked coding, it can use multipart MIME, it can even…
Then there’s its problem with pipelining and keep-alives – there’s no framing. It has issues with the graceful close. You don’t know if the client should close or the server should close. It’s all ambiguous. What happens if I send 20 requests to the server and then stop? Does the server process them all? All this ambiguity in the specification and everything.
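Two of the framing schemes in question can be contrasted in a few lines of Ruby (a simplified sketch; real parsers handle trailers, extensions, and many more edge cases):

```ruby
require 'stringio'

# Content-Length framing: the header states the body size up front.
def read_content_length(io, headers)
  io.read(headers["Content-Length"].to_i)
end

# Chunked coding: sizes are interleaved into the stream itself.
def read_chunked(io)
  body = ""
  loop do
    size = io.readline.chomp.to_i(16)   # chunk size line, in hex
    break if size.zero?                 # a "0" chunk terminates the body
    body << io.read(size)
    io.readline                         # consume the CRLF after each chunk
  end
  body
end

plain   = read_content_length(StringIO.new("hello"), { "Content-Length" => "5" })
chunked = read_chunked(StringIO.new("5\r\nhello\r\n0\r\n\r\n"))
puts plain == chunked
```

Same five bytes of body, two completely different wire formats, and a conforming server has to support both – which is the ambiguity Zed is complaining about.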
The crying shame is that it works great, everyone’s built a ton of money and business on HTTP, but now we are getting the semantic web coming out and basing all of its stuff on HTTP. I think of that as like putting Einstein’s brain on a crack whore’s body. It’s just the worst foundation for transmitting really good data.
You are not going to see it change. It’s not going to change. There’s too much money being made on web servers and on the existing HTTP protocol. And people have bastardized and abused the heck out of it. They tried to put chat on it, they tried to put asynchronous messaging on it, like the Twitter guys. They tried to put RPC protocols on it. The overhead of that is ridiculous. It’s like 500 bytes a pop just to be able to send a simple query. Plus the XML overhead is so ridiculous.
For me, my recent passion has been basically that if I were to sit down and design a protocol… I always say, if you are going to bitch about something, try to fix it. If I were to design a protocol, what would I do?
So that’s what the Utu protocol is trying to do. It’s doing it on a small scale, testing out a few ideas and things, trying different framing mechanisms, trying different things. Basically I started writing about “Look, this is why HTTP sucks,” and “This is how I’m doing my thing,” and then “This is why we should probably start looking at new protocols for stuff.”
I don’t think HTTP will ever be replaced, but I’m starting to tell people: if you are thinking of doing something new, and the first thing you do is run toward the old interface, consider writing a side protocol or finding a protocol that’s more in line with the app you are trying to do.
If you are trying to do messaging over HTTP, that’s just miserable. You can just use Jabber, which also kind of sucks. There’s AMQP, which is a message queuing thing. There are all sorts of options.
But for me it’s just that HTTP is horrible. Keep in mind I’ve run an HTTP server, and I use a parser that’s based on the grammar in the spec. I’ve seen all the nasty corners. There are a lot of things I’m actually protecting Mongrel users from, because there’s some stuff that’s disgusting and should never even be in there.
Geoffrey: I was thinking about this. Definitely in the browser, we are pretty much tied to a lot of this, but people are doing some crazy things with Flash. It’s almost like with a little Flash widget in a browser, you could then start using your own protocol back to some server without just having to be fully tied to the web browser. Do you think that’s a good way to go, or does most of this effort need to be away from the browser and starting over from scratch?
Zed: Well, the beauty of the browser with Flash is that it’s a completely packaged, very easy-to-deploy-to platform. So yeah, doing it with the browser is fantastic. HTTP is a really good heavy lifter: you can get tons of data out, you can get people’s apps to them, you can do a lot of really great apps in it. The browser is a really well-known platform. I mean, it’s annoying because you have 20 different browsers to deal with, but you can pretty much get a solid app up and running with it.
The nice thing about Flash is, yeah, you can actually do your own socket programming. So if you have a side protocol and you need to do some programming, Flash could be a great option. Especially with Adobe pimping it and pushing it as hard as it can, and putting it on mobile phones too – that’s another really good one. So you get this ability to distribute stuff out to mobile phones, using whatever protocol you want, in theory.
But a really good example of why that’s important is just doing upload progress. Right now it’s retarded. If I write a protocol now, and I want to transmit a file, like FTP or whatever, what I do is open a socket, and I count how many bytes I’ve sent over it. That’s how I know how much I’ve sent. I tell you 20%. I can do estimates.
Upload progress right now with HTTP is that I open a socket, I start sending a file, and then I start making requests to server to ask it how much I’ve sent to it. That’s retarded. Why do I ask the server how much data I’ve sent on a socket? That’s retarded.
So part of it is that the browsers have broken-ass socket APIs. I should never have to do that. I should be able to say, “Send that file and do this callback to tell me how many bytes passed in whatever time.” So that’s kind of the core of what I’m getting at. It’s not necessarily HTTP being broken; it’s the whole foundation: web servers are inconsistent, browsers are inconsistent. It just sucks to work on this stuff.
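The sender-side counting model Zed describes is simple to sketch in Ruby (a `StringIO` stands in for the socket, and the block is the callback API he is asking browsers for):

```ruby
require 'stringio'

# Write a file to a socket in chunks, reporting progress from the
# sender's own byte count instead of polling the server.
def send_with_progress(file_io, sock, total)
  sent = 0
  while (chunk = file_io.read(4096))
    sock.write(chunk)
    sent += chunk.bytesize
    pct = 100 * sent / total
    yield sent, pct if block_given?   # the callback browsers never gave us
  end
  sent
end

data = "x" * 10_000
out  = StringIO.new
sent = send_with_progress(StringIO.new(data), out, data.bytesize) do |bytes, pct|
  # e.g. update a progress bar here with bytes / pct
end
puts "sent #{sent} bytes"
```

The sender already knows exactly how many bytes have gone out the socket, which is why asking the server for that number is so backwards.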
Frankly, it’s burning me out. I don’t like doing it as much these days, but I keep slogging on so that other people have some base to build on.
Frankly, Utu is the thing I’m experimenting with. All I hope that comes out of that thing is that it gives people ideas – maybe for the next generation of protocol designers to go out and do something different, something new, instead of replicating all the bad ideas.
Geoffrey: So finally, tell us about Utu. You are going to be saving the Internet with hate. This is the year of hate, although we are in May, so I guess we are partway through it. Maybe it’s financial year, the fiscal year of hate or something… But what’s Utu?
Zed: [laughs] The fiscal year of hate, that’s it. I love the IRS.
OK, so basically, for people who don’t know it, you can find out that I shaved my head and grew a goatee. I’m telling people it’s my year of hate, basically because I’m working on some protocols and some stuff that’s on the evil side.
One of them is Utu. It’s basically the Maori word for the revenge-justice legal system that they had in New Zealand for a long time. It’s also the name of a Sumerian god of the dead. It all fits in with the way the protocol works. It’s really cool.
You can go to SavingTheInternetWithHate.com. It’s totally open-source. I don’t really think I’ll make any money on it. I’m just doing it because it’s fun, and I’m kind of sick of web programming, and I’d like to do something with chat.
The main thrust of it is that I hate IRC. It should die. I think all of you guys who are making IRC suck need to go away too. So I’m designing Utu, so that it kills off IRC, shows people how I think protocols should be designed, and sets up a good foundation for fully identified, centralized protocols.
I’m sick of P2P. I don’t want porn traders, and kiddie porn guys, and all these dudes who think that they need to be anonymous getting on my system. I want to know exactly who Geoffrey Grosenbach is. I’m not talking to “that guy.” I don’t want to be talking to some dude who is an astroturfer or trying to get me to buy penis pills.
The main thing with Utu is that it’s a sender-pays messaging system. The idea is that when you hate someone, you can tell them you hate them by paying a bit of hate to the hub. (I’ll tell you what hate is in a second.) When you pay that amount to the hub, then the hub makes that dude pay it from then on if he wants to talk to you or talk to the chat room.
You can also block, and it’s fully encrypted. I’m going to try and make it invitation-only. I’ll make it so that when you invite someone, and the guy you invite is a dick, then the hate that he gets turns around and gets applied to you a little bit. So if you invite a bunch of jerks, you are not going to be able to talk too much.
Hate is a cryptographic hashcash calculation. It’s pretty much established; it’s nothing new, none of the crypto’s new. I’m not going to invent anything. It’s just an old kind of implementation. Hashcash is literally some crypto you’ve got to calculate; it uses a bit of CPU. It’s cryptographic, so we know who you are, we know your public key, and we know that you did it. You sign it, and then after that you’ve paid the toll, and then the other guy has to start paying that toll.
It’s variable, so you can say, “I hate people at a 10,” “I hate people at a 15,” “at a 25.” At a 32 they’d need like a billion computers and a couple billion years, and the sun will end, and we won’t have to worry about that guy anymore. Assuming you can pay that. [laughs]
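A toy version of a hashcash-style toll in Ruby (this is just the standard partial-hash-collision idea, not Utu’s actual wire format; the stamp string is invented, and real hashcash adds a date and random salt):

```ruby
require 'digest/sha1'

# Mint a stamp: find a nonce so that SHA-1(stamp + nonce) starts with
# `bits` zero bits. Expected work doubles with each extra bit, which
# is the variable "hate at a 10 / at a 32" toll.
def mint(stamp, bits)
  target = "0" * bits
  nonce = 0
  loop do
    digest = Digest::SHA1.digest("#{stamp}:#{nonce}")
    return nonce if digest.unpack1("B#{bits}") == target
    nonce += 1
  end
end

# Verification is one hash, no matter how expensive minting was.
def valid?(stamp, nonce, bits)
  Digest::SHA1.digest("#{stamp}:#{nonce}").unpack1("B#{bits}") == "0" * bits
end

nonce = mint("geoffrey->zed", 12)       # ~4096 hashes on average
puts valid?("geoffrey->zed", nonce, 12)
```

The asymmetry is the whole trick: the sender burns CPU, the hub checks the toll with a single hash, and cranking `bits` up prices a jerk out of the conversation.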
So the big thing is, right now it’s kind of up and running. It’s just the thing I’ve been tinkering with for a while, but a lot of people really liked the idea, and so I’m trying to get it out. There’s a Ruby client for it. You can actually use a little tiny binary that compiles anywhere, called the “mendicant.” It basically sits between you and the hub, and you blast these really simple messages to the mendicant. The mendicant does all the bullshit of talking to the hub.
Then you get to join the network. There’s already a Ruby library. It’s about maybe 500 lines of Ruby. It does all the stuff you need, it can connect to the hub, do your chats, do all that good stuff. And a couple of simple chat clients.
It’s still really lo-fi. There are no capabilities for doing the hates yet. I haven’t really worked it into the server yet, but it’s coming out this week. I should have the code up, and people should be able to download and play with it.
It’s not invitation-only yet, so it’s good for people to get in. I want people to bust it; I want people to destroy that thing. I want to make sure that this thing is like a really tight little fortress in a feudal kingdom, surrounded by Mongol hordes coming from IRC to try and kill it. That’s what we need to get.
Then it will be invitation-only. And maybe I’ll make people pay, and I may even throw in some trademarks in the protocol, so I can sue spammers who abuse the protocol. [laughs] Because there’s trademark law. Any ideas are up for grabs. If you’ve got any ideas, send them to me, man. I’m all for it.
Geoffrey: Well, thanks for the chat. You’ve been all over the place. Where are you going to be next?
Zed: I’m down in Florida right now. I’m working on a site down there, and it’s sort of up. I’ll just pimp it really quick: it’s called CityCliq.com. After that I’m going to be up in New York. I’m going to move back here and try to rock it in New York for maybe a year or so. I love New York; it’s a great place, assuming it’s not winter or summer. [laughs] Need lots of AC in the summer.
Otherwise, I’m also going to be at RailsConf, and I may also try a couple more of the regional conferences. The GoRuCo conference was really good. It was a really good day of cool people – a small conference, a bunch of like-minded folks from the region. I think it was really good. A lot of people actually already knew each other, so it was a really neat and fun time.
Obie, you’ll probably hear this, but you were a little sloshed that one night… [laughs] I don’t think that was a woman.
Geoffrey: This has been the Ruby on Rails podcast. Thanks for your support. I’ll be going full time on PeepCode Screencasts starting next month. Thanks also to Sebastian Delmont of StreetEasy for putting me up for a few nights at his house in New York.