Apparently, some of the boxes in their database cluster had the right data and some did not. This is a relatively common problem in a database cluster. It can happen if replication fails because the network gets partitioned or whatever.
The normal solution is to backfill the databases that missed updates, resolve any conflicts, and go on your merry way, all without impacting users.
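That backfill-and-resolve step can be sketched in a few lines. This is only an illustration, not how Blogger or any real cluster does it: record names, the timestamp-versioned values, and the last-writer-wins conflict policy are all assumptions made for the example.

```python
def backfill(primary, replica):
    """Copy records the replica missed; on conflict, keep the newer copy.

    Records are (timestamp, value) pairs keyed by id; last-writer-wins
    is assumed here purely to keep the sketch short.
    """
    merged = dict(replica)
    for key, (ts, value) in primary.items():
        if key not in merged or merged[key][0] < ts:
            merged[key] = (ts, value)
    return merged

# The replica missed the edit to "post-2" and never saw "post-3".
primary = {"post-1": (10, "hello"), "post-2": (12, "edited"), "post-3": (15, "new")}
replica = {"post-1": (10, "hello"), "post-2": (11, "draft")}

print(backfill(primary, replica))
```

The point is that this reconciliation happens inside the cluster, invisibly to users, which is exactly what did not happen here.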
Instead, Google is asking their users to correct their database for them. My jaw dropped. From the post:
I'm very sorry to say that, if your blog was on this database, posts and template changes made in the last 18 hours or so were not saved. They may appear on your blog now, but will disappear if you republish. If you made a post between Friday afternoon and now, we suggest that you look at your list of posts ("Posting" tab, "Edit posts" sub-tab) and compare it with what is published on your blog. If posts are missing, copy them from your blog pages before you republish.

Whaaaa? Google, king of the cluster, you want your users to cut-and-paste entries from their blog to fix your problems with database consistency?
In general, Google seems to have what borders on disdain for modern databases. They seem to see rigid file system and database consistency as the paranoid ramblings of the anal retentive, preferring the very loose consistency guarantees offered by their core systems such as Google File System and BigTable.
I think consistency does matter. Databases shouldn't lose data. Problems with the database shouldn't be visible to users. Databases should do what they're supposed to do: store data and give it back again when you want it.
For now, the problem is in Blogger. No financial transactions are involved. But what happens when Google expands their payment system (GBuy) or moves into e-commerce? If Google doesn't start caring about data consistency, this problem will bite them again and again.
See also my previous post, "Lowered uptime expectations?"