Criticism of Google's Book Search
Using this essay by Paul Duguid as a basis, if:book takes a look at quality control problems surrounding Google’s Book Search program. As Ben Vershbow asks, “Does simply digitizing these—books, imprimaturs and all—automatically result in an authoritative bibliographic resource?”
Duguid suggests not. Migrating analog works to the digital environment in a way that respects the originals while fully integrating them into the networked world is trickier than simply scanning them and dumping the results into a database. The Shandy study shows in detail how Google’s ambition to organize the world’s books and make them universally accessible and useful (to slightly adapt Google’s mission statement) is being carried out in a hasty, slipshod manner, leading to a serious deficit in quality in what could eventually become, for better or worse, the world’s library.
As is so often the case, the devil is in the details, and it is precisely the details that Google seems to have overlooked, or rather sprinted past. Sloppy scanning and the blithe discarding of organizational and metadata schemes meticulously devised through centuries of librarianship might indeed make the books “universally accessible” (or close to it), but the “and useful” part of the equation could go unrealized.
There are many issues to debate in relation to this project, but if even 5% of the texts are as unreadable as those featured in Duguid’s article, the project will have serious problems.