As several reviewers of Steve Rosenbaum’s Curation Nation have discussed, a major theme of the book is the importance of human input in curation. Rosenbaum repeatedly hammers home the idea that high-quality curation, which makes it possible to find things on the Web, has to be done by human beings rather than computers. There are many passages in the book on this theme. I’ll quote a few here from Rosenbaum’s introductory comments to give the flavor (boldface added):

(p 3-4) Curation is about adding value from humans … Curation is very much the core shift in commerce, editorial, and communities that require highly qualified humans. Humans aren’t extra, or special, or enhancements; humans are curators. They do what no computer can possibly achieve.

(p 12-13) No longer is the algorithm in charge. Human curators have become essential software. What emerges is new human and computer collaboration  … The important news of the emergence of a Curation Nation is that humans are very much back in charge.

Rosenbaum’s emphasis on the importance of humans especially strikes me because that’s also been a major theme of this blog, starting with the first article in the blog, on the importance of human input for organizing pictures. Other articles on the theme are listed in the category human input.

A subject that’s closely connected to human input and to curation, that Rosenbaum also stresses, and that I’ve written about (category: Pattern Recognition), is the quintessentially human capability of pattern recognition. He has several good snippets based on prominent blogger Robert Scoble:

(p 134) Humans are essential. So exactly what do they add? Is it magic, or something more quantifiable? Taste, judgement, serendipity? Scoble says what they add is uniquely human. “Algorithms are good at picking the big stuff, because computers are good at counting numbers or links or numbers of clicks or numbers of retweets. Humans aren’t going to compete with that. But as humans, our brains are pattern recognizers. I can look at the tree across the street, and in a millisecond I know it’s a tree. A computer has to look at an image of a tree for hours and spend a lot of processor time to figure out it’s a tree.” … (p 140) “I think curation is seeing a pattern in the world and telling someone else about that pattern.”

The exciting bottom line for librarians — As several library people have noted in discussing Curation Nation, this is right up our alley! The sorts of skills that Rosenbaum discusses are just what we’re good at — Careful, Caring Curation of the world’s information.

Eric Rumsey is at: eric-rumseytemp AttSign uiowa dott edu and on Twitter @ericrumseytemp

Google has been under attack recently, because its search results often seem to be overwhelmed by spam-generated links. On the other hand, Wikipedia has gotten many laudatory commentaries on the occasion of its tenth anniversary.

The timing here is interesting — Google, which is driven by computer-generated algorithms, is being “outsmarted” by human SEO engineers who have figured out how to “game” the system to get their sites a high ranking in searches. And Wikipedia, powered by smart human curators, has risen to become “a necessary layer in the Internet knowledge system.”

I’ve looked at several of the tenth-anniversary commentaries discussing the uniqueness of Wikipedia, and it’s surprising that I haven’t seen any that note the significance of its being a human-generated tool. The Atlantic had a good round-up of commentaries by 13 “All-Star Thinkers.” Some of them do talk about the importance of collaboration in the working of Wikipedia, but none of them makes the more basic, and, to me, even more acute observation that, in this age of the computer, it’s done by human beings!

In the Wikipedia article on Wikipedia, the section “The Nature of Wikipedia” includes this interesting quote from Goethe:

Here, as in other human endeavors, it is evident that the active attention of many, when concentrated on one point, produces excellence.

Indeed — As my library school teacher used to say, “if there were enough smart humans we wouldn’t need to rely on computers.”

So — Librarians Take Note! Have you ever considered becoming a Wikipedia editor? — On the occasion of the tenth anniversary, Wikipedia founder Jimmy Wales is making an effort to foster more diversity in curation — He especially mentions reaching out to Libraries for help.

Finally, on a related thread — Another notable aspect of Wikipedia that hasn’t been mentioned in anniversary articles — Not only is it done by humans, but it’s done by humans on a volunteer basis — As I discussed in an earlier article, Daniel Pink uses this as a classic example of “intrinsic motivation” >> Wikipedia vs Encarta: The Ali-Frazier of Motivation.



Interesting thought by Mike Shatzkin on the unlikeliness of pictures in eBooks anytime soon (bold added):

The proliferation of formats, devices, screen sizes, and delivery channels means that the idea of “output one epub file and let the intermediaries take it from there” is an unworkable strategy. [Here’s one reason why:] … Epub can “reflow” text, making adjustments for screen size. But there is no way to do that for illustrations or many charts or graphs without human intervention (for a long while, at least). Even if you could program so that art would automatically resize for the screen size, you wouldn’t know whether the art would look any good or be legible in the different size. A human would have to look and be sure.

Mike is talking here about the issue I wrote about in the foundational article for Seeing the Picture — Pictures are in many ways an intractable problem for automation — In many situations, the best use of pictures requires intelligent human input.

“Flickr takes the sun out of the sunset” — The picture to the left from Flickr shows the full picture and its square thumbnail, in the inset. Thumbnails like these are generated automatically by Flickr and other photo management systems, which take a portion from the center of the image to make the thumbnail. This works well if the most important subject is in the center of the picture. But if the picture is relatively wide or tall, and its main subject is not in the center, as in the example at left, with the sun being to one side, the thumbnail misses it. Looking at this example (Long Beach Sunset) in Flickr, note that the first thumbnail on the Flickr page (top left) is the one for the larger picture (shown on our page with the thumbnail in the yellow-outlined inset).
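The center-crop behavior described above can be sketched with a small function. The image dimensions and the sun’s position below are made-up values for illustration only, not Flickr’s actual numbers:

```python
def center_square_crop(width, height):
    """Return the (left, top, right, bottom) box of the centered
    square that a naive thumbnailer cuts from an image."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A wide sunset photo (hypothetical dimensions): 1200 x 400 pixels.
box = center_square_crop(1200, 400)
print(box)  # (400, 0, 800, 400)

# If the sun sits near the right edge, say at x = 1000, the
# centered crop (x from 400 to 800) misses it entirely.
sun_x = 1000
print(box[0] <= sun_x < box[2])  # False: the sun is cropped out
```

The point the code makes concrete: the algorithm has no idea where the subject is; it just assumes the center matters most.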

In large mass-production systems like Flickr, automatic thumbnails are unavoidable, and my point is not that they should never be used. Instead, my point is that, on many levels, pictures require more human input than text to make them optimally usable. Pattern recognition — the simple observation that the thumbnail of a picture of a sunset SHOULD CONTAIN THE SUN — is something that the human brain does easily, but this does not come naturally for a computer.


Another sort of problem in automatic production of thumbnails is making a thumbnail by simply reducing the size of the large picture. If the main subject of the picture is relatively small, it is not visible in a small thumbnail.

The picture to the left is from the Hardin Library ContentDM collection. The inset in the upper right shows the thumbnail that’s generated automatically by the system, which does a poor job of showing details of the picture. The lower inset shows a thumbnail made manually, which gives a much clearer view of the central image in the picture.

Cropping a picture to produce a thumbnail, as done here, takes more subtle human judgement than the Flickr picture in the first example, where the weakness of automatic production is obvious. With cropping, there’s inevitably a trade-off between showing the whole picture in the thumbnail and showing its most important subject. In cases such as this one from ContentDM, where almost all of the detail in the picture would be lost in a small thumbnail, it seems better to focus on a central image that will show up clearly.
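The shrink-the-whole-picture problem can be quantified with a bit of arithmetic. The image, subject, and thumbnail sizes below are hypothetical, chosen only to illustrate the effect:

```python
def subject_size_in_thumbnail(image_px, subject_px, thumb_px):
    """Pixels the subject occupies after the whole image is
    uniformly scaled down to a thumb_px-wide thumbnail."""
    scale = thumb_px / image_px
    return subject_px * scale

# A 2000-px-wide scan whose main subject spans only 150 px,
# shrunk whole to a 100-px thumbnail:
print(subject_size_in_thumbnail(2000, 150, 100))  # 7.5 px -- illegible

# Cropping first to a 400-px region around the subject, then
# shrinking that region to the same 100-px thumbnail:
print(subject_size_in_thumbnail(400, 150, 100))   # 37.5 px -- visible
```

The trade-off in the text shows up in the numbers: cropping makes the subject five times larger in the thumbnail, at the cost of discarding the rest of the picture.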

Finally, a few examples from Hardin MD, below, show how we have done cropping to improve the detail in our thumbnails. The thumbnails on the left in each of the three pairs are made by simply reducing the size of the full picture. On the right in each pair are the thumbnails we use, that we have made by cropping the full picture before making the thumbnail.

The biomedical, scientific pictures that we work with in Hardin MD are fairly easy to make thumbnails for, because they generally have a well-defined focus that’s usually captured well by automatically generated thumbnails. More artistic, humanities-oriented pictures, such as the ones discussed here from Flickr and ContentDM, however, often have subtler subjects that benefit from an intelligent human touch in the production of thumbnails.

As computers have become more powerful, many of the aspects of handling text that were formerly done by humans have been taken over by computers. Pictures, however, are much more difficult to automate — Recognizing patterns remains a task that humans do much better than computers. A human infant can easily tell the difference between a cat and a dog, but it’s difficult to train a computer to do this.

In pre-Google days, the task of finding good lists of web links needed the input of smart humans (and Hardin MD was on the cutting edge in doing this). Now, though, Google Web Search gives us all the lists we need.

Pictures are another story — on many levels, pictures require much more human input than text.

The basic, intractable problem with finding pictures is that they have no innate “handle” allowing them to be found. Text serves as its own handle, so it’s easy for Google Web Search to find it. But Google Image Search has a much more difficult task. It still has to rely on some sort of text handle that’s associated with a picture to find it, and is at a loss to find pictures not associated with text.

The explosive growth of Hardin MD since 2001 (page views in 2008 were over 50 times larger) has been strongly correlated with the addition of pictures. This period has also coincided with the growing presence of Google, whose page-rank technology has made old-style list-keeping, as had been featured in Hardin MD, less important.

Though Google has accomplished much in the retrieval of text-based pages, it’s made little progress in making pictures more accessible. Google Image Search is the second most-used Google service, but its basic approach has changed little over the years.

The basic problem for image search is that pictures don’t have a natural handle to search for. Because of this it takes much more computer power for the Google spider to find new pictures, and consequently it takes much longer for them to be spidered, compared to text pages (measured in months instead of days).

Beyond the problem of identifying pictures there are other difficult-to-automate problems for image search:
• How to display search results most efficiently to help the user find what they want — Do you rank results according to picture size, number of related pictures at a site, or some other, more subjective measure of quality?
• What’s the best way to display thumbnail images in search results?
• How much weight should be given to pictures that have associated text that helps interpret the picture?

So — Good news for picture people! — I would suggest that pictures are a growth sector of the information industry, and a human-intensive one. I would predict that text-based librarians will continue to be replaced, as computers become more prominent. But there will continue to be a need for human intelligence working in all areas relating to pictures, from indexing/tagging to designing systems to make them more accessible.