An earlier article, Color Pictures in Google Books, discussed a few examples of color pictures in full-view books in GBS. Below are more examples in the areas of botany, medical botany, and dermatology.

Google Books titles with color pictures – Botany, Medical Botany

[Examples below link to screenshots in Flickr of Overview : Selected Pages in GBS; links in Flickr go to the actual GBS page.]

The Botanical Magazine, Or, Flower-garden Displayed
By William Curtis, vol 9, 1795, Harvard Univ

Curtis’s Botanical Magazine, Or, Flower-garden Displayed
By John Sims, vol 41, 1815, Harvard Univ

The Family Herbal
By John Hill, 1812, Oxford Univ

Flora Medica
By George Spratt, 1830, Oxford Univ

Vegetable Materia Medica of the United States, Or, Medical Botany
By William Paul Crillon Barton, 1818, Oxford Univ

Medicinal Plants (vol 2)
By Robert Bentley, Henry Trimen, David Blair, 1880, Harvard Univ

Medicinal Plants (vol 4)
By Robert Bentley, Henry Trimen, David Blair, 1880, Harvard Univ

Paxton’s Magazine of Botany, and Register of Flowering Plants (vol 1)
By Joseph Paxton, 1836, Oxford Univ

Strasburger’s Text-book of Botany
By Eduard Strasburger, Hans Fitting, Ludwig Jost, William Henry Lang, Heinrich Schenck, George Karsten, 1921, Univ California

Google Books titles with color pictures – Dermatology

Atlas and Epitome of Diseases of the Skin
By Franz Mraček, 1905, Stanford Univ

Atlas Der Hautkrankheiten, Mit Einschluss Der Wichtigsten Venerischen
By Eduard Jacobi, 1906, Stanford Univ

Atlas of Diseases of the Skin
By Franz Mraček, Henry Weightman Stelwagon, 1899, Stanford Univ

Illustrated Skin Diseases
By William Samuel Gottheil, 1902, Harvard Univ

An Introduction to Dermatology
By Norman Purvis Walker, 1906, Stanford Univ

On Diseases of the Skin
By Erasmus Wilson, 1865, Harvard Univ

Portfolio of Dermochromes (vol 2)
By Jerome Kingsbury, Eduard Jacobi, John James Pringle, William Gaynor States, 1913, Harvard Univ

Skin Diseases
By Melford Eugene Douglass, 1900, Univ Michigan

If you know of other areas that have books in Google Books with color pictures, please send comments.

These are excerpts from part 2 of Michael Nielsen’s long, seminal article, Is scientific publishing about to be disrupted?. Part 1 of Nielsen’s article is a general consideration of how industries fail, with particular discussion of the newspaper industry and blogs. Part 2 is the heart of Nielsen’s case (and has the same title as the article), so I’m excerpting it here to bring it to wider attention …

Today, scientific publishers are production companies, specializing in services like editorial, copyediting, and, in some cases, sales and marketing. My claim is that in ten to twenty years, scientific publishers will be technology companies [3]. By this, I don’t just mean that they’ll be heavy users of technology, or employ a large IT staff. I mean they’ll be technology-driven companies in a similar way to, say, Google or Apple. That is, their foundation will be technological innovation, and most key decision-makers will be people with deep technological expertise. Those publishers that don’t become technology driven will die off.

What I will do … is draw your attention to a striking difference between today’s scientific publishing landscape, and the landscape of ten years ago. What’s new today is the flourishing of an ecosystem of startups that are experimenting with new ways of communicating research, some radically different to conventional journals. Consider Chemspider, the excellent online database of more than 20 million molecules, …. Consider Mendeley, a platform for managing, filtering and searching scientific papers, …. Or consider startups like SciVee (YouTube for scientists), the Public Library of Science, the Journal of Visualized Experiments, vibrant community sites like OpenWetWare and the Alzheimer Research Forum, and dozens more. And then there are companies like WordPress, Friendfeed, and Wikimedia, that weren’t started with science in mind, but which are increasingly helping scientists communicate their research. This flourishing ecosystem is not too dissimilar from the sudden flourishing of online news services we saw over the period 2000 to 2005.

Let’s look up close at one element of this flourishing ecosystem: the gradual rise of science blogs as a serious medium for research. It’s easy to miss the impact of blogs on research, because most science blogs focus on outreach. But more and more blogs contain high quality research content. Look at Terry Tao’s wonderful series of posts explaining one of the biggest breakthroughs in recent mathematical history, the proof of the Poincaré conjecture. Or Tim Gowers’ recent experiment in “massively collaborative mathematics”, using open source principles to successfully attack a significant mathematical problem. Or Richard Lipton’s excellent series of posts exploring his ideas for solving a major problem in computer science, namely, finding a fast algorithm for factoring large numbers. Scientific publishers should be terrified that some of the world’s best scientists, people at or near their research peak, people whose time is at a premium, are spending hundreds of hours each year creating original research content for their blogs, content that in many cases would be difficult or impossible to publish in a conventional journal. What we’re seeing here is a spectacular expansion in the range of the blog medium. By comparison, the journals are standing still.

This flourishing ecosystem of startups is just one sign that scientific publishing is moving from being a production industry to a technology industry. A second sign of this move is that the nature of information is changing. Until the late 20th century, information was a static entity. The natural way for publishers in all media to add value was through production and distribution, and so they employed people skilled in those tasks, and in supporting tasks like sales and marketing. But the cost of distributing information has now dropped almost to zero, and production and content costs have also dropped radically [4]. At the same time, the world’s information is now rapidly being put into a single, active network, where it can wake up and come alive. The result is that the people who add the most value to information are no longer the people who do production and distribution. Instead, it’s the technology people, the programmers.

If you doubt this, look at where the profits are migrating in other media industries. In music, they’re migrating to organizations like Apple. In books, they’re migrating to organizations like Amazon, with the Kindle. In many other areas of media, they’re migrating to Google: Google is becoming the world’s largest media company. … How many scientific publishers are as knowledgeable about technology as Steve Jobs, Sergey Brin, or Larry Page?

… Being wrong is a feature, not a bug, if it helps you evolve a model that works: you start out with an idea that’s just plain wrong, but that contains the seed of a better idea. You improve it, and you’re only somewhat wrong. You improve it again, and you end up the only game in town. Unfortunately, few scientific publishers are attempting to become technology-driven in this way. The only major examples I know of are Nature Publishing Group (with Nature.com) and the Public Library of Science. …

Opportunities

So far this essay has focused on the existing scientific publishers, and it’s been rather pessimistic. But of course that pessimism is just a tiny part of an exciting story about the opportunities we have to develop new ways of structuring and communicating scientific information. These opportunities can still be grasped by scientific publishers who are willing to let go and become technology-driven, even when that threatens to extinguish their old way of doing things. … Here’s a list of services I expect to see developed over the next few years. A few of these ideas are already under development, mostly by startups, but have yet to reach the quality level needed to become ubiquitous. The list could easily be continued ad nauseam – these are just a few of the more obvious things to do.

Personalized paper recommendations: Amazon.com has had this for books since the late 1990s. You go to the site and rate your favourite books. The system identifies people with similar taste, and automatically constructs a list of recommendations for you. This is not difficult to do: Amazon has published an early variant of its algorithm, and there’s an entire ecosystem of work, much of it public, stimulated by the Netflix Prize for movie recommendations. If you look in the original Google PageRank paper, you’ll discover that the paper describes a personalized version of PageRank, which can be used to build a personalized search and recommendation system. …
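To make the recommendation idea concrete, here is a minimal sketch of the kind of collaborative filtering Nielsen describes: find readers whose ratings resemble yours, then let them vote on the papers you haven’t rated. This is an illustrative toy, not Amazon’s published algorithm, and the ratings matrix is invented.

```python
import numpy as np

# Toy ratings matrix: rows are readers, columns are papers.
# A zero means "not yet rated"; all values here are invented.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, ratings, top_n=2):
    """Weight every other reader by taste similarity, let them
    vote on each paper, and return the top unrated papers."""
    sims = np.array([cosine(ratings[user], row) for row in ratings])
    sims[user] = 0.0                      # ignore the reader's own row
    scores = sims @ ratings               # similarity-weighted votes
    scores[ratings[user] > 0] = -np.inf   # skip papers already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, ratings))  # paper indices to suggest to reader 0
```

The personalized PageRank variant Nielsen mentions takes a different route, biasing the random-surfer teleport step toward items a reader already likes, but the nearest-neighbor voting above is the essence of the Amazon-style approach the paragraph describes.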

A great search engine for science: ISI’s Web of Knowledge, Elsevier’s Scopus and Google Scholar are remarkable tools, but there’s still huge scope to extend and improve scientific search engines [5]. With a few exceptions, they don’t do even basic things like automatic spelling correction, good relevancy ranking of papers (preferably personalized), automated translation, or decent alerting services. They certainly don’t do more advanced things, like providing social features, or strong automated tools for data mining. Why not have a public API [6] so people can build their own applications to extract value out of the scientific literature? Imagine using techniques from machine learning to automatically identify underappreciated papers, or to identify emerging areas of study.
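As a small illustration of one item on that wish list, here is a toy spelling corrector in the spirit of Peter Norvig’s well-known essay: generate every string within one edit of the query term, then keep the candidate that occurs most often in the corpus. The word counts below are invented stand-ins for term frequencies over the scientific literature; none of this reflects how Web of Knowledge, Scopus, or Google Scholar actually work.

```python
LETTERS = "abcdefghijklmnopqrstuvwxyz"

# Invented term frequencies, standing in for counts over the literature.
WORD_COUNTS = {"protein": 900, "proteins": 400, "proteome": 50}

def edits1(word):
    """All strings one deletion, transposition, substitution,
    or insertion away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Prefer the word itself; otherwise pick the most frequent
    known word within one edit; otherwise leave it unchanged."""
    if word in WORD_COUNTS:
        return word
    candidates = [w for w in edits1(word) if w in WORD_COUNTS]
    return max(candidates, key=WORD_COUNTS.get) if candidates else word

print(correct("protien"))  # -> "protein"
```

A real engine would build the counts from the whole literature and fold in query context, but even this simple candidate-and-rank pattern catches a large share of single-typo search terms.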

High-quality tools for real-time collaboration by scientists: Look at services like the collaborative editor Etherpad, which lets multiple people edit a document, in real time, through the browser. They’re even developing a feature allowing you to play back the editing process. Or the similar service from Google, Google Docs, which also offers shared spreadsheets and presentations. Look at social version control systems like Git and GitHub. Or visualization tools that let you track different people’s contributions. …

Scientific blogging and wiki platforms: With the exception of Nature Publishing Group, why aren’t the scientific publishers developing high-quality scientific blogging and wiki platforms? … On a related note, publishers could also help preserve some of the important work now being done on scientific blogs and wikis…. The US Library of Congress has taken the initiative in preserving law blogs. Someone needs to step up and do the same for science blogs.

The data web: Where are the services making it as simple and easy for scientists to publish data as it is to publish a journal paper or start a blog? A few scientific publishers are taking steps in this direction. But it’s not enough to just dump data on the web. It needs to be organized and searchable, so people can find and use it. …

In his recent New Yorker article, The Cost Conundrum, Atul Gawande compares McAllen, Texas, one of the most expensive health-care markets in the US, with Rochester, Minnesota, home of the Mayo Clinic, which has relatively low expenditures and a high-quality health system. Below are long excerpts from a much longer article [Boldface added].

McAllen

There’s no evidence that the treatments and technologies available at McAllen are better than those found elsewhere in the country. The annual reports that hospitals file with Medicare show that those in McAllen and El Paso offer comparable technologies—neonatal intensive-care units, advanced cardiac services, PET scans, and so on. Public statistics show no difference in the supply of doctors. Hidalgo County actually has fewer specialists than the national average.

Nor does the care given in McAllen stand out for its quality. Medicare ranks hospitals on twenty-five metrics of care. On all but two of these, McAllen’s five largest hospitals performed worse, on average, than El Paso’s. McAllen costs Medicare seven thousand dollars more per person each year than does the average city in America. But not, so far as one can tell, because it’s delivering better health care. … The primary cause of McAllen’s extreme costs was, very simply, the across-the-board overuse of medicine.

… Then there are the physicians who see their practice primarily as a revenue stream. They instruct their secretary to have patients who call with follow-up questions schedule an appointment, because insurers don’t pay for phone calls, only office visits. They consider providing Botox injections for cash. They take a Doppler ultrasound course, buy a machine, and start doing their patients’ scans themselves, so that the insurance payments go to them rather than to the hospital. They figure out ways to increase their high-margin work and decrease their low-margin work. This is a business, after all.

In every community, you’ll find a mixture of these views among physicians, but one or another tends to predominate. McAllen seems simply to be the community at one extreme.

The real puzzle of American health care, I realized on the airplane home, is not why McAllen is different from El Paso. It’s why El Paso isn’t like McAllen. Every incentive in the system is an invitation to go the way McAllen has gone. Yet, across the country, large numbers of communities have managed to control their health costs rather than ratchet them up.

One morning, I met with a hospital administrator who had extensive experience managing for-profit hospitals along the border. … “In El Paso, if you took a random doctor and looked at his tax returns, eighty-five per cent of his income would come from the usual practice of medicine,” he said. But in McAllen, the administrator thought, that percentage would be a lot less. He knew of doctors who owned strip malls, orange groves, apartment complexes—or imaging centers, surgery centers, or another part of the hospital they directed patients to. They had “entrepreneurial spirit,” he said. They were innovative and aggressive in finding ways to increase revenues from patient care. “There’s no lack of work ethic,” he said. But he had often seen financial considerations drive the decisions doctors made for patients—the tests they ordered, the doctors and hospitals they recommended—and it bothered him. Several doctors who were unhappy about the direction medicine had taken in McAllen told me the same thing. “It’s a machine, my friend,” one surgeon explained.

Mayo

The core tenet of the Mayo Clinic is “The needs of the patient come first”—not the convenience of the doctors, not their revenues. The doctors and nurses, and even the janitors, sat in meetings almost weekly, working on ideas to make the service and the care better, not to get more money out of patients. I asked Cortese how the Mayo Clinic made this possible.

“It’s not easy,” he said. But decades ago Mayo recognized that the first thing it needed to do was eliminate the financial barriers. It pooled all the money the doctors and the hospital system received and began paying everyone a salary, so that the doctors’ goal in patient care couldn’t be increasing their income. Mayo promoted leaders who focused first on what was best for patients, and then on how to make this financially possible.

“When doctors put their heads together in a room, when they share expertise, you get more thinking and less testing,” Cortese told me.

Concluding thoughts

When you look across the spectrum from Grand Junction to McAllen—and the almost threefold difference in the costs of care—you come to realize that we are witnessing a battle for the soul of American medicine. Somewhere in the United States at this moment, a patient with chest pain, or a tumor, or a cough is seeing a doctor. And the damning question we have to ask is whether the doctor is set up to meet the needs of the patient, first and foremost, or to maximize revenue.

Providing health care is like building a house. The task requires experts, expensive equipment and materials, and a huge amount of coördination. Imagine that, instead of paying a contractor to pull a team together and keep them on track, you paid an electrician for every outlet he recommends, a plumber for every faucet, and a carpenter for every cabinet. Would you be surprised if you got a house with a thousand outlets, faucets, and cabinets, at three times the cost you expected, and the whole thing fell apart a couple of years later? Getting the country’s best electrician on the job (he trained at Harvard, somebody tells you) isn’t going to solve this problem. Nor will changing the person who writes him the check.

When it comes to making care better and cheaper, changing who pays the doctor will make no more difference than changing who pays the electrician. The lesson of the high-quality, low-cost communities is that someone has to be accountable for the totality of care. Otherwise, you get a system that has no brakes. You get McAllen.

Dramatic improvements and savings will take at least a decade. But a choice must be made. Whom do we want in charge of managing the full complexity of medical care? We can turn to insurers (whether public or private), which have proved repeatedly that they can’t do it. Or we can turn to the local medical communities, which have proved that they can. But we have to choose someone—because, in much of the country, no one is in charge. And the result is the most wasteful and the least sustainable health-care system in the world.

Something even more worrisome is going on as well. In the war over the culture of medicine—the war over whether our country’s anchor model will be Mayo or McAllen—the Mayo model is losing. In the sharpest economic downturn that our health system has faced in half a century, many people in medicine don’t see why they should do the hard work of organizing themselves in ways that reduce waste and improve quality if it means sacrificing revenue.

[concluding paragraph ...]
As America struggles to extend health-care coverage while curbing health-care costs, we face a decision that is more important than whether we have a public-insurance option, more important than whether we will have a single-payer system in the long run or a mixture of public and private insurance, as we do now. The decision is whether we are going to reward the leaders who are trying to build a new generation of Mayos and Grand Junctions. If we don’t, McAllen won’t be an outlier. It will be our future.

Last week New York Times reporter David Carr paid a visit to the Googleplex to learn more about Google Book Search. His article got little attention, perhaps because the title and lead paragraphs didn’t communicate that the subject was, in fact, Google Book Search and the Settlement. So I’m excerpting it here:

Years after cracking the very code of the Web to lucrative ends, Google may be in the midst of trying to conjure the most complicated algorithm yet: to wit, can goodness … scale along with the enterprise? Among other adventures, Google’s motives were called into question after it scanned in millions of books without permission, prompting the Authors Guild and publishers to file a class-action suit. The proposed $125 million settlement will lead to a book registry financed by Google and a huge online archive of mostly obscure books, searched and served up by Google. So is that a big win for a culture that increasingly reads on screen — or a land grab of America’s most precious intellectual property?

[Google] was happy to accommodate my visit [last Tuesday] because the founders, Sergey Brin and Larry Page, along with Mr. Schmidt, have come to believe that part of their job is explaining themselves. … Google is, broadly, the Wal-Mart of the Internet, a huge force that can set terms and price — in this case free — except Google is not selling hammers and CDs, it is operating at the vanguard of intellectual property.

The Justice Department and a number of state attorneys general have taken an acute interest in the proposed book settlement that Google negotiated over its right to scan millions of books, many of them out of print. Revenue will be split with any known holders of the copyright, but it is the company’s dominion over so-called orphan works that has intellectual property rights advocates livid.

“It’s disgusting,” said Peter Brantley, director of access for the Internet Archive, which has been scanning books as well. “We all share the general goal of getting more books online, but the class-action settlement gives them a release of any claims of infringement in using those works. For them to say that is not a barrier to entry for other people who might scan in those works is a crock.”

The scanned book project is certainly consistent with the company’s mission, which is “to organize the world’s information and make it universally accessible and useful.” … “What I think is great about books is that people just don’t go to libraries that much, but they are in front of the computer all day,” Mr. Schmidt said. “And now they have access. If you are sitting and trying to finish a term paper at 2 in the morning, Google Books saved your rear end. That is a really oh-my-God kind of change.”

The government has not yet made this argument — filings are due in the case in September — but others have pointed out that Google has something of a monopoly because the company went ahead and scanned seven million books without permission. “To be very precise, we did not require permission to make those copies,” Mr. Schmidt said, suggesting that by scanning and making just a portion of those works available, the company was well within the provisions of fair use.

“People are bringing old narratives to this discussion rather than understanding the unique aspects of the Internet,” he added. “We are one click away from losing you as a customer, so it is very difficult for us to lock you in as a customer in a way that traditional companies have.”

In a later meeting, Mr. Brin waved his hand when it was suggested that the company’s decision to scan books and then reach a settlement had created a barrier to entry for others. (Google also has a separate commercial initiative to work with publishers to sell more current works.) “I didn’t see anyone lining up to scan books when we did it, or even now,” Mr. Brin said. “Some of them are motivated by near-term business disputes, and they don’t see this as an achievement for humanity.”

As with most matters involving Google, it is less about the specific activity than the scope of it. A company with Google’s wherewithal and ambition may have the ability to eventually seem like the only choice in all manner of endeavors.

When I told Mr. Schmidt I was worried about Google’s dominant presence in my digital life, he said: “It’s a legitimate concern. But the question is, how are we doing? Are our products working for you?” Why, yes they are. And if a book is ever written about all this, Google will probably be able to serve that up as well.

Why is the Library of Congress not more involved in discussions of Google Book Search and the impending Settlement? Google searching finds virtually no evidence that LC has had any voice at all in the recent flurry of talk on this. For example, these Google web searches pull up only incidental connections: < “library of congress” “google book” > < billington “google book” > < “library of congress” google settlement > (The main connection found here is a panel discussion of the Settlement that was held at LC in April, but none of the panelists were from LC.)

As the “de facto national library” of the US and “the largest library in the world,” wouldn’t it seem logical that LC be involved in thinking about GBS and the Settlement, which some say will change the way we read more than anything since the printing press?

I’ve been thinking about this question for several months, especially since writing an article in May on the apparently woeful state of Information Technology Strategic Planning at LC, as described in a report by LC’s Inspector General. Could there be a connection? Is this apparent lack of vision related to LC’s non-engagement with the momentous issues of the Settlement?

I was glad to discover, in doing research for this article, that someone else is thinking at least a bit along the same lines — Peter Eckersley, at the Electronic Frontier Foundation, suggested recently that Google put a copy of all the books they scan at the Library of Congress — a fairly modest proposal, but maybe it will at least have the effect of bringing the Library of Congress at long last into the spotlight.

Eric Rumsey is at @ericrumsey

A few excerpts from Clive Thompson’s interesting thoughts on digitization last week:

Books are the last bastion of the old business model—the only major medium that still hasn’t embraced the digital age. … Literary pundits are fretting: Can books survive in this Facebooked, ADD, multichannel universe? … To which I reply: Sure they can. But only if publishers … stop thinking about the future of publishing and think instead about the future of reading. … Every other form of media that’s gone digital has been transformed by its audience. … The only reason the same thing doesn’t happen to books is that they’re locked into ink on paper. … Release them, and you release the crowd.

Thompson says that “the crowd” of readers is already at work transforming even print books. He reports on research by e-book researcher Cathy Marshall on students buying used textbooks: she has found that they examine books in the bookstore to find ones that have notes by previous readers — highlighting and handwritten notes on the pages — and that they prefer the ones they judge to have the “smartest” notes. This rudimentary utilization of “the crowd,” says Thompson, is really nothing new: “Books have a centuries-old tradition of annotation and commentary, ranging from the Talmud and scholarly criticism to book clubs and marginalia.” Thompson cites current digital examples of the transformative use of the crowd:

BookGlutton, a site that launched last year, has put 1,660 books online and created tools that let readers form groups to discuss their favorite titles. Meanwhile, Bob Stein, an e-publishing veteran from the CD-ROM days, put the Doris Lessing book The Golden Notebook online with an elegant commenting system and hired seven writers to collaboratively read it.

Thompson closes with this: “Books have been held hostage offline for far too long. Taking them digital will unlock their real hidden value: the readers.”

Eric Rumsey is at @ericrumsey

In a prophetic passage written in 1990, Salman Rushdie paints a vivid word picture of the Ocean of the Streams of Story that I’ve suggested is an uncanny envisioning of the yet-to-be-created Web. Right now, the evolution of the Web seems to be speeding up, and two recent commentaries, one on the Twitter/Facebook world, and one on Google Book Search, suggest that the Web may be fast growing into the sort of place imagined in Rushdie’s Stream metaphor. I’ve described and excerpted Rushdie’s passage and the two commentaries in other articles, so in this article, I’ll bring the three “streams” together, by excerpting a few lines from each.

Salman Rushdie, describing the Ocean of the Streams of Story:

It was made up of a thousand thousand thousand and one different currents, each one a different color, weaving in and out of one another like a liquid tapestry of breathtaking complexity … it was much more than a storeroom of yarns. It was not dead but alive.

Nova Spivack suggests that the new metaphor for the Web will be the Stream:

Something new is emerging … I call it the Stream … The Web has always been a stream of streams. … with the advent of blogs, feeds, and microblogs, the streamlike nature of the Web is becoming more readily visible.

Peter Brantley, writing on Google Book Search, says:

We stride into a world where books are narratives in long winding rivers; drops of thought misting from the sundering thrust of great waterfalls; and seas from which all rivers and rain coalesce, and which carry our sails to continents not yet imagined.

What an interesting story the Web itself is! — The Stream imagined by Rushdie 19 years ago looks like it might finally be flowing together with Spivack’s Stream and Brantley’s long winding river.

Eric Rumsey is at @ericrumsey

Nova Spivack, in his article Is The Stream What Comes After the Web?, suggests that the new metaphor for the Web will be the Stream. He says that, especially with the advent of Twitter and microblogging, the streamlike nature of the Web has become more apparent:

Just as the Web once emerged on top of the Internet, now something new is emerging on top of the Web: I call this the Stream. … The Stream is what the Web is thinking and doing, right now. It’s our collective stream of consciousness. … Perhaps the best example of the Stream is the rise of Twitter and other microblogging systems including the new Facebook. These services are visibly streamlike — they are literally streams of thinking and conversation.

The Web has always been a stream. In fact it has been a stream of streams. … with the advent of blogs, feeds, and microblogs, the streamlike nature of the Web is becoming more readily visible.

The Web is changing faster than ever, and as this happens, it’s becoming more fluid. Sites no longer change in weeks or days, but in hours, minutes, or even seconds. If we are offline even for a few minutes we may risk falling behind, or even missing something absolutely critical. The transition from a slow Web to a fast-moving Stream is happening quickly. And as this happens we are shifting our attention from the past to the present, and our “now” is getting shorter.

The era of the Web was mostly about the past — pages that were published months, weeks, days or at least hours before we looked for them. … But in the era of the Stream, everything is shifting to the present — we can see new posts as they appear and conversations emerge around them, live, while we watch. … The unit of change is getting more granular. … Our attention is mainly focused on right now: the last few minutes or hours. Anything that was posted before this period of time is “out of sight, out of mind.”

The Web has always been a stream — it has been happening in real-time since it started, but it was slower … Things have also changed qualitatively in recent months. The streamlike aspects of the Web have really moved into the foreground of our mainstream cultural conversation. … And suddenly we’re all finding ourselves glued to various activity streams, microblogging manically … to catch fleeting references to things … as they rapidly flow by and out of view. The Stream has arrived.

Spivack’s vision of the future Web as a Stream resonates with other commentaries, as I’ve discussed in related articles.

Eric Rumsey is at @ericrumsey