Wednesday, July 29, 2009

Chubby Knowledge

For those of you going through Fat Knowledge withdrawal pains, check out Chubby Knowledge. It has the same great taste as Fat Knowledge with half the calories.

Long time reader Rebelfish created it as a "sequel":

Chubby Knowledge is a continuation of the Fat Knowledge blog looking at our planet's and country's most important scientific, technological, and hilarious issues.
I wish Rebelfish the best of luck with the new blog, and may all the knowledge he drops be chubby.


Tuesday, July 07, 2009

That's All Folks

That's it for this blog.

I want to thank all the loyal Fat Knowledge readers out there who took the time to read all the posts that I wrote. Special thanks to all those who took time to leave a thoughtful comment or send an email or link to a post on your own blog.

I never thought I would have written 1,920 posts, or had over 100 RSS readers, or had over 1/3 of a million visitors and over 1/2 a million page views. While those numbers are dwarfed by many other blogs out there, my goal was never to get lots of readers. Instead, the blog was a way to force myself to think through interesting and important issues and share them with others. I was glad if anyone wanted to read along and amazed at how many of you did.

May all your knowledge be fat.


On Web and iPhone, a Tool to Aid Careful Shopping

These days, every skin lotion and dish detergent on store shelves gloats about how green it is. How do shoppers know which are good for them and good for the earth?

Hence GoodGuide, a Web site and iPhone application that lets consumers dig past the package’s marketing spiel by entering a product’s name and discovering its health, environmental and social impacts.

“What we’re trying to do is flip the whole marketing world on its head,” said Mr. O’Rourke. “Instead of companies telling you what to believe, customers are making the statements to the marketers about what they care about.”

The next version of the iPhone will enable people to scan bar codes to get scores, rather than type in the product’s name.
I wrote about GoodGuide before, and I think giving people social and environmental information at the point of purchase (ideally on a mobile phone via a bar-code scanner) is a great idea. I have been writing about this concept since way back in April of '06, and all the posts with the Labeling label are about it.
Although the GoodGuide Web site, which started in September, had only 110,000 unique visitors in April, Mr. O’Rourke is encouraged that it is growing about 25 percent a month. Lately, interest in GoodGuide has begun to extend beyond techies and the Whole Foods crowd to the Wal-Mart crowd, as Mr. O’Rourke put it. One sign of that broader appeal: Apple recently featured the app in its iPhone ads.

GoodGuide’s office, in San Francisco, has 12 full-time and 12 part-time employees, half scientists and half engineers. They have scored 75,000 products with data from nearly 200 sources, including government databases, studies by nonprofits and academics, and the research by scientists on the GoodGuide staff. There are still holes in the data that GoodGuide seeks to fill.

Some companies, including Clorox and SC Johnson, have agreed in recent months to reveal more about the ingredients in their products because of gathering consumer concern. That will enable GoodGuide to fill gaps in its data. Federal law does not require makers of household products to list all ingredients.

This summer, GoodGuide will add a deeper database for users who want more detail by, for example, reading the academic studies on which ratings may be based.
It is going to take a while to catch on, and only a fraction of the population will ever use such an application, but even that will be enough to get manufacturers concerned about their scores and working toward higher ratings.

The more data companies make available, the more accurate the ratings will be. And more detail on how scores are set would be greatly appreciated by those of us who really want to dig in and understand what the scores mean.

The article talks about how the business model is still evolving, but it is run for profit: it currently makes its money from Amazon referral links, and in the future plans to charge companies like Whole Foods that want to put the ratings in their stores. I am curious whether a non-profit model might work better here, with funding from philanthropists and much of the work done by volunteers. Or maybe a Consumer Reports model, where end users pay for a subscription.

via NY Times


Sunday, July 05, 2009

Chris Anderson on the Future of Free

Chris Anderson speaks about his latest book Free: The Future of A Radical Price:

“What it says is that anything that becomes digital will become free — not to say that everything online will be free, but that everything will be available in a free version, so that fundamentally, you’re either going to be competing with free or you’re going to be making a product free and selling something else, because the marginal cost for these products is the same for everybody — which is to say zero.”

Anderson sees many areas of digital content as obeying this law, including music, video, and video games (the big three “shiny disc” industries), news, books, and e-mail. Under Anderson’s model, people will continue to pay good money to save time (that is, those who have more money than time will), lower their risk (such as paying to assure that their Second Life land will still be there, or that their operating system will be supported), because they love something (such as buying virtual items in free videogames), or to increase their status in a community.
I think the topic of digital economics is very interesting and I looked at it in depth earlier in my 8 funding models to support digital goods creation post.

I agree with him about paying to save time (as I wrote a long time ago, time is more valuable than money in the attention economy), but that opens up a big loophole: people will still use iTunes, Netflix and Amazon to purchase media because it is a lot faster than trying to find a free version somewhere. The same logic justifies in-game purchases (it saves time to buy things rather than earn them) and software (better to pay for the standard one everyone knows than learn a new one).

I am not so sure about his other three reasons. I think the Second Life and virtual item purchases show that in a closed system you can still charge for digital goods, as you have complete control over who can get what and how (iPhone apps being another example). And paying for support is not really paying for a digital good, as you are paying for someone's time.
Anderson comes up with the following rules for media companies trying to figure out how to make money online:

1. The best model is a mix of free and paid
2. You can’t charge for an exclusive that will be repeated elsewhere,
3. Don’t charge for the most popular content on your site,
4. Content behind a pay wall should appeal to niches, the narrower the niche the better

This is somewhat counterintuitive, because it means media sites that want to charge for content should charge for their niche stuff instead of their most popular content. But that is exactly the right way to look at it if you want to maximize your advertising revenues. Let the popular content be paid for by advertising, and the niche, exclusive content can be sold to fewer people at a higher price. Anderson, whose last book was The Long Tail, predicts in media: “The head of the curve will be free and the tail of the curve will be paid.”
Interesting take on giving the head away for free and charging for the tail (hmm, that sounds dirty). Hardcore fans, those with a great interest, and those who need the information for work will be willing to pay for additional content that is very specific and isn't commonly available (I am thinking of music artists' blogs, ESPN Insider and WSJ content).

I also think you could charge for earlier access. Give those that subscribe access to content hours or days before you release it for free. Let them see the exclusive material before everyone else gets it.

I think this analysis also misses the fact that the free market might not be the best way to support digital content production. Funding from government, donations and having content created by volunteers might be the better way to go in many cases.

via Wired and TechCrunch


Saturday, July 04, 2009

Satellite Detects Red Glow To Map Global Ocean Plant Health

Researchers from Oregon State University, NASA and other organizations said today that they have succeeded for the first time in measuring the physiology of marine phytoplankton through satellite measurements of its fluorescence – an accomplishment that had been elusive for years.

With this new tool and the continued use of the MODIS Aqua satellite, scientists will now be able to gain a reasonably accurate picture of the ocean's health and productivity about every week, all over the planet.

Data such as this will be critically important in evaluating the effect on oceans of global warming, climate change, desertification and other changes, the researchers said. It will also be a key to determining which areas of the ocean are limited in their productivity by iron deficiency – as this study just showed the Indian Ocean was.

"Until now we've really struggled to make this technology work and give us the information we need," said Michael Behrenfeld, an OSU professor of botany. "The fluorescence measurements allow us to see from outer space the faint red glow of tiny marine plants, all over the world, and tell whether or not they are healthy. That's pretty cool."

To grow, however, these phytoplankton absorb energy from the sun, and then allow some of that energy to escape as red light that is called fluorescence. The new measurements of fluorescence, literally the dim glow that these plants put off, will help complete the understanding of ocean health on a much broader and more frequent basis.

Some surprises are already in.

It was known, for instance, that parts of the equatorial Pacific Ocean, some regions around Antarctica and parts of the sub-Arctic Pacific Ocean below Alaska were limited in production by the poor availability of iron. The newest data, however, show that parts of the northern Indian Ocean during the summer are also iron limited – a phenomenon that had been suggested by some ocean and climate models, but never before confirmed.
Amazing that they can measure the physiology of some of the smallest organisms on the planet from a satellite far above the Earth. This data will help determine where fertilizing the oceans with iron would be most effective.

via ScienceDaily


New Twitter Research: Men Follow Men and Nobody Tweets

We examined the activity of a random sample of 300,000 Twitter users in May 2009 to find out how people are using the service. We then compared our findings to activity on other social networks and online content production venues. Our findings are very surprising.

We found that an average man is almost twice more likely to follow another man than a woman. Similarly, an average woman is 25% more likely to follow a man than a woman. Finally, an average man is 40% more likely to be followed by another man than by a woman. These results cannot be explained by different tweeting activity - both men and women tweet at the same rate.


These results are stunning given what previous research has found in the context of online social networks. On a typical online social network, most of the activity is focused around women - men follow content produced by women they do and do not know, and women follow content produced by women they know. Generally, men receive comparatively little attention from other men or from women. We wonder to what extent this pattern of results arises because men and women find the content produced by other men on Twitter more compelling than on a typical social network, and men find the content produced by women less compelling (because of a lack of photo sharing, detailed biographies, etc.).

Interesting. I wonder how this compares with blogs.

Twitter's usage patterns are also very different from a typical on-line social network. A typical Twitter user contributes very rarely. Among Twitter users, the median number of lifetime tweets per user is one. This translates into over half of Twitter users tweeting less than once every 74 days.


At the same time there is a small contingent of users who are very active. Specifically, the top 10% of prolific Twitter users accounted for over 90% of tweets. On a typical online social network, the top 10% of users account for 30% of all production. To put Twitter in perspective, consider an unlikely analogue - Wikipedia. There, the top 15% of the most prolific editors account for 90% of Wikipedia's edits. In other words, the pattern of contributions on Twitter is more concentrated among the few top users than is the case on Wikipedia, even though Wikipedia is clearly not a communications tool. This implies that Twitter resembles a one-way, one-to-many publishing service more than a two-way, peer-to-peer communication network.


While all follow a long tail, some long tails are longer than others.
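The "top 10% produce over 90%" comparison is easy to check against any set of per-user counts. Here's a quick sketch of how you'd measure it (the toy numbers below are made up for illustration, not from the study):

```python
def top_share(counts, top_frac=0.10):
    """Fraction of total output produced by the most prolific top_frac of users."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

# A toy Twitter-like population: 10 heavy tweeters and 90 near-silent users.
counts = [1000] * 10 + [1] * 90
print(f"top 10% share: {top_share(counts):.0%}")
```

Run it on real per-user tweet counts and you'd get the study's 90% figure; on a flat population where everyone contributes equally, the top 10% produce exactly 10%.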

via Harvard Business Blog (and some more Twitter stats from TechCrunch here)


Interesting Articles of the Week

How the US surplus became a deficit.

Why e-books are stuck in a black-and-white world.

Know Thyself: Tracking Every Facet of Life, 24/7/365.

For first time over 1/2 of energy funding goes to clean energy (which has quadrupled in last 4 years).

Giving up my iPod for a Walkman.


Friday, July 03, 2009

First Hybrid Solar Power Plant Opens in Israel

Concentrating Solar Power (CSP) plants are an amazing, wonderful, renewable energy technology, as long as the sun is shining. However, solar power alone cannot provide on-demand power, especially in the case of off-grid applications. Aora Solar, out of Yavne, Israel, has nearly completed the world's first solar hybrid plant, which will combine concentrated solar power with a hybrid microturbine to generate power 24 hours a day. This technology could help provide off-grid communities the necessary power without having to run miles of costly transmission lines.

Installation of the new hybrid system is less than 10 days away from being completed at the Kibbutz Sammar in Israel. Once up and running on June 24th, the plant will generate 100 kW of on-demand power plus 170 kW of thermal power. The plant consists of 30 heliostats (mirrors) that track the sun and direct its rays up to the 30-meter tall tower, where all the sunlight from the heliostats is concentrated. This concentrated sunlight heats compressed air, which drives an electric turbine. The tower itself is a welcome change from other power towers we have seen in the past - it actually looks good with its tulip flower shape.

The hybrid part of the plant allows for on-demand power due to its inline microturbine. When the sun has set for the day or if it is cloudy, biodiesel, natural gas, or bio fuels can be used to run the microturbine, which then drives the electric turbine. The hybrid system has the capacity to power 70 homes 24/7.

A hybrid system like this has the potential to provide distributed generation or off-grid power to communities, companies or factories. As production of bio fuels becomes more efficient and sustainable, we’re hoping to see more and more hybrid solar concentrating systems.

I like how it has the ability to use other fuels when the sun isn't shining; this solves the intermittency problem of solar power. I wonder how much it cost to build and how much they will charge per kWh.

via Inhabitat and TreeHugger


Sylvia Earle's TED Prize Wish To Protect Our Oceans

Sylvia Earle explains her TED Prize Wish:
To bring knowledge of our oceans to a wide audience and galvanize support in favor of marine protected areas.

We invite a variety of responses from TEDsters in pursuit of this goal:

* Development of technologies that would permit deep sea exploration in order to make the invisible visible
* Supporting (or organizing) expeditions to explore proposed “hope spots”
* Helping make the scientific case for a network of MPAs
* Identifying and exploring candidate MPAs
* Creating a media campaign in support of MPAs
* Backing the upcoming Oceans documentary to ensure wide viewership
I think this is a great wish and hope she is successful in her endeavor. As loyal Fat Knowledge readers know, I am a huge fan of deep sea exploration (mapping the bottom of the ocean is #4 on my list of scientific achievements I am looking forward to) and wish all of NASA's funds would be redirected here. Her Deep Search Foundation is a worthy cause as well.

Aside: It pains me to see that another of the TED prizes went to Jill Tarter, who wants to improve SETI. If there is a bigger waste of brain power in the world than SETI, I don't know what it is. While the chance of finding new strange and fascinating terrestrial life at the bottom of the ocean is almost 100%, these guys are spending their time trying to find ET, which has worse odds than the lottery.

The more MPAs the better. I believe that MPAs allow fishermen to have larger catches as well (I am sure I have read this, but can't find the link right now), which gives them an economic rationale too. The World Database on Protected Areas is a cool Google Maps mashup that shows where all the MPAs in the world are located. Hopefully we can go from 1% of the sea being protected to 5% in the next 50 years.


Deforestation Causes 'Boom-and-bust' Development In The Amazon

Clearing the Amazon rainforest increases Brazilian communities' wealth and quality of life, but these improvements are short-lived, according to new research published today (12 June) in Science. The study, by an international team including researchers at the University of Cambridge and Imperial College London, shows that levels of development revert back to well below national average levels when the loggers and land clearers move on.

Since 2000, 155 thousand square kilometres of rainforest in the Brazilian Amazon have been cut down for timber, burnt, or cleared for agricultural use. Forest clearance rates have averaged more than 1.8 million hectares per year (roughly the area of Kuwait), and the deforestation frontier is advancing into the forest at a rate of more than four football fields every minute.

The researchers' analysis revealed that the quality of local people's lives – measured through levels of income, literacy and longevity – increases quickly during the early stages of deforestation. This is probably because people capitalise on newly available natural resources, including timber, minerals and land for pasture, and higher incomes and new roads lead to improved access to education and medical care, and all round better living conditions.

However, the new results suggest that these improvements are transitory, and the level of development returns to below the national average once the area's natural resources have been exploited and the deforestation frontier expands to virgin land. Quality of life pre- and post-deforestation was substantially lower than the Brazilian national average in both cases, and the two levels were indistinguishable from one another.
This article suggests that greed isn't the problem here so much as shortsightedness: even from a selfish perspective, it is not in anyone's long-term interest to cut down the trees. Instead of using an environmental argument to stop those who would cut down the forest, it would be more effective to show how doing so is not in their own long-term interest.

via ScienceDaily


Thursday, July 02, 2009

Netflix Ratings and The Napoleon Dynamite Problem

Bertoni says it’s partly because of “Napoleon Dynamite,” an indie comedy from 2004 that achieved cult status and went on to become extremely popular on Netflix. It is, Bertoni and others have discovered, maddeningly hard to determine how much people will like it. When Bertoni runs his algorithms on regular hits like “Lethal Weapon” or “Miss Congeniality” and tries to predict how any given Netflix user will rate them, he’s usually within eight-tenths of a star. But with films like “Napoleon Dynamite,” he’s off by an average of 1.2 stars.

The reason, Bertoni says, is that “Napoleon Dynamite” is very weird and very polarizing. It contains a lot of arch, ironic humor, including a famously kooky dance performed by the titular teenage character to help his hapless friend win a student-council election. It’s the type of quirky entertainment that tends to be either loved or despised. The movie has been rated more than two million times in the Netflix database, and the ratings are disproportionately one or five stars.

Worse, close friends who normally share similar film aesthetics often heatedly disagree about whether “Napoleon Dynamite” is a masterpiece or an annoying bit of hipster self-indulgence. When Bertoni saw the movie himself with a group of friends, they argued for hours over it. “Half of them loved it, and half of them hated it,” he told me. “And they couldn’t really say why. It’s just a difficult movie.”

Mathematically speaking, “Napoleon Dynamite” is a very significant problem for the Netflix Prize. Amazingly, Bertoni has deduced that this single movie is causing 15 percent of his remaining error rate; or to put it another way, if Bertoni could anticipate whether you’d like “Napoleon Dynamite” as accurately as he can for other movies, this feat alone would bring him 15 percent of the way to winning the $1 million prize. And while “Napoleon Dynamite” is the worst culprit, it isn’t the only troublemaker. A small subset of other titles have caused almost as much bedevilment among the Netflix Prize competitors. When Bertoni showed me a list of his 25 most-difficult-to-predict movies, I noticed they were all similar in some way to “Napoleon Dynamite” — culturally or politically polarizing and hard to classify, including “I Heart Huckabees,” “Lost in Translation,” “Fahrenheit 9/11,” “The Life Aquatic With Steve Zissou,” “Kill Bill: Volume 1” and “Sideways.”
I wonder if the problem isn't the movies so much as the rating system itself. Instead of a single 1-5 star prediction, maybe they should also include a score for the variance, or list the probabilities that you will rate the movie each number of stars.

When you looked at Napoleon Dynamite, it could say you have a 30% chance of rating it 1 star, 10% for 2 stars, 20% for 3, 10% for 4 and 30% for 5. X-Men, by contrast, might look like 5% 1 star, 20% 2 stars, 50% 3, 20% 4 and 5% 5. While on average you are likely to give both a 3 star rating, you could choose whether to take a gamble on a movie you might love, or stick with seeing one you are likely to find very average.
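To make the idea concrete, here's a quick sketch showing how a per-movie rating distribution separates the two films even though their expected ratings are identical (the percentages are the hypothetical ones I made up above, not real Netflix data):

```python
def mean_and_sd(dist):
    """dist maps a star rating to the probability you'd give that rating."""
    mean = sum(stars * p for stars, p in dist.items())
    var = sum(p * (stars - mean) ** 2 for stars, p in dist.items())
    return mean, var ** 0.5

napoleon = {1: 0.30, 2: 0.10, 3: 0.20, 4: 0.10, 5: 0.30}  # polarizing
xmen     = {1: 0.05, 2: 0.20, 3: 0.50, 4: 0.20, 5: 0.05}  # safely average

for name, dist in [("Napoleon Dynamite", napoleon), ("X-Men", xmen)]:
    mean, sd = mean_and_sd(dist)
    print(f"{name}: expected {mean:.1f} stars, spread of {sd:.2f}")
```

Both come out to an expected 3.0 stars, but Napoleon Dynamite's spread is nearly twice as large, which is exactly the information a single star prediction throws away.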

The article has more interesting information on the Netflix Prize and how people are trying to solve it.

I am also curious how much the recommendations for any one person differ between the various engines. Does one engine do a significantly better job of recommending movies for some people than the others? If so, Netflix should let you choose which engine you want to use, similar to how you could choose which staff member's picks to go with (but think twice before going with a Gene pick over a Vincent).

via NY Times


Nintendo Wii Vitality Sensor

And it looks like Nintendo's answer to Microsoft's Project Natal is... a pulse detector. Yep, Ninty's just announced the Wii Vitality Sensor, a finger sensor which attaches to the Wiimote to read your pulse. Details on how the accessory is going to be used in games are pretty vague, but it appears the idea is to check stress, help you relax, and just generally chill out and be groovy.
I am curious to see how Nintendo integrates this into games. I used the emWave before, and it was kind of cool to see your heart beat show up on the computer screen; the biofeedback made it easier to relax. Other interesting takes on this include the Simmer Down Sprinter, a game by Philips Design, and the StressEraser. But I bet Nintendo will do a much better job of making it fun to use.

I am not a big fan of the wired finger clip though. I hope Nintendo makes this thing wireless and attaches it to your arm or ear instead.

via Engadget


Culture Requires a Dense Population

Mark Thomas and his colleagues at University College, London, suggest that cultural sophistication depends on more than just the evolution of intelligence. It also requires a dense population. If correct, this would explain some puzzling features of the archaeological record that have hitherto been put down to the arbitrary nature of what has survived to the present and what has not.

They are trying to explain the pattern of apparent false-starts to modern human culture. The species is now believed to have emerged 150,000-200,000 years ago in Africa and to have begun spreading to the rest of the world about 60,000 years ago. But signs of modern culture, such as shell beads for necklaces, the use of pigments and delicate, sophisticated tools like bone harpoons, do not appear until 90,000 years ago. They then disappear, before popping up again (and also sometimes disappearing), until they really get going around 35,000 years ago in Europe.

The team drew on an earlier insight that it requires a certain number of people to maintain skills and knowledge in a population. Below this level, random effects can be important. The probability of useful inventions being made is low and if only a few have the skills to fabricate the new inventions, they may die without having passed on their knowledge.

In their model, Dr Thomas and his colleagues divided a simulated world into regions with different densities of human groups. Individuals in these groups had certain “skills”, each with an associated degree of complexity. Such skills could be passed on, more or less faithfully, thus yielding an average level of skills that could vary over time. The groups could also exchange skills.

The model suggested that once more than about 50 groups were in contact with one another, the complexity of skills that could be maintained did not increase as the number of groups increased. Rather, it was population density that turned out to be the key to cultural sophistication. The more people there were, the more exchange there was between groups and the richer the culture of each group became.

Dr Thomas therefore suggests that the reason there is so little sign of culture until 90,000 years ago is that there were not enough people to support it. It is at this point that a couple of places in Africa—one in the southernmost tip of the continent and one in eastern Congo—yield signs of jewellery, art and modern weapons. But then they go away again. That, Dr Thomas suggests, corresponds with a period when human numbers shrank. Climate data provides evidence this shrinkage did happen.

According to Dr Thomas, therefore, culture was not invented once, when people had become clever enough, and then gradually built up into the edifice it is today. Rather, it came and went as the population waxed and waned.
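The article doesn't reproduce Dr Thomas's actual model, but the core mechanism (skills dying out by chance in small populations) is easy to sketch with a toy simulation. Everything below, including the fidelity and group-size numbers, is invented for illustration:

```python
import random

def skill_survives(n, generations=200, fidelity=0.7, seed=0):
    """Toy transmission model: each generation, every learner watches two
    random members of the previous generation; if at least one is a skill
    holder, the learner copies the skill with probability `fidelity`.
    Returns True if anyone still holds the skill at the end."""
    rng = random.Random(seed)
    holders = n  # everyone starts out skilled
    for _ in range(generations):
        new = 0
        for _ in range(n):
            teacher_found = any(rng.random() < holders / n for _ in range(2))
            if teacher_found and rng.random() < fidelity:
                new += 1
        holders = new
        if holders == 0:
            return False  # the skill is lost for good
    return True

small = sum(skill_survives(5, seed=s) for s in range(30))
large = sum(skill_survives(50, seed=s) for s in range(30))
print(f"skill survived in {small}/30 small groups vs {large}/30 large groups")
```

With these made-up parameters the skill almost always persists in groups of 50 but is usually lost by chance in groups of 5, which is the random-drift effect the researchers describe.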
via The Economist


Wednesday, July 01, 2009

The Ecological Disaster That is Dolphin Safe Tuna

Is Dolphin Safe Tuna really better for the environment? Southern Fried Science makes the case that switching from following dolphins to using floating objects to catch tuna is actually worse overall for sea life.

Let’s compare the bycatch rates of floating object associated tuna and dolphin associated tuna.
“Ten thousand sets of purse seine nets around immature tuna swimming under logs and other debris will cause the deaths of 25 dolphins; 130 million small tunas; 513,870 mahi mahi; 139,580 sharks; 118,660 wahoo; 30,050 rainbow runners; 12,680 other small fish; 6540 billfish; 2980 yellowtail; 200 other large fish; 1020 sea turtles; and 50 triggerfish.”
“Ten thousand sets of purse seine nets around mature yellowfin swimming in association with dolphins, will cause the deaths of 4000 dolphins (0.04 percent of a population that replenishes itself at the rate of two to six percent per year); 70,000 small tunas; 100 mahi mahi; 3 other small fish; 520 billfish; 30 other large fish; and 100 sea turtles. No sharks, no wahoo, no rainbow runners, no yellowtail, and no triggerfish and dramatic reductions in all other species but dolphins.”
In other words… the only species that “dolphin safe” tuna is good for is dolphins! The bycatch rate for EVERY OTHER species is lower when fishing dolphin-associated tuna vs. floating object associated tuna! The reason for this is obvious: floating objects attract everything nearby, while dolphins following tuna doesn't attract any other species.

If you work out the math on this (and you don't have to, because the Environmental Justice Foundation did), you find that 1 dolphin saved costs 382 mahi-mahi, 188 wahoo, 82 yellowtail and other large fish, 27 sharks, and almost 1,200 small fish.

By trying to help dolphins, groups like Greenpeace caused one of the worst marine ecological disasters of all time. Few other fisheries are as bad for groups like sharks and sea turtles as the purse seine fishery, and none are as large in scale.
More information in the post about how the types of fishing actually work.

Unless you have a great love for dolphins over other types of sea life, following dolphins to catch tuna is the preferable way to go. Of course, this also calls out the need for a replacement for floating-object fishing. Maybe autonomous robotic submarines could find the tuna? Or high-powered satellites? Or GPS tagging? The other solution is to become a Sardinista and switch from tuna to smaller fish that can be caught with less bycatch.
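A rough back-of-the-envelope version of that math, using only the per-10,000-set figures quoted above (the EJF's published per-dolphin numbers come out differently, presumably because they adjust for things like tuna yield per set, so treat this as illustrative):

```python
# Bycatch per 10,000 purse seine sets, from the figures quoted above.
log_sets     = {"dolphins": 25,    "mahi mahi": 513_870, "sharks": 139_580,
                "wahoo": 118_660,  "sea turtles": 1_020}
dolphin_sets = {"dolphins": 4_000, "mahi mahi": 100, "sharks": 0,
                "wahoo": 0,        "sea turtles": 100}

dolphins_saved = dolphin_sets["dolphins"] - log_sets["dolphins"]  # 3,975
for species in log_sets:
    if species == "dolphins":
        continue
    extra = log_sets[species] - dolphin_sets[species]
    print(f"{species}: {extra / dolphins_saved:.1f} extra deaths per dolphin saved")
```

Even this crude per-set version makes the trade-off plain: every dolphin spared by log fishing costs on the order of a hundred mahi mahi and dozens of sharks and wahoo.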

This blog's whole “dolphins are actually jerks” section is great reading as well.


Wearable Patch Will Count Calories Burned And Consumed

It could be a dieter's best friend or worst nightmare: technology that knows how much a person has just eaten, knows how many calories he has burned off, offers suggestions for improving resolve and success, and never lets him cheat. And it's all done by a small, stick-on monitor no bigger than a large Band-Aid.

The calorie monitor, which is being developed by biotech incubator PhiloMetron, uses a combination of sensors, electrodes, and accelerometers that--together with a unique algorithm--measure the number of calories eaten, the number of calories burned, and the net gain or loss over a 24-hour period. The patch sends this data via a Bluetooth wireless connection to a dieter's cell phone, where an application tracks the totals and provides support.

PhiloMetron won't yet reveal exactly what makes its patch tick, but the company says that it consists of a single chip surrounded by numerous sensors, electrodes, and accelerometers, embedded in a foam adhesive patch. The system, which is designed to be replaced once a week, measures a variety of things (temperature, heart rate, respiratory rate, skin conductivity, possibly even the amount of fluid in the body), then throws the data into an algorithm to calculate the number of calories consumed, the number burned, and the net yield. Caloric-intake measurements are accurate only to about 500 calories--about two Snickers candy bars. But PhiloMetron CEO Darrel Drinan says that it is much more accurate in determining net gain or loss and is most useful for measuring trends over the course of a week or a month. In fact, the system only provides users with rolling 24-hour totals and no instantaneous data.
Cool concept. I am curious how it determines the number of calories eaten. The accuracy of plus or minus 500 calories doesn't seem too good, but hopefully that can be improved in future releases.

Now that smart phones with internet access are becoming commonplace, the next big wave in mobile devices will be to interact with the body, or what I call "The Human APIs". This will benefit those with chronic diseases such as diabetes, but also healthy people who are looking for ways to improve their health, fitness and concentration even more.
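The rolling 24-hour tally the article describes could be sketched as follows. This is a minimal illustration, not PhiloMetron's actual algorithm (which is proprietary); the sensor readings fed to `record` are hypothetical stand-ins.

```python
from collections import deque
from datetime import datetime, timedelta

class CalorieTracker:
    """Keep a rolling 24-hour net calorie total from periodic estimates."""

    def __init__(self, window=timedelta(hours=24)):
        self.window = window
        self.events = deque()   # (timestamp, calories_in, calories_out)

    def record(self, ts, calories_in, calories_out):
        self.events.append((ts, calories_in, calories_out))

    def net_24h(self, now):
        # Drop readings older than the 24-hour window, then total the rest.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        return sum(i - o for _, i, o in self.events)

tracker = CalorieTracker()
now = datetime(2009, 6, 1, 20, 0)
tracker.record(now - timedelta(hours=30), 800, 200)   # ages out of window
tracker.record(now - timedelta(hours=10), 600, 350)
tracker.record(now - timedelta(hours=1), 500, 250)
print(tracker.net_24h(now))   # 500: only the last two readings count
```

This mirrors the article's point that the patch reports only rolling 24-hour totals, never instantaneous data.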

via Technology Review via FuturePundit


IBM Invests in Lithium-Air Batteries

IBM Research is beginning an ambitious project that it hopes will lead to the commercialization of batteries that store 10 times as much energy as today's within the next five years. The company will partner with U.S. national labs to develop a promising but controversial technology that uses energy-dense but highly flammable lithium metal to react with oxygen in the air. The payoff, says the company, will be a lightweight, powerful, and rechargeable battery for the electrical grid and the electrification of transportation.

Lithium metal-air batteries can store a tremendous amount of energy--in theory, more than 5,000 watt-hours per kilogram. That's more than ten times as much as today's high-performance lithium-ion batteries, and more than another class of energy-storage devices: fuel cells. Instead of containing a second reactant inside the cell, these batteries react with oxygen in the air that's pulled in as needed, making them lightweight and compact.

"With all foreseeable developments, lithium-ion batteries are only going to get about two times better than they are today," he says. "To really make an impact on transportation and on the grid, you need higher energy density than that." One of the project's goals, says Narayan, is a lightweight 500-mile battery for a family car. The Chevy Volt can go 40 miles before using the gas tank, and Tesla Motors' Model S line can travel up to 300 miles without a recharge.
10 times the energy in the next 5 years sounds good to me. Best of luck to them.
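The 500-mile goal can be put in rough numbers. The figures below are my assumptions, not IBM's: a family car using about 300 Wh per mile, today's lithium-ion packs at roughly 150 Wh/kg, and lithium-air delivering the promised 10x.

```python
# Back-of-the-envelope weight for a 500-mile battery pack.
wh_per_mile = 300                    # assumed consumption for a family car
pack_wh = 500 * wh_per_mile          # 150 kWh needed for 500 miles

liion_wh_per_kg = 150                # assumed for today's li-ion packs
liair_wh_per_kg = 10 * liion_wh_per_kg

print(pack_wh / liion_wh_per_kg)     # ~1000 kg of today's cells
print(pack_wh / liair_wh_per_kg)     # ~100 kg at 10x the energy density
```

At a ton of cells the pack would weigh nearly as much as the rest of the car, which is why the 10x density matters so much for a 500-mile range.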

In related news:
The Cleantech Group’s numbers show an uptick in venture-capital funding for batteries in the first quarter, even as overall US venture investments fell to the lowest level since 1997, according to the National Venture Capital Association.

Spurred by federal cash, electric cars, and demand for ever more powerful gadgets, investment in advanced batteries has bucked the recessionary slump and, energy analysts say, could help the economy recover.

Cleantech tracked $94 million in advanced-battery investments in the previous quarter, up substantially from a recession-affected $29 million in the last quarter of 2008 and up slightly from $90 million in the first quarter of that year.
The more battery research the better.

via Technology Review


Global Warming Skeptics Responsible for Collapse of Economy

I just finished reading Green Hell, and the obvious conclusion from it is that global warming skeptics are responsible for the current collapse of the economy. Why, you ask?

First, the highest priority of global warming skeptics like Steven Milloy is to keep the economy strong. Their biggest concern is that greens will enact global warming and other environmental regulations that will "destroy the economy".

Aside: Steven Milloy not only missed what really destroys economies, he missed what destroys companies as well. In his book, he singles out Ford's CEO and the former CEO of Goldman Sachs as greens who harmed their shareholders with their beliefs. Yes, if only these CEOs had acted more like their competitors at Chrysler, GM, Bear Stearns and Lehman Brothers, their shareholders would be much better off. Oh, wait...

Second, they pride themselves on being able to find flaws in climate models.

Why then are they responsible for the current destruction of the economy? Because the same skills that can find flaws in climate models can also find flaws in financial models. Had they focused on this hockey stick graph of housing prices relative to household income,

rather than this hockey stick graph of rising temperatures,

they could have warned us about the housing bubble and stopped it from destroying the economy. Instead of focusing on real threats to the economy, they spent their time fighting against hypothetical environmental regulations that were never passed, and in so doing they allowed our economy to crash.


Tuesday, June 30, 2009

People are Altruistic Because They Are Militaristic

From a Darwinist perspective, altruism is hard to explain. The more selfish someone is, the more likely they are to have their genes passed on to the next generation. Within a few generations altruistic tendencies should be lost. And yet they exist. How can this be explained?

Samuel Bowles of the Santa Fe Institute in New Mexico believes altruism can be explained by war.

To gather his data, Dr Bowles trawled through ethnographic and archaeological evidence about warfare between groups of hunter-gatherers. This is rarely war in the modern sense of planned campaigns. It is more a matter of raids, ambushes and fights between groups who have met accidentally. It is, nevertheless, quite lethal. Dr Bowles identified eight ethnographic and 15 archaeological studies that met his criteria of reliability and abundance of data. They suggest that 12-16% of mortality is the result of such low-level warfare. This is a figure much higher than, for example, the mortality caused in Europe by two world wars, and is certainly enough to drive evolution. But the question remained of whether it could drive group selection.

It was to test that idea that Dr Bowles devised his model. Although it pitches group against group, it is strictly based on the idea of selfish genes. It looks at the benefit to a notional gene that promotes self-sacrifice. The question is, does such a gene do well if individuals having it belong to a group that takes over the territory and resources of a similar, neighbouring group, but at the risk of some of those individuals losing their life in the process? What is the maximum self-sacrificial cost that can evolve in these circumstances?

In the absence of war, a gene imposing a self-sacrificial cost of as little as 3% in forgone reproduction would drop from 90% to 10% of the population in 150 generations. Dr Bowles’s model, however, predicts that much higher levels of self-sacrifice—up to 13% in one case—could be sustained if warfare were brought into the equation. This, he contends, allows the evolution of collaborative, altruistic traits that would not otherwise be possible. Moreover, although warfare is an extreme example, other, less martial forms of self sacrifice may have similar group-strengthening virtues.
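The baseline claim, that a 3% reproductive cost drops an allele from 90% to 10% of the population in 150 generations, is easy to check with a minimal haploid selection sketch (my simplification, not Dr Bowles's actual model, which adds warfare and group structure):

```python
# Frequency of a self-sacrificing allele under simple selection:
# carriers reproduce at rate (1 - cost) relative to non-carriers,
# with no warfare or group benefit to offset the cost.
cost = 0.03          # 3% forgone reproduction
p = 0.90             # starting allele frequency

for _ in range(150):                                  # 150 generations
    p = p * (1 - cost) / (p * (1 - cost) + (1 - p))

print(round(p, 3))   # ~0.085: the allele collapses toward 10%, as quoted
```

The warfare model's point is that group-versus-group competition can reverse this decline for costs as high as 13%.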
via The Economist


Education and Health Care Spending

Amazing how much more is spent on higher education and health care in the US than in the rest of the world. The gap between the US and other countries would be even larger if shown as spending per person, as GDP per capita is larger in the US than in the other countries.

At least with higher education spending the US has created the highest quality system in the world. With the health care system it seems like much of that extra spending is just wasted. But, spending on higher education is getting out of hand and needs to be reined in as well.

Also interesting how over half of the spending on higher education and health care comes from private sources in the US, while in the rest of the world the government picks up the majority of the tab.

via The Economist and The Economist


I Take Back My Reznor Being a Genius Comment

The two goals of an artist releasing an album (or any other digital good) should be to maximize the amount of money made and to maximize the amount of people that can listen to it (see previous Radiohead analysis). I propose a 2 part system to accomplish these goals:

Part 1: Give away a free "basic" version, trying to maximize the amount of people who can have access to it.
Part 2: Auction off a limited quantity "special edition" version, trying to maximize the amount of money.

This system would allow a limited number of rich people (or poor hardcore fans) to purchase the special edition as a status symbol which in turn supports the artist and allows everyone else to get access to the music for free.

A while back I had called Trent Reznor a genius for the way he released his Ghosts album, giving away the .mp3s for free but charging for a limited edition box set. I am now taking back my praise because, while I think the free version maximized the number of people who could listen to it, I think he left money on the table by not auctioning off the limited edition on eBay.

The one part I haven't figured out is how many copies should be put up for bid. Producing the good is a trivial cost, so you are just trying to maximize the total amount of revenue that you take in. This gets into the interesting concept of "virtual scarcity", where something derives its value from being exclusive, and more money can be made by artificially restricting production.

If you sold just one copy, would it be so exclusive that someone would pay more than double what you could get for two copies? Or would 10 copies be better? Or one million? In economic jargon this is the elasticity of demand. But, I have no idea in practice what that demand curve looks like or how you could determine it beforehand.

The elasticity of demand is being tested out with Apple's new pricing scheme:

These are the results labels were hoping for when Apple relented and began selling music at three price tiers: 69 cents, 99 cents and $1.29. While variable pricing made sales volume decline, higher prices compensate for that to create more revenue.

Sales of the weekly top 40 tracks -- most of which now have the higher wholesale rate -- fell about 11% in the six weeks after the launch of variable pricing. But retailer revenue from those tracks rose about 10% after the price hike. That means labels took in 20% more revenue for those songs.
While the higher prices leading to more revenue is a good thing, fewer people getting the music is a bad thing. That means artists have to choose between being rich and being popular. With the system proposed above, you get the best of both worlds.
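The iTunes numbers quoted above imply an inelastic demand curve, which is why revenue rose as volume fell. A rough check, assuming (my simplification) the repriced tracks went from $0.99 to $1.29:

```python
# Rough elasticity check on the iTunes price change.
old_price, new_price = 0.99, 1.29
volume_change = -0.11                        # sales volume fell ~11%
price_change = new_price / old_price - 1     # ~ +30%

# Simple price elasticity of demand: %change in quantity / %change in price
elasticity = volume_change / price_change
print(round(elasticity, 2))    # -0.36: well inside the inelastic range

# Revenue ratio if every track had taken the full price increase
revenue_ratio = (1 + volume_change) * (1 + price_change)
print(round(revenue_ratio, 2)) # 1.16; the observed ~10% rise is lower,
                               # since not every top-40 track was repriced
```

With elasticity well above -1, raising prices trades a little volume for more revenue, exactly the tradeoff between rich and popular.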


Cap and Trade Bill Passes House, Carbon Tax Would Be Better

The House passed the American Clean Energy and Security Act 219 to 212. While this bill included a lot, the primary piece was setting up a cap and trade system for carbon dioxide emissions. It aims to reduce emissions to 17% below the 2005 level by 2020 and by 80% by 2050.

While I like the target of a 17% reduction in 2020, I think a carbon tax would be preferable to this cap and trade system for three reasons.

First, a carbon tax would shift from taxing work to taxing fossil fuels and energy. The carbon tax would allow for a lowering of the payroll tax. If set at a level of $20 a ton of CO2, this would bring in approximately $120 billion a year. This revenue could also be generated in a cap and trade system by auctioning off permits but that is not what has happened:

On May 15th Henry Waxman and Edward Markey, the Democratic point-men on climate change in the House of Representatives, unveiled a bill that would give away 85% of carbon permits for nothing, with only 15% being auctioned.

First, it generates no money, thereby royally messing up Mr Obama’s budget. Second, it means that the permits go not to those who value them most (as in an auction) but to those whom the government favours. Under Waxman-Markey, electricity-distributors would get the largest share, with the rest divided between energy-intensive manufacturers, carmakers, natural-gas distributors, states with renewable-energy programmes and so on. Oil firms, with only 2% of the permits, feel hard done by.

The grand handout to shareholders is meant to last until around 2030, by which time all permits will be auctioned.
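The $120 billion revenue estimate in the first point follows from simple arithmetic, assuming (my figure, not the post's) US emissions of roughly 6 billion metric tons of CO2 a year, the approximate mid-2000s level:

```python
# Revenue from a $20-per-ton carbon tax on approximate US emissions.
tax_per_ton = 20                     # dollars per ton of CO2
us_emissions_tons = 6e9              # ~6 billion metric tons of CO2 a year

revenue = tax_per_ton * us_emissions_tons
print(f"${revenue / 1e9:.0f} billion per year")   # $120 billion per year
```

That is the pot of money that could offset a payroll tax cut, and that giving away 85% of the permits leaves on the table.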
Second, a carbon tax sets a fixed price on carbon, while the cap and trade system will have a variable price determined by the carbon market. The fixed price makes it easier for businesses to plan. The carbon price is likely to vary wildly in the market and be at the mercy of the same sort of financial issues we have seen in the stock and housing markets recently.

One supposed advantage of the cap and trade system is that it sets a limit on overall emissions, while the emissions under a carbon tax will vary. But, the current plan has ceilings and floors built in, so that if it gets too expensive to cut emissions, the cap will rise.

Third, the current cap and trade system is very complex. This explanation of how allocations to regulated utilities work made my head spin. Yes, it is true that the IRS tax code is extremely complicated as well, and many of the exemptions in the cap and trade system would appear in a carbon tax too. But, a carbon tax is still a lot easier to explain to people than this system, and in practice the tax would be similar to the federal gasoline tax, which doesn't have as many exemptions. More benefits of the carbon tax over cap and trade can be found at the Carbon Tax Center.

Overall then, would I like to see this pass or would it be better to try for a carbon tax in a year or two?

My guess is that in two years the US will be out of the recession and will need to focus on reducing large budget deficits. Adding a carbon tax at that point would be easier to do, as new revenue will need to come from somewhere. While there is talk from environmentalists that a carbon tax is not politically possible, I am not convinced, given that Canada has been able to pass one and that opponents of the current bill are already labeling it a "tax on everything" (apparently conservatives don't realize that the current income tax is already a tax on everything).

I think the caps from 2020 to 2050 are irrelevant as they will be rewritten later. If it is too expensive, or if a new administration has other priorities, the caps will rise. I am concerned that the 2020 goal will be "hit" but that shenanigans in the way exemptions and offsets are handled will mean the reductions don't amount to much at all. But, if the targets are hit without too much gaming of the system, I think it is a great accomplishment.

While ambivalent, I would like to see it pass, as I will take this bill with its warts over what might be possible in a couple of years. But, once the cap and trade system is passed, a carbon tax is unlikely, and a great opportunity to shift from taxing work to taxing carbon emissions will have been lost.


Customer Service? Ask a Volunteer

HERE’S the job description: You spend a few hours a day, up to 20 a week, at your computer, supplying answers online to customer questions about technical matters like how to set up an Internet home network or how to program a new high-definition television.

The pay: $0.

A shabby form of exploitation? Not to Justin McMurry of Keller, Tex., who spends about that amount of time helping customers of Verizon’s high-speed fiber optic Internet, television and telephone service, which the company is gradually rolling out across the country.

Mr. McMurry is part of an emerging corps of Web-savvy helpers that large corporations, start-up companies and venture capitalists are betting will transform the field of customer service.
I wrote previously about the Hybrid Economy and how Digg was like a for-profit non-profit organization as most of the work on their site was done by volunteers. Looks like this business model is now moving to customer service.

What motivates these volunteers?
Such enthusiasts are known as lead users, or super-users, and their role in contributing innovations to product development and improvement — often selflessly — has been closely researched in recent years. These unpaid contributors, it seems, are motivated mainly by a payoff in enjoyment and respect among their peers.

The mentality of super-users in online customer-service communities is similar to that of devout gamers, according to Mr. Fong. Lithium’s customer service sites for companies, for example, offer elaborate rating systems for contributors, with ranks, badges and “kudos counts.”

“That alone is addictive,” Mr. Fong said. “They are revered by their peers.”
Being able to create an environment where people will work for free will be the key to success for many companies.
Natalie L. Petouhoff, an analyst at Forrester Research, said that online user groups conform to what she calls the 1-9-90 rule. About 1 percent of those in the community, she explained, are super-users who supply most of the best answers and commentary. An additional 9 percent are “responders” who mainly reply and rate Web posts, she said, and the other 90 percent are “readers” who primarily peruse and search the Web site for useful information.

“The 90 percent will come,” Ms. Petouhoff said, “if you have the 1 percent.”
Another long tail, and similar to what happens at Wikipedia.

via NY Times


Monday, June 29, 2009

Island Nation to Produce 5X its Energy Needs with Geothermal Energy

The tiny two-island Caribbean nation of Saint Kitts and Nevis recently discovered several large geothermal reservoirs that will allow it to produce an estimated 50 megawatts (MW) of clean energy. With a need of only 10 MW, Saint Kitts and Nevis is poised to become one of the most carbon-neutral nations in the world.

In addition to becoming virtually carbon-neutral, Saint Kitts and Nevis plans to export the excess geothermal energy it produces. This economic boost, combined with the construction of a new 2,500-acre beach resort on Saint Kitts, has made the small 40,000-person nation hopeful for a greatly improved future and higher quality of life.

Formal exploration for geothermal resources began in 2007 after the government granted the West Indies Power Company the right to drill and develop facilities (this explains the recent "discovery"). Construction on the first plant, known as the Spring Hill facility, began earlier this year. It will initially produce 10 MW of electrical power using two turbines. It is hoped that the plant will be operational by this time next year, and that the facility can be upgraded soon to expand its capacity by an estimated 40 MW. What's amazing is that this 50 MW is only a portion of the geothermal potential thought to exist on Nevis. It's generally agreed by experts that more than 200 MW could eventually be produced.

The project is gaining support throughout the region. The Caribbean Community (CARICOM) recently provided a $38,000 grant to aid with technical assistance on the geothermal project, with the goal of helping to develop alternative energy resources that mitigate climate change effects. The World Bank has also shown interest in the project's importance to the region.
Geothermal makes a lot of sense for the Caribbean islands with their high levels of volcanic activity and high costs for importing energy.

I spent a couple of months on St. Kitts and quite enjoyed myself (minus one unfortunate crustacean manslaughter incident). Glad to see that they are getting in the news for something other than alcoholic monkeys.

via celsias


World’s Largest Solar Project Planned for Saharan Desert

If just 0.3% of the Saharan Desert were used for a concentrating solar plant, it would produce enough power to provide all of Europe with clean renewable energy. That is why 20 blue chip German companies are gathering together next month to discuss plans and investments to create such a massive project. Both the meeting and the project are being promoted by the Desertec Foundation, which is proposing to erect 100 GW of concentrating solar power plants throughout Northern Africa.
The red squares in the above map represent the land area necessary to meet the energy demand of the world, the EU and MENA in 2005. The last square represents the land necessary for the proposed project to generate 100 GW of concentrating solar power.

The project being proposed by Desertec would not all be situated in one location, but scattered throughout politically stable countries. Taken as a whole, the project qualifies as the world's largest solar installation - 80 times larger than the PG&E and BrightSource project planned for the Mojave Desert. The power generated would be transported over high-voltage DC lines across the Mediterranean Sea to Europe, where it would supply 15% of the energy demand. The project is still 10-15 years from going online, but that's why major players are getting started now. To build the 100 GW of solar power, a total of €400bn in investment is needed.

The project hopes to combine desalination plants and agriculture along with the solar plants to provide fresh drinking water and grow crops in arid desert region. Concentrated solar power will provide energy and waste heat to create freshwater from seawater. Some of that water would then be used to irrigate nearby crops, while the rest would supply fresh drinking water to local populations. This concept is very similar to the Sahara Forest Project, which we explored last year.
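A quick implied-cost check on the quoted figures (my arithmetic, not the article's):

```python
# Implied capital cost of the Desertec proposal: 400 billion euros for 100 GW.
total_cost_eur = 400e9
capacity_w = 100e9                   # 100 GW of concentrating solar

print(total_cost_eur / capacity_w)   # 4.0 euros per watt of capacity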
via Inhabitat


Technology Quarterly

The Economist's Technology Quarterly is out, and as always has many interesting articles.

My favorites:
Solar-thermal technology
Powering hybrid cars with compressed air
Building the smart grid


Sunday, June 28, 2009

The Winner of the $1 Million Netflix Prize

After nearly three years and entries from more than 50,000 contestants, a multinational team says that it has met the requirements to win the million-dollar Netflix Prize: It developed powerful algorithms that improve the movie recommendations made by Netflix’s existing software by more than 10 percent.

On Friday, a coalition of four teams calling itself BellKor’s Pragmatic Chaos — made up of statisticians, machine learning experts and computer engineers from America, Austria, Canada and Israel — declared that it has produced a program that improves the accuracy of the predictions by 10.05 percent.

Under the rules of the contest, Netflix said that other contestants now have 30 days to try to do even better. If they cannot, BellKor’s Pragmatic Chaos will collect the $1 million.

BellKor’s Pragmatic Chaos is a pretty elite crowd. The group is a collection of the 2007 and 2008 winners of the Netflix Progress Prizes — $50,000 a year for the teams that made the most progress toward the 10 percent improvement — and a pair of engineers from Montreal who have long been near the top of the contest’s leaderboard.

The team includes Bob Bell and Chris Volinsky of the statistics research department at AT&T Research (members of the 2007 and 2008 Progress Prize-winning teams); Andreas Toscher and Michael Jahrer, machine learning experts at Commendo research and consulting in Austria (members of the 2008 winning team); Martin Piotte and Martin Chabbert, engineers and founders of Pragmatic Theory in Montreal; and Yehuda Koren, a senior scientist at Yahoo Research in Israel (a member of the 2007 and 2008 winning teams).
Congrats to the winners. I hope other companies will adopt this contest method of innovation as well.
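The 10 percent target was measured in root-mean-squared error (RMSE) on a held-out set of ratings. Assuming the widely reported RMSE of 0.9514 for Netflix's own Cinematch baseline (my figure, not from the article):

```python
# What a 10.05% improvement over Cinematch means in RMSE terms.
baseline_rmse = 0.9514               # assumed Cinematch baseline
improvement = 0.1005                 # BellKor's Pragmatic Chaos's result

winning_rmse = baseline_rmse * (1 - improvement)
print(round(winning_rmse, 4))        # 0.8558
```

On a 1-to-5 star scale, that means the winning algorithm's predictions miss the true rating by about 0.86 stars on average, versus about 0.95 for Cinematch.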

via Bits


Interesting Articles of the Week

Microsoft debuts power conservation website.

The patient capitalist.

Can you get fit in six minutes a week?

Why do Chinese save? Boys want to marry.

Meditation may increase gray matter.


Exercise Ball Backflip

via YouTube


Immersion Demos New TouchSense Multitouch, Haptic Keyboard

Immersion (known for creative input experiences) demoed a fairly interesting new haptic experiment it's working on dubbed TouchSense -- a virtual, iPhone-like keyboard that not only responds with sound and vibration, but some kind of feedback that recreates the feeling of actually moving your fingers across a keyboard. Details were scarce on the technology used, but during the demo at D7 the company showed off multitouch typing, and a new form of feedback which seems to create the sensation that there is a physical keyboard beneath your fingers. The functionality sounds eerily similar to the Haptikos technology that Nokia showed off way back in 2007.
I wonder how well this works. While the debate currently rages as to whether the Palm Pre with its physical keyboard is better than the iPhone with its virtual one, I can't wait for the day when haptic feedback technology like this makes typing on a touch screen feel like a physical keyboard (or at least lets you type just as fast) and ends the debate once and for all.

via Engadget


Is GM Even An American Company?

It has 463 subsidiaries and employs 234,500 people, 91,000 of them in America, where it also provides health-care and pension benefits for 493,000 retired workers.

For all his sometimes plodding approach at home, Mr Wagoner had proved surprisingly fleet of foot abroad, where GM was making 65% of its sales. GM had long been big in Latin America, but in China and Russia it was reaping the rewards from being among the first foreign firms to set up factories. In China, with its joint-venture partner, SAIC, GM now has 12% of a market that will soon surpass America’s.
Over 1/2 of GM's customers and employees are outside the US. With numbers like those, it makes you wonder what exactly makes GM an American company.

Maybe it is the owners. I wonder what percentage of GM was owned by Americans before the bankruptcy?


Saturday, June 27, 2009

They Can Fly?

This flying stingray was trying to avoid the attentions of the aptly-named killer whale, which was ready to take a bite out of the fish when the stingray made its leap for safety.

While stingrays seem most content to spend their days lying at the bottom of the sea-bed, occasionally sticking their stingers into unassuming human feet, this one proved they can be moved to flights of fancy when needed.
via Luciole Press Blog


Cooper's Law

While leading this smart-antenna company, where he is now the chairman, Mr Cooper coined Cooper’s law, which notes that spectral efficiency—the amount of information that can be crammed into a given slice of radio spectrum—has doubled every 30 months since Guglielmo Marconi patented the wireless telegraph in 1897. Modern devices have a spectral efficiency more than one trillion times greater than Marconi’s original device did 112 years ago (it broadcast in Morse code over a very wide frequency range). Smart antennas, Mr Cooper believes, will help to ensure that this progress continues, and his law continues to hold.
Not quite Moore's Law's doubling time of 24 months, but quite impressive, and given the importance of mobile phones and technology at this point in history, probably more important.
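The "more than one trillion times" claim checks out under Cooper's law:

```python
# Cooper's law: spectral efficiency doubles every 30 months.
# Cumulative gain over the 112 years since Marconi's 1897 patent.
months = 112 * 12
doublings = months / 30          # 44.8 doublings
gain = 2 ** doublings

print(f"{gain:.2e}")             # ~3e13, comfortably over one trillion
```

Nearly 45 doublings compound to roughly thirty trillion, so "more than one trillion times" is, if anything, conservative.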

via The Economist


First Solar Touts Falling Costs

In February, First Solar tooted its horn about breaking the $1-per-watt barrier for making solar modules in the last months of 2008. This week, the company said costs had fallen again to 93 cents per watt, down 5% in three months and down 28% in a year. (The full presentation is here.)

First Solar executives also say to expect more falling costs. The company expects the cost per watt of making solar modules to fall to between 52 and 63 cents by 2014. The biggest driver of the lower costs is better efficiency, it said. Production per fabrication line is expected to nearly double over the next five years.
Good to hear that prices are coming down, and I hope they can hit their future goals.
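The quoted percentage declines are consistent with each other; working backwards from the 93-cent figure:

```python
# Sanity-checking the quoted cost declines against the 93-cent figure.
current = 0.93
three_months_ago = current / (1 - 0.05)   # undo the 5% quarterly decline
one_year_ago = current / (1 - 0.28)       # undo the 28% annual decline

print(round(three_months_ago, 2))   # 0.98, just under the $1 barrier
print(round(one_year_ago, 2))       # 1.29
```

So costs went from about $1.29 a watt a year ago, through the sub-$1 milestone late in 2008, to 93 cents now.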

And one for the "I did not know that" file:
The Walton family of Wal-Mart fame owns about 39% of First Solar stock – and the retailer’s legendary penchant for driving down costs is rubbing off on the renewable-energy company.
via WSJ


Friday, June 26, 2009

Prisoners vs. Farmers Revisited

Are there more prisoners than farmers in the US?

When I previously looked into it I found that:

The prison population of 2.1 million is larger than the EPA's number of 960,000 persons claiming farming as their principal occupation, the BLS's number of nearly 1.3 million farmers, ranchers, and agricultural managers, or the EPA's number of 1.9 million primary and secondary occupation farmers.
This USDA report gives another way to look at this. It reports the total amount of labor performed on farms by operators, spouses, unpaid workers, hired workers and contract workers. In 2004 each of the 2.1 million farms had an average of 1.59 annual person equivalents of labor (2,000 hours per person) for a total of 3.2 million workers.

But, much of that work is done by people that wouldn't consider themselves farmers. A better estimate of farmers would look at the number of hours worked by just operators, hired workers and contract workers excluding retirement and lifestyle farms. Work on farms classified as retirement or lifestyle (1.2 of 2.1 million farms) accounts for 900,000 workers. Work by spouses (who likely have a job off farm and don't see themselves primarily as farmers) accounts for 12.4% of all work or 400,000 workers. The amount of labor performed by unpaid workers isn't specified, but contract and unpaid workers together comprise 16% of all work, and if 1/3 of that is from unpaid workers (the other 2/3 from contract workers) that would be 175,000 workers. Removing all retirement and lifestyle farm work, as well as all work from spouses and unpaid workers leaves 1.9 million farmers.
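The subtraction above can be reproduced in a few lines. Note that a straight subtraction over-counts the removals, since spouse and unpaid hours also occur on the retirement and lifestyle farms being removed, which is presumably why the estimate lands nearer 1.9 million than the naive figure:

```python
# Reproducing the farm-labor subtraction (all figures from the post).
total_workers = 3.2e6          # all farm labor, in person-equivalents
retirement_lifestyle = 0.9e6   # work on retirement and lifestyle farms
spouses = 0.4e6                # 12.4% of all work, done by spouses
unpaid = 0.175e6               # ~1/3 of the 16% contract-and-unpaid share

naive_remainder = total_workers - retirement_lifestyle - spouses - unpaid
print(f"{naive_remainder / 1e6:.3f}")   # 1.725 million, before correcting
                                        # for overlap between categories
```
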

The 2.1 million prisoners amount to two-thirds of the 3.2 million workers on farms, and slightly more than the 1.9 million farmers. Either way, it amazes me that it is even close.


Optimistic Thoughts Can Do More Harm Than Good

“I CAN pass this exam”, “I am a wonderful person and will find love again” and “I am capable and deserve that pay rise” are phrases that students, the broken-hearted and driven employees may repeat to themselves over and over again in the face of adversity. Self-help books through the ages, including Norman Vincent Peale’s 1952 classic, “The Power of Positive Thinking”, have encouraged people with low self-esteem to make positive self-statements. New research, however, suggests it may do more harm than good.

Wondering if the same tendencies could apply to making positive self-statements, Joanne Wood of the University of Waterloo in Canada and her colleagues designed a series of experiments. They questioned a group of 68 men and women using long-accepted methods to measure self-esteem. The participants were then asked to spend four minutes writing down any thoughts and feelings that were on their minds. In the midst of this, half were randomly assigned to say to themselves “I am a lovable person” every time they heard a bell ring.

Immediately after the exercise, they were asked questions such as “What is the probability that a 30-year-old will be involved in a happy, loving romance?” to measure individual moods using a scoring system that ranged from a low of zero to a high of 35. Past studies have indicated that optimistic answers indicate happy moods.

As the researchers report in Psychological Science, those with high self-esteem who repeated “I’m a lovable person” scored an average of 31 on their mood assessment compared with an average of 25 by those who did not repeat the phrase. Among participants with low self-esteem, those making the statement scored a dismal average of 10 while those that did not managed a brighter average of 17.

Dr Wood suggests that positive self-statements cause negative moods in people with low self-esteem because they conflict with those people’s views of themselves. When positive self-statements strongly conflict with self-perception, she argues, there is not mere resistance but a reinforcing of self-perception.
via The Economist


Anti-Fog Glass

Take a hot shower and the chances are that the bathroom mirror will mist up. Glasses and camera lenses can also suffer in humid conditions. And it can be dangerous when a car’s windscreen clouds over. Various methods, including sprays, materials and heating, have been used with varying degrees of success to deal with the problem. Now a Chinese team has come up with a new idea.

Junhui He of the Chinese Academy of Sciences, Beijing, and his colleagues have created a cheap anti-mist coating. They estimate one square metre of glass will cost only a few cents to treat.

Glass mists up because of sudden condensation when warm, humid air comes into contact with a cold surface. Water vapour condenses to form thousands of tiny water droplets which scatter light. Dr He and his colleagues knew that when certain nanoparticles (which have diameters of only a few billionths of a metre) are spread over glass, they break the surface tension of the droplets as they try to form. The result is a thin, transparent film of water which, unlike droplets, does not scatter light.

But what size and shape of nanoparticle is most effective, and can it be produced cheaply? Dr He’s team experimented with different shapes and found a simple one-step method using polystyrene spheres treated with oxygen and then coated with silica to build raspberry-like shapes. These proved the most effective at preventing a surface from misting over. Dr He and his colleagues hope to commercialise the process quickly.
via The Economist


Wednesday, June 17, 2009

The Long Tail of Farming

The USDA's Structure and Finances of U.S. Farms: Family Farm Report, 2007 Edition tells us that there are 2.1 million farms in the US, the average farm generates around $75,000 in revenue, and the average farming family derives 82.5% of its income from off-farm sources.

While true, these statistics are misleading because, just like Digg and Wikipedia, church donations and income tax payments, and book writing, farms follow a long-tail power distribution. Instead of following a bell curve, with a few large farms, a few small farms and lots of medium-sized farms in the middle, farms follow a power distribution with a few large farms, a few more medium-sized farms, and lots and lots of small farms. This long tail can be seen in the chart at left and the table below.

Google Spreadsheet

Five points on the long tail of farming:

1) With regards to output, farms follow the 80/20 rule (actually the 85/15 rule), with 16% of farms accounting for 86.2% of total revenue and the other 84% accounting for just 13.8%.

Of the 2.1 million farms in the US, just 338,000 account for almost 7/8 of all production. The entire output of the 1.7 million small farms could be generated by 55,000 farms producing at the average medium/large farm. This means that 400,000 farms at that level of production could produce as much food as our current 2.1 million farms.
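The arithmetic above can be sanity-checked in a few lines. This is a rough sketch: the farm counts and output shares are the figures quoted in this post, and "average medium/large farm" assumes output is spread evenly within that group.

```python
# Rough sanity check of the farm-consolidation arithmetic above.
# All figures are the ones quoted in this post (USDA, 2007).
total_farms = 2_100_000
med_large_farms = 338_000            # the ~16% that produce 86.2% of output
share_med_large = 0.862
share_small = 1 - share_med_large    # 13.8%, produced by the other 1.7M farms

# Output of one average medium/large farm, as a fraction of total US output
per_farm_share = share_med_large / med_large_farms

# Number of average medium/large farms needed to replace all small-farm output
replacement_farms = share_small / per_farm_share
print(round(replacement_farms))                    # ~54,000 (post rounds to 55,000)
print(round(med_large_farms + replacement_farms))  # ~392,000 (post rounds to 400,000)
```

The numbers land within rounding distance of the 55,000 and 400,000 quoted above.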

2) The long tail of farming is getting longer. Page 30 of the report shows that over the last 10 years the number of large farms has been increasing and the number of very small farms has been increasing (although this might just be due to a change in methodology), but the number of medium-sized farms has been decreasing. I calculate the exponent of the long tail of farming to be between 1.1 and 1.3, which would place it somewhere between the net worth of Americans and the population of U.S. cities.

3) Averages don't mean a whole lot with long tails. The average farmer works a small farm and makes little to no money from farming. But that average is nearly meaningless, as can be seen by adding home gardens to the analysis. Doing so would greatly expand the number of farmers while adding little to the overall amount of food produced, yet it would greatly reduce the average farm size and the average revenue generated per farm.

By breaking the analysis into small, medium and large farms we see a much different picture. The average small farm gets 105% of its family income from off-farm sources (meaning that, on average, small farms lose money farming), while medium farms get 45.2% of their income off-farm and large farms just 17.4%. It is only small farms that get the majority of their income off-farm; the medium and large farms (which produce 86.2% of output) get most of their income from farming. Instead of looking at the average income from farming, it makes more sense to look at the total number of farmers who get the majority of their income from farming.

Just as most book writers sell few books and make little money from their writing, so too are most farms small and unprofitable. But just as many book writers have non-financial motivations, so too are many small farmers in the business for non-monetary reasons. This can be seen in the fact that most small farms are categorized as either "retirement" or "residential/lifestyle".

4) Output shows a much longer tail than land. Small farms account for just 13.8% of output but hold 40.9% of land. Large farms account for 45.4% of output with just 22.7% of the land.
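The gap in point 4 implies a large difference in revenue per acre, which follows directly from dividing each group's output share by its land share (shares as quoted above):

```python
# Revenue intensity (output share / land share), relative to the all-farm average.
small_output, small_land = 0.138, 0.409
large_output, large_land = 0.454, 0.227

small_intensity = small_output / small_land   # ~0.34x the average revenue per acre
large_intensity = large_output / large_land   # ~2.0x the average revenue per acre

# Large farms generate roughly six times the revenue per acre of small farms
print(round(large_intensity / small_intensity, 1))  # ~5.9
```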

5) When it comes to meat production there is a great divide as many small farmers raise cattle, but almost none raise pigs or chickens.

Calculations and Caveats

Small farms are defined as limited resource, retirement, residential/lifestyle, and low-sales farms. Medium farms are made up of medium sales, large scale family farms and nonfamily farms. Large farms are very large family farms.

Revenue is used to determine output. This overweights the production of meat producers that purchase feed for their animals. It would be better if there were a way to subtract the cost of animal feed and look at just the value that each farm added.


High-Altitude Wind Machines Could Power New York City

In the future, will wind power tapped by high-flying kites light up New York? A new study by scientists at the Carnegie Institution and California State University identifies New York as a prime location for exploiting high-altitude winds, which globally contain enough energy to meet world demand 100 times over. The researchers found that the regions best suited for harvesting this energy match with population centers in the eastern U.S. and East Asia, but fluctuating wind strength still presents a challenge for exploiting this energy source on a large scale.

Using 28 years of data from the National Center for Environmental Prediction and the Department of Energy, Ken Caldeira of the Carnegie Institution's Department of Global Ecology and Cristina Archer of California State University, Chico, compiled the first-ever global survey of wind energy available at high altitudes in the atmosphere. The researchers assessed potential for wind power in terms of "wind power density," which takes into account both wind speed and air density at different altitudes.

"There is a huge amount of energy available in high altitude winds," said coauthor Ken Caldeira. "These winds blow much more strongly and steadily than near-surface winds, but you need to get up miles to get a big advantage. Ideally, you would like to be up near the jet streams, around 30,000 feet."

Jet streams are meandering belts of fast winds at altitudes between 20,000 and 50,000 feet that shift seasonally but otherwise are persistent features of the atmosphere. Jet stream winds are generally steadier and 10 times faster than winds near the ground, making them a potentially vast and dependable source of energy. Several technological schemes have been proposed to harvest this energy, including tethered, kite-like wind turbines that would be lofted to the altitude of the jet streams. Up to 40 megawatts of electricity could be generated by current designs and transmitted to the ground via the tether.

"We found the highest wind power densities over Japan and eastern China, the eastern coast of the United States, southern Australia, and north-eastern Africa," said lead author Archer. "The median values in these areas are greater than 10 kilowatts per square meter. This is unthinkable near the ground, where even the best locations have usually less than one kilowatt per square meter."
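The gap between surface and jet-stream figures follows from the standard wind power density formula, P/A = ½ρv³: power grows with the cube of wind speed, which more than offsets the thinner air aloft. A quick illustration — the air densities and wind speeds below are assumed typical values, not numbers from the study:

```python
# Wind power density: P/A = 0.5 * rho * v^3, in watts per square metre.
def power_density(rho_kg_m3, v_m_s):
    return 0.5 * rho_kg_m3 * v_m_s ** 3

# Assumed typical values, not from the study:
surface = power_density(1.225, 8.0)   # sea-level air, a good surface wind site
jet = power_density(0.46, 40.0)       # ~30,000 ft air density, jet-stream speed

print(round(surface))  # ~314 W/m^2 -- under 1 kW/m^2, as the article notes
print(round(jet))      # ~14,700 W/m^2 -- the "greater than 10 kW/m^2" range
```

A fivefold increase in wind speed yields a 125-fold increase in power per square metre, even before accounting for the jet stream's far steadier winds.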

Included in the analysis were assessments of high altitude wind energy for the world's five largest cities: Tokyo, New York, Sao Paulo, Seoul, and Mexico City. "For cities that are affected by polar jet streams such as Tokyo, Seoul, and New York, the high-altitude resource is phenomenal," said Archer. "New York, which has the highest average high-altitude wind power density of any U.S. city, has an average wind power density of up to 16 kilowatts per square meter."

Tokyo and Seoul also have high wind power density because they are both affected by the East Asian jet stream. Mexico City and Sao Paulo are located at tropical latitudes, so they are rarely affected by the polar jet streams and just occasionally by the weaker sub-tropical jets. As a result they have lower wind power densities than the other three cities.

"While there is enough power in these high altitude winds to power all of modern civilization, at any specific location there are still times when the winds do not blow," said Caldeira. Even over the best areas, the wind can be expected to fail about five percent of the time. "This means that you either need back-up power, massive amounts of energy storage, or a continental or even global scale electricity grid to assure power availability. So, while high-altitude wind may ultimately prove to be a major energy source, it requires substantial infrastructure."
Instead of building up the electrical grid to allow wind power from the Midwest to be transmitted to the large population areas on the coasts, it might make more sense to capture the high altitude wind energy locally.

via ScienceDaily and Wired


Interesting Articles of the Week

The no-stats all-star.

What it's like to spend five months in silence.

'Warrior Gene' linked to gang membership, weapon use.

The New Socialism: Global collectivist society is coming online.

God sending mixed messages: Same-sex behavior seen in nearly all animals.


Tuesday, June 16, 2009

The Sweet Taste Of Uncertainty

You've just won a prize. Would you like to find out what it is right away, or wait until later? A new study in the Journal of Consumer Research says most people are happier waiting.

People who know they've won a prize enjoy the anticipation of wondering what they will win, especially if they have clues about what it might be, explain authors Yih Hwai Lee (National University of Singapore) and Cheng Qiu (University of Hong Kong). Prize winners spend time imagining using the potential prizes, and such "virtual consumption" prolongs positive feelings, making them receptive to marketing messages.

The authors conducted two studies where participants played and won simulated lucky-draw games. Some learned what their prizes were immediately; others were told they had won something from a pool of prizes. "We find that consumers will be more delighted after winning a lucky draw when they do not know immediately the exact prize they will receive than when they do," the authors write.

Participants who got clues about the nature of the possible prizes (such as knowing it was an electronic product) responded even more favorably. They also favored prizes that were capable of eliciting mental imagery, like sensory-stimulating products such as chocolates or aromatherapy candles. (Apparently, functional items like cutlery and digital clocks failed to stimulate.)

via ScienceDaily


Monday, June 15, 2009

Milk Goes 'Green': Today's Dairy Farms Use Less Land, Feed And Water

Dairy genetics, nutrition, herd management and improved animal welfare over the past 60 years have resulted in a modern milk production system that has a smaller carbon footprint than mid-20th century farming practices, says a Cornell University study in the Journal of Animal Science (June 2009).

The study shows that the carbon footprint for a gallon of milk produced in 2007 was only 37 percent of that produced in 1944. Improved efficiency has enabled the U.S. dairy industry to produce 186 billion pounds of milk from 9.2 million cows in 2007, compared to only 117 billion pounds of milk from 25.6 million cows in 1944. This has resulted in a 41 percent decrease in the total carbon footprint for U.S. milk production.

Efficiency also resulted in reductions in resource use and waste output. Modern dairy systems only use 10 percent of the land, 23 percent of the feedstuffs and 35 percent of the water required to produce the same amount of milk in 1944. Similarly, 2007 dairy farming produced only 24 percent of the manure and 43 percent of the methane output per gallon of milk compared to farming in 1944.
The same researchers also found that cows treated with rbST needed less feed and emitted less CO2. Let the debate over whether organic or regular milk is greener commence.

via ScienceDaily