
Mutualist Blog: Free Market Anti-Capitalism

To dissolve, submerge, and cause to disappear the political or governmental system in the economic system by reducing, simplifying, decentralizing and suppressing, one after another, all the wheels of this great machine, which is called the Government or the State. --Proudhon, General Idea of the Revolution


Tuesday, February 02, 2010

Cross-posted from P2P Blog: Abundance Creates Utility But Destroys Exchange Value

What’s variously called the “cognitive capitalism” model, or Paul Romer’s New Growth Theory, assumes that technological progress and increased efficiency will lead to “economic growth” in the sense of an increase in the total volume of monetized economic activity. But this presumes the use of “intellectual property” and other forms of artificial scarcity to capitalize efficiency improvements as a source of rents, rather than allowing market competition to pass reduced costs on to the consumer in the form of lower prices.

But similar assumptions are found, in a weaker form, even among people who aren’t exactly friends of the proprietary content industries. This includes Chris Anderson’s “Freemium” model, and similar arguments by Mike Masnick at Techdirt. Their basic idea, which is great as far as it goes, is to piggyback monetized auxiliary services on free content: Linux distros offering tech support and customization, music companies selling certified authentic copies at a convenient location, Phish selling concert tickets, etc.

One thing they fail to adequately address, though, is that the total amount of cash available from such auxiliary services is less than what proprietary content brought in. Or to take Anderson’s example, Encarta sales didn’t bring in money equivalent to the exchange value it destroyed for Britannica et al. And Wikipedia destroyed billions in net monetized value for both hard copy encyclopedias and Encarta.

In Masnick’s and Anderson’s model, though, the total size of the monetized economy overall still continues to increase. A reduction in the total money expenditures (and hence labor) required to obtain a consumer good will simply free up purchasing power and increase demand for new goods in some other sector.

The problem is, this assumes that total demand is infinitely, upwardly elastic.

Jeff Jarvis sparked a long chain of discussions by arguing that innovation, by increasing efficiency, results in “shrinkage” rather than growth. The money left in customers’ pockets, to the extent that it is reinvested in more productive venues, may affect the small business sector and not even show up in econometric statistics.

Anton Steinpilz, riffing off Jarvis, suggested that the reduced capital expenditures might not reappear as increased spending anywhere at all, but might instead be pocketed by the consumer in the form of increased leisure and/or forced on the worker in the form of technological unemployment (two sides of the same coin).

And Eric Reasons, writing about the same time, argued that innovation was being passed on to consumers, resulting in “massive deflation” and “less money involved” overall.

Reasons built on this idea, massive deflation resulting from increased efficiency, in a subsequent blog post. The problem, Reasons argued, was that while the deflation of prices in the old proprietary content industries benefited consumers by leaving dollars in their pockets, many of those consumers were employees of industries made obsolete by the new business models.

Effectively, the restrictions that held supply in check for IP are slowly falling away. As effective supply rises, price plummets. Don’t believe me? You probably spend less money now on music than you did 15 years ago, and your collection is larger and more varied than ever. You probably spend less time watching TV news, and less money on newspapers than you did 10 years ago, and are better informed.

I won’t go so far as to say that the knowledge economy is going to be no economy at all, but it is a shrinking one in terms of money, both in terms of cost to the consumer, and in terms of the jobs produced in it.

And the issue is clearly shrinkage, not just a shift of superfluous capital and purchasing power to new objects. Craigslist employs fewer people than the industries it destroyed, for example. The ideal, Reasons argued, is for unproductive activity to be eliminated, but for falling work hours to be offset by lower prices, so that workers experience the deflation as a reduction in the ratio of effort to consumption:

Given the amount of current consumption of intellectual property (copyrighted material like music, software, and newsprint; patented goods like just about everything else), couldn’t we take advantage of this deflation to help cushion the blow of falling wages? How much of our income is dedicated to intellectual property, and its derived products? If wages decrease at the same time as cost-of-living decreases, are we really that bad off? Deflation moves in both directions, as it were….

Every bit of economic policy coming out of Washington is based on trying to maintain a status quo that can not be maintained in a global marketplace. This can temporarily inflate some sectors of our economy, but ultimately will leave us with nothing but companies that make the wrong things, and people who perform the wrong jobs. You know what they say: “As GM goes, so goes the country.”

Contrary to “Free” optimists like Chris Anderson and Kevin Kelly, Reasons suspects that reduced rents on proprietary content cannot be replaced by monetization in other areas. The shrinkage of the proprietary content industries will not be matched by growth elsewhere, nor the reduced prices offset by a shift of demand elsewhere, on a one-to-one basis.

Mike Masnick, of Techdirt, praised Reasons’ analysis, but suggested—from a fairly conventional standpoint—that it was incomplete:

So this is a great way to think about the threat side of things. Unfortunately, I don’t think Eric takes it all the way to the next side (the opportunity side), which we tried to highlight in that first link up top, here. Eric claims that this “deflation” makes the sector shrink, but I don’t believe that’s right. It makes companies who rely on business models of artificial scarcity to shrink, but it doesn’t make the overall sector shrink if you define the market properly. Economic efficiency may make certain segments of the market shrink (or disappear), but it expands the overall market.

Why? Because efficiency gives you more output for the same input (bigger market!). The tricky part is that it may move around where that output occurs. And, when you’re dealing with what I’ve been calling “infinite goods” you can have a multiplicative impact on the market. That’s because a large part of the “output” is now infinitely reproduceable at no cost. For those who stop thinking of these as “goods that are being copied against our will” and start realizing that they’re “inputs into a wider market where we don’t have to pay for any of the distribution or promotion!” there are much greater opportunities. It’s just that they don’t come from artificial scarcity any more. They come from abundance.

Reasons responded, in a comment below Masnick’s post (aptly titled “The glass is twice the size it needs to be…”), that “this efficiency will make the economic markets they affect ‘shrink’ in terms of economy and capital. It doesn’t mean that the number or variation of the products available will shrink, just the capital involved.”

He stated this assessment in even sharper terms in a comment under Michel Bauwens’s blog post on the exchange:

While I certainly wouldn’t want to go toe-to-toe with Mike Masnick on the subject, I did try to clarify in comments that it isn’t that I don’t see the opportunity in the “knowledge economy”, but simply that value can be created where capital can’t be captured from it. The trick is to reap that value, and distinguish where capital can and where it cannot add value. Of course there’s money to be made in the knowledge economy—ask Google or Craigslist—but by introducing such profound efficiencies, they deflate the markets they touch at a rate far faster than the human capital can redeploy itself in other markets. Since so much capital is dependent upon consumerism generated by that idled human capital, deflation follows.

Neoclassical economists would no doubt dismiss Reasons’ argument, and other theories of technological unemployment, as variations on the “lump of labor fallacy.” But their dismissal of it, under that trite label, itself makes an implicit assumption that’s hardly self-evident: that demand is infinitely, upwardly elastic.

That assumption is stated, in the most vulgar of terms, from an Austrian standpoint by a writer at LewRockwell.com:

You know, properly speaking, the “correct” level of unemployment is zero. Theoretically, the demand for goods and services is infinite. My own desire for goods and services has no limit, and neither does anyone else’s. So even if everyone worked 24/7, they could never satisfy all the potential demand. It’s just a matter of allowing people to work at wages that others are willing and able to pay.

Aside from the fact that this implicitly contradicts Austrian arguments that increased labor productivity from capital investment is responsible for reduced working hours (see, e.g., George Reisman, who also incidentally conflates capital accumulation with innovation, thus ignoring the fact that innovation is more likely to make investment capital superfluous), this is almost cartoonish nonsense. If the demand for goods and services is unconstrained by the disutility of labor, then it follows that, absent a minimum wage, people would be working at least every possible waking hour, even if not “24/7.” On the other hand, if there is a tradeoff between infinite demand and the disutility of labor, then demand is not infinitely upwardly elastic. Some productivity increases will be lost through “leakages” in the form of increased leisure, rather than consumption of an increased output of goods. That means that the demand for labor, even if somewhat elastic, will not grow as quickly as labor productivity.
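To put the point in arithmetic terms (a rough illustration; the figures are invented for the example, not drawn from any of the sources quoted here): let L be total labor hours demanded, Q the output demanded, and p output per labor-hour, so that

\[
L = \frac{Q}{p}, \qquad \frac{L_1}{L_0} = \frac{Q_1/Q_0}{p_1/p_0}.
\]

If productivity rises 25% while demand, dampened by the leakage into leisure, rises only 10%, hours worked fall to 1.10/1.25 ≈ 0.88 of their former level. Total hours hold steady only if demand grows at least as fast as productivity, which is exactly the elasticity assumption at issue.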

Tom Walker (aka Sandwichman), an economist who has devoted most of his career to unmasking the “lump of labor” caricature as a crude strawman, confesses a degree of puzzlement as to why orthodox economists are so strident on the issue. After all, what they denounce as the “lump of labor fallacy” is based on what, “[w]hen economists do it, …is arcane and learned ceteris paribus hokus pokus.” Given existing levels of demand for consumer goods, any increase in labor productivity will result in a reduction in total work hours available.

Of course the orthodox economist will argue that ceteris is never paribus. But the assumption that demand freed up by reduced wage expenditures in one sector will automatically translate, on a one-to-one basis, into increased demand (and hence employment) in another sector is itself by no means self-evident. And an assumption so strong that one feels sufficiently confident to invent a new “fallacy” for those who argue otherwise strikes me as a belief that belongs more in the realm of theology than of economics.

The cognitive capitalism and New Growth Theory models are an updated version of Daniel Bell’s “post-industrial” thesis. The problem is, post-industrialism is self-liquidating. Technological progress destroys the technical prerequisites for capturing value from technological progress.

1 Comments:

Blogger Eric Reasons said...

Mr. Carson -

Wow. Thank you very much for putting together such a cohesive narrative.

-Eric

February 04, 2010 4:21 AM  

