Those of you familiar with my writings already know my opinion of the accounting concept of depreciation (“the dumbest concept in the history of management accounting”). I’ve been ranting against its use in decision making since an article of mine on the subject appeared in Management Accounting in November 1992. For the moment, however, let us pretend – for the purpose of argument only – that the concept of depreciation is valid (which it isn’t).
Depreciation expense is accounted for almost exclusively as a fixed cost. An asset’s depreciable base is determined, an appropriate life selected, and one of the approved methods chosen. An annual provision for depreciation is then calculated and treated as a fixed, or period, cost during the asset’s depreciable life. If a method other than straight-line depreciation is used, the amount may vary from year to year, but during each of the earth’s orbits around the sun it is considered fixed.
The fundamental assumption behind this method of recording depreciation is that chronological time is depreciation cost’s driver. An asset “depreciates” the same amount when it is used for 1,000 hours during a year as it does when it is used for 6,000 hours because it is assumed that time, not use, is depreciation’s driver. This may be true for assets that become obsolete before they wear out – like many high-tech, non-production capital assets – but it does not hold true for the vast majority of the capital assets used in manufacturing. Despite this mismatch between reality and accounting practice, the treatment of depreciation expense as an annual fixed cost continues to be followed blindly by most organizations, resulting in internal management decisions that are based on a flawed view of economic reality.
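To see the distortion, consider a minimal sketch with hypothetical figures (the asset cost, salvage value, and life below are illustrative, not from the article): under straight-line depreciation the annual charge is identical whether the machine runs 1,000 hours or 6,000, so the implied depreciation cost per hour swings by a factor of six.

```python
def straight_line_annual(cost, salvage, life_years):
    """Annual straight-line depreciation: (cost - salvage) / life."""
    return (cost - salvage) / life_years

# Hypothetical machine: $600,000 cost, no salvage value, 10-year life.
annual = straight_line_annual(cost=600_000, salvage=0, life_years=10)

# The same $60,000 charge lands on the year no matter how hard the
# machine works, so the per-hour "cost" depends entirely on usage:
light_use = annual / 1_000   # $60.00 per hour in a slow year
heavy_use = annual / 6_000   # $10.00 per hour in a busy year
```

The machine presumably wears out roughly six times faster in the busy year, yet the books record the same expense in both.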
Just to be clear, I’m not talking about depreciation for tax or GAAP accounting purposes. Neither tax nor GAAP accounting reflects economic reality – they reflect tax or GAAP “one-size-fits-none” rule compliance. You can treat depreciation however you want to minimize tax liabilities or maximize reported profits. I’m talking about the impact of depreciation expense on the making of economically sound business decisions.
Whether you look at depreciation as a means of spreading the cost of past capital acquisitions over the periods benefited or as a means of generating funds to maintain the current capital base, capital assets become non-productive and require replacement through two different phenomena: the passage of time and the wearing-out of the asset. That being the case, it might more closely match reality if capital assets were broken into two categories matching these two phenomena. Capital assets that will become obsolete before they wear out can still be depreciated using existing, time-based methods. Those that will wear out before they become obsolete might better be depreciated using a “units of production” or “hours used” method – both of which are legitimate methods of depreciation despite their infrequent usage.
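A units-of-production charge can be sketched in a few lines (again with hypothetical figures): the cost driver is output, so the charge rises and falls with use rather than with the calendar.

```python
def units_of_production_charge(cost, salvage, lifetime_units, units_this_period):
    """Depreciation charge driven by usage, not time.

    The rate per unit is (cost - salvage) / lifetime_units;
    the period's charge is that rate times the units produced.
    """
    rate_per_unit = (cost - salvage) / lifetime_units
    return rate_per_unit * units_this_period

# Hypothetical machine: $600,000 cost, expected to produce
# 1,200,000 pieces over its useful life -> $0.50 per piece.
charge_busy = units_of_production_charge(600_000, 0, 1_200_000, 150_000)  # $75,000
charge_slow = units_of_production_charge(600_000, 0, 1_200_000, 30_000)   # $15,000
```

An “hours used” method works the same way, with machine-hours in place of pieces. Either way, a slow year books a small charge and a busy year a large one, which tracks the actual consumption of the asset.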
Over the years I’ve seen many companies take on disastrous “incremental” jobs because they “had to cover the year’s depreciation expense,” over-cost (and therefore over-price) products because they were in the early, high-depreciation years after making major capital expenditures, or “run the heck” out of equipment to get the fixed depreciation cost per piece as low as possible during a given fiscal year. I’ve also seen companies under-cost, and thereby under-price, their products when their capital base is aging and depreciation is minimal; a time when they should be paying close attention to accumulating the capital needed to preserve that base.
If you insist on using depreciation expense at all in the cost information that supports management decisions, determine whether the time-driven amounts recorded on your company’s books are leading management to decisions at variance with the organization’s economic facts, and consider using a usage-based method when making internal management decisions.