Weekend Musical Interlude — SHEL performing “Lost at Sea”

Hailing from Fort Collins, Colorado, the band SHEL stands for sisters Sarah, Hannah, Eva, and Liza.  Ranging from their late teens to early 20s, these classically trained musicians have carved out a niche for themselves in producing modern neo-folk music.  They’ve put out one EP and one full-length album thus far.  Below is a live performance of their haunting “Lost at Sea.”

Desolation Row — The Stagnation of IT Spending

Andrew McAfee has an interesting blog post on David Autor's Jackson Hole Conference analysis of IT spending.  What it shows, I think, is the downward pressure on software pricing that is now driving the market.  This is a trend that those of us in the business are experiencing across the board.  The only disappointing aspect of the charts is that they measure private investment in IT and don't address public expenditures.  Given frozen budgets and austerity, it would be interesting to see if the same trend holds true on the public end, which I suspect may show a more dramatic level of under-investment in technology.  While I mostly agree with McAfee over Autor's concerns, I believe that the factors related to technical stagnation that I raised in last week's post are reinforced by the charts.  There seems to be some technological retrenchment going on that is solely focused on reducing the overhead of existing capabilities to appease financial types.  This is reflected in record profits during a time of stagnant employment and employee incomes.  I think that the only hope for investment in technological innovation is going to have to come from the public sector, but the politics still seem to be aligned against it for some time to come.  Perhaps if we subjected financial managers, lawyers, doctors, pharmaceutical companies, entertainment, and insurance companies to the same kind of international competition that manufacturing and technology have been exposed to through "free-trade" agreements and the abandonment of patent monopolies, then we could once again "afford" projects similar to the moon program and ARPANET.

Let’s Get (Technical) — The Crux of Predictive Measures

In the years since the publication of my various papers on technical performance measurement, I have been asked to update my perspectives.  I largely declined, mostly because I had nothing of importance to add to the conversation.  I had staked out what I believed to be a reasonable method of integration between the measurement of technical achievement in human effort and the manner in which the value of that achievement could be documented, along with a reasonable model of technical risk to inform us of our ability to achieve success in the next increment of our technical baseline.  A little background may be helpful.

The development of the model was a collaborative effort.  I was the project manager of a team at the Naval Air Systems Command (NAVAIR) that had spent several years attempting to derive "value" from technical achievement, including technical failure.  (More on this later.)  The team had gone down many blind alleys and had attempted various methods of integrating technical performance measurement into earned value metrics.  Along the way, however, I grew dissatisfied with both the methods and the lack of progress in the project.  Thus, the project team changed course, with the expected turnover of personnel, some of whom did not agree with the change.  As the project manager I directed my team to end the practice of NIH–"Not Invented Here"–and to scour the literature to determine what the efforts already directed at the problem could teach us.  I was also given the resources of a team of mathematicians, physicists, and statisticians familiar with the issues regarding systems engineering, who were organized into a reserve unit at NAVAIR, and was assisted by Matt Goldberg of Luri-Goldberg risk fame, who was then working at the Institute for Defense Analyses (IDA).

Among the published literature was a book that had been recommended to me by senior engineering personnel at the (at that time) newly formed Lockheed Martin Corporation.  The book was Technical Risk Management by Jack V. Michaels, who also collaborated on a book called Design to Cost.  Both of these works, the former in particular, influenced my selected approach to technical performance management.  I was also strongly influenced by the work of Daniel Dennett at Tufts in his philosophical and technical observations of strong AI.  It seemed (and seems) to me that the issue of measuring both the intrinsic and practical value of technical achievement requires an understanding of several concepts: cognition, including the manner in which we define both progress and learning; systems, their behavior, and the manner in which they respond to stimulus–thus, the evolutionary nature of human systems and the manner in which they adapt; the fallacy of reification in measurement and the manner in which we overcome it; technical risk in the systems engineering process; the limitations in measurement of those technical planning systems, both in terms of fidelity and horizon; and the way the universe works at the level of the system being developed, given the limitations of the inputs, outputs, and physics involved in its development.

While our understanding needed to be interdisciplinary and comprehensive, the solution, conversely, needed to be fairly simple and understandable.  It needed to pass what I called the "So-What? Test."  That is, if an addition of complexity improved accuracy but there was no good answer to the question "so what?" when looking at the differences in the results, then the complexity was not worth it.  I have applied this test to software development and other efforts.  For example, is it really worth the added complexity to add that additional button for some marginal advantage if its absence makes no difference in the performance and acceptance of the product?

In the end I selected an approach that I felt was coherent and directed the team to apply the approach to several retrospective analyses, as well as one live demonstration project.  In every case the model demonstrated that it would have been a better predictor of project performance than cost and schedule indicators alone.  In addition, integration of technical achievement, which, after all, was one of the cornerstones of the WBS approach in its adoption in the Department of Defense in the early '90s, improved the early warning capability in predicting the manifestation of technical risk, which was then reflected in both cost and schedule performance.

I then published several papers on our findings in collaboration with my team, and at the end of my role in the project decided to publish my own perspectives separately in the paper linked in the first paragraph of this post.  Greatly assisting me with criticism and suggestions during the writing of that paper was Jim Henderson, a colleague whom I respect greatly and who was at the time a senior cost analyst at NAVAIR.  He resisted my efforts to credit his assistance at the time for reasons of his own, but I suspect that he would not object now, and it is only fitting that I give him the credit he is due in influencing my own thinking; though I take full responsibility for the opinions and conclusions expressed in it.

Underlying this approach were several artifacts, some of them new, deemed essential to it.  First among these was the establishment of a technical performance baseline.  The purpose of this baseline is to drive an assessment of current capabilities and then to break down the effort into increments that involve assessments of progress and testing.  These increments should be tied to the WBS and the resources associated with the system being developed.  Second was the realization that technical achievement is best assessed in increments of short duration, through a determination of the risk involved in successfully achieving the next milestone in the technical performance baseline.  This approach was well documented and already in use in systems engineering technical risk assessments (TRAs) and, as such, would not cause major changes to a process that was reliable and understood.  Finally, and (as it turns out) most controversially, was the manner in which we "informed" cost performance in deriving value from the TRA.  This last portion of the model was, admittedly, its most contingent, though well within the accepted norms of assessment.
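To make the idea of a technical performance baseline a bit more concrete, here is a minimal sketch in Python of how such a baseline might be represented.  The class names, parameters, and values are hypothetical illustrations of my own, not the PEO(A) artifacts themselves; the point is simply that each technical parameter carries a time-phased set of incremental milestones, each tied to a WBS element, to resources, and to a discrete risk assessment.

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """One increment in a technical performance baseline."""
    description: str      # e.g., "integrated static fire test"
    wbs_element: str      # WBS element the increment is tied to
    planned_value: float  # resources (budget) associated with the increment
    planned_month: int    # time phasing of the increment
    risk_pct: int = 50    # assessed chance of achievement: 10, 50, 90, or 100

@dataclass
class TechnicalParameter:
    """A technical performance parameter and its incremental plan."""
    name: str             # e.g., "vacuum thrust", "payload to orbit"
    target: float         # required end-state value
    units: str
    milestones: list = field(default_factory=list)

# Hypothetical example: one parameter with two planned increments
thrust = TechnicalParameter(name="vacuum thrust", target=800.0, units="kN")
thrust.milestones.append(
    Milestone("component qualification", "WBS 1.2.3", 250_000, planned_month=3, risk_pct=90))
thrust.milestones.append(
    Milestone("integrated static fire", "WBS 1.2.4", 400_000, planned_month=7, risk_pct=50))
```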

It is at the point of applying the value of technical achievement that we still find the most resistance, which is the reason I have decided to reenter the conversation.  What is the value of failure?  For example, did Space X derive no value from its Falcon 9 rocket exploding?  Here is the video:


From the perspective of an outside non-technical observer, the program seems to have suffered a setback.  But not so fast.  Space X later released an announcement indicating that it was reviewing the data from the failed rocket, which had detected an anomaly and self-destructed.  Thus, what we know is that the failed test has caused a delay of at least two weeks.  Additional resources will need to be expended to study the data from the failure.  The data will be used to review the established failure modes and routines that the engineers developed.  There is both value and risk associated with these activities: some of them increase the risk of achieving the next planned milestone, some handle and mitigate future risks, and additional time and resources–perhaps anticipated or perhaps not–will actually be expended as a result of the test.

So how would the model I put forth–we'll call it the PEO(A) Model, after the organization that sponsored it–handle this scenario?  First we would assess the risk associated with achieving the next milestone based on the expert opinion of the engineers for the system.  The model is a simple one: assessments of 10, 50, 90, or 100%.  We trace the WBS elements and schedule activities associated with the system and adjust them appropriately, showing the schedule delay and assigning additional resources.  We then inform earned value with the combination of risk and resources.  That is, if our chance of achieving the next milestone is 50%, roughly the level of chance, then the resources dedicated to the effort may need to be increased by an appropriate amount, and this is reflected in the figure for earned value.
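As an illustration of the arithmetic only (the actual PEO(A) implementation details are not reproduced here), the sketch below shows one plausible reading of "informing" earned value with the risk assessment: the credit claimed for the current increment is scaled by the assessed chance of achieving the next milestone, and the estimate of resources needed is adjusted accordingly.  The function names and figures are hypothetical.

```python
# Hypothetical illustration of risk-informed earned value for one increment.
# The discrete risk scale follows the model described above: 10, 50, 90, 100 percent.
ALLOWED_RISK_LEVELS = {10, 50, 90, 100}

def risk_informed_ev(budgeted_value: float, risk_pct: int) -> float:
    """Scale the earned value credit for the increment by the assessed
    chance of achieving the next technical milestone."""
    if risk_pct not in ALLOWED_RISK_LEVELS:
        raise ValueError("risk assessment must be one of 10, 50, 90, 100")
    return budgeted_value * (risk_pct / 100.0)

def adjusted_resource_estimate(planned_resources: float, risk_pct: int) -> float:
    """A simple illustrative adjustment: a 50% chance of achievement suggests
    more resources (or time) may be needed; 100% means risk has been mitigated."""
    return planned_resources / (risk_pct / 100.0)

# Example: a $400,000 increment assessed at a 50% chance of achievement
print(risk_informed_ev(400_000, 50))            # 200000.0 of earned value credit
print(adjusted_resource_estimate(400_000, 50))  # 800000.0 estimated to complete
```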

There are two main objections to this approach.

The first objection encountered challenges the engineers' assessment as being "subjective."  This objection is invalid on its face.  Expert opinion is both a valid and an accepted approach in systems engineering, and it seems there is a different bias that underlies the objection.  For example, I have seen this objection raised where the majority of a project is being managed using percent complete as an earned value method, which is probably the most subjective and inaccurate method applied, oftentimes by Control Account Managers (CAMs) who are removed one or two levels (or, at least, degrees of separation) from the actual work.  This goes back to a refrain that I often heard from the PM during program reviews: "How do I make that indicator green?"  Well, there are only two direct ways, with variations: game the system, or actually perform the work to the level of planned achievement and then accurately measure progress.  I suspect the fear here is that the engineers will be too accurate in their assessments, and so the fine game of controlling the indicators to avoid being micromanaged, while making just enough progress to support the indicators, is undermined.  Changing this occasionally encountered mindset is a discussion for a different blog post.

There is certainly room for improvement in using this method.  Since we are dealing with a time-phased technical performance plan with short-duration milestones, providing results based on an assessment of achieving the next increment, we can always apply a Monte Carlo simulation at our present position to get the most likely range of probable outcomes.  Handicapping the handicapper is an acceptable way of assessing future performance with a great deal of fidelity.  Thus, the model proposed could be adjusted to incorporate this refinement.  I am not certain, however, whether it would pass the "so-what?" test.  It would be interesting to find out.
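As a sketch of what such a refinement might look like (using a simple rework assumption of my own for illustration, not one drawn from the PEO(A) papers), a Monte Carlo pass over the time-phased milestone plan could sample success or failure at each increment using the engineers' discrete risk assessments, add rework time on failure, and report the resulting range of completion dates:

```python
import random

def simulate_completion(milestones, rework_months=2, trials=10_000):
    """Monte Carlo over a time-phased milestone plan.

    milestones: list of (planned_duration_months, risk_pct) tuples, where
    risk_pct is the assessed chance (10/50/90/100) of achieving the milestone
    on the first attempt.  On failure, rework_months are added and the
    milestone is re-attempted.  Returns the simulated completion times.
    """
    outcomes = []
    for _ in range(trials):
        total = 0.0
        for duration, risk_pct in milestones:
            total += duration
            # keep re-attempting until the milestone is achieved
            while random.random() >= risk_pct / 100.0:
                total += rework_months
        outcomes.append(total)
    return outcomes

# Hypothetical three-milestone plan: (planned months, assessed chance)
plan = [(3, 90), (4, 50), (2, 90)]
results = sorted(simulate_completion(plan))
print("10th percentile:", results[len(results) // 10])
print("median:", results[len(results) // 2])
print("90th percentile:", results[9 * len(results) // 10])
```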

The second objection encountered challenges capping achievement at 100%.  I find this objection amusing on one level and understandable given the perspective of the individuals raising it.  First, the amusing (I hope) response: in this universe the best you can ever do is 100%.  You can't "give 110%," you can't "achieve 150%," and so on.  That kind of talk sounds catchy in trendy management books and get-rich-quick schemes, and is heard too often from narcissists in business.  But in reality the best you can do is 100%–and if you've lived long enough with your feet on the ground you know that 100% is a rare level of achievement that exists only in limited, well-defined settings.  In risk, a 100% probability is even rarer, so when we say that our chance of achieving the next level in a technical performance plan is 100%, we are saying that all risks have been eliminated and/or mitigated to zero.

The serious part of this objection, though, concerns the financial measurement of technical performance, to wit: what if we are ahead of plan?  In that case, the argument goes, we should be able to show more than 100% of achievement.  While understandable, this objection demonstrates a misunderstanding of the model.  The assessment from one milestone to the next is one based on risk.  A 100% chance of achievement allows the project to claim full credit for all effort up to that point.  The actual expenditures and the resulting savings in resources and schedule, documented through normal cost and schedule performance measurement, will reflect the full effect of being on plan or ahead of plan.  So while it is true that technical performance measurement can only, in the words of one critic, "take away performance," this criticism does not pass the "so-what?" test–at least not in its practical application.

What the objection does do is expose the underlying weakness and uncertainty inherent in any model that concerns itself with deriving "value" from success and failure in developmental efforts.  It is a debate that has raged in systems engineering circles for quite some time, not to mention the expansive definition of value during discussions of investment and its return.  But that is not what the PEO(A) model of technical performance measurement (TPM) was about.  The intent of proposing a working model of technical achievement was specifically to define our terms and aims with precision, and then to come up with a model to meet those aims.  It measures "value" only insomuch as that value is determined by the resources committed to achieving the increment in time.  The base "value" of a test failure, as in the Space X example, is derived from the assessment of technical risk in the wake of the postmortems.

It should not be controversial to advocate for finding a way of integrating technical performance into project management measures, particularly in their role in determining predictive outcomes.  We can spend a great deal of time and money building a jet that, in the end, cannot land on an aircraft carrier, a satellite that cannot communicate after it achieves orbit, a power plant that cannot achieve full capacity, or a ship that cannot operate in the conditions intended, if we do not track the technical specifications that define the purpose of the system being developed and deployed.  I would argue that technical achievement is the strategic center of measurement upon which all others are constructed.  The PEO(A) model is one model that has been proposed and has been in use since its introduction in 1997.  I would not argue that it cannot be improved or that alternatives might not achieve an equal or better result.

In the days before satellite and GPS navigation were the norm, when leaving and entering port–to ensure that we avoided shoal water and stayed within the channel–ships would take a fix by measuring bearings to a set of known geographical points.  Lines were drawn from the points of measurement, and where they intersected was the position of the ship.  This could be achieved by sight, radar, or radio.  What the navigation team was doing was determining the position of the ship in a three-dimensional world and representing it on a two-dimensional chart.  The lines formed in the sighting are called lines of position (LOPs).  Two lines of position are sufficient to establish a "good" fix, but we know that errors are implicit due to variations in observation, inconvenient angles, the drift of the ship, and the ellipse error in all charts and maps (the earth not being flat).  Thus, at least three points of reference are usually the minimum acceptable from a practical perspective in establishing an "excellent" fix.  The more the merrier.  As an aside, ships and boats still use this method to verify their positions–even redundantly–since depending too heavily on technology that could fail could be the difference between a routine day and tragedy.
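For readers who like to see the geometry, here is a small illustrative sketch in Python (the coordinates and bearings are invented) of how such a fix can be computed: each bearing to a charted landmark defines a line of position, and the least-squares intersection of two or more such lines is the estimated position.  With only two lines the solution is exact; with three or more, the residual distances give a feel for the quality of the fix.

```python
import numpy as np

def fix_from_bearings(landmarks, bearings_deg):
    """Least-squares intersection of lines of position (LOPs).

    landmarks:    list of (east, north) positions of charted objects
    bearings_deg: observed bearing to each landmark, degrees clockwise from north
    Returns the estimated (east, north) position of the observer.
    """
    A, b = [], []
    for (x, y), brg in zip(landmarks, bearings_deg):
        theta = np.radians(brg)
        # The ship lies on the line through the landmark whose direction is the
        # observed bearing; the unit normal to that line gives one linear constraint.
        normal = np.array([np.cos(theta), -np.sin(theta)])
        A.append(normal)
        b.append(normal @ np.array([x, y]))
    position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return position

# Invented example: three charted landmarks and the bearings observed to them
landmarks = [(0.0, 5.0), (4.0, 4.0), (5.0, 0.0)]
bearings = [350.0, 45.0, 100.0]
print(fix_from_bearings(landmarks, bearings))
```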

This same principle applies to any measurement of progress in human endeavor.  In project management we measure many things.  Oftentimes they are measured independently.  But if they do nothing to note the position of the project in space and time–to allow it to establish a "fix" on where it is–then they are of little use.  The points must converge in order to achieve value in the measurement.  Cost, schedule, qualitative risk, technical performance, and financial execution are our bearing measurements.  The representation of these measurements on a common chart–the points of integration–will determine our position.  Failing to complete this last essential task is like being adrift in a channel.

Finding Wisdom — Ralph Ellison


“I am an invisible man. No, I am not a spook like those who haunted Edgar Allan Poe; nor am I one of your Hollywood-movie ectoplasms. I am a man of substance, of flesh and bone, fiber and liquids—and I might even be said to possess a mind. I am invisible, understand, simply because people refuse to see me…” — the nameless protagonist in Ellison’s novel Invisible Man

The Time magazine essayist Roger Rosenblatt said that “Ralph Ellison taught me what it is to be an American,” and upon reading the book for the first time in my twenties as a young Navy officer I came to the same conclusion for myself.  From that first paragraph, with its initial line that grabs you by the collar, the story’s narrator takes you for the ride of your life, opening your eyes to those things hiding in plain sight, revealing uncomfortable truths that the cowardly and dull among our fellow citizens refused–and continue to refuse–to see.

Ralph Waldo Ellison was born in Oklahoma in 1914, and it was in a pioneer state with no history of slavery–a large part of what had been known as Indian Territory just eight years earlier–that, though he grew up the “poorest among the poor,” he was given access to interact with white people and attend a good school; opportunities not open even to African Americans in the northern states of the time.  It is through his experiences in this western part of the American Midwest that he learned to see the interplay and interconnections of white and black culture, though strictures still existed.  His father sold ice and coal but died in an accident when Ellison was a child.  His upbringing was left to his mother, Ida, who was an activist and was arrested several times for violating segregation laws.

Ellison was a talented young man and had many mentors–both black and white–during his developing years.  Among these was Ludwig Hebestreit, the conductor of the Oklahoma City Orchestra, who saw great promise in the young musician.  Ellison was thus accepted to Tuskegee Institute in Alabama on an Oklahoma state scholarship to study music.  Wishing to play jazz trumpet, he faced opposition from the more conservative-minded faculty, who judged the music to be base and to reflect poorly on the “race.”  At this point stories diverge.  What is clear is that Ellison traveled to New York either to find summer employment with the intent of returning to Tuskegee, or to pursue a different career in the visual arts (photography) or sculpture.  In either case the artifacts from these interests show a man of multiple and considerable talents.

While in New York he came across Langston Hughes, Richard Wright, and other influential members of the “Harlem Renaissance” and switched his energies to writing for the Federal Writers’ Project of the Works Progress Administration (WPA).  There he worked in the Living Lore Unit of the project, where he gathered materials on–and was influenced by–black folklore and culture.  Until he joined the Merchant Marine during the Second World War he contributed essays and stories to various publications, eventually becoming editor of The Negro Quarterly.  Unpublished stories from this period prior to the publication of Invisible Man, posthumously published in 1996 under the title Flying Home and Other Stories, show the development of a unique and powerful voice about to enter American letters.

Invisible Man is a fully modern novel, and Ellison’s influences–Hemingway, W.E.B. DuBois, T.S. Eliot, Joyce, Richard Wright, and Cervantes–are apparent both in his ability to incorporate their literary devices and to transcend them.  His ability to move the novel far beyond its time and methods is what makes the work as readable and understandable today, over 62 years since its publication.  At the heart of Invisible Man is the desire of the individual to overcome not only the strictures that society in its various incarnations has imposed and wishes to impose on him, but also his struggle to overcome his own base desires and limitations.  Much has been made of this last point, some literary critics going so far as to have Ellison hearken back to the American Transcendentalists.  But I find this contention too simplistic and–frankly–ridiculous.  This judgment does the work much disservice and ignores its modernism.

Ellison himself called his work “a novel about innocence and human error, a struggle through illusion to reality.”  The protagonist writes his story from an underground room that is illuminated by 1,369 light bulbs, the power for which is stolen from the Monopolated Light & Power Company.  He recounts his misadventures, beginning with growing up in the south with a talent for public speaking, which is used by his white benefactors for their own amusement, forcing him to fight in a “battle royal” in a ring of blindfolded black men.  Nonetheless, he secures a scholarship to attend Tuskegee.  While there he helps to make ends meet by working as a driver for one of the college’s white benefactors.  While driving in the country the benefactor becomes transfixed by the intimation of incest in one of the local black families.  The narrator soon finds himself in trouble with the college for “encouraging” the white man’s mistaken impression of black culture and is expelled.  He is told, however, that the college will write letters of recommendation for him to the benefactors of the college in New York City.  When he arrives there he finds that rather than recommendations, the letters describe him as unreliable and untrustworthy.

The son of one of the benefactors, feeling the injustice done to the young man, helps him secure a low-paying job at Liberty Paints, whose claim to fame is “optic white.”  He works for the senior mixer, who makes the paint and is also black.  Suspecting, however, that the narrator is engaged in union activities, the older man accosts him, and the two men fight as the mixer containing the paint explodes.  The narrator, awaking in the company hospital, finds himself unable to speak and that he has lost his memory.  The hospital uses the appearance of an anonymous black patient as an opportunity to conduct experimental shock treatments on him.  Soon he regains his memory and leaves the hospital, albeit in poor condition from his mistreatment.  He collapses on the street and is taken in by a kindly black woman in Harlem by the name of Mary.  There he is nursed back to health and into black Harlem society.

While walking down the street he witnesses the eviction of an old black couple and speaks eloquently in public in their defense.  This talent for speaking is noticed by the Brotherhood, an integrated organization to help the politically and socially oppressed.  He is recruited by them and given a new place to live and new clothes.  He is trained in rhetoric by the Brotherhood and used by them to advance their causes until he is accused of advancing his own fame at the expense of the organization, which causes him to be censured.  He is reassigned to the women’s rights cause, where he is seduced by a white woman who fantasizes about being raped by a black man.  The narrator’s best friend in the Brotherhood, Tod Clifton, another black man, leaves the organization, as do many other black members who feel the organization is using them as tools.  Increasingly Harlem is being influenced by Ras the Exhorter, a black nationalist and separatist who feels that the narrator and other blacks are betraying their best interests.  Soon the narrator sees Clifton on a sidewalk in Harlem selling “Black Sambo” dolls.  Police stop Clifton for a license and, when he attempts to flee, he is shot and killed on the street.  The narrator organizes a funeral for his friend and speaks out in his defense against the police.

Despite this show of community solidarity his actions fail to serve the interests of any of the powers in Harlem.  He now finds himself isolated, pursued both by the now largely white Brotherhood, who consider his actions selfish and self-serving, and by Ras and his separatist followers, who consider him a traitor to his race.  The racial tension caused by the funeral and continued police brutality breaks out into a race riot.  The narrator finds himself pursued on the street by the police, who believe that he is a looter.  He falls down an open manhole and the police close it up on him.  From that point he vows to remain invisible to society and to live underground.

By turns tragic, horrifying, and hilarious, Invisible Man is a modern picaresque novel in the tradition of Don Quixote, told in prose by an exponent of the jazz form.  The narrator leads us along the path of the hero and, though African American, transcends his race to reveal his humanity in all of its fragile forms–bravery, selflessness, foolishness, stupidity, naivete, kindness, solipsism, lust, hope, and fear.  In the end he is bathed in light, though existing under the surface of the world.  As a result, he is anything but a character seeking transcendental enlightenment, which is illusion.  He is, instead, a character who has found the ability to see things as they are, including those uncomfortable truths about himself.

Ellison and his protagonist are fully modern in their views.  Ellison’s character is led down blind alleys both through his own guilelessness, and through the sometimes misguided and other times malicious intent of others.  Rather than a victim, in the tradition of the writing of Richard Wright, Ellison’s character overcomes the vicissitudes imposed on him by accepting what he is and what he can be.  We see that white society and black society in America are engaged in a dance, sometimes violent and sometimes in opposition, oftentimes spawning fear, that inevitably draws them closer together.  In this way the story is not so different from the struggle of others, each wave of immigrants and other traditionally disenfranchised groups working against the limitations placed on them by the powerful.  Each is rejected, abused, and manipulated.  In the end, though, each strives toward the ideal of freedom, not just for themselves, but for everyone.  To do that requires the clear eye of critical thinking and the ability to live life in reality, bathed in the unforgiving clarity of light.


Saturday Music Interlude — Ruthie Foster singing the blues

As a relatively young nation (still), the United States has few forms of music that it can claim as its own.  American folk, bluegrass, and country have their roots in Scots-Celtic and British folk forms of expression.  Many of the songs performed today even reprise traditional themes and melodies, but graft onto them American concerns and limitations for a rich fusion of the traditional and modern.

Two forms of music, however, that are uniquely American are blues and jazz, which eventually gave rise to Rhythm & Blues and early Rock & Roll.  The blues are the folk music of a people enslaved, given the hope of freedom, enslaved for all intents again, and–as the country has progressed–achieving full citizenship and freedom in law if not fully in practice.

Jazz, of course, which is based on the blues, is America’s classical music.  Despite attempts to straitjacket it, as European classical music has been straitjacketed–where variation from an accepted form based on the tastes of a privileged economic elite is the rule–jazz continues to develop and improvise.  This is to be somewhat expected.

The various forms of European classical music were financed and supported by royalty and robber barons–and continue to be financed by an economic elite that tends to expect uniformity.  The music, while among the greatest forms of human musical expression, has over the years been allowed only so much freedom within the established boundaries of approval by a ruling class.  The genius found within it is to hear the rebellion under the surface, borrowing from folk forms where it can be masked from disapproving ears.  The subversive music of Mozart’s The Marriage of Figaro, among others, comes to mind.

Jazz, however, is based on a democratic ideal–that the players working together, each given improvisational freedom within a structure, will create something new–a synthesis of old and new that drives the music forward.  Segregation allowed African Americans to freely express themselves and to do so in ways that ran under the surface of society.  The brilliance of the musical expression was soon realized and the mainstream of American society adopted many of its forms of expression and the lifestyle that often accompanied jazz and blues life.

Thus the core belief in both jazz and blues is progression–driving things forward, to a better day; not as individuals who work against each other and who strive against the success of the other–which would undermine and destroy the composition and the music–but together.  Only then can the music succeed.  Thus, while jazz is the music that speaks of the ideal of democratic society, blues speaks the story of the individual in society which can be cruel and unforgiving without love, compassion, decency, forgiveness, and more than a little bit of luck.

Ruthie Foster is an effective purveyor of the blues.  She started singing in her church choir and, leaving her rural town, continued to perform while on active duty in the U.S. Navy Band.  Since leaving the Navy she has taken the blues community by storm, winning multiple awards since her first release in 1997.

One can hear her background in her songwriting and singing.  On her newest album, Promise of a Brand New Day, the song “Let Me Know” contains the familiar call-and-response structure, though a chorus never enters into the song, the instruments providing an effective substitute for the anticipation of the response to the powerful instrument of her voice.  Here she is singing some selections from her new album.

My Generation — Baby Boom Economics, Demographics, and Technological Stagnation

“You promised me Mars colonies, instead I got Facebook.” — MIT Technology Review cover over photo of Buzz Aldrin

“As a boy I was promised flying cars, instead I got 140 characters.”  — attributed to Marc Maron and others

I have been in a series of meetings over the last couple of weeks with colleagues describing the state of the technology industry and the markets it serves.  What seems to be a generally held view is that both the industry and the markets for software and technology are experiencing a hardening of the arteries and a resistance to change not seen since the first waves of digitization in the 1980s.

It is not as if this observation has not been noted by others.  Tyler Cowen at George Mason University noted the trend of technological stagnation in the eBook The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better.  Cowen’s thesis is not only that innovation has slowed since the late 19th century, but that it has slowed a lot, having already exploited the “low-hanging fruit.”  I have to say that I am not entirely convinced by some of the data, which is anything but reliable in demonstrating causation in the long-term trends.  Still, his observations of technological stagnation seem to be on the mark.  His concern, of course, is also directed to technology’s effect on employment, pointing out that, while making some individuals very rich, recent technological innovation doesn’t result in much employment.

Cowen published his work in 2011, when the country was still in the early grip of the slow recovery from the Great Recession, and many seized on Cowen’s thesis as an opportunity for excuse-mongering and for looking for deeper causes than the most obvious ones: government shutdowns, wage freezes, reductions in the government R&D that is essential to private sector risk handling, and an austerian fiscal policy (with sequestration) in the face of weak demand created by the loss of $8 trillion in housing wealth, which translated into a consumption gap of $1.2 trillion in 2014 dollars.

Among the excuses that were manufactured is the meme, still making the rounds, about a jobs mismatch due to a skills gap.  But, as economist Dean Baker has pointed out again and again, basic economics dictates that the scarcity of a skill manifests itself in higher wages and salaries–a reality not supported by the data for any major job category.  Unemployment stood at 4.4 percent in May 2007, prior to the Great Recession.  The previous low between recession and expansion was the 3.9 percent rate in December 2000.  Yet we are to believe that suddenly, in the 4 years since the start of one of the largest bubble crashes and the resulting economic and financial crisis, people no longer have the skills needed to be employed (or have suddenly become more lazy or shiftless).  The data do not cohere.

In my own industry and specialty there are niches for skills that are hard to come by, and these people are paid handsomely, but the pressure among government contracting officers across the board has been to drive salaries down–a general trend seen across the country and pushed by a small economic elite–and therein, I think, lies the answer, more than in some long-term trend tying patents to “innovation.”  The effect of this downward push is to deny the federal government–the people’s government–the ability to access the highly skilled personnel needed to make it both more effective and responsive.  Combined with austerity policies, there is a race to the bottom in terms of both skills and compensation.

What we are viewing, I think, that is behind our current technological stagnation is a reaction to the hits in housing wealth, in real wealth and savings, in employment, and in the downward pressure on compensation.  Absent active government fiscal policy as the backstop of last resort, there are no other places to make up for $1.2 trillion in lost consumption.  Combine this with the excesses of the patent and IP systems that create monopolies and stifle competition, particularly under the Copyright Term Extension Act and the recent Leahy-Smith America Invents Act.  Both of these acts have combined to undermine the position of small inventors and companies, encouraging the need for large budgets to anticipate patent and IP infringement litigation, and raising the barriers to entry for new technological improvements.

No doubt exacerbating this condition is the Baby Boom.  Since university economists don’t seem to mind horning in on my specialty (as noted in a recent post commenting on the unreliability of data mining by econometrics), I don’t mind commenting on theirs–and what has always surprised me is how Baby Boom economics never seems to play a role in understanding trends, nor serve as a predictor of future developments in macroeconomic modeling.  Wages and salaries, even given Cowen’s low-hanging fruit, have not kept pace with productivity gains (which probably explains a lot of wealth concentration) since the late 1970s–a time that coincides with the Baby Boomers entering the workforce in droves.  A large part of this condition has been a direct consequence of government policies–through so-called “free trade” agreements–that have exposed U.S. workers in industrial and mid-level jobs to international competition from low-paying economies.

The Baby Boomers, given an underperforming economy, saw not only their wages and salaries lag, but also their wealth and savings disappear with the Great Recession–when corporate mergers and acquisitions weren’t stealing the negotiated defined benefit plans they had received in lieu of increases in compensation.  This has created a large contingent of surplus labor.  The number of long-term unemployed, though falling, is still large compared to historical averages and is indicative of this condition.

With attempts to privatize Social Security and Medicare, workers now find themselves squeezed and under a great deal of economic anxiety.  On the ground I see this anxiety even at the senior executive level.  The workforce is increasingly getting older as people hang on for a few more years, perpetuating older ways of doing things. Even when there is a changeover, oftentimes the substitute manager did not receive the amount of mentoring and professional development expected in more functional times.  In both cases people are risk-averse, feeling that there is less room for error than there was in the past.

This does not an innovative economic environment make.

People whom I had known as risk takers in their earlier years now favor the status quo and a quiet glide path to a secure post-employment life.  Politics and voting behavior also follow this culture of lowered expectations, which further perpetuates the race to the bottom.  In high tech this condition favors the perpetuation of older technologies, at least until economics dictates a change.

But it is in this last observation that there is hope for an answer, which does confirm that this is but a temporary condition.  For under the radar there are economies upon economies in computing power and the ability to handle larger amounts of data with exponential improvements in handling complexity.  Collaboration of small inventors and companies in developing synergy between compatible technologies can overcome the tyranny of the large monopolies, though the costs and risks are high.

As the established technologies continue to support the status quo–and postpone needed overhauls of code mostly written 10 to 20 years ago (which is equivalent to 20 to 40 software generations)–their task, despite the immense amount of talent and money involved, is comparable to a Great Leap Forward–and those of you who are historically literate know how those efforts turned out.  Some will survive, but there will be monumental–and surprising–falls from grace.

Thus the technology industry, in many of its more sedentary niches, is due for a great deal of disruption.  The key for small entrepreneurial companies and thought leaders is to be there before the tipping point.  But keep working the politics too.