Will driverless vehicles prevail?

Intelligent vehicles have been around for quite some time already. Researchers at Freie Universität Berlin have been working on this since 2006 and recently demonstrated a Volkswagen Passat driving and parking itself without any driver. Google's driverless car project has received a lot of attention, and substantial investments are now being made not only by car manufacturers but also by truck makers such as Scania.

At such an early stage, it is always difficult to assess whether a technology will remain a niche product or gain widespread acceptance. Optimists point to the potential while pessimists happily explain the inherent weaknesses. Will driverless vehicles prevail or not? Below, I address some of the major skepticism related to this technology.


The technology can never fully replace the need for a human driver

Whether the technology will become clever enough is perhaps the greatest concern. A car's capability to drive itself ultimately depends on the performance of hardware (processors, sensors and communication technology) and on sufficiently intelligent software.

To understand the evolution of these technologies it is instructive to think in terms of Moore's law. In 1965, Intel co-founder Gordon Moore predicted that the number of transistors on an integrated circuit would double roughly every 18 months in the years to come. Moore's original prediction:

Original prediction

Interestingly, this prediction has essentially remained accurate ever since. The graph below is thus a neat summary of what we now refer to as the information society (please note that the y-axis is logarithmic).
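The arithmetic behind the law is simple compounding. A back-of-the-envelope sketch, assuming a clean 18-month doubling period (which is of course an idealization):

```python
def transistors(start_count: float, years: float, doubling_months: float = 18.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    doublings = years * 12.0 / doubling_months
    return start_count * 2.0 ** doublings

# 48 years of 18-month doublings means 32 doublings,
# i.e. a roughly 4.3-billion-fold increase:
print(f"{transistors(1, 48):,.0f}")
```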


There is enough computing capacity in the world for every ant to own a small computer (a microprocessor). About 10 quintillion (10^19) transistors are produced each year, and these store information by switching on and off up to 1.5 trillion times per second. If you pressed a light switch on and off around the clock, it would take you some 25,000 years to reach 1.5 trillion. All this computing capacity has been absorbed by software, making these transistors increasingly intelligent and capable of performing more and more functions.
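The light-switch comparison checks out as rough arithmetic, assuming about two presses per second, non-stop:

```python
# Back-of-the-envelope check of the light-switch comparison,
# assuming roughly two presses per second around the clock.
SECONDS_PER_YEAR = 365 * 24 * 3600
presses_per_second = 2
years = 1.5e12 / presses_per_second / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # on the order of 24,000 years
```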

When marveling at historical forecasts, it is striking to what extent those who understood Moore's law came up with more accurate predictions than those who didn't.

IBM’s chairman Thomas Watson stated in 1943 that

“There is a world market for maybe five computers.”

As neither transistors nor integrated circuits had been invented back then, this mistake is rather understandable. The following quotes are, however, eloquent illustrations of what happens when you underestimate the explosive nature of digital technology:

 “…we believe the penetration by digital cameras of the installed base will be moderate for the next 10 years.”

// An analyst at Smith Barney, 1997

“We are upgrading Polaroid to Outperform from Neutral based on the company’s new product performance…”

// Morgan Stanley, January 2000, less than two years before Polaroid went bankrupt

At the same time, those who understood the dynamics of digital technology have been able to make surprisingly accurate forecasts:

“There’s going to be some gizmo that kids carry around in their back pocket that has everything in it – including our books, if they want.”

// Michael Hart, 1998

History since the mid-1960s can be regarded as a continuous process in which digital technology not only displaces other technologies but also removes the need for human control and intellectual work. As hardware becomes cheaper and better while software makes all this technology progressively more intelligent at an exponential rate, electronics becomes increasingly capable of performing functions that were previously carried out by humans.


But is digital technology sufficiently reliable?

It would be strange if all those 10 quintillion transistors produced each year were used only for consumer gadgets that are not critical to the workings of complex systems. Electronics is reliable and becomes more so every year; in fact, its earliest markets were largely applications where reliability was crucial. Another quote from Gordon Moore (1965) is enough to settle this argument:

“Such programs as Apollo, for manned moon flight, have demonstrated the reliability of integrated electronics by showing that complete circuit functions are as free from failure as the best individual transistors.”


Current legislation does not allow for the emergence of driverless vehicles

This is true, in most settings. But things can change. In fact, they’re already changing.

The state of Nevada passed a law in 2011 that permits the operation of driverless cars, and in May last year the first license for a self-driven car was issued. California and Florida have also passed such laws, while Michigan and New Jersey are currently working on it. While there are always special interests trying to block such changes, there are also companies that benefit from changing legislation. In this case, Google has played a key role in the process.

Gordon Moore's colleague and Intel co-founder Robert Noyce (who also co-invented the integrated circuit) wrote in 1977:

“It has often been said that just as the industrial revolution enabled man to apply and control greater physical power than his muscle could provide, so electronics has extended his intellectual power.”

This quote provides a great summary of economic history over the last 300 years and puts driverless vehicles in an interesting perspective. The industrial revolution created engines and technologies that made us capable of wielding physical power beyond that of our muscles, but the human brain was still needed to monitor the process. The next logical step is therefore that electronics will remove the need for human control. The question is not if, but when and how.

Summing up, those who argue that driverless vehicles will never prevail for technological reasons may in the future be looked upon as the Thomas Watsons, Smith Barneys and Morgan Stanleys of our time – they underestimated the dynamics of digital technology and therefore ended up making predictions that in retrospect look funny.


New technologies are frequently met with skepticism. Much of it is usually ill-founded.

The Economist Group – what can we learn from this rare digital success story?

Over the last few years you've heard it everywhere – printed media and newspapers are collapsing, the internet is killing their business and there's nothing they can do about it. Well, think again.

We live in a world where the amount of information keeps growing at an annual rate of about 50 percent. In 2006 alone, three million times the information contained in all books ever written was created. Needless to say, this tsunami of content has imposed great challenges on newspapers all around the world. Simple microeconomics tells us that if supply increases at such a staggering rate while demand remains more or less constant, prices will end up in the basement and profit margins will decline.
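To get a feel for what 50 percent annual growth means, a quick compounding calculation (illustrative arithmetic only, not a claim about any particular dataset):

```python
import math

annual_growth = 0.50
# Years until the stock of information has grown a thousand-fold:
years_to_1000x = math.log(1000) / math.log(1 + annual_growth)
print(f"{years_to_1000x:.1f} years")  # about 17 years
```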

The figures below illustrate the impressive financial performance of The Economist Group. In the period 2002-2012, revenues were up 59 percent while operating profit grew more than sevenfold! Not even the financial crisis could stop these graphs from pointing upwards in 2008-2009. Who knows, perhaps demand increased in those years as it became obvious that very few actually understood economics.


Turnover and Operating profit (million £) The Economist Group 2002-2012.


Operating margin The Economist Group 2002-2012.

So what’s the secret behind such a rare and formidable success story? Further analysis is required and I will get back to this in later posts. But let me hint at one possible explanation.

The Economist is not in the information business, it’s in the knowledge business. Being bombarded with overloads of information everywhere, people struggle to prioritize and make sense of it all. While the information business is overcrowded and fiercely competitive, the knowledge business isn’t, and perhaps more importantly: the information overload has arguably increased demand for knowledge.

The value of a magazine that quickly distills and analyzes the most important contemporary events is higher today than ten years ago. Conversely, a newspaper that largely reiterates information already available online and via news agencies is in a cutthroat commodity business competing with Google and other IT giants for meager advertising revenues.

Quotes from Kodak’s annual report 2000

As we move further into the information age, historical documents are increasingly available online, giving us the opportunity to marvel at how the times are changing. The statement below comes from Kodak’s annual report in 2000 (the complete quote can be found here), which is the year when Kodak’s decline was about to accelerate.

“Last year, the US economy was red hot, and the so-called “new economy” was even hotter. Today, as you scan the business headlines, the key word is “slump”… consumer confidence is in a blue funk… and the NASDAQ couldn’t get much flatter.

The question for investors now becomes, “Where do you invest your money after the bubble bursts?”

Let me suggest three possible answers. First, it makes sense, now more than ever, to invest in strong brands. Because when times are tighter, consumers are less inclined to risk their money on a new or unknown name.

Second, invest in products and services that offer high satisfaction at a low price. In other words, value-for-money is king.

Third, it might be wise to seek companies that are adept at generating cash. Those are the firms that will continue to invest in themselves and prepare for growth, regardless of the economy.

And that, as you might have already surmised, brings us straight to Kodak. However, if a great brand and a great balance sheet are not sufficiently compelling, there is something else investors should consider: this is a very smart time to be in the picture business.

Picture-taking is now at an all-time high worldwide. Amateur photographers took more than 80 billion snapshots last year, a new record. They ordered more than 100 billion prints, another milestone for the industry.

In the health imaging category (our second largest business), more records were shattered. Healthcare professionals last year ordered more than 1.5 billion Kodak radiological images.

For the past century, our business has been all about making it simpler for people to capture better images, first with film, and more recently, with digital technology. And, as we continue to make film and digital photography more accessible, picture-taking will continue to grow.”

As a rule of thumb, one should be cautious as an investor when management recommends the company's own stock. Management's job is not to analyze the stock market; their job is to run the business and candidly communicate its performance.


Nokia’s decline in figures

I collected some key statistics on the performance of Nokia during the period 2004-2012. While these figures need to be analyzed in further detail, a glimpse at them still gives a good idea of what has happened.

The first graph depicts Nokia's sold volumes, both in emerging markets (China, Asia Pacific, the Middle East, Africa and Latin America) and in total. It is worth noticing how large a share of its volumes was actually sold in developing countries.

Interestingly, the decline is much steeper in developed countries (Europe and the United States) where the company lost 47 percent of its volume from 2008 to 2012 as compared to 22 percent in emerging economies.


The following graph shows Nokia’s financial performance in terms of revenues and operating profit:


Needless to say, the company's market share has declined significantly since it peaked around 40 percent in 2007. In the years up to the introduction of smartphones, Nokia gained market share in a growing market.


Decreasing volumes have implied collapsing operating margins, as illustrated below:

Another, perhaps more important, explanation of Nokia's problems is the decline in Average Selling Price. If each phone sold generated 110 euros of revenue in 2004, then getting only 45 euros per phone in 2012 is of course a tragedy for the company.


Parts of the decline in Average Selling Price can certainly be explained by increasing volumes in developing countries, but it is nevertheless clear that price competition was fierce in these years.
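The figures quoted above imply a steady compound decline; a quick sketch of the implied annual rate:

```python
# Implied compound annual change in Nokia's Average Selling Price,
# from 110 euros (2004) to 45 euros (2012), the figures cited above.
asp_2004, asp_2012, years = 110.0, 45.0, 8
annual_change = (asp_2012 / asp_2004) ** (1 / years) - 1
print(f"{annual_change:.1%} per year")  # roughly -11% per year
```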

Nokia has been squeezed from two ends – in emerging economies, cheaper, local manufacturers have eroded the company’s margins and in the Western world Nokia has been forced to cut prices due to an outdated product portfolio.

It is truly amazing how a company can appear so solid and competitive and then fall apart within only a couple of years.

Polaroid enters the video surveillance industry?!

Polaroid went bankrupt in late 2001 as digital imaging destroyed its profitable revenues from instant film photography. In 1997, the stock traded around 60 dollars; four years later it was frozen at 28 cents.

Ever since, the brand has lived on in various shapes and settings, art being one of them. The company recently announced that it will enter the commercial security market, something that at first appears to be an odd move.

Looking at the current structure of the video surveillance industry, however, this move makes more sense. Video surveillance is undergoing a technological discontinuity in which analog CCTV is increasingly replaced by digital, internet-connected cameras. Such transitions usually create a temporary spike in firm entry as new companies with different competencies see abundant opportunities. Moreover, most of the components required to manufacture a surveillance camera are readily available on the market, implying that entry barriers are rather low. And besides, in what other imaging application would Polaroid re-emerge? The regular camera business is fiercely competitive and demands huge economies of scale.

Whether the company will be successful or not remains to be seen. At present, its entry into video surveillance can be regarded as an indication of the hype and Klondike behavior that currently characterizes the security industry. Sooner or later, this must come to an end and the industry will become more consolidated.


No more Kodak moments in the Olympics

Looking back at the rise and fall of Kodak over the past century, one can make several observations about its role in society. Kodak’s hegemony was manifested through its strong presence in the Olympic Games. During these games, not only athletes compete – firms also compete for our attention. Tracking Kodak’s role in the Olympic Games is therefore a way to track its performance, and vice versa.

In fact, Kodak was present as a sponsor at the first modern Olympic Games in Athens in 1896. Only 16 years old at the time, Kodak had pioneered amateur photography and created a consumer market that it would thrive upon for more than a century.

At the Olympics, the most famous contemporary consumer brands are exposed. Globally recognized top brands such as Coca Cola and McDonald’s are the only ones that can afford and benefit from such major sponsorships. For many decades, Kodak’s presence in the Olympics was more or less taken for granted.
As the company grew in the 20th century and continued to dominate the photographic industry it became increasingly used to extensive market power. Kodak had built a global monopoly position and with such hegemony usually comes a certain arrogance and resistance to change.

By the early 1980s, a challenger named Fujifilm was gaining momentum. The first signs of Kodak's decline could in fact be spotted at the Olympic Games in Los Angeles in 1984. As Kodak controlled about 90 percent of its domestic market and the organizing committee preferred American sponsors, Kodak took its presence for granted. In doing so, it dictated the conditions and was generally very difficult to do business with.

In contrast, the Japanese challenger adopted a ‘name-your-price’-strategy and eventually became the official sponsor of the Olympics in L.A. Kodak now tried desperately to offset this loss through massive TV advertising but the harm couldn’t be undone – Fuji’s green box was now familiar to American consumers.

After 1984, Kodak made sure to have a strong presence at the Olympics, and it seems the company had learnt from its mistakes. This did not, however, stop it from losing market share to Fujifilm, both in the United States and elsewhere.

The next blow to Kodak came with the shift to digital imaging. From the year 2000 onwards, the company slid further into decay, the layoffs continued, and the 2008 Olympics in Beijing was the last time Kodak entered this stage for global consumer brands. The official motivation for not sponsoring the Olympics in London last year was that Kodak wanted to focus its marketing efforts and get closer to its customers. Put differently: Kodak was no longer a global consumer brand.

As the torches were lit in London last year, the lights had gone out at Kodak.


Paper accepted: Facit and the Displacement of mechanical calculators

I was recently informed that my article Facit and the Displacement of Mechanical Calculators has been accepted for publication in the journal IEEE Annals of the History of Computing. The paper seeks to explain why Facit – a Swedish manufacturer of calculators, typewriters and office furniture – declined in the shift to electronics in the early 1970s. Getting an article published is usually a reason to celebrate, at least in the sense that it merits a small blog post.

Drawing on interviews and archival data, the article argues that a combination of factors put Facit in a very awkward position when the transition to electronic calculators gained momentum around 1970-72:

– The shift to electronics rendered Facit’s technological competencies obsolete.

– In the mechanical era, the industry was well consolidated. Firms like Facit were vertically integrated, with their own manufacturing equipment and an extensive service network. The advent of electronics changed the industry structure in several regards. Backward integration no longer made sense, as semiconductor manufacturers now offered integrated circuits to anyone who wished to put a plastic cover over them and sell a calculator. As prices declined, new sales channels (bookstores, discount retailers etc.) also emerged, removing the value of Facit's big sales organization. All in all, the industry became much more competitive as entry barriers were reduced.

– Electronics was an insignificant part of the market in 1967; only a few years later, the technology had surpassed mechanics in every regard. It is hard for any company to deal with such a pace of development, and especially for Facit, a firm that had essentially thrived upon the same technology for four decades.

– To top it off, Sweden as an economy had little experience of electronics. While countries like Japan and the United States had either pioneered integrated circuits or used them from an early point, Sweden did not have the institutions or competencies required to successfully develop electronic calculators. There was a lack of skilled labor and a general lack of understanding and thus, Facit’s attempts to develop electronic calculators remained futile. This element of technological change and the response of established firms has been largely overlooked by previous literature and putting this argument forward is a key contribution of the paper.

Since virtually all manufacturers of mechanical calculators around the world collapsed in the shift to electronics, one cannot be too harsh when assessing Facit's management. After 50 years of continued expansion and growth – a long life by most standards – the company collapsed.


Nokia quarterly presentations 2007-2010: “Nokia’s longer term strategy remains valid and intact”

Apple's iPhone was first revealed in January 2007 and became available to consumers in the United States in June the same year, then was progressively launched globally in 2008. Out of curiosity, I browsed through Nokia's quarterly presentation slides from the years 2007-2010 in order to get a better idea of how the company related to the ongoing shift from feature phones to smartphones. While such a brief and shallow review will not give the full picture of Nokia's response, it might still reveal something.

Going through these slides, it is striking to what extent Nokia emphasizes the strength of its ever larger product portfolio. In the years 2007-2008, virtually every slide deck contains at least two full slides crammed with new product launches. The images below come from some of these presentations.





It is also noteworthy how little is said in these years about technological change and the emergence of smartphones. Most of the information relates to market growth, Nokia's market share, gross margins etc. Towards the end of each presentation, a 2×2 matrix with threats and opportunities is presented. Without exception, this slide is very unspecific, and the term ‘Competitive factors in general’ is frequently used.

Here, there seems to be a symbiotic and destructive relationship between established firms and the stock market. Financial analysts care about the big numbers (revenues, market growth, market share); these notions fit neatly into their spreadsheets and number-crunching exercises. Technological change cannot easily be quantified, nor fully assessed in terms of its organizational impact. Top management therefore happily focuses on the aforementioned issues, and this inevitably makes them pay less attention to changes of a more discontinuous nature. Associate Professor Mary Benner at the University of Minnesota has shown that stock market analysts virtually ignored Kodak's and Polaroid's digital efforts in the 1990s, while paying a lot of attention to (and praising) their product launches based on photochemical film (read more here).

Also, a slight sense of (Finnish) optimism is communicated. In the Q1 report from 2008, the following statement is made:

“Nokia continues to expect industry mobile device volumes in 2008 to grow approximately 10% from the approximately 1.14 billion units Nokia estimates for 2007.” (the presentation can be found here).

In Q2 2008 the following is written about the future:

“Nokia mobile device market share: increase sequentially”

Being present in a growing market can be highly deceptive, because growing demand might obscure the fact that a technology is about to become obsolete, thus sending a false message to incumbent firms that things are actually going well. This was indeed the case with film sales for Kodak in the late 1990s and for analog CCTV companies in the 2000s. The market kept growing, and increasing demand concealed the fact that the technology was going to die, reducing the incumbents' sense of urgency.

In 2008, slides are still filled with launches of new feature phones. However, there is an increased focus on software services and specific phones:




At this point, it seems that the company recognizes that software is becoming more important and that phones are used for a wide range of purposes. But the response is to do more of the same, i.e. launching more feature phones with different functionalities and hanging on to the by now outdated operating system Symbian.

By late 2008, special attention is of course given to the financial crisis. Still no statements about the shift to smartphones, except for this amazing piece of denial (Q4 2008):

“Nokia’s longer term strategy remains valid and intact”

In Q1 2009, the company announced that it would launch Ovi Store and that its music services had been launched in Australia and Singapore. Still no sense of urgency communicated.

Bearing all the above in mind and especially the quote from Q4 2008, the slide below from Q2 2009 comes as quite a surprise:


All of a sudden, it is stated that the industry is undergoing major change, something that has not been communicated at all previously (at least not on the slides). In this presentation, even more attention is given to software and Ovi Store. For the first time, Nokia also states that it will try to “adjust our services businesses and open up for greater opportunities for third party partner services”. Thus, about two years after the launch of the iPhone and the shift towards an open model with application developers, the company finally announces that it is going to do something. In a digital world, two years is a very long time.

In Q4 2009, devices (mobile phones) and services (software) are no longer reported separately but rather on one slide where numbers and events are more aggregated:


As businesses, they still remained unintegrated, mainly because Nokia still refused to do anything about its underperforming operating system.

The following quote can also be found in the slides from Q4 2009:

“4Q showed Nokia’s ability to ramp up new, more compelling offerings, despite a tough competitive environment”

In Q1 2010 (below), the company states:

“Nokia continued to show solid smartphone momentum in lower price points”.

While the word ‘momentum’ communicates something other than actual results, this is still a choice of words that clearly doesn't mirror the seriousness of the situation – especially bearing in mind the results delivered in the coming years. This is also the first time the word smartphone is used at all:


In Q2 2010, Nokia communicates for the first time that it is doing something about its old software. Words such as “increased speed” and “innovation” are used below, but the fact of the matter is that the company is still doing more of the same – it is still trying to compete with cars by developing a faster horse.


By Q3 2010, the big slides featuring Nokia's huge (and obsolete) product portfolio are gone. Sales declined 20 percent in Q2 2011 and another 25 percent in Q3 the same year. In late 2010 and 2011, the presentations become so meager and dry that it hardly makes sense to look at them.

Having researched similar cases in other industries and given talks about how established firms respond to technological change, I usually argue that these firms often recognize the threat but that their responses are not the right ones. Firms do, after all, not act in isolation; they attend fairs where new products are exhibited and usually pay attention to what competitors are doing. I therefore frequently and rather easily make a point of busting the “oversleeping myth”.

Having reviewed the material above, one hardly gets the impression that this was the case with Nokia. The company clearly overslept and top management did not realize the urgency of the situation – the slides do not communicate any major issues until suddenly it is stated in Q2 2009 that the mobile industry is undergoing rapid technological change. This statement was made two quarters after the legendary quote “Nokia’s longer term strategy remains valid and intact”. And once the problems were recognized, the response was for many years the faster horse strategy – better feature phones and eventually an upgrade of Symbian, an operating system that was never designed to be used on smartphones.

One can only marvel at the number of people who lost their jobs and the amount of shareholder value destroyed by the complacency of Nokia's top management in these years.

All the above material can be found at Nokia’s Investor Relations website (here).

Explaining the Collapse of Nokia

In 2005, Nokia was the fifth most valuable brand in the world. With a turnover of 51.1 billion euros in 2007 and an operating profit of 8 billion euros, the company's market share had climbed well above 40 percent.

At this point, most mutual funds had invested significant shares of their capital in Nokia, probably in order to balance their portfolios against the stock market index. As a consequence, the company's stock traded around P/E 15-20, a rather high number considering the size of the firm and its meager growth prospects. When this is the case, and financial institutions keep buying a stock, it is usually the beginning of a collapse. This held true for IBM in the late 1980s, Ericsson in the late 1990s and banks in 2006-2007.

Nokia stock

In these years, politicians around the world argued that their countries needed “a new Nokia” – a company that does so well that it fuels an entire economy's growth over many years. Well, today anyone can have a Nokia. The final sign of things to come was a statement in BusinessWeek in 2007: “Nokia’s dominance in the global cell-phone market seems unassailable.”

The firm collapsed in the coming years. Its new CEO Stephen Elop stated in 2011: “the first iPhone shipped in 2007, and we still don’t have a product that is close to their experience. Android came on the scene just over 2 years ago, and this week they took our leadership position in smartphone volumes”.

How can such a sharp decline be explained? And why has Nokia struggled to develop a competitive smartphone?

Technology S-curves provide a good starting point for addressing this issue. The S-curve theory posits that the advance of a technology is initially slow. Once a breakthrough occurs, performance increases rapidly until a particular solution has reached its limits of what is possible within a certain paradigm. At this point, the S-curve levels off and the technology becomes increasingly vulnerable and likely to be substituted by another technology S-curve.
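The shape described above is the logistic curve. A minimal sketch, with all parameter values invented purely for illustration:

```python
import math

def s_curve(t: float, limit: float = 100.0, steepness: float = 1.0, midpoint: float = 0.0) -> float:
    """Performance of a technology over time under logistic growth:
    slow start, rapid mid-phase, then a plateau near the limit."""
    return limit / (1 + math.exp(-steepness * (t - midpoint)))

# Performance crawls, surges, then flattens out near the limit:
for t in (-6, -2, 0, 2, 6):
    print(t, round(s_curve(t), 1))
```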


Nokia essentially surfed on the S-curve related to hardware and feature phones. Over the years, new functions were crammed into thinner and cheaper phones: radios, cameras, music players, recording capabilities etc. Along this S-curve, Nokia could easily fend off competitors. Despite being late with the introduction of camera phones, the company could still catch up thanks to its financial resources and engineering capabilities in hardware.

While the pace of development has been stunning for feature phones in the 2000s, it all happened along an established technological trajectory. As long as competition centered around offering a broad portfolio of feature phones, Nokia was in control. In this sense, Business Week was right when stating in 2007 that “Nokia’s dominance in the global cell-phone market seems unassailable”.

Approximately at this point in time, the S-curve for feature phones started to level off. It now became increasingly difficult to add valuable technological features or create additional consumer benefits along this trajectory.

Why, then, has Nokia been so slow in its shift to the next S-curve, related to smartphones?

Nokia’s main problem is probably related to the fact that the competencies it developed in the feature phone era were not only less useful for developing smartphones – they were probably downright destructive for such attempts:

Nokia had never been a software company; its competencies related mainly to hardware. Its operating system, Symbian, was essentially designed for phones used for calling and sending text messages, and the online functionality of Nokia phones remained poor. A smartphone is much more about software and online compatibility than a feature phone is. Hence, Nokia was in fact ill equipped to develop smartphones.

Having an established set of competencies related to feature phones, it became financially rational to continue along this trajectory and capitalize on these skills. A large company with a large established market, where customer needs are known, usually struggles to allocate resources to breakthrough innovations. Such a firm faces a high opportunity cost and is therefore unable to renew itself. Big companies think big, but all novelties, by definition, start as something small.

Rumor has it that Nokia was considering the development of a touch-screen smartphone five years ago. The inability of large companies to invest in unknown technologies and unknown needs is probably the reason why it never did so.

Looking ahead, Nokia is facing formidable competition and the company is still not prepared for it, considering that its competencies are still largely stuck in the feature phone paradigm. At first glance, the collaboration with Microsoft makes a lot of sense as Nokia lacks the required software skills. However, Microsoft has never been a dominant player in mobile operating systems.

The fact that Nokia's smartphone Lumia has significantly fewer applications available for its customers also means that the company is in trouble. A small installed base of customers means that fewer companies are willing to develop applications for Lumia, which in turn means that fewer customers are willing to buy a Lumia, which in turn means… a vicious circle.
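The vicious (or virtuous) circle can be sketched as a simple feedback loop. All coefficients below are invented purely for illustration; the point is only the tipping-point dynamic:

```python
BREAK_EVEN_APPS = 100.0  # invented: catalogue size at which a platform just retains its users

def step(users: float) -> tuple[float, float]:
    """One round of the feedback loop between installed base and app supply."""
    apps = users * 2.0                               # developers scale with the installed base
    users = users * (apps / BREAK_EVEN_APPS) ** 0.5  # a thin catalogue bleeds users, a rich one attracts them
    return users, apps

small, large = 40.0, 60.0
for _ in range(5):
    small, _ = step(small)
    large, _ = step(large)
print(round(small, 1), round(large, 1))  # the small platform shrinks, the large one compounds its lead
```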

Warren Buffett once said “turnarounds don’t turn”. This is probably the case for Nokia.

Summing up: competencies have become incompetencies. A fish is very well suited for survival in the water. Now put it on land and see what happens.

The ongoing shift to IP Video Surveillance and predictions of future growth

Back in October 2009, I made some predictions regarding the growth rate of IP video surveillance. Video surveillance is currently undergoing a technological transition from analog CCTV to digital cameras that are connected over the internet.

The presentation below summarizes my predictions concerning the growth rates that could be expected in the period 2009-2013. Drawing on theory about the diffusion of innovations, I argued that growth might be faster than the estimates suggested back then. In 2009, IMS Research suggested that about 50 percent of the market might have switched to IP by 2013, whereas I was even more bullish. For a more detailed description of my rationale, please read this presentation:

Current estimates suggest that by the end of 2012, about 40 percent of the market had made the transition to IP. While this figure will continue to grow in 2013, it is still clear that I have been overly optimistic. The presentation above points out that technology is not adopted in a linear way. As the snowball comes into motion and more people become familiar with a technology, the adoption rate increases. How could this argument result in an inaccurate forecast then?

The answer is probably related to how the market is defined. Recall that S-shaped growth patterns occur in a homogeneous population where the preferences of users are reasonably similar. If this is not the case, a single S-curve of diffusion will be misleading.

Having learnt more about IP video and the technology transition, it has become clear to me that one needs to distinguish between large and small installations. In larger installations, the benefits of IP are more obvious (scalability, lower integration costs), whereas in smaller settings (3-4 cameras), IP has so far been less competitive compared with traditional CCTV. Growth has indeed been phenomenal in the segment for larger installations and has followed a traditional S-curve pattern.
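This point can be illustrated numerically: blending two segments whose S-curves have different midpoints yields an aggregate curve that looks flatter and later than either segment. Segment sizes and timing below are invented for illustration:

```python
import math

def logistic(t: float, midpoint: float, steepness: float = 1.0) -> float:
    """Adoption share within one homogeneous segment."""
    return 1 / (1 + math.exp(-steepness * (t - midpoint)))

def aggregate_adoption(t: float, large_share: float = 0.5,
                       large_mid: float = 0.0, small_mid: float = 4.0) -> float:
    """Market-wide adoption as a size-weighted mix of two segment S-curves:
    large installations convert early, small installations lag behind."""
    return large_share * logistic(t, large_mid) + (1 - large_share) * logistic(t, small_mid)

# Mid-transition, the large-installation segment is far ahead of the blended market figure:
print(round(logistic(2, 0), 2), round(aggregate_adoption(2), 2))
```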

Two conclusions can be drawn from this observation.

1. When making predictions about how technologies grow, one has to be careful about how the market is defined and delimited.

2. In the case of IP video surveillance, there are extensive growth opportunities in the segment for smaller installations.