Who Owns Your Car-Generated Data?


All cars manufactured since January 1996 are equipped with sensor networks and computerized control systems to comply with the U.S. EPA’s OBD-II standard. An increasing number of today’s cars have built-in GPS-based navigation, streaming music devices and advanced active safety features, and more cars rolling off the production lines are equipped with wireless connectivity that transmits some of this information to automakers’ services such as GM’s OnStar and Mercedes-Benz’s mBrace.

Combining and analyzing the various bits of information from different vehicle systems can yield rich information, from driving habits and frequently visited locations to the occupants’ taste in music; modern airbag control systems can even sense whether the person in the passenger seat is a child or an adult.

It should come as no surprise then that many companies, from insurance companies to advertisers and content providers, are interested in this information and are trying to convince automakers to share the data collected by connected cars. Many technology and services upstarts are also looking for novel ways to capitalize on this information, some by analyzing and selling it back to consumers and OEMs.

Who Owns Car-Generated Data?

The position of the National Highway Traffic Safety Administration (NHTSA) regarding who owns the data generated by a car, whether retrieved from the car’s “black box”, the event data recorder (EDR), or streamed to the OEM, is quite straightforward:

“Ownership of the EDR and EDR data is a matter of State law, and such provisions vary considerably. NHTSA considers the owner of the vehicle to be the owner of the data collected from an EDR. NHTSA will always ask permission from the owner of a vehicle before downloading any information for use in government databases.”

Of course, there may be further discussion when considering fleet cars or a car loaned to a family member.

Fifteen states have enacted statutes relating to event data recorders and privacy. These states provide that data collected from a motor vehicle event data recorder may only be downloaded with the consent of the vehicle owner or policyholder, with certain exceptions. You can review the data privacy provisions in state statutes in the National Conference of State Legislatures (NCSL)’s report.

Connected Car Services on the Cheap

Of course, automakers control the storage and transmission of vehicle data and are actively using this data to provide telematics services. But while OEMs are building proprietary, closed systems, the aftermarket has been more industrious.

In recent years, a plethora of consumer-grade plug-in OBD-II modules has appeared on the market from companies like Automatic, Zubie, MetroMile, Plex Devices and Dash. In essence, you plug one of these devices into the car’s OBD-II port, connect it wirelessly (typically via Bluetooth) to your cell phone and, voilà, a connected car with no help from the OEM, thank you very much.
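For the curious, the data these dongles read off the OBD-II port is not proprietary magic. Here is a minimal sketch of decoding a few standard mode 01 sensor values; the formulas follow the SAE J1979 standard, while the helper function and the sample bytes are my own illustration:

```python
def decode_pid(pid: int, data: bytes) -> float:
    """Decode a few standard OBD-II mode 01 PID responses.

    Per SAE J1979, A and B denote the first two payload bytes.
    """
    a = data[0]
    b = data[1] if len(data) > 1 else 0
    if pid == 0x05:            # engine coolant temperature, degrees C
        return float(a - 40)
    if pid == 0x0C:            # engine RPM
        return (256 * a + b) / 4
    if pid == 0x0D:            # vehicle speed, km/h
        return float(a)
    raise ValueError(f"PID {pid:#04x} not handled in this sketch")

# Example: an engine-RPM (PID 0x0C) response payload of 0x1A 0xF8
rpm = decode_pid(0x0C, bytes([0x1A, 0xF8]))   # (256*26 + 248) / 4 = 1726.0
```

Aftermarket devices and their companion apps do essentially this, continuously, across dozens of PIDs, which is how they reconstruct trips, driving style and fuel economy.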

The best-known example of a non-OEM service using vehicle data is usage-based insurance (UBI), now offered by a handful of insurance companies, where the prospect of savings appears to trump concerns over privacy.

Automakers and Politicians Take a Stance

While consumers may forgo privacy in order to cut insurance costs, OEMs are fighting for what they perceive as their right and duty to safeguard data, even from car owners.

At last year’s Consumer Electronics Show in Las Vegas, Jim Farley, then head of Ford’s marketing, said: “We know everyone who breaks the law. We know exactly when you do it because we have a GPS sensor in your car.” Notwithstanding the uproar this statement caused, which led Ford to issue a retraction, Farley did add: “By the way, we don’t supply that data to anyone.” Later, Ford clarified that the company did not track anyone without their permission.

More recently, Ian Robertson, BMW’s chief of sales and marketing, was quoted by the Financial Times at the North American International Auto Show (NAIAS): “There’s plenty of people out there saying ‘give us all the data you’ve got and we can tell you what we can do with it’ … And we’re saying: ‘No thank you.’” He added that this included “Silicon Valley” companies, as well as advertising groups.

Over in Europe, Google is pushing Android Auto, an Android-based in-vehicle infotainment (IVI) system. Working with the Open Automotive Alliance, and with Android Auto presumably ready to be deployed by nearly 50 OEMs, Google has caught the attention of both OEMs and German politicians.

According to Bloomberg, Audi’s Chief Executive Officer Rupert Stadler said: “The data that we collect is our data and not Google’s data. When it gets close to our operating system, it’s hands off.” This sentiment was echoed in comments from Volkswagen’s CEO Martin Winterkorn and Daimler’s CEO Dieter Zetsche.

Bloomberg also quoted Joachim Pfeiffer, a spokesman for Merkel’s parliamentary bloc on economic and energy policy: “We mustn’t under any circumstances let our development become dependent on companies like Google.”

Connected Car Predictions for 2015 and Beyond

Questions concerning security, privacy and data use rights in connected car technologies will remain top of mind for OEMs, suppliers and service providers, as well as legislators and politicians, in the years to come. Stay tuned for the next blog post: an overview and commentary on the state of connected car and in-vehicle infotainment technologies, business ecosystem trends, and how they will take shape over the next 12-24 months.


Product Innovation Congress


Product Innovation Congress 2014, San Diego

Another very successful Product Innovation (PI) Congress was held last week in San Diego. Don’t let the very slow trickle of tweets from the event mislead you: the organizers put together a full agenda that kept the delegates engaged throughout the two-day conference. Plus, I don’t think engineers are much into tweeting anyway.

Instead of a detailed chronology of the event, which you can get from reviewing the agenda, I chose to highlight some key points and use them to offer commentary and observations about the state of our industry.

PLM Market Activity

There appears to be much activity in selecting, replacing and upgrading PLM software. Some delegates were first-time PLM buyers, but a surprising number of companies expressed dissatisfaction with their existing solution and were seeking a “better” PLM system. I did not conduct a structured survey, but anecdotally it appears that a good number of those in search of a PLM replacement are users of ENOVIA SmarTeam and ENOVIA MatrixOne.

My observations:

The search for a “better” PLM system will continue to drive activity and put pressure on PLM vendors to deliver greater value through enhanced functionality, lower cost, faster deployment, and new delivery and ownership models. The move of reluctant PLM vendors such as Oracle Agile to offer a cloud delivery model is but one recent example, and I expect other PLM vendors are in the process of following suit. This dynamic keeps the door open for vendors such as Aras, which continues to challenge the hegemony of the incumbents.

That being said, buyers should realize that PLM software itself isn’t a substitute or remedy for flawed and suboptimal product development processes. For each dissatisfied PLM user company you will find many others that are highly successful and are able to reap the full potential of the very same PLM software. It isn’t the software. It’s you. Don’t blame the vendor.

PLM Implementation

Most of the conference presentations made by PLM practitioners from product companies were interesting, but more often than not the insights and recommendations offered by the speakers were what one might consider general project management 101, not even PLM 101.

My observations:

The growing complexity of product development processes and the commensurate expectations from PLM software are challenging product organizations. The recent interest in incorporating the embedded control software development process under the PLM umbrella further complicates the issue.

We need to elevate our view of PLM from product data management (PDM) and business process automation software to a portfolio of processes, best practices and tools that creates a rich context for optimizing complex, multidisciplinary product-related decisions.

Data Migration

PLM and CAD data migration is typically an unpleasant part of engineering software replacement and major upgrades. Judging by the number of delegates interested in the topic, the issue of data interoperability and backward compatibility continues to plague the space.

My observations:

The ability to retain and access product data is critical. However, product organizations are challenged to maintain data compatibility and interoperability across engineering and business tools, with negative consequences for many business-critical activities, from data retention for compliance to design reuse.

Application Lifecycle Management (ALM)

Chairing the Application Lifecycle Management (ALM) track, I spent much time in ALM discussions. While the density of embedded software in a broad range of products continues to increase across most industries, software development tools and practices have not kept pace with increasingly stringent time, budget and quality goals.

A panel discussion with Ford, Hologic, BigLever and IBM (Rational) seemed to gravitate towards the realization that engineering methods and practices, coupled with organizational culture, keep ALM as a separate engineering discipline with its dedicated task specific tools. The panelists also explored, albeit briefly, the notion of a federated software development process environment and ALM functionality delivered as a collection of services.

My observations:

We are likely to continue the PLM vs. ALM debate, partly because of the highly visible position PTC is taking with its Integrity software. But I do not expect to see any significant change in the way organizations use contemporary ALM tools to coordinate and synchronize the software, hardware and mechanical development cycles.

My current research interest is in using product line engineering (PLE) as an alternate way to decompose product architecture and realize a federated software development process environment that maximizes the utility of task-specific software development tools.

Internet of Things (IoT)

As in many conferences today, there was the obligatory Internet of Things (IoT) keynote address. I suspect I am in the minority here, but I found “The Silent Intelligence: The Internet of Things” presentation to be plagued by the all-too-common trivial use cases, generalizations and inaccuracies, and too few real-world reality checks.

On the other hand, the review of the Masdar City project, “Smart Cities in Advancing Global Renewable Developments,” was fascinating.


See you next year in Boston.


Wisdom of Things


On a recent Internet radio show I discussed the potential business value of the Internet of Things (IoT) and my views on the accelerators and, more critically, the inhibitors to the evolution of this much-advertised concept.

Wisdom of Things

The potential value of the IoT is not in the ability to connect and access remote devices via the Internet. It is in the information gathered and shared by a large number of connected devices in a meaningful fashion that enables new business value and new business models. To borrow a phrase from social networking: in the Wisdom of Crowds, connectedness (in this case among people) enables new value; think of the Internet of Things as facilitating the “Wisdom of Things.”

In his book The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, James Surowiecki argues that the aggregation of information in groups results in decisions that are often better than could have been achieved by any individual member of the group. Truth be told, I am not a big fan of the Wisdom of Crowds theory when it comes to people. But I do believe that the collective information stemming from multiple information producers: sensors, measurement devices, decision support systems and, yes, to some extent, people, can be aggregated to improve decision making and extract greater business value.

IoT narratives often talk about smart connected devices. Some devices, from factory robots to home thermostats, are designed to have localized decision-making capabilities and could be considered smart. But many other devices are not much more than connected sensors, for example the Fitbit personal activity tracker, or even the proverbial IoT refrigerator that notifies the homeowner to add milk and eggs to the grocery shopping list.

The true “smarts” of the IoT do not happen at the individual device level. They are the result of aggregating data from multiple heterogeneous devices, both “smart” and “dumb”, and are therefore available and discoverable at the edge of the network.
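A toy sketch of the idea, with invented device names, readings and thresholds: each reading is unremarkable on its own, but aggregating them produces a conclusion no single device could reach:

```python
# Hypothetical readings from a mix of "smart" and "dumb" home devices.
readings = [
    {"device": "thermostat-living",  "kind": "temp_c",    "value": 19.5},
    {"device": "thermostat-bedroom", "kind": "temp_c",    "value": 19.0},
    {"device": "door-sensor",        "kind": "door_open", "value": 1},
    {"device": "furnace-meter",      "kind": "kwh",       "value": 4.2},
]

def diagnose(readings):
    """Aggregate heterogeneous readings into a single diagnosis."""
    temps = [r["value"] for r in readings if r["kind"] == "temp_c"]
    door_open = any(r["value"] for r in readings if r["kind"] == "door_open")
    energy = sum(r["value"] for r in readings if r["kind"] == "kwh")
    # High heating energy plus low temperatures plus an open door means
    # the house is heating the outdoors, a conclusion that none of the
    # individual devices can reach on its own.
    if energy > 3 and temps and max(temps) < 20 and door_open:
        return "heat escaping: close the door"
    return "ok"
```

The thermostat, the door sensor and the meter are each just reporting numbers; the “wisdom” lives in the aggregation logic.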

Conduit vs. Content

The concept of “Wisdom of Things” leads to another observation about the Internet of Things. The potential impact of the IoT is much less a function of how many things are connected and how; rather, it lies in exploiting the information they share. In other words, new business models need to focus not on the IoT “pipes”, or conduits, but on the content that flows through them.

Therefore, I do not consider remotely accessible devices, of the kind we until recently referred to as machine-to-machine (M2M) communication, to be synonymous with the IoT. M2M applications have been shown to provide tangible and meaningful value, but they tend to focus on individual devices and narrow domains, not on the aggregation of heterogeneous information to gain deeper insight into more complex decision-making domains.

If we accept the notion that IoT business models are predicated on information aggregation, then we can begin to better understand and address the barriers to maturing from M2M to the IoT and to realizing the business impact of the IoT.


The Emergence of Application-Specific IoT


General IoT vs. Application-Specific IoT

2014 Gartner Hype Cycle

Gartner recently published the 2014 Gartner Hype Cycle Special Report, which evaluates the market perception and penetration for over 2,000 technologies, services and technology trends. Of particular interest to many was the placement of the Internet of Things (IoT) at the top of the Peak of Inflated Expectations. Gartner defines the Peak of Inflated Expectations as: “Early publicity produces a number of success stories—often accompanied by scores of failures. Some companies take action; many do not.”

And, indeed, inflated market size forecasts and grandiose visions of the IoT reign supreme. Morgan Stanley forecasts that by 2020 there will be 75 billion connected devices, whereas Harbor Research estimates the same market will consist of a meager 8 billion connected devices. Gartner estimates that by 2020 there will be 26 billion connected devices, IDC counters with 30 billion, Cisco says 50 billion… you get the picture.

Do you think they really know? The typical rationale offered by industry analysts is that they use different market and technology taxonomies, hence the vast differences.

On the other hand, does it really matter that much?

What matters is the data these devices can collect and the business value they provide when connected to each other and to advanced analytics and decision-support applications.

Gartner estimates a time span of 5-10 years from the Peak of Inflated Expectations to the Plateau of Productivity, or, simply put, technology and market maturity: the ability to achieve sustainable business value. As I commented elsewhere, when it comes to the IoT, Gartner may be somewhat optimistic about the time to maturity.

But it appears that many in the press, as well as in industry, do not subscribe to Gartner’s assessment and timeline. The IoT gets much attention these days, ranging from breathless headlines to long-term strategy decisions by companies spanning the gamut from onboard data acquisition and wireless communication hardware to enterprise software.

Regrettably, there are still as many trivial IoT scenarios as there are serious ones, and, by and large, we are still awaiting the exponential growth in IoT revenues the pundits are promising. Even investors, who too often pursue early technologies and unsubstantiated business ideas, are, sometimes inexplicably, lukewarm about the potential value of the dozens of companies in the IoT space, most of which are generating less than $10M in revenue.

There are certainly interesting and valuable business solutions based on device connectivity. Until recently we simply called them Machine to Machine (M2M) communication.

How will IoT fans, pundits and especially the CEOs betting their company’s business on the IoT explain the disparity between the vision and the reality? We are already witnessing the introduction of new “classes” of IoT, such as the “industrial IoT”. I expect we will witness the emergence of “application-specific IoT” and “industry-specific IoT” which, in so many ways, will not be that different from the many M2M implementations that address narrowly defined yet no less valuable business opportunities, such as the solutions implemented by Axeda at Diebold, EMC, GE Healthcare, and Philips Healthcare.

What does the IoT need in order to make faster progress toward realizing the bold vision of “billions of connected devices”?

Conduit vs. Content

We often think of the evolution of the World Wide Web as a model the IoT will follow. In fact, there is an assumption that, with the proliferation of instrumented devices and pervasive communication, the adoption rate of the IoT will be much faster than that of the Internet. However, the WWW model fostered a culture of collaboration and open standards such as hypertext, HTML and common web browsers; the W3C consortium develops and maintains standards and connects developers and users.

The IoT industry does not seem to be converging in this direction; almost the opposite. The space is inundated with communication standards and data protocols that aren’t interoperable, and with companies offering as many one-off solutions that attempt to connect devices using incompatible communication methods and interfaces.

What do we mean when we say “connected devices?” Connect to what? How? What for?

The business value proposed by the IoT does not end with enabling connectivity among devices. We are approaching the point where, thanks to the WWW, everything and everyone is a roaming IP node. Connectivity itself isn’t the point; the value is in exploiting the information devices generate, collect and transmit.

The business potential is not in the conduit, or the “plumbing” of the IoT; it is in the content.

However, the data streams of disparate devices cannot simply be connected. Nor can the portfolio of dissimilar applications communicating with these devices simply be daisy-chained as depicted in those futuristic scenarios.

Data interoperability is critical to harvest the potential value of the IoT’s content and to enable new meaningful business models that utilize its data. That means not only compatible and interoperable data protocols, but also, more critically, data models and common semantics, so that disparate devices and services can be linked, aggregated and harmonized to form an IoT solution.
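A minimal sketch of what such harmonization looks like in practice, using two invented vendor payload formats mapped onto a hypothetical common schema:

```python
def normalize(payload: dict) -> dict:
    """Map vendor-specific payloads onto a common data model.

    Both the payload field names and the target schema are invented
    for illustration; real IoT data models are far richer.
    """
    if "tempF" in payload:                     # hypothetical vendor A: Fahrenheit
        celsius = (payload["tempF"] - 32) * 5 / 9
        return {"quantity": "temperature", "unit": "C",
                "value": round(celsius, 1)}
    if "temperature_milli_c" in payload:       # hypothetical vendor B: millidegrees C
        return {"quantity": "temperature", "unit": "C",
                "value": payload["temperature_milli_c"] / 1000}
    raise ValueError("unknown device payload")

# Two devices reporting the same physical quantity in different shapes
# become directly comparable once mapped onto the shared schema.
readings = [normalize({"tempF": 68.0}),
            normalize({"temperature_milli_c": 20500})]
```

Without an agreed data model and semantics, every pairing of device and application needs this kind of translation written by hand, which is exactly the one-off integration trap described above.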


Security is an obvious concern, especially when considering the plethora of consumer devices from obscure, untested and potentially rogue sources. These scenarios are all over the IoT-of-the-future space: cars hacked, manufacturing lines shut down, utilities brought down through malicious acts, and many others.

Related to this topic are questions related to data privacy and data use rights.

Although there isn’t an immediate response to these challenges, it’s logical to assume that both technical solutions and legislation will be easier to implement and control if embodied in a small set of open, standard IoT interoperability mechanisms.

In the meantime, security and privacy concerns will slow down the broad adoption of the “general IoT.”

Reaching a Critical Mass

Although connecting devices to the Internet is getting technologically easier, there are several additional factors to consider, such as the interoperability and security points discussed above. But even if we focus on the industrial “application-specific IoT” (M2M), where those aspects are easier to manage, there are still forces at work that hinder the overnight explosion in connectivity some predict.

Many interesting IoT scenarios assume that facilities and assets such as factory equipment, cars, buildings, railroad tracks, highways and many other “things” become connected overnight. In reality, it will take many years for older assets to be replaced or retrofitted to support connectivity. A few examples illustrate the point.

The average age of passenger and light-duty vehicles in the U.S. is 12.4 years, and it continues to rise. Even if all new cars sold tomorrow were Internet-of-Things-capable, it would take several long years until there are enough connected cars on the road to realize the promised safety and traffic management benefits. On the manufacturing side, the average age of industrial equipment in the U.S. has risen above 10 years, the highest since 1938, while capital spending by manufacturing companies is growing at a slow pace. And in energy: 50% of U.S. power generating capacity is at least 30 years old.
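The fleet turnover arithmetic behind that point is easy to sketch. Using illustrative round numbers (my assumptions, not figures from this post), even if every new car sold were connected, reaching half the fleet takes the better part of a decade:

```python
def years_to_penetration(fleet_size: float, annual_sales: float,
                         target_share: float) -> int:
    """Years until `target_share` of the fleet is connected, assuming
    every new vehicle sold is connected and replaces an unconnected one."""
    connected, years = 0.0, 0
    while connected / fleet_size < target_share:
        connected = min(fleet_size, connected + annual_sales)
        years += 1
    return years

# Assumed round numbers: ~250M vehicles on U.S. roads, ~15M sold per year.
years_to_penetration(250e6, 15e6, 0.5)
```

Under these assumptions it takes nine model years of sales to pass the 50% mark, and real fleets turn over more slowly than this idealized replacement model.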

Looking Ahead

The Internet of Things is not a new Internet. For there to be an Internet of Things, someone has to put the Internet in those things, and then those things have to be placed on the Internet and connected in a manner that realizes a meaningful and worthwhile business value. And, by definition, this value does not occur until there are many devices that connect and communicate effectively and securely.

Until we put the Internet into things—actually, until we put the business into things—there will be no real “Internet of Things”; just a lot of “things” that connect over the Internet, but not necessarily to each other: M2M all over again. (With attribution to Mike Elgan.)


TED: Trivializing the Internet of Things


Nicholas Negroponte, the MIT Media Lab founder, spoke at TED Vancouver this week, sharing his vision of the Internet of Things. He was quoted by Liz Gannes as saying: “I look today at some of the work being done around the ‘Internet of Things’ and it’s kind of tragically pathetic.”

Negroponte was an early, astute observer of our digital world. I have always liked Negroponte’s observation about how backwards fax machines are: we take digital information, such as a document or a spreadsheet, and convert it to a static analog representation in order for it to be transmitted via an (analog) fax machine.