The Internet of Things: The Death of General Purpose Computing?

Ever try to send a text from your laptop while you’re on the go?  Theoretically you could with the right hardware and software, but why would you?  Laptops aren’t meant to be that mobile or that convenient.  The text message, with its 160-character limit, was the quintessential application for the cell phone, and for a while the only one.  Similarly, writing a ten-page document on a smartphone, while technically feasible, is a chore few could endure.

In their zeal to capture as large a market as possible, smartphone and tablet developers went down the same road as their predecessors did with desktop and laptop computers.  They created bloated, buggy operating systems meant to do everything the original personal computer could do, plus a lot more.  On top of these were built applications, which laudably included many single-purpose apps to hail a cab, order a pizza, or get directions.  Many of the most popular apps, however, were still bloated tools meant to extend an entire social media or gaming ecosystem to the device.  In most cases a use case’s demand for speed was largely satisfied, but still with a general-purpose feel, leaving users wanting a more intelligent and nimble app that leveraged context to produce better results faster.  Even the commercials for Cortana, Microsoft’s smart agent, imply that previous generations of this technology have left users disappointed.

But all this raises the question of whether, in our efforts to extend these operating systems and applications to ever more devices, we’ve made it harder to solve the challenges we face.  While a common set of application programming interfaces (APIs) implied by a standard operating system sounds perfectly logical, extending that bloat to specialized tools and environments that don’t need it is a recipe for failure or, at the very least, higher costs for the end user.

The promise of the Internet of Things is that we can finally get to the heart of the problem.  The essence of this concept is the notion that highly specialized devices can be marshaled together to provide all the functionality needed of general purpose computing, but only when it is needed.  Like an army, this ecosystem possesses specific components for gathering intelligence, others for taking action, and still others for coordinating all the disparate activities.  Each part knows only what is necessary to do its part.  It’s the same division of labor that brought us the spectacular innovation of the assembly line in the early twentieth century.  To be successful, the designers of the assembly line had to know intimately what tasks each component performed and, through painstaking effort, link all the parts together in real time.  The result was some truly magnificent products that are now created almost entirely by machines that are themselves part of the Internet of Things.

However, the problem in the knowledge industry, or what we consider the typical office environment, was that hardware and software vendors had little idea how people did their jobs.  Unlike factory workers, office workers operated with much more discretion and could not easily be pinned down to a particular set of rote tasks completed at specific times.  Of course, that didn’t mean the jobs were extremely complicated; most people were still doing fairly predictable tasks involving data entry, workflow, and some transformation.  But the implication was that it was too hard to insert fit-for-purpose applications, devices, and operating systems into the mix.  Instead, the task was left to integrators, who customized incredibly bloated financial and management systems to fit the organization rather than building things from the ground up.

The Internet of Things, combined with advances in artificial intelligence and some truly innovative thinking, offers to change this formula.  Thanks to substantial cost reductions, sensors, software agents, and other distributed devices can capture the data necessary to understand the tasks to be performed, so that we can design and build only the hardware and software each task requires.  Open source software, which lets developers take only the code they need, offers a growing library to draw from.  Combined with 3D printing technology, we could finally see the innovation necessary to lift enterprise information technology (IT) out of the cost-center doldrums it has been saddled with for the last twenty years or more.  Technology that is truly one with the business process becomes possible without the tremendous costs that such customizations have often entailed.  IT workers can then choreograph this diverse symphony rather than be relegated to troubleshooting and tinkering.

And what of cybersecurity?  That’s the best part.  Systems are most secure when their components are bloat free, when they do only what they are supposed to do and no more.  In cybersecurity speak, we talk of reducing the attack surface.  For the last few years we’ve sought to rectify the problem with application whitelisting, a technology meant to limit what that bloated software can do.  A simpler solution would be to not have the unneeded functionality at all.  The theory of evolution tells us that needed functions develop while those no longer needed disappear.  It’s surprising how long it has taken us to recognize that principle in the technology business.  Then again, we haven’t had millions of years of trial and error.  But if Moore’s Law is correct, we’ll only need a few years if we’re going in the right direction.
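To make the whitelisting idea concrete, here is a minimal sketch in Python of how such a tool typically works; the hash values and names are hypothetical, not taken from any real product.  Execution is permitted only for binaries whose cryptographic hash appears on an approved list, and everything else is denied by default.  The point of the paragraph above is that a fit-for-purpose device sidesteps even this: it simply never ships the unneeded functionality.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a binary's contents."""
    return hashlib.sha256(data).hexdigest()

def may_execute(binary_contents: bytes, approved_hashes: set) -> bool:
    """Default-deny policy: a program runs only if its hash is on the approved list."""
    return sha256_of(binary_contents) in approved_hashes

# Hypothetical allowlist containing one approved application.
# In a real deployment, a whitelisting product maintains this list.
approved = {sha256_of(b"contents of the approved payroll app")}

print(may_execute(b"contents of the approved payroll app", approved))  # True
print(may_execute(b"contents of some unknown program", approved))      # False
```

Note the design choice: the list names what *may* run, so anything unexpected, including malware exploiting bloated functionality, is blocked without ever being identified.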

Posted on September 29, 2014

by Gib Sorebo

Chief Cybersecurity Technologist, Leidos


This document was retrieved on Fri, 21 Oct 2016 10:49:38 -0400.
© 2016 EMC Corporation. All rights reserved.