
Information technology is no more

(Image source: datwyler.com)

Like many, I took a break over the summer and did some reading. To my amazement, the term "IT", or information technology, kept appearing in much of the literature I read. I started to wonder whether we are doing ourselves any favors by still insisting on this term. I tried to answer that question for myself, and have come to the following realization:

The term information technology has its origin in a world where computers were largely absent – it was coined in a 1958 Harvard Business Review article. The article singled out three distinct elements: the "technique of processing large amounts of information rapidly", "statistical and mathematical methods to decision-making problems" and, finally, "the simulation of higher-order thinking through computer programs". It further explained that information technology would have its greatest impact on middle and top management. To be clear, the article was not wrong; given that its outlook was aimed at the 1980s, it was in fact spot on.
But it was also during the 1980s and early 90s that we saw a change in the computing paradigm. We moved away from centralized mainframe computers, which were accessible only to a few, to personal computers that would soon be found on everyone's desk. In the early 90s, the introduction of corporate computer networks gave rise to corporate email, and business information systems finally became accessible to almost everyone in the company with a personal computer.
This was also the time when many larger companies started to build their own business information systems and mission-critical software applications, while others adapted existing commercial software where it was available.

But then something changed in the early 2000s, coinciding with the advent of the Internet: for the first time, over 50% of US households had a personal computer at home. This started the trend of the consumerization of IT – or rather, if I had my way, the consumerization of computing technology – which allowed the use of personal devices for work. The trend was further accelerated by the adoption of laptops and, finally, by the introduction of smartphones and tablets. This computing evolution was foreshadowed as "ubiquitous computing", a term coined by Mark Weiser of Xerox PARC in 1988. An important aspect of this new form of computing is that it is accessible to anyone, regardless of skills or know-how. Weiser explained it this way: "we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background". Smartphones have indeed fulfilled many of these promises. Long gone are the days when one had to learn cryptic commands for a command-line interface. With smartphones and tablets, the need to understand the underlying technology is gone – thus, computers have "vanished into the background".

The next step in this evolution is "contextual awareness", or "context-aware computing", as defined in a 1994 paper by Bill Schilit, Norman Adams, and Roy Want. It stipulates that computers or software applications can "sense" the context in which the human is interacting and adapt or reconfigure accordingly. We have started to see several applications that try to "sense" the right context and provide the user with relevant information; for example, by knowing your location or the time, your device can alert you to changes in your local environment without you even asking for it. Smartphones can also detect when, for example, you are driving or shopping (based on changes in geolocation); soon our phones will suggest the proper credit card to use in a particular store. This provides an entry point into another "relatively" new computing paradigm: "AI", or artificial intelligence. The story of AI is one of many failures throughout history, with little success until now. It dates back to 1950, when the famed computer scientist Alan Turing described a test of whether a machine could be indistinguishable from a human being – what would later become known as the "Turing test". AI has been used in business since the early 90s, but only recently has it found a much broader footing because of increases in capacity, bandwidth, and computing power. In fact, several companies have now started to create chips specifically for AI applications.
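To make the idea of context-aware computing a bit more concrete, here is a minimal sketch in Python of the credit-card example above: the device "senses" a context (a store category derived from geolocation, and whether you are driving) and adapts its suggestion without being asked. The store categories, card names, and reward table are hypothetical placeholders for illustration, not a real API.

```python
# Minimal sketch of context-aware computing: sense context, adapt behavior.
# Store categories, card names, and the reward table are hypothetical.
from dataclasses import dataclass


@dataclass
class Context:
    store_category: str   # e.g. "grocery", "gas", "travel" – inferred from geolocation
    is_driving: bool      # e.g. inferred from speed and motion sensors


# Hypothetical reward table: which card earns the most in which store category.
BEST_CARD_BY_CATEGORY = {
    "grocery": "Everyday Card",
    "gas": "Drive Card",
    "travel": "Miles Card",
}


def suggest(context: Context) -> str:
    """Adapt the suggestion to the sensed context instead of asking the user."""
    if context.is_driving:
        return "No suggestions while driving."
    card = BEST_CARD_BY_CATEGORY.get(context.store_category, "Default Card")
    return f"You appear to be at a {context.store_category} store: pay with the {card}."


if __name__ == "__main__":
    print(suggest(Context(store_category="grocery", is_driving=False)))
    print(suggest(Context(store_category="gas", is_driving=True)))
```

The point of the sketch is not the rule table itself but the shape of the interaction: the application reconfigures its answer based on what it senses, which is exactly the behavior Schilit, Adams, and Want describe.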

To summarize all these changes, one could put it this way: the shifting computing paradigms we have lived through over the last 50 years must imply something else as well. The computing applications we use for business today no longer fit (and have not fit for a long time) the definition of "IT" or "information technology" – computing has become much more than that, and much more relevant to every aspect of our lives.

In fact, I would go so far as to say that the term "IT" is now a limiting factor for businesses, because technology is boxed in (literally) and only "brought into" the discussion whenever the topic of computing arises. Thinking in terms of an "IT" organization limits a company's understanding of computing and muddies the waters when it comes to determining accountability. For example, to what extent does the product manager define the technology used if we refer to the product manager as "the business" when, in fact, the product is technology? Or who owns an "API" (application programming interface), which is used like a product but is so technical that you need someone with an understanding of the technology? Is that a technology product management function or a business product management function? As long as companies distinguish between technology and business, there can never be clarity on ownership and the greater purpose. But to drill even further into this point: nothing today happens in our lives or in commercial activities without computing. Everything we do, and I really mean everything, requires some form of computing. This, in fact, has been the largest paradigm shift of the last 50 years.

And in many industries, what we used to call IT has become the business. In financial services, for example, almost everything is now done via technology. We are seeing the same trend in other industries. A colleague of mine argued that in today's car manufacturing, technology or "IT" plays only one part of the value chain. But I would argue that at Tesla and other car startups, the computing part is now significantly larger than at any other car manufacturer. Tesla is creating its own AI chips, writing its own software for the car, and designing its own supercomputer to run simulations – computing is intrinsically linked to its business. It is often said that Tesla cars are computers on wheels. Even car companies can no longer afford for IT to be just IT, because computing is becoming more and more central to their business and their cars.

Companies, especially those in industries being disrupted, would do well to understand this, because it is foundational to topics such as digital transformation and agility, and it may very well decide whether a company survives or stagnates and dies.
