
The promise of pen computing




Something exciting happened in personal computing in the early 1990s: GO Corporation launched its PenPoint OS. The computer industry had just embraced the graphical user interface - inspired by Xerox, brought to market by Apple and copied by Microsoft - and this was how we would use computers from now on. Fueled by that magic, visionaries (among them Robert Carr) already saw the next logical step in user input: the pen!
The promise of using a pen to interact with the computer was tempting to the point where Microsoft saw its nascent monopoly challenged and decided to copy an idea once more: it went after PenPoint OS with a special version of Windows 3.1 for pens. Similarly, Apple - not wanting to miss the boat - under the leadership of ex-Pepsi-Cola CEO John Sculley, saw its pen future in a device called the Newton. The pen computing idea was simple: why not take the handwriting we have all practiced since primary school and use it as a way to interface with the computer? The metaphor is pen & paper - but with a smart computer screen.
Implementing this vision proved to be much more of a challenge than anyone had anticipated. Processing power was limited, and handwriting recognition depended very much on how precise the handwriting was. The broader question of how to combine the now-ubiquitous mouse input with a pen was never really addressed either. In short, none of these commercial attempts succeeded - the only successful device in that space was the Palm PDA, which sidestepped natural handwriting recognition altogether with its simplified Graffiti alphabet. And so every pen device disappeared from the market (the Newton was killed by Steve Jobs himself - he hated pens). Microsoft, however, kept at it, somewhat quietly (Bill Gates is a big fan), and although Apple killed the Newton, the Newton's handwriting recognition technology did in fact make it into macOS and (later) iOS.

Enter artificial intelligence and the 2010s, and all the usual suspects are back at the table with their pen computing devices. Microsoft built pen support into its Windows 10 operating system, and Apple offers it across its iPad line. So how does it perform today?

Sadly, not great. The promise of pen computing from the early '90s to the late 2010s - a span of some 25 years - has not changed the game in any meaningful way, and AI has not been the white knight that saves the genre. We could say that pen input has enjoyed some modest success among creative types - designers, artists and so on - but it has had no meaningful impact for the rest of us. I know, because I've been trying for 25 years to make it work. Today's latest Microsoft Surface with a pen does not feel that different from the Newton I had 20 years ago. The challenge is still the same: my handwriting is not properly recognized. Just like autonomous driving, this is an all-or-nothing game. Autonomous driving has to work under any circumstance, because if it doesn't, people die. Handwriting recognition is not as dramatic, but the same principle applies: if it doesn't work 100% of the time, it is useless, because the time it takes to fix your errors eats away at the promised productivity gain, and in the end you're still better off with a keyboard. (Quiz question: how long would it have taken me to write these short paragraphs with a pen-input device? I suspect around four times longer.)

But the far bigger crime is that nobody has ever thought beyond the graphical user interface, which has relied on mouse and keyboard input for the last 30 years and simply doesn't work for pen input devices. My perfect example is Excel: try to enter a formula or a number in a cell with a pen. It's impossible, because the application was never conceived for pen input. Microsoft has only a single application that tries to show the promise of pen computing, OneNote - except that it doesn't help. OneNote is a monstrosity of an application in which you can enter text and graphics in a multitude of ways, all of them inconsistent as hell.
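To put some rough numbers on the keyboard-versus-pen argument above, here is a minimal back-of-envelope sketch in Python. Every figure in it is an assumption of mine for illustration - assumed writing and typing speeds, assumed recognition accuracy, assumed correction time - not a measurement; the point is only to show how quickly even a small error rate, combined with the cost of fixing each miss, eats the productivity gain.

```python
# Back-of-envelope model of effective input speed: pen with handwriting
# recognition vs. a plain keyboard. All numbers are assumptions for
# illustration only, not measurements.

def effective_wpm(raw_wpm: float, accuracy: float, seconds_per_fix: float) -> float:
    """Words per minute once time spent fixing misrecognized words is included."""
    minutes_per_word = 1.0 / raw_wpm                                   # time to enter one word
    fix_minutes_per_word = (1.0 - accuracy) * seconds_per_fix / 60.0   # expected correction overhead
    return 1.0 / (minutes_per_word + fix_minutes_per_word)

# Assumed figures: typing ~50 wpm at 99% accuracy with quick fixes,
# handwriting ~20 wpm at 95% recognition accuracy with slower fixes.
keyboard = effective_wpm(raw_wpm=50, accuracy=0.99, seconds_per_fix=2)
pen = effective_wpm(raw_wpm=20, accuracy=0.95, seconds_per_fix=8)

print(f"keyboard ~{keyboard:.0f} wpm, pen ~{pen:.0f} wpm, ratio ~{keyboard / pen:.1f}x")
```

With these particular assumptions the keyboard comes out roughly three times faster; my guess of "four times longer" simply corresponds to a somewhat worse recognition rate or slower corrections. The exact multiple depends entirely on the assumptions, but the direction never changes.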
At the end of this decade, I sadly have to admit that the promise of pen computing is still very much elusive. Don't get me wrong: it works for capturing notes and for drawing or designing something on screen - in short, for niche applications. But there is certainly no keyboard replacement on the horizon, and one may in fact never arrive unless we completely rethink how we would interact with a computer if we had neither mouse nor keyboard. Given all the investment in existing apps and operating systems, that may never be viable. I fear we will leapfrog straight into controlling computers with our minds. That's a pity, because the pen would be such a phenomenal tool.
