
The new shiny armor of AI

If we listen to business leaders and the press, we should be jumping on the AI bandwagon because of its potential to automate many of the processes everyday companies struggle with. I don’t dismiss this notion entirely, because I think it’s true if you have the ability to integrate this technology in a meaningful way. For example, the startup "scrambl" (full disclosure: I’m a minority investor) uses gen-AI to "understand" applicants’ CVs (curricula vitae) and identify their skills in order to match them to open positions. This works great – I have seen it in action, and while there are some misses, most of that "normalization of skills" works.
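For illustration only (this is not scrambl’s actual pipeline), you can picture a skill-normalization step as mapping the free-text skill mentions extracted from a CV onto a canonical taxonomy. A production system would presumably use an LLM or embedding model rather than the hand-written synonym table in this minimal sketch.

```python
# Hypothetical sketch of "normalization of skills" from CV text (not scrambl's code).
# Assumption: an upstream extraction step has already pulled raw skill mentions.

CANONICAL_SKILLS = {
    "python": "Python",
    "py": "Python",
    "postgres": "PostgreSQL",
    "postgresql": "PostgreSQL",
    "project management": "Project Management",
    "pm": "Project Management",
}

def normalize_skills(raw_mentions):
    """Map raw skill strings to canonical names; unknown mentions are dropped."""
    normalized = set()
    for mention in raw_mentions:
        key = mention.strip().lower()
        if key in CANONICAL_SKILLS:
            normalized.add(CANONICAL_SKILLS[key])
    return sorted(normalized)

print(normalize_skills(["Python", "postgres", "PM", "basket weaving"]))
# -> ['PostgreSQL', 'Project Management', 'Python']
```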

There are other promising examples, such as Q&A systems that help people understand the documentation of a complex environment. When combined with RAG (retrieval-augmented generation), this has the potential to significantly reduce the time it takes to make complexities understandable.
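To make the idea concrete, here is a deliberately minimal sketch of the RAG pattern: retrieve the documentation snippets most relevant to a question and hand them to a language model as context. The sample documents, the naive keyword-overlap retriever, and the prompt format are illustrative assumptions; a real system would use embeddings, a vector store, and an actual LLM call.

```python
# Toy sketch of retrieval-augmented generation (RAG), for illustration only.
# DOCS stands in for real documentation; the final prompt would be sent to
# whichever LLM you choose.

DOCS = [
    "The billing service retries failed invoices every 15 minutes.",
    "Deployments to production require a signed change ticket.",
    "The CRM syncs customer records to the data warehouse nightly.",
]

def retrieve(question, docs, k=2):
    """Return up to k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(question, docs):
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How often does the billing service retry invoices?", DOCS))
```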

But this is where we have to pay attention to the magic, and I would like to make two points:

1. If you think you’re mostly there, think again

There is an old piece of computer lore called the “ninety-ninety rule” that puts a humorous spin on the following observation: once you think you’ve completed 90% of a software project, finishing the remaining 10% will take about as long as it took to get to 90%. I’ve seen this validated more times than I’d like to admit. In some cases, the remainder takes even longer than the initial effort, and there are many reasons for this (changes in scope, management turnover, unknown unknowns, etc.). With gen-AI, it points to something more difficult still: it could take significantly more resources to go from 99.9% to 100%. And herein lies the crux of it all: at what point can we depend on these neural networks? Today’s models, trained by backpropagation, deal in probabilities, not certainties. It’s one thing for a large language model to hallucinate and something entirely different when lives are at stake, as with autonomous driving. Autonomous driving has disappointed over and over because companies like Tesla simply have not figured out how to get from 99.9% to 100% – and that last 0.1% can be the difference between life and death. In practical terms, it means we can’t depend on these models completely, so fail-safe mechanisms are required, which may in turn require human supervision and intervention.
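As a hedged illustration of what such a fail-safe can look like, the sketch below treats the model’s output as a probability and routes anything below a confidence threshold to a human review queue. The threshold value and the model stub are assumptions, not a recipe.

```python
# Minimal human-in-the-loop fail-safe sketch (illustrative only).
# Assumption: the model exposes a confidence score we can threshold; the
# threshold itself would have to be tuned per use case and risk level.

CONFIDENCE_THRESHOLD = 0.98

def model_predict(item):
    """Stand-in for a real model call; returns (label, confidence)."""
    return ("approve", 0.72)

def decide(item):
    label, confidence = model_predict(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": label, "source": "model"}
    # Below the threshold, a human supervises and intervenes.
    return {"decision": None, "source": "human_review_queue"}

print(decide({"id": 42}))  # -> routed to human review in this example
```

With this need for fail-safes in mind, it’s a near-perfect transition to my other point: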

2. The ROI of the use case

AI is being touted as a solve-it-all answer to operational challenges, especially in the context of automation and digital transformation – at least according to the press and the various consultants who smell an opportunity. While it’s true that AI shines in certain use cases, such as the normalization of data in the "scrambl" example, when it comes to existing legacy systems and incumbents, the problems you are trying to solve with AI might need a different solution altogether.

For example, you can attempt to automate your customer service interactions, but how do you integrate large language models into your existing setup and operations? Today, LLMs run in the cloud (at least until small language models become just as effective), which means whatever ERP/CRM system you have will need to send data into the cloud, potentially bringing challenges such as data protection. More likely than not, you already have a multitude of systems and processes in place to keep the complexities accumulated over the years working. In short, you end up with architectural requirements that need to be addressed, raising questions such as:

  • How to access the data: This may not be trivial, especially when it involves sensitive data that is not centralized or lacks frameworks to protect it.
  • How to send it to the cloud: This is challenging when certain data elements need to be masked, or when you need to ensure that the data is never seen by the cloud provider, particularly when it’s third-party data (a minimal masking sketch follows this list).
  • How to retrieve the data: If it's asynchronous or time-delayed, how will it be sent back to your system? Do you have the facility to handle this in a batch-driven environment?
  • How to use the AI-generated or transformed data: Where do you store it? Do you require specific data protection rules? Are you learning from it?
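As promised above, here is a minimal sketch of the masking idea: sensitive fields are replaced with opaque tokens before a record leaves your systems, and the token-to-value mapping stays on premises so responses can be re-identified when they come back. The field names and the token format are assumptions for illustration, not a compliance recipe.

```python
# Illustrative masking sketch: strip sensitive values before sending data to a
# cloud-hosted LLM, keep the mapping locally, and re-insert values on return.

import uuid

SENSITIVE_FIELDS = {"customer_name", "email", "iban"}  # assumed field names

def mask_record(record):
    """Return a cloud-safe copy of the record plus the local token mapping."""
    masked, mapping = {}, {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            token = f"<{field}:{uuid.uuid4().hex[:8]}>"
            mapping[token] = value   # stays on premises
            masked[field] = token    # safe to send to the cloud
        else:
            masked[field] = value
    return masked, mapping

def unmask_text(text, mapping):
    """Re-insert the original values into text returned by the cloud service."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask_record(
    {"customer_name": "Jane Doe", "email": "jane@example.com", "issue": "late invoice"}
)
print(masked)
```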

In short, leveraging AI for automation requires investment and infrastructure. There may be a need for additional middleware and business logic, which must be factored into the equation in terms of costs and resources. Or, to put it another way, it depends on the use case – in some scenarios AI may be brilliant, while in others it could create more work than it’s worth pursuing at this point in time.

The accompanying illustration reflects these observations.

Conclusion

AI's potential for automating processes is significant, but it comes with complexities that need careful consideration. The ninety-ninety rule reminds us that the last mile of implementation can be the most challenging, requiring substantial resources and fail-safes. Meanwhile, the ROI of integrating AI into existing systems depends heavily on the specific use case, highlighting the importance of thorough evaluation and strategic planning.
