If we listen to the press and to business leaders, we should be getting behind the AI wagon because of its potential to automate many of the processes everyday companies struggle with. I don’t dismiss this notion entirely, because I think it holds true if you have the ability to integrate the technology in a meaningful way. For example, the startup "scrambl" (full disclosure: I’m a minority investor) is making use of gen-AI by "understanding" CVs (curricula vitae) from applicants and identifying skills to match them to open positions. This works well: I have seen it in action, and while there are some misses, most of that "normalization of skills" holds up.
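To make the idea concrete, here is a minimal sketch of what such skill normalization could look like. The "call_llm" helper and the toy taxonomy are hypothetical stand-ins; scrambl's actual pipeline is not public, so this reflects the general technique rather than their implementation.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a generative-AI completion call (assumed, not a real API)."""
    raise NotImplementedError("Wire up your LLM provider here")

# Toy canonical skill taxonomy, purely for illustration.
CANONICAL_SKILLS = ["python", "project management", "data analysis", "sales"]

def normalize_skills(cv_text: str) -> list[str]:
    prompt = (
        "Extract the candidate's skills from the CV below and map each one to "
        f"the closest entry in this taxonomy: {CANONICAL_SKILLS}. "
        "Answer with a JSON list of taxonomy entries only.\n\n" + cv_text
    )
    # The model deals in probabilities, not certainties, so validate the output
    # against the taxonomy instead of trusting it blindly.
    raw = call_llm(prompt)
    skills = json.loads(raw)
    return [s for s in skills if s in CANONICAL_SKILLS]
```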
There are other promising examples, such as Q&A systems that help people understand the documentation of a complex environment. Combined with RAG (retrieval-augmented generation), this has the potential to significantly reduce the time it takes to make complexities understandable.
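As a rough illustration of the mechanics, here is a toy RAG loop over documentation snippets. Retrieval is reduced to naive keyword overlap purely for readability; a real system would use embeddings and a vector store, and "call_llm" is again a hypothetical stand-in for whatever model endpoint is used.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a generative-AI completion call (assumed, not a real API)."""
    raise NotImplementedError("Wire up your LLM provider here")

# A stand-in documentation corpus.
DOCS = {
    "deploy.md": "Deployments run nightly via the release pipeline ...",
    "auth.md": "Service-to-service calls are authenticated with short-lived tokens ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by how many question words they share (illustrative only).
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    # Ground the model in the retrieved snippets to reduce hallucination.
    context = "\n---\n".join(retrieve(question))
    prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```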
But this is where we have to pay attention to the magic, and I would like to make two points:
1. If you think you’re mostly there, think again
There is an old piece of computer lore called the “ninety-ninety rule” that puts a humorous spin on the following observation: if you think you’ve completed a software project up to 90%, finishing the remaining 10% will take as long as it took to reach 90%. I’ve seen this validated more times than I’d like to admit. In some cases it takes even longer than the initial effort, and there are many reasons for this (changes in scope, management turnover, unknown unknowns, etc.). With gen-AI, it could also point to something more difficult: it could take significantly more resources to go from 99.9% to 100%. And herein lies the crux of it all: at what point can we depend on these neural networks? Today’s networks, trained by backpropagation, deal in probabilities, not certainties. It’s one thing to have a large language model hallucinate and something entirely different when lives are at stake, such as with autonomous driving. That field has disappointed over and over because companies like Tesla simply have no idea how to get from 99.9% to 100%, and that remaining 0.1% could be the difference between being alive or dead. In practical terms, it means we can’t depend on these models 100%, and fail-safe mechanisms are required, which may in turn require human supervision and intervention, as sketched below. This need to know when automation can be trusted is also a near-perfect transition to my other point.
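One common shape for such a fail-safe is a confidence threshold with a human fallback. The sketch below assumes a hypothetical "classify" model call that returns a label and a confidence score; the 0.999 threshold merely mirrors the 99.9% figure above and is not a recommendation.

```python
def classify(item) -> tuple[str, float]:
    """Hypothetical model call returning (label, confidence in [0, 1])."""
    raise NotImplementedError("Wire up your model here")

def escalate_to_human(item):
    # Route the case to a person instead of acting on an uncertain prediction.
    print(f"Needs review: {item!r}")
    return None

def handle(item, threshold: float = 0.999):
    label, confidence = classify(item)
    if confidence >= threshold:
        return label              # automate the easy majority of cases
    return escalate_to_human(item)  # the last fraction needs supervision
```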
2. The ROI of the use case
AI is being touted as a solve-it-all for operational challenges, especially in the context of automation and digital transformation, at least according to the press and the various consultants who smell an opportunity. It’s true that AI shines in certain use cases, such as the normalization of data in the "scrambl" example. But when it comes to existing legacy systems and incumbents, the problems people attempt to solve with AI might need a different solution altogether.
For example, you can attempt to automate your customer service interactions, but how do you integrate large language models into your existing setup and operation? Today, LLMs run in the cloud (until we have small language models that are just as effective), which means whatever ERP/CRM system you have will need to send data into the cloud, potentially bringing challenges such as data protection. More likely than not, you already have a multitude of systems and processes in place to ensure that the complexities created over the years keep working. In short, you end up with architectural requirements that need to be addressed, raising questions such as the following (a small sketch of one possible answer comes after the list):
- How to access the data: This may not be trivial, especially when it involves sensitive data that is not centralized or lacks frameworks to protect it.
- How to send it to the cloud: This is challenging when certain data elements need to be masked or if you need to ensure that the data is never seen by the cloud provider, particularly when it’s third-party data.
- How to retrieve the results: If processing is asynchronous or time-delayed, how will the output be sent back to your system? Do you have the facilities to handle this in a batch-driven environment?
- How to use the AI-generated or transformed data: Where do you store it? Do you require specific data protection rules? Are you learning from it?
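As one illustration of how the masking and round-trip questions above might be answered, here is a minimal sketch that tokenizes e-mail addresses before anything leaves the in-house system, calls a hypothetical cloud LLM, and restores the originals on the way back. The field being masked, the token scheme, and the "call_cloud_llm" helper are all assumptions made for the sake of the example.

```python
import re
import uuid

def call_cloud_llm(prompt: str) -> str:
    """Placeholder for the cloud-hosted LLM call (assumed, not a real API)."""
    raise NotImplementedError("Cloud LLM call goes here")

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace e-mail addresses with opaque tokens the cloud provider cannot resolve."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    masked = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _swap, text)
    return masked, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    # Restore the original values once the response is back in-house.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

def summarize_ticket(ticket_text: str) -> str:
    masked, mapping = mask(ticket_text)
    summary = call_cloud_llm(f"Summarize this support ticket:\n{masked}")
    return unmask(summary, mapping)
```

Even this tiny example hints at the middleware mentioned below: masking rules, token storage, and a way to reconcile delayed responses all become part of the integration.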
In short, leveraging AI for automation requires investment and infrastructure. There may be a need for additional middleware and business logic, which must be factored into the equation in terms of costs and resources. To put it another way, it depends on the use case: in some scenarios it may be brilliant, while in others it could create more work than it’s worth at this point in time.
Conclusion
AI's potential for automating processes is significant, but it comes with complexities that need careful consideration. The ninety-ninety rule reminds us that the last mile of implementation can be the most challenging, requiring substantial resources and fail-safes. Meanwhile, the ROI of integrating AI into existing systems depends heavily on the specific use case, highlighting the importance of thorough evaluation and strategic planning.