By Matan Ben-Ishay

The Future of FinOps: Unleashing the Potential of AI

We are in the GenAI era, and it is booming as companies fight not to be left behind by the AI revolution. The FinOps space is no different, so here are a few predictions for how AI will accelerate FinOps and cloud optimization:

Architecture as a Service

The best way to avoid needing to optimize? Build an efficient, optimized infrastructure to begin with. Architecture copilots are already here, and AI will soon be designing the most efficient infrastructure possible, tailored to specific users and use cases. Instead of relying on the limited knowledge of individual engineers and architects, AI would draw on a broader pool of publicly shared knowledge and resources to plan an environment designed for speed, reliability, and minimum cost.

With AI at the helm, infrastructure and processes would also be constantly re-evaluated and adapted, with architecture and design changes made automatically as the company scales and launches new products and features, setting up the infrastructure for future growth. AI would be all-seeing and all-knowing, with no fear of breaking things - something most companies struggle with today and which slows down updates. It would streamline the significant updates and changes that most companies hesitate to execute, and it could be configured to prioritize efficiency, reliability, and growth plans.

Anomaly Detection

Anomaly detection is by nature reactive. We react to unexpected or new cost increases, and by the time they are mitigated, we have already lost a chunk of spend. Success means minimizing the cost incurred by an unplanned anomaly, and it depends heavily on reaction time and on implementing a durable fix (to avoid recurring anomalies).

Utilizing AI in this space could help determine patterns and identify areas of vulnerability where anomalies could materialize, before they do. It could combine general lessons from what other companies have experienced with specific knowledge of the customer and its development environment. If certain services fail often, AI could identify them and alert (or even take action) the moment an issue is detected - for example, a service that normally runs only during the day, where the AI starts detecting a pattern of increased spend at night.

Anomaly detection rules also tend to be very generic (e.g., daily cost is higher than the last 7-day average, or a new service was turned on for the first time). AI would learn the specific nuances of the environment and would not be limited to general rules - it could flag things that otherwise wouldn't be viewed as anomalies at all. Even if it starts out flagging false positives, it would learn from its mistakes, incorporate feedback, and evolve over time, creating its own anomaly rules.

Imagine AI as an always-on audit function - reviewing rates and usage in real time and automatically addressing any irregularities.
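As a concrete sketch, an environment-specific rule like the night-time example above could be learned from per-hour baselines rather than a generic daily average. The data and thresholds below are illustrative assumptions, not a real billing feed:

```python
from statistics import mean, stdev

# Hypothetical hourly cost history for a service that normally runs only
# during the day. In practice this would come from a billing export.
history = {hour: [0.0] * 14 for hour in range(24)}   # 14 days of history per hour
for hour in range(8, 18):                            # daytime activity only
    history[hour] = [5.0 + 0.1 * d for d in range(14)]

def is_anomalous(hour: int, cost: float, threshold: float = 3.0) -> bool:
    """Flag a cost as anomalous relative to that hour's own baseline,
    rather than against a generic daily average."""
    baseline = history[hour]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return cost > mu + 0.01    # any spend where there was none before
    return abs(cost - mu) > threshold * sigma

print(is_anomalous(3, 4.0))    # night-time spend on a zero baseline -> True
print(is_anomalous(12, 5.6))   # within normal daytime variation -> False
```

A generic "daily total above 7-day average" rule would miss the night-time spend entirely if daytime costs dipped the same day; a per-hour baseline catches it immediately.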

Forecasting Accuracy

Forecasting is already heavily influenced by machine learning today. Generative AI will take this to the next level in accuracy and in the ability to see around corners and predict the future.

Utilizing AI in forecasting would let users forecast at the smallest granularity in a fraction of the time and with minimal effort, resulting in higher accuracy.
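To illustrate what forecasting at a fine granularity means in practice, here is a minimal sketch that fits a simple linear trend per billing line item and sums the results. The SKU names and cost series are invented for the example; a real system would use far richer models than a straight line:

```python
def linear_forecast(costs: list[float], horizon: int) -> float:
    """Forecast `horizon` periods ahead with a least-squares linear trend."""
    n = len(costs)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(costs) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, costs)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + horizon)

# Per-line-item daily costs (illustrative), each forecast 7 days out, then summed:
sku_costs = {
    "compute/n2-standard": [100, 102, 104, 106, 108, 110, 112],
    "storage/ssd":         [40, 40, 41, 41, 42, 42, 43],
}
total = sum(linear_forecast(series, horizon=7) for series in sku_costs.values())
print(round(total, 1))   # prints 172.3
```

Forecasting each line item separately preserves signals (a fast-growing SKU, a flat one) that get averaged away when the whole bill is modeled as a single series.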

It would also identify and incorporate external factors that would otherwise not be considered to have any impact on the forecast. It's the butterfly effect: analyzing how something supposedly irrelevant to a business ultimately has a trickle-down effect that indirectly impacts its forecast. For example, analyzing the diplomatic relations between two countries and how they might affect interest rates, for a business whose revenue streams are heavily dependent on interest rates.

Launching a new product or introducing a new feature? AI would not only help you launch cost-efficiently but would also act as a calculator forecasting how much the launch will cost. Furthermore, it would calculate the expected ROI on that investment and suggest improvements, so you can make an informed decision about how and when to launch.

Resource management? With the ability to forecast accurately over the long term, CUD or RI purchasing decisions could be automated and efficient. Quota management and resource provisioning would be handled according to the user's desired balance of velocity and cost - minimizing approval processes and any uncertainty around which type and amount of resources to provision.
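Once the utilization forecast is trusted, the CUD/RI break-even decision is essentially arithmetic. A minimal sketch with invented rates and utilization figures:

```python
def commit_is_worthwhile(hourly_on_demand: float,
                         hourly_committed: float,
                         expected_utilization: float) -> bool:
    """A committed-use discount or reserved instance pays off when the
    discounted rate (paid for the full term, used or not) beats on-demand
    spend at the forecast utilization level. All figures are illustrative."""
    effective_on_demand = hourly_on_demand * expected_utilization
    return hourly_committed < effective_on_demand

# A 60%-off commitment vs. on-demand, at 70% forecast utilization:
print(commit_is_worthwhile(1.00, 0.40, 0.70))   # True: 0.40 < 0.70
# The same commitment when forecast utilization is only 30%:
print(commit_is_worthwhile(1.00, 0.40, 0.30))   # False: 0.40 > 0.30
```

The whole decision hinges on `expected_utilization`, which is exactly the number a better long-term forecast pins down.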

Automatically Addressing Ignored Optimizations

Informed ignored items are optimization opportunities users are aware of but push aside because they aren't worth the time and effort to act on. They get deprioritized given the labor-and-time tradeoff relative to the opportunity. The best example? A single VM rightsizing opportunity that requires the FinOps team to reach out to an engineer, and for that engineer to take time from their day-to-day work, to rightsize a VM that would save the customer only a few cents per month. The time and labor of everyone involved outweighs the cost savings.

Even external tools that identify opportunities for you require some intervention (not to mention the cost of the tool itself). For AI, no opportunity is too small: it can be automated and streamlined so that no labor is involved and the savings are captured. The reality is that many such optimization opportunities are too small individually but material in aggregate. Imagine the VM example above, but with 10K or even 100K such VMs - going after them one by one is manual and time-consuming. AI would make that manual work obsolete and minimize the cost for the customer.
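The aggregate effect is simple arithmetic: savings that are negligible per VM become material at fleet scale. The per-VM figure below is an illustrative assumption:

```python
# Illustrative: a rightsizing that saves a few cents per VM per month.
monthly_saving_per_vm = 0.05
fleet_sizes = [1, 10_000, 100_000]

for n in fleet_sizes:
    # One VM is never worth an engineer's time; 100K of them are.
    print(f"{n:>7} VMs -> ${n * monthly_saving_per_vm:,.2f} / month")
```

At one VM the opportunity is rightly ignored; at 100K VMs the same opportunity is $5,000 per month, captured only if no human has to touch each instance.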

