How should CIOs deal with AI providers?

Generative AI is at the heart of discussions within companies. But very few can claim that these technologies solve a critical problem.

For several months, generative AI has been at the heart of discussions within companies. Everyone is looking to see how they can use it, whether to reduce costs or improve the experience of their customers and employees.

Surveys converge on one point: organizations plan to increase their AI investments in the coming months. Yet few can affirm that these technologies will solve a critical problem, and for good reason: not all models confer a competitive advantage.

Maintaining a rational approach to AI despite widespread enthusiasm

In principle, every new solution implemented within an organization should be evaluated on its ability to meet needs with concrete economic and operational impact. But the excitement around AI has changed the picture somewhat: executives are under enormous pressure to deploy AI solutions quickly, and despite the proliferation of large language model (LLM) releases over the past 18 months, their business value is sometimes hard to demonstrate.

The fact is that the performance of AI solutions depends on the context and data on which they are based. Useful AI is designed and developed for specific uses, narrowing the context and the task down to what the business actually needs to achieve the desired results. Specialized AI tools, such as intelligent document processing, can be trained over time to read and understand a specific type of document regardless of format, layout, size or language. This is why it is essential to test against clearly formulated, measurable and quantifiable expectations. And when implemented solutions fail to meet those expectations, IT managers should document the issues, raise their concerns with the vendor, review the contractual agreements and discuss customization options.
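To make "measurable and quantifiable expectations" concrete, here is a minimal sketch of what such an acceptance check could look like for a document-extraction tool. The `extract_fields` function, the gold answers and the 95% accuracy target are purely illustrative assumptions, not a real vendor API or a contractual figure.

```python
# Minimal sketch of an acceptance test for a document-extraction tool.
# `extract_fields`, the gold answers and the 95% target are illustrative
# assumptions, not any particular vendor's API or figures.

GOLD = {
    "invoice_001.pdf": {"total": "1250.00", "currency": "EUR", "due_date": "2024-07-31"},
    "invoice_002.pdf": {"total": "84.90", "currency": "EUR", "due_date": "2024-08-15"},
}

ACCURACY_TARGET = 0.95  # threshold agreed with the vendor before the pilot


def extract_fields(path: str) -> dict:
    """Stand-in for the vendor's extraction call; replace with the real SDK."""
    # Simulated output so the sketch runs end to end.
    if path == "invoice_001.pdf":
        return {"total": "1250.00", "currency": "EUR", "due_date": "2024-07-31"}
    return {"total": "84.90", "currency": "EUR"}  # missing due_date on purpose


def field_accuracy(gold: dict, predictions: dict) -> float:
    """Share of expected fields returned with exactly the gold value."""
    checked = correct = 0
    for doc, expected in gold.items():
        predicted = predictions.get(doc, {})
        for field, value in expected.items():
            checked += 1
            correct += predicted.get(field) == value
    return correct / checked if checked else 0.0


if __name__ == "__main__":
    predictions = {doc: extract_fields(doc) for doc in GOLD}
    score = field_accuracy(GOLD, predictions)
    print(f"Field accuracy: {score:.1%} (target {ACCURACY_TARGET:.0%})")
    if score < ACCURACY_TARGET:
        # Concrete evidence to document and raise with the vendor or in contract review.
        print("Below target: log the failing fields and raise them with the vendor.")
```

Keeping the target inside the test makes the pass/fail criterion explicit, which is exactly the kind of documented evidence worth bringing back to the vendor when results fall short.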

Rather than rushing to deploy AI solutions in the hope of gaining a competitive advantage, it makes more sense to take a measured approach and ensure that these tools meet the needs of the business. This is where specialized AI comes in: organizations need small, purpose-built language models that can understand, interpret and act on complex data with a high degree of accuracy and efficiency.

AI technologies are still new and sometimes poorly understood by the managers who implement them. IT managers must therefore be trained to better master them and to make decisions that go beyond vendors' marketing pitches.

Allocate internal resources to actively monitor this growing market

The AI market is still in its infancy, and new solutions emerge every day without necessarily delivering compelling improvements to business operations. It can be difficult to separate fact from fiction among the myriad promises made by vendors and their investors.

To determine how much trust to place in a vendor, CIOs must therefore rely on hard facts, such as the vendor's track record in implementing AI technologies.

However, the solutions that are popular today are likely to lose ground as competition intensifies. The speed at which the AI market is evolving therefore means that IT managers must monitor its development closely, while keeping an eye on the solutions they have already approved.

Beyond tracking the evolution of the AI landscape and identifying potential partners, monitoring this market helps teams build a clearer understanding of the impact and risk level of these technologies.

Questioning the reliability, security and bias of AI models

Most generative AI solutions carry risks, including imprecise or inaccurate results. Many analysts expect regulators to scrutinize the validity of AI claims, but managers and their teams must still apply skepticism when evaluating generated results, which somewhat tempers the value of the solutions adopted.

Security is another area where AI stands apart. These solutions are evaluated differently from other technologies, and those responsible for securing them need to ask more questions about data privacy, copyright liability, and the limitations of the models currently on the market.

Unlike most emerging technologies, which are often slow to gain traction, AI has such a power of attraction and such transformative potential that companies seek to integrate it quickly, hoping to remain competitive or even stand out from their competitors. But too much haste carries the risk of errors of judgment and, at times, costs out of proportion to the services delivered.

Until the market takes shape, and to protect themselves against the pitfalls of these still-young technologies, companies should adopt AI in moderation, for example by including legal safeguards in their contracts that allow them to walk away if performance falls short.
