Latest release of Red Hat OpenShift AI accelerates AI scalability and flexibility

The following text comes from a press release and in no way reflects the opinion of the editorial staff.

Red Hat, the global leader in open source solutions, announces the release of Red Hat OpenShift AI 2.15. This AI/ML platform, based on Red Hat OpenShift, offers businesses a powerful and scalable solution for developing and deploying artificial intelligence applications in a hybrid cloud environment. With this new release, businesses benefit from increased flexibility, advanced tools for customization and monitoring, and improved security and consistency across public clouds, data centers and edge environments.

According to IDC, more than 40% of IT spending among Forbes Global 2000 companies will be devoted to AI-related initiatives. IDC also predicts that the use of generative AI and automation technologies could enable $1 trillion in productivity gains by 2026. Red Hat believes these investments will require a robust platform capable of managing the AI/ML model lifecycle and deploying generative AI applications alongside existing workloads across the hybrid cloud.

Main features of OpenShift AI 2.15

Model registry: centralized management for sharing, tracking and versioning of generative AI models, with multi-registry support. Red Hat has contributed this project to the Kubeflow community.
Data drift detection: continuous monitoring of input data to identify discrepancies between training data and production data, improving model reliability and accuracy.
Bias detection: tools from the TrustyAI community to monitor model fairness and transparency, essential for building trust in AI.
Fine-tuning with LoRA: use of low-rank adapters (LoRA) for efficient fine-tuning of large language models (LLMs), reducing costs and resource requirements (a minimal sketch follows this list).
NVIDIA NIM support: accelerated deployment of generative AI applications through inference microservices that are part of the NVIDIA AI Enterprise platform.
AMD GPU support: new options for developing, training and tuning models with AMD GPUs, well suited to compute-intensive workloads.
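
To make the LoRA feature more concrete, the sketch below fine-tunes a small language model with low-rank adapters using the open source Hugging Face transformers and peft libraries. The base model, dataset and hyperparameters are placeholders chosen for illustration; the press release does not describe OpenShift AI's own tuning workflow, which may differ.

# Minimal LoRA fine-tuning sketch using the Hugging Face transformers and peft
# libraries. The base model, dataset and hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters: only the small adapter matrices are trained,
# which is what keeps cost and resource requirements low.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all parameters

dataset = load_dataset("imdb", split="train[:1%]")  # tiny placeholder dataset
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # adapters are stored separately from the base model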

Improved model deployment

Red Hat OpenShift AI 2.15 improves the deployment of generative AI models with new features such as the vLLM runtime for KServe and support for KServe ModelCars. These additions enable the use of Open Container Initiative (OCI) compliant repositories to store and manage containerized models. In addition, private and public route options for KServe endpoints strengthen security by allowing models to be exposed only on internal endpoints when needed.
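
For readers wondering what a ModelCars-style deployment can look like, the following sketch uses the KServe Python SDK to declare an InferenceService whose model is pulled from an OCI-compliant registry and served with a vLLM runtime. The namespace, runtime name, model format and registry URI are assumptions made for the example, not details taken from the release.

# Sketch of an InferenceService that serves a containerized model pulled from an
# OCI registry (the ModelCars pattern). Namespace, runtime name, model format and
# registry URI are illustrative assumptions.
from kubernetes import client
from kserve import (KServeClient, constants, V1beta1InferenceService,
                    V1beta1InferenceServiceSpec, V1beta1ModelFormat,
                    V1beta1ModelSpec, V1beta1PredictorSpec)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_V1BETA1,
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="llm-demo", namespace="demo"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="vLLM"),
                runtime="vllm-runtime",  # assumed pre-existing ServingRuntime
                # ModelCars: the model weights are pulled from an OCI-compliant registry
                storage_uri="oci://registry.example.com/models/llm-demo:latest",
            )
        )
    ),
)

KServeClient().create(isvc)  # submits the InferenceService to the cluster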

Strengthened AI training and experimentation

This OpenShift AI update also strengthens data science pipelines and experiment tracking. Hyperparameter tuning via Ray Tune improves accuracy and accelerates the training of predictive and generative models. Adding base container images to Ray clusters makes it easier to distribute training and tuning tasks across multiple workloads within a cluster, which reduces processing time and maximizes the use of available nodes.
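
As an illustration of the kind of hyperparameter search Ray Tune performs, here is a minimal sketch; the training function and search space are placeholders, and the release does not specify how OpenShift AI pipelines wire Ray Tune in.

# Minimal Ray Tune hyperparameter search. The trainable function and search space
# are placeholders standing in for a real training loop.
from ray import tune

def train_model(config):
    # Stand-in for real training: derive a fake loss from the sampled
    # hyperparameters so the tuner has something to minimize.
    loss = 1.0 / (config["lr"] * config["batch_size"])
    return {"loss": loss}  # returning a dict reports final metrics to Tune

tuner = tune.Tuner(
    train_model,
    param_space={
        "lr": tune.loguniform(1e-5, 1e-2),
        "batch_size": tune.choice([16, 32, 64]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=8),
)
results = tuner.fit()
print(results.get_best_result().config)  # best hyperparameter combination found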

Availability

Red Hat OpenShift AI 2.15 has been available since mid-November 2024.
