Red Hat Releases OpenStack Services on OpenShift
https://www.webpronews.com/red-hat-releases-openstack-services-on-openshift/ (Wed, 28 Aug 2024)

In a big win for open-source cloud computing, Red Hat has announced the general availability of OpenStack Services on OpenShift.

OpenStack is an open-source cloud computing platform that has gained significant traction in the telecoms sector. Meanwhile, OpenShift is a Kubernetes-based containerization platform developed by Red Hat.

The general availability of OpenStack Services on OpenShift means that organizations can now deploy cloud platforms as part of their containerized workflows. Red Hat touts the combination of the tools as a better way for organizations, especially in the telecom industry, to integrate traditional and cloud-native networks.

This is a significant step forward in how enterprises, particularly telecommunication service providers, can better unify traditional and cloud-native networks into a singular, modernized network fabric. Red Hat OpenStack Services on OpenShift opens up a new pathway for how organizations can rethink their virtualization strategies, making it easier for them to scale, upgrade and add resources to their cloud environments.

Red Hat says OpenStack Services on OpenShift allows compute node deployment up to 4x faster than Red Hat OpenStack Platform 17.1.

The company also touts the following benefits:

  • Accelerated time-to-market with Ansible integration;
  • A scalable OpenStack control plane that can manage Kubernetes-native pods running on Red Hat OpenShift;
  • Easier day-2 operations for control plane and lifecycle management;
  • Greater cost management and the freedom to choose third-party plug-ins and virtualized resources;
  • Improved security and compliance, with scanning of the control plane and role-based access control, plus encrypted communications and memory cache;
  • A deeper understanding of the health of your hybrid cloud via the observability user interface, the cluster observability operator, and an OpenShift cluster logging operator;
  • AI-optimized infrastructure that supports hardware acceleration technologies, helping ensure seamless integration and efficient utilization of specialized hardware for AI tasks.

Red Hat says OpenStack Services on OpenShift should be a boon for telecom companies, especially as they look to capitalize on AI developments:

By further blending Red Hat OpenStack Platform with Red Hat OpenShift, Red Hat will continue to help telecommunication service providers solve today’s problems while also preparing their environments to best capitalize on opportunities provided by intelligent networks that can leverage AI, flourish at the edge and scale on-demand. 94% of telecommunication companies in the Fortune 500 rely on Red Hat, underscoring our proven ability to support and modernize their networks. With Red Hat OpenStack Services on OpenShift, telecommunication service providers can expand new services, applications and revenue streams – propelling their business forward for 5G and beyond.

“Red Hat’s dedication to OpenStack is demonstrated through our extensive contributions to the project, our leadership in the OpenStack community and our focus on delivering enterprise-grade OpenStack solutions to our customers,” said Chris Wright, senior vice president of global engineering and chief technology officer. “This dedication must evolve as our customers’ needs change, and Red Hat OpenStack Services on OpenShift will help provide our OpenStack customers with a more unified, flexible application platform.”

Kubernetes Scaling Strategies: A Deep Dive into Efficient Resource Management
https://www.webpronews.com/kubernetes-scaling-strategies-a-deep-dive-into-efficient-resource-management/ (Sun, 25 Aug 2024)

In the ever-evolving world of cloud computing, Kubernetes has emerged as a dominant force, providing enterprises with a robust platform to manage containerized applications at scale. However, scaling within Kubernetes is not a one-size-fits-all proposition. It involves a complex interplay of strategies that balance resource utilization with performance demands. This deep dive explores the intricacies of Kubernetes scaling strategies, offering insights from industry experts and practical guidance for optimizing your Kubernetes deployments.

The Importance of Scaling in Kubernetes

Scaling is one of the most critical aspects of cloud computing, particularly in containerized environments like Kubernetes. Effective scaling ensures that applications have the right amount of resources—neither too much nor too little—to meet their operational demands. This delicate balance is crucial because over-provisioning resources leads to unnecessary costs, while under-provisioning can degrade performance or even cause application failures.

As one Kubernetes expert from Sysxplore succinctly puts it: “Scaling is probably one of the most important aspects of computing and a common cause of bankruptcy. If our processes use more memory and CPU than they need, they’re wasting money or stealing those resources from others.” The goal, therefore, is to assign just the right amount of resources to processes, a task that Kubernetes helps achieve through its sophisticated scaling mechanisms.
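
Concretely, that assignment happens through the resource requests and limits declared on each container, which are what the schedulers and autoscalers discussed below act upon. A minimal sketch of a pod-spec fragment, with hypothetical container name and values:

    # Fragment of a pod spec: requests drive scheduling decisions,
    # limits cap what the container may actually consume.
    containers:
    - name: web
      image: example/web:1.0
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi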

Vertical Scaling: A Legacy Approach

Vertical scaling, or scaling up, involves adding more CPU and memory to an existing node or application. This method increases the capacity of individual components rather than adding more components to handle the load. In Kubernetes, vertical scaling is typically managed through the Vertical Pod Autoscaler (VPA), which adjusts the resource requests and limits of pods based on their observed usage.

For legacy applications that cannot run multiple replicas, vertical scaling is often the only viable option. “Vertical scaling is useful for applications that cannot run multiple replicas—so single-replica applications might be good candidates for VPA and not much more,” says a Kubernetes consultant. However, the limitations of vertical scaling are evident; it does not work well with horizontal scaling, which is the preferred method in most modern cloud-native applications.

Moreover, vertical scaling in Kubernetes comes with certain caveats. Changes to pod resources often require a pod restart, which can be disruptive to application performance. As the consultant points out, “Single-replica applications are the best candidates for vertical scaling, but we do not tend to design applications like that anymore.” Consequently, while vertical scaling has its place, particularly in managing older applications, it is not the go-to strategy for most Kubernetes environments.
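
For illustration, a minimal VerticalPodAutoscaler manifest might look like the sketch below. It assumes the VPA add-on is installed in the cluster and that a single-replica Deployment named legacy-app exists; the resource bounds are invented for the example:

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: legacy-app-vpa
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: legacy-app        # hypothetical single-replica workload
      updatePolicy:
        updateMode: "Auto"      # applies new requests by evicting and recreating the pod
      resourcePolicy:
        containerPolicies:
        - containerName: "*"
          minAllowed:
            cpu: 100m
            memory: 128Mi
          maxAllowed:
            cpu: "2"
            memory: 2Gi

The updateMode: "Auto" line is where the restart caveat shows up in practice: new resource values take effect only when the pod is recreated.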

Horizontal Scaling: The Preferred Strategy

Horizontal scaling, or scaling out, is the process of increasing the number of replicas of a pod to distribute the load more evenly across multiple instances. This method is the cornerstone of Kubernetes’ scalability, allowing applications to handle increased traffic by simply adding more pods.

Horizontal Pod Autoscaler (HPA) is the primary tool for managing horizontal scaling in Kubernetes. HPA monitors metrics like CPU and memory usage and adjusts the number of pod replicas accordingly. “Horizontal scaling is a must for all applications that can run multiple replicas and do not get penalized by being dynamic,” the Sysxplore expert notes. This method is particularly effective for stateless applications, which can easily be replicated without worrying about data consistency across instances.

For example, an HPA configuration might specify that an application should have a minimum of two replicas and a maximum of five, scaling up when CPU usage exceeds 80%. This ensures that the application can handle varying loads without overburdening any single pod. However, HPA is not without its limitations. It primarily scales based on CPU and memory metrics, which may not capture the full picture of an application’s performance needs.
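
A sketch of that example configuration, assuming a stateless Deployment named web and the autoscaling/v2 API:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                      # hypothetical stateless workload
      minReplicas: 2
      maxReplicas: 5
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80     # add replicas when average CPU exceeds 80%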

Event-Driven Scaling with KEDA

For more complex scaling requirements, particularly those involving external or custom metrics, Kubernetes Event-Driven Autoscaling (KEDA) offers a more flexible alternative to HPA. KEDA allows scaling based on a wide range of triggers, such as queue length, database load, or custom application metrics.

“KEDA shines for scaling based on any other criteria,” says a Kubernetes architect. Unlike HPA, which is limited to CPU and memory metrics, KEDA can scale applications based on virtually any metric that can be observed, making it ideal for event-driven applications. For instance, an e-commerce platform might use KEDA to scale its order processing service based on the number of pending orders in a queue, ensuring that the system can handle sudden spikes in demand.

KEDA works by extending the capabilities of HPA, integrating with various data sources such as Prometheus, Kafka, or Azure Monitor. This flexibility makes KEDA particularly powerful in environments where applications need to respond quickly to external events or where traditional resource metrics are insufficient to determine scaling needs.
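
A hedged sketch of the order-queue scenario above, using a KEDA ScaledObject with a Kafka lag trigger; the workload name, broker address, topic, and threshold are all hypothetical:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: order-processor-scaler
    spec:
      scaleTargetRef:
        name: order-processor        # Deployment that drains the queue
      minReplicaCount: 0             # KEDA can scale to zero between bursts
      maxReplicaCount: 20
      triggers:
      - type: kafka
        metadata:
          bootstrapServers: kafka.example.svc:9092
          consumerGroup: orders
          topic: pending-orders
          lagThreshold: "50"         # target backlog per replica

Under the hood, KEDA creates and manages an HPA for the target workload, which is why it layers cleanly on top of the mechanism described earlier.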

Scaling Kubernetes Nodes: Vertical vs. Horizontal

Just as applications need to be scaled, so too do the nodes that run them. In Kubernetes, node scaling can be approached vertically or horizontally, each with its own set of considerations.

Vertical scaling of nodes involves adding more resources—CPU, memory, or storage—to existing nodes. While this might be necessary in certain on-premises environments, it is generally less efficient in cloud environments, where nodes are typically created and destroyed dynamically. “If a node is too small, create a bigger one and move the app that needed more capacity to that node,” advises the Sysxplore expert. The overhead involved in dynamically resizing nodes often makes horizontal scaling the more practical choice.

Horizontal scaling of nodes, managed by the Cluster Autoscaler, is the preferred method in Kubernetes environments. The Cluster Autoscaler automatically adjusts the number of nodes in a cluster based on the resource requirements of the pods running within it. This ensures that the cluster can handle varying workloads without the need for manual intervention.

For example, during a traffic spike, the Cluster Autoscaler might add additional nodes to ensure that all pods have the resources they need. Once the traffic subsides, the autoscaler reduces the number of nodes, saving costs by only using the resources that are necessary at any given time.

“Horizontal scaling of nodes is a no-brainer,” the expert asserts. “Enable Cluster Autoscaler right away—just do it.” This strategy not only optimizes resource utilization but also ensures that the cluster can scale up or down in response to real-time demands, providing both flexibility and cost-efficiency.
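
On managed Kubernetes services, enabling it is typically a flag or console setting; on self-managed clusters, it means running the cluster-autoscaler deployment with bounds for each node group. An illustrative excerpt from such a Deployment spec, assuming an AWS node group named my-node-group:

    # Sketch of the relevant container args; node-group name and bounds are hypothetical.
    containers:
    - name: cluster-autoscaler
      image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
      command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --nodes=2:10:my-node-group        # min:max:node-group-name
      - --balance-similar-node-groups=true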

Best Practices for Kubernetes Scaling

Given the various scaling strategies available in Kubernetes, determining the best approach for your applications can be challenging. Here are some best practices to guide your scaling decisions:

  1. Use Vertical Scaling for Legacy Applications: If your application cannot run multiple replicas, consider using VPA to manage its resource allocation. However, be mindful of the limitations and potential disruptions caused by pod restarts.
  2. Leverage Horizontal Scaling for Modern Applications: For most cloud-native applications, horizontal scaling with HPA is the optimal choice. Ensure that your applications are designed to run multiple replicas and are stateless where possible.
  3. Incorporate Event-Driven Scaling with KEDA: For applications that need to respond to external events or custom metrics, KEDA provides the flexibility needed to scale based on non-traditional metrics. Consider using KEDA alongside HPA for complex applications with diverse scaling requirements.
  4. Automate Node Scaling with Cluster Autoscaler: Always enable the Cluster Autoscaler in your Kubernetes clusters. This ensures that your cluster can dynamically adjust its size to meet the resource demands of your applications, optimizing both performance and cost.
  5. Monitor and Adjust Scaling Parameters: Scaling is not a set-it-and-forget-it process. Continuously monitor the performance of your scaling strategies and adjust parameters as needed to ensure optimal resource utilization.

Final Thoughts: Scaling Kubernetes for Success

Scaling in Kubernetes is a multifaceted challenge that requires a deep understanding of both the platform and your specific application needs. By leveraging the right combination of vertical and horizontal scaling strategies, along with tools like HPA, VPA, KEDA, and the Cluster Autoscaler, you can ensure that your Kubernetes deployments are both efficient and resilient.

As cloud computing continues to evolve, so too will the strategies for scaling Kubernetes. Staying informed about the latest developments and best practices will be key to maintaining a competitive edge in this dynamic landscape. Whether you’re scaling a small startup application or a large enterprise system, Kubernetes provides the tools you need to manage resources effectively, ensuring that your applications can grow and adapt in an ever-changing environment.

Navigating the Complex Landscape of AWS Container Services
https://www.webpronews.com/navigating-the-complex-landscape-of-aws-container-services/ (Sat, 13 Apr 2024)

The plethora of container deployment options in Amazon Web Services’ (AWS) ever-evolving ecosystem can be overwhelming. This complexity is not just a trivial inconvenience; it’s a significant challenge that developers and companies face when optimizing their applications for the cloud. Understanding the different services AWS offers for container deployment, along with their advantages and disadvantages, is crucial for making informed decisions that align with specific business needs.

An in-depth video by the YouTube channel Be A Better Dev navigates the complex landscape of AWS container services.

Container Deployment on AWS: A Multitude of Options

AWS provides several container services, each tailored to different requirements and use cases. Here’s a breakdown of the most popular services and their ideal use scenarios:

1. Amazon Elastic Kubernetes Service (EKS)

If Kubernetes is your choice for container orchestration, Amazon EKS is the go-to service. It offers a managed Kubernetes service that simplifies the tasks of setting up, scaling, and managing container applications. EKS is highly scalable and resilient, spreading applications across availability zones to enhance fault tolerance. However, newcomers to Kubernetes might find the initial setup daunting, and the costs can vary significantly based on the resources used.

2. AWS Lambda

For those operating within the serverless paradigm, AWS Lambda can now run functions packaged as container images, allowing code to execute in response to events on a fully managed platform. Lambda is particularly cost-effective for applications with variable workloads due to its pay-as-you-go pricing model. However, it imposes a 15-minute maximum execution time, which may not be suitable for long-running applications.

3. AWS Fargate

Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon EKS. This service removes the need to manage servers and clusters, making it easier to focus on designing and building applications. Fargate is ideal for applications that require long-running processes and high availability without the operational overhead of managing servers.

4. AWS ECS (Elastic Container Service)

ECS is an end-to-end solution for running a wide range of containerized applications. It supports Docker containers and, with Fargate, now offers a serverless option to run them without managing servers or clusters. ECS is highly versatile but comes with a complexity that can be a barrier for users unfamiliar with container orchestration.

5. AWS Lightsail

Lightsail is designed for simpler use cases, such as small businesses or developers who want to launch a project quickly. It provides a more straightforward and cost-effective option for running containers, with a setup process that is significantly less complex than ECS or EKS. However, it might not scale as well as other AWS services for larger applications.

6. AWS App Runner

App Runner is the newest addition to AWS’s container services, offering an easy way to build and run applications directly from a container image or source code. It is a fully managed service, making it ideal for developers who prefer to focus on their applications rather than infrastructure management.

7. Amazon EC2

While not a container service per se, EC2 allows users to run containers on virtual machines they manage. EC2 offers excellent flexibility and control over containers, making it suitable for custom container orchestration setups. However, it requires a deep understanding of cloud infrastructure management, which can be a significant hurdle for less experienced users.

Choosing the Right Service

The decision to use a particular AWS container service depends on several factors, including the complexity of the application, budget constraints, specific technical requirements, and team expertise. A flowchart or decision tree approach can help clarify the best path forward by considering these variables systematically.

Services to Avoid

While AWS offers a range of powerful tools for container deployment, some services may no longer be the best fit due to newer alternatives that offer improved functionality and ease of use. For instance, Elastic Beanstalk, while versatile, has been somewhat superseded by services like AWS App Runner, which offers similar capabilities but with greater simplicity and lower cost.

Conclusion

As containers continue to be a critical part of cloud infrastructure, understanding the nuances of each AWS service is vital to deploying efficient, resilient, and cost-effective applications. Whether your application requires the robustness of Kubernetes with EKS, the simplicity of App Runner, or the power of EC2, AWS provides various solutions to meet the diverse needs of modern software development. Making informed choices about container deployment will ensure that your applications are performant and aligned with your strategic business goals.

Kubernetes Continues to Evolve in the Container Orchestration Space
https://www.webpronews.com/kubernetes-continues-to-evolve-in-the-container-orchestration-space/ (Sat, 09 Mar 2024)

Kubernetes continues to evolve rapidly with ongoing innovations and advancements in the container orchestration space. Here are some of the latest innovations and trends related to Kubernetes:

  1. Kubernetes Service Meshes: Service meshes such as Istio, Linkerd, and Consul are gaining popularity for managing microservices communication within Kubernetes clusters. These tools provide features like traffic management, observability, and security without requiring changes to application code.
  2. Serverless Kubernetes: Serverless frameworks like Knative and KEDA (Kubernetes-based Event-Driven Autoscaling) enable auto-scaling of containerized workloads and provide a serverless experience on Kubernetes, allowing developers to focus on writing code without worrying about infrastructure management.
  3. GitOps: GitOps practices are becoming more prevalent for managing Kubernetes clusters and applications. GitOps leverages Git repositories as the single source of truth for declarative infrastructure and application definitions, allowing for automated deployments, rollbacks, and versioning.
  4. Multi-Cluster Management: As organizations adopt Kubernetes at scale, managing multiple clusters across different environments (e.g., on-premises, cloud, edge) becomes crucial. Tools like Rancher, VMware Tanzu, and Google Anthos enable centralized management, monitoring, and governance of distributed Kubernetes deployments.
  5. Kubernetes-native Continuous Delivery: Continuous Delivery (CD) platforms like Argo CD and Flux CD are designed specifically for Kubernetes environments. They automate the deployment of application changes based on Git repository updates, ensuring consistent and auditable application deployments (see the sketch following this list).
  6. Kubernetes Operators: Operators extend Kubernetes’ capabilities to manage complex, stateful applications. They encapsulate operational knowledge into software, automating tasks like provisioning, scaling, and maintenance. The Operator Framework and Operator Hub provide a framework and repository for sharing and discovering Kubernetes Operators.
  7. Container Runtime Innovation: While Docker remains a popular container runtime, alternatives like containerd, CRI-O, and Kata Containers are gaining traction for their lightweight footprint, improved security, and better integration with Kubernetes.
  8. Edge Computing with Kubernetes: Kubernetes is increasingly being used for edge computing scenarios where resources are distributed across geographically dispersed locations. Projects like K3s, OpenYurt, and KubeEdge provide lightweight Kubernetes distributions optimized for edge deployments, enabling consistent application management across edge and cloud environments.
  9. Security Enhancements: Kubernetes security continues to evolve with features like Pod Security admission (the successor to the now-removed PodSecurityPolicies), Network Policies, and runtime security. Projects such as Falco and OPA (Open Policy Agent) help enforce security policies and detect anomalous behavior within Kubernetes clusters.
  10. Ecosystem Growth: The Kubernetes ecosystem continues to expand with a rich ecosystem of third-party tools, libraries, and integrations aimed at simplifying Kubernetes adoption, enhancing developer productivity, and addressing various operational challenges.
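
As referenced in item 5, here is a minimal sketch of the GitOps pattern expressed as an Argo CD Application; the repository URL, path, and namespaces are hypothetical:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-service
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/deploy-configs.git
        targetRevision: main
        path: apps/my-service        # manifests defining the desired state
      destination:
        server: https://kubernetes.default.svc
        namespace: my-service
      syncPolicy:
        automated:
          prune: true                # delete resources removed from Git
          selfHeal: true             # revert out-of-band changes to match Git

With automated sync enabled, Argo CD continuously reconciles the cluster against the Git repository, pruning resources deleted from Git and reverting changes made outside it.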

These are just a few examples of the latest innovations and trends in the Kubernetes ecosystem. As Kubernetes adoption continues to grow, we can expect further advancements and enhancements in various areas of container orchestration, management, and deployment.

Google Cloud Fixes Kubernetes Security Flaw
https://www.webpronews.com/google-cloud-fixes-kubernetes-security-flaw/ (Tue, 05 Mar 2024)

Google Cloud has fixed a flaw impacting Kubernetes that could allow an attacker to escalate their privileges.

According to The Hacker News, Palo Alto Networks’ Unit 42 discovered the flaw and reported it via Google’s Vulnerability Reward Program. Google detailed the issue in a security bulletin:

An attacker who has compromised the Fluent Bit logging container could combine that access with high privileges required by Anthos Service Mesh (on clusters that have enabled it) to escalate privileges in the cluster. The issues with Fluent Bit and Anthos Service Mesh have been mitigated and fixes are now available. These vulnerabilities are not exploitable on their own in GKE and require an initial compromise. We are not aware of any instances of exploitation of these vulnerabilities.

Google recommends manually upgrading GKE to ensure customers are running the patched version:

The following versions of GKE have been updated with code to fix these vulnerabilities in Fluent Bit and for users of managed Anthos Service Mesh. For security purposes, even if you have node auto-upgrade enabled, we recommend that you manually upgrade your cluster and node pools to one of the following GKE versions or later:

  • 1.25.16-gke.1020000
  • 1.26.10-gke.1235000
  • 1.27.7-gke.1293000
  • 1.28.4-gke.1083000
Microsoft Azure Linux Containers for AKS Now Available
https://www.webpronews.com/microsoft-azure-linux-containers-for-aks-now-available/ (Mon, 04 Mar 2024)

Microsoft has announced the general availability of its Azure Linux for Azure Kubernetes Service (AKS).

Microsoft first announced a preview of Azure Linux containers in October 2022. Jim Perrin, Linux Systems Group Principal Program Manager Lead, announced the general release in a blog post.

We are excited to announce the general availability of the Azure Linux container host for Azure Kubernetes Service (AKS). The Azure Linux container host for AKS is a lightweight, secure, and reliable OS platform optimized for performance on Azure. With this platform, you can easily deploy and manage your container workloads using the same proven tooling used by many of Microsoft’s own services. This General Availability announcement follows our October preview announcement under the CBL-Mariner project codename. We’d like to thank the customers who provided valuable feedback and insight during our preview. Your insight and feedback helped to shape the product and ensure it’s ready for production workloads.

Getting started with the Azure Linux container host is as easy as changing the OSSku parameter in your ARM template or other deployment tooling. For more information or to get started check out our documentation.

Perrin emphasized the platform’s security and reliability.

Our goal is to provide a secure and reliable platform to run your workloads. Towards this end, all updates to the Azure Linux container host are first run through a rigorous suite of Azure validation tests. This suite of tests is kept constantly updated as support for new scenarios is added. Additionally, since there are far fewer packages in the container host, the volume of required security patching is lower, and these issues are patched promptly as well. We closely monitor and fully curate the software supply chain, which enables a greater assurance of quality and resilience end to end.

ISVs and vendors looking to partner with Microsoft can reach out to the company via azurelinuxisv@microsoft.com.

96% of Third-Party Cloud Container Apps Have Known Vulnerabilities
https://www.webpronews.com/96-of-third-party-cloud-container-apps-have-known-vulnerabilities/ (Mon, 04 Mar 2024)

A whopping 96% of third-party cloud container apps have known vulnerabilities, highlighting ongoing cloud security challenges.

Cloud computing is often touted as more secure than traditional options. Unfortunately, this is only true if all parties involved make security a prime objective.

According to Palo Alto Networks’ Unit 42 team, some 96% of third-party container apps have known vulnerabilities. In addition, 63% of third-party code templates contain insecure configurations.

The news is especially concerning given the rise of supply chain attacks. Hackers are increasingly targeting widely used, third-party software, services, containers and plugins. Successfully compromising a single vendor whose product is used by thousands of customers can have a far greater impact than compromising a single target.

Unit 42 highlights the danger of supply chain cloud attacks:

In most supply chain attacks, an attacker compromises a vendor and inserts malicious code in software used by customers. Cloud infrastructure can fall prey to a similar approach in which unvetted third-party code could introduce security flaws and give attackers access to sensitive data in the cloud environment. Additionally, unless organizations verify sources, third-party code can come from anyone, including an Advanced Persistent Threat (APT).

Organizations that want to stay secure must start making DevOps security a priority:

Teams continue to neglect DevOps security, due in part to lack of attention to supply chain threats. Cloud native applications have a long chain of dependencies, and those dependencies have dependencies of their own. DevOps and security teams need to gain visibility into the bill of materials in every cloud workload in order to evaluate risk at every stage of the dependency chain and establish guardrails.

AWS Using Bottlerocket Linux For Container Hosting
https://www.webpronews.com/aws-using-bottlerocket-linux-for-container-hosting/ (Sat, 02 Mar 2024)

AWS has revealed that Bottlerocket Linux is the operating system (OS) it is using for container hosting.

Containers are packages containing all the apps, code, libraries and dependencies necessary to run. Containers can be easily moved from one host to another, without worrying about the underlying OS and environment. Containers can also be managed to prevent any one app or process from hogging a system’s resources, making them the ideal way to scale cloud, hosting and IT systems.

Bottlerocket is a new Linux distribution that AWS designed and optimized specifically to work with containers.

“Bottlerocket reflects much of what we have learned over the years,” writes Jeff Barr, Chief Evangelist for AWS. “It includes only the packages that are needed to make it a great container host, and integrates with existing container orchestrators. It supports Docker images and images that conform to the Open Container Initiative (OCI) image format.

“Instead of a package update system, Bottlerocket uses a simple, image-based model that allows for a rapid & complete rollback if necessary. This removes opportunities for conflicts and breakage, and makes it easier for you to apply fleet-wide updates with confidence using orchestrators such as EKS.

“In addition to the minimal package set, Bottlerocket uses a file system that is primarily read-only, and that is integrity-checked at boot time via dm-verity. SSH access is discouraged, and is available only as part of a separate admin container that you can enable on an as-needed basis and then use for troubleshooting purposes.”

AWS is launching a public preview of the OS and inviting others to try it.

Red Hat OpenShift Comes to Oracle Cloud Infrastructure
https://www.webpronews.com/red-hat-openshift-comes-to-oracle-cloud-infrastructure/ (Thu, 28 Sep 2023)

Despite competing in some markets, Red Hat and Oracle are expanding their alliance to bring Red Hat OpenShift to Oracle Cloud Infrastructure (OCI).

Red Hat bills OpenShift as “the industry’s leading hybrid cloud application platform powered by Kubernetes for architecting, building, and deploying cloud-native applications.”

The collaboration will see Red Hat OpenShift available on both OCI Compute virtual machines and bare metal. Customers have the assurance that Red Hat OpenShift on OCI is a solution that is “tested, certified, and supported by both Oracle and Red Hat.”

The certification and support for Red Hat OpenShift on OCI will build on the availability of Red Hat Enterprise Linux running on OCI as a supported operating system that was announced in January 2023. Now, Red Hat Enterprise Linux is also certified to support workloads on OCI bare metal servers and Oracle VMware Cloud Solution, in addition to OCI flexible virtual machines, with Red Hat OpenShift certification to follow at general availability. Furthermore, customers can now use Red Hat Enterprise Linux image builder, available as part of their Red Hat Enterprise Linux subscription, to create customized Red Hat Enterprise Linux gold images for OCI to accommodate a wide range of application workloads and security compliance requirements.

“With today’s announcement, Red Hat and Oracle continue to deliver on our efforts to extend customer choice and flexibility on OCI to our large, global customer base,” said Ashesh Badani, senior vice president and chief product officer, Red Hat. “Red Hat Enterprise Linux and Red Hat OpenShift on OCI offer customers the power to build, deploy, and manage enterprise applications on OCI at scale for faster results and with easier manageability, equipping them with the flexibility to choose their level of control and security based on business needs.”

“Enterprises are migrating to Oracle Cloud Infrastructure to take advantage of the platform’s highly performant, secure, and low-cost services,” said Karan Batta, senior vice president, Oracle Cloud Infrastructure. “Fully certifying and supporting Red Hat OpenShift on Oracle Cloud Infrastructure will enable Red Hat OpenShift customers to simply and easily run their workloads anywhere in the world on OCI’s distributed cloud.”
