<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Kubernetes Blog</title>
    <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/</link>
    <description>The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community.</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <image>
      <url>https://raw.githubusercontent.com/kubernetes/kubernetes/master/logo/logo.png</url>
      <title>The Kubernetes project logo</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/</link>
    </image>
    
    <atom:link href="https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/feed.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Kubernetes v1.31: kubeadm v1beta4</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</link>
      <pubDate>Fri, 23 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</guid>
      <description>
        
        
        &lt;p&gt;As part of the Kubernetes v1.31 release, &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/setup-tools/kubeadm/&#34;&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt; is
adopting a new (&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;v1beta4&lt;/a&gt;) version of
its configuration file format. Configuration in the previous v1beta3 format is now formally
deprecated, which means it&#39;s supported but you should migrate to v1beta4 and stop using
the deprecated format.
Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases.&lt;/p&gt;
&lt;p&gt;In this article, I&#39;ll walk you through the key changes: I&#39;ll explain the kubeadm v1beta4 configuration format
and how to migrate from v1beta3 to v1beta4.&lt;/p&gt;
&lt;p&gt;You can read the reference for the v1beta4 configuration format:
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;kubeadm Configuration (v1beta4)&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;a-list-of-changes-since-v1beta3&#34;&gt;A list of changes since v1beta3&lt;/h3&gt;
&lt;p&gt;This version improves on the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/config-api/kubeadm-config.v1beta3/&#34;&gt;v1beta3&lt;/a&gt;
format by fixing some minor issues and adding a few new fields.&lt;/p&gt;
&lt;p&gt;In short:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Two new configuration elements: ResetConfiguration and UpgradeConfiguration&lt;/li&gt;
&lt;li&gt;For InitConfiguration and JoinConfiguration, &lt;code&gt;dryRun&lt;/code&gt; mode and &lt;code&gt;nodeRegistration.imagePullSerial&lt;/code&gt; are supported&lt;/li&gt;
&lt;li&gt;For ClusterConfiguration, there are new fields including &lt;code&gt;certificateValidityPeriod&lt;/code&gt;,
&lt;code&gt;caCertificateValidityPeriod&lt;/code&gt;, &lt;code&gt;encryptionAlgorithm&lt;/code&gt;, &lt;code&gt;dns.disabled&lt;/code&gt; and &lt;code&gt;proxy.disabled&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Support for &lt;code&gt;extraEnvs&lt;/code&gt; in all control plane components&lt;/li&gt;
&lt;li&gt;&lt;code&gt;extraArgs&lt;/code&gt; changed from a map to a structured list of arguments that allows duplicate keys&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;timeouts&lt;/code&gt; structure for init, join, upgrade and reset.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For details, see the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;official documentation&lt;/a&gt;; the full list of changes is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Support custom environment variables in control plane components under &lt;code&gt;ClusterConfiguration&lt;/code&gt;.
Use &lt;code&gt;apiServer.extraEnvs&lt;/code&gt;, &lt;code&gt;controllerManager.extraEnvs&lt;/code&gt;, &lt;code&gt;scheduler.extraEnvs&lt;/code&gt;, &lt;code&gt;etcd.local.extraEnvs&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The ResetConfiguration API type is now supported in v1beta4. Users are able to reset a node by passing
a &lt;code&gt;--config&lt;/code&gt; file to &lt;code&gt;kubeadm reset&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dryRun&lt;/code&gt; mode is now configurable in InitConfiguration and JoinConfiguration.&lt;/li&gt;
&lt;li&gt;Replace the existing string/string extra argument maps with structured extra arguments that support duplicates.
The change applies to &lt;code&gt;ClusterConfiguration&lt;/code&gt; - &lt;code&gt;apiServer.extraArgs&lt;/code&gt;, &lt;code&gt;controllerManager.extraArgs&lt;/code&gt;,
&lt;code&gt;scheduler.extraArgs&lt;/code&gt;, &lt;code&gt;etcd.local.extraArgs&lt;/code&gt;. Also to &lt;code&gt;nodeRegistrationOptions.kubeletExtraArgs&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Added &lt;code&gt;ClusterConfiguration.encryptionAlgorithm&lt;/code&gt; that can be used to set the asymmetric encryption
algorithm used for this cluster&#39;s keys and certificates. Can be one of &amp;quot;RSA-2048&amp;quot; (default), &amp;quot;RSA-3072&amp;quot;,
&amp;quot;RSA-4096&amp;quot; or &amp;quot;ECDSA-P256&amp;quot;.&lt;/li&gt;
&lt;li&gt;Added &lt;code&gt;ClusterConfiguration.dns.disabled&lt;/code&gt; and &lt;code&gt;ClusterConfiguration.proxy.disabled&lt;/code&gt; that can be used
to disable the CoreDNS and kube-proxy addons during cluster initialization.
Skipping the related addon phases during cluster creation will set the same fields to &lt;code&gt;true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Added the &lt;code&gt;nodeRegistration.imagePullSerial&lt;/code&gt; field in &lt;code&gt;InitConfiguration&lt;/code&gt; and &lt;code&gt;JoinConfiguration&lt;/code&gt;,
which can be used to control whether kubeadm pulls images serially or in parallel.&lt;/li&gt;
&lt;li&gt;The UpgradeConfiguration kubeadm API is now supported in v1beta4 when passing &lt;code&gt;--config&lt;/code&gt; to
&lt;code&gt;kubeadm upgrade&lt;/code&gt; subcommands.
For upgrade subcommands, the usage of component configuration for kubelet and kube-proxy, as well as
InitConfiguration and ClusterConfiguration, is now deprecated and will be ignored when passing &lt;code&gt;--config&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Added a &lt;code&gt;timeouts&lt;/code&gt; structure to &lt;code&gt;InitConfiguration&lt;/code&gt;, &lt;code&gt;JoinConfiguration&lt;/code&gt;, &lt;code&gt;ResetConfiguration&lt;/code&gt; and
&lt;code&gt;UpgradeConfiguration&lt;/code&gt; that can be used to configure various timeouts.
The &lt;code&gt;ClusterConfiguration.timeoutForControlPlane&lt;/code&gt; field is replaced by &lt;code&gt;timeouts.controlPlaneComponentHealthCheck&lt;/code&gt;.
The &lt;code&gt;JoinConfiguration.discovery.timeout&lt;/code&gt; is replaced by &lt;code&gt;timeouts.discovery&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Added &lt;code&gt;certificateValidityPeriod&lt;/code&gt; and &lt;code&gt;caCertificateValidityPeriod&lt;/code&gt; fields to &lt;code&gt;ClusterConfiguration&lt;/code&gt;.
These fields can be used to control the validity period of certificates generated by kubeadm during
sub-commands such as &lt;code&gt;init&lt;/code&gt;, &lt;code&gt;join&lt;/code&gt;, &lt;code&gt;upgrade&lt;/code&gt; and &lt;code&gt;certs&lt;/code&gt;.
Default values continue to be 1 year for non-CA certificates and 10 years for CA certificates.
Also note that only non-CA certificates are renewable by &lt;code&gt;kubeadm certs renew&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
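&lt;p&gt;To make the new fields concrete, here is a minimal, illustrative sketch of a v1beta4 configuration that uses several of them; the values shown (timeout duration, validity periods) are examples, not recommendations:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
# new in v1beta4: dry-run mode and serial/parallel image pulls
dryRun: true
nodeRegistration:
  imagePullSerial: false
# new in v1beta4: consolidated timeouts structure
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
# new in v1beta4: key/certificate algorithm and validity periods
encryptionAlgorithm: ECDSA-P256
certificateValidityPeriod: 8760h0m0s    # 1 year (the default)
caCertificateValidityPeriod: 87600h0m0s # 10 years (the default)
# new in v1beta4: disable the kube-proxy addon
proxy:
  disabled: true
&lt;/code&gt;&lt;/pre&gt;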
&lt;p&gt;These changes simplify the configuration of tools that use kubeadm
and improve the extensibility of kubeadm itself.&lt;/p&gt;
&lt;h3 id=&#34;how-to-migrate-v1beta3-configuration-to-v1beta4&#34;&gt;How to migrate v1beta3 configuration to v1beta4?&lt;/h3&gt;
&lt;p&gt;If your configuration is not using the latest version, it is recommended that you migrate using
the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-migrate&#34;&gt;kubeadm config migrate&lt;/a&gt; command.&lt;/p&gt;
&lt;p&gt;This command reads an existing configuration file that uses the old format, and writes a new
file that uses the current format.&lt;/p&gt;
&lt;h4 id=&#34;example-kubeadm-config-migrate&#34;&gt;Example&lt;/h4&gt;
&lt;p&gt;Using kubeadm v1.31, run &lt;code&gt;kubeadm config migrate --old-config old-v1beta3.yaml --new-config new-v1beta4.yaml&lt;/code&gt;&lt;/p&gt;
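&lt;p&gt;As an illustration of what the migration does, consider the &lt;code&gt;extraArgs&lt;/code&gt; change described above: an old-style string/string map is rewritten as a structured list of name/value pairs. The snippets below are a hedged sketch of the input and output files, not verbatim tool output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# old-v1beta3.yaml (input): extraArgs as a map
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC

# new-v1beta4.yaml (output): extraArgs as a structured list
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
  - name: authorization-mode
    value: Node,RBAC
&lt;/code&gt;&lt;/pre&gt;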
&lt;h2 id=&#34;how-do-i-get-involved&#34;&gt;How do I get involved?&lt;/h2&gt;
&lt;p&gt;Huge thanks to all the contributors who helped with the design, implementation,
and review of this feature:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Lubomir I. Ivanov (&lt;a href=&#34;https://github.com/neolit123&#34;&gt;neolit123&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Dave Chen (&lt;a href=&#34;https://github.com/chendave&#34;&gt;chendave&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Paco Xu (&lt;a href=&#34;https://github.com/pacoxu&#34;&gt;pacoxu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Sata Qiu (&lt;a href=&#34;https://github.com/sataqiu&#34;&gt;sataqiu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Baofa Fan (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Calvin Chen (&lt;a href=&#34;https://github.com/calvin0327&#34;&gt;calvin0327&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ruquan Zhao (&lt;a href=&#34;https://github.com/ruquanzhao&#34;&gt;ruquanzhao&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For those interested in getting involved in future discussions on kubeadm configuration,
you can reach out to the kubeadm maintainers or &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cluster-lifecycle/README.md&#34;&gt;SIG Cluster Lifecycle&lt;/a&gt; through several channels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;v1beta4 related items are tracked in &lt;a href=&#34;https://github.com/kubernetes/kubeadm/issues/2890&#34;&gt;kubeadm issue #2890&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Slack: &lt;a href=&#34;https://kubernetes.slack.com/messages/kubeadm&#34;&gt;#kubeadm&lt;/a&gt; or &lt;a href=&#34;https://kubernetes.slack.com/messages/sig-cluster-lifecycle&#34;&gt;#sig-cluster-lifecycle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle&#34;&gt;Mailing list&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</link>
      <pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/</guid>
      <description>
        
        
&lt;p&gt;There are many ways to troubleshoot the pods and nodes in a cluster, but &lt;code&gt;kubectl debug&lt;/code&gt; is one of the easiest and most widely used. It
provides a set of static profiles, each serving a different role. For instance, from the network administrator&#39;s point of view,
debugging a node should be as easy as this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;$ kubectl debug node/mynode -it --image&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;busybox --profile&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;netadmin
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;On the other hand, static profiles are inherently rigid, and despite their ease of use this has consequences for some pods:
pods (and nodes) come in many varieties, each with its own specific needs,
and unfortunately some of them cannot be debugged using static profiles alone.&lt;/p&gt;
&lt;p&gt;Take, for instance, a simple pod consisting of a container whose healthiness relies on an environment variable:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;example-pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;example-container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;customapp:latest&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;env&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;REQUIRED_ENV_VAR&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;value&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;value1&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Currently, copying the pod is the sole mechanism in kubectl debug that supports debugging this pod. Furthermore, what if the user needs to change &lt;code&gt;REQUIRED_ENV_VAR&lt;/code&gt; to something different
for advanced troubleshooting? There is no mechanism to achieve this.&lt;/p&gt;
&lt;h2 id=&#34;custom-profiling&#34;&gt;Custom Profiling&lt;/h2&gt;
&lt;p&gt;Custom profiling is a new functionality, available through the &lt;code&gt;--custom&lt;/code&gt; flag, introduced in kubectl debug to provide extensibility. It expects a partial &lt;code&gt;Container&lt;/code&gt; spec in either YAML or JSON format.
To debug the example-container above by creating an ephemeral container, we simply have to define this YAML:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# partial_container.yaml&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;env&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;REQUIRED_ENV_VAR&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;value&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;value2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;and execute:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl debug example-pod -it --image&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;customapp --custom&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;partial_container.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Here is another example that modifies multiple fields at once (change port number, add resource limits, modify environment variable) in JSON:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-json&#34; data-lang=&#34;json&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;ports&amp;#34;&lt;/span&gt;: [
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;containerPort&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ],
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;resources&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;limits&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;cpu&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;0.5&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;memory&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;512Mi&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    },
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;requests&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;cpu&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;0.2&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;memory&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;256Mi&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  },
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;env&amp;#34;&lt;/span&gt;: [
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;name&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;REQUIRED_ENV_VAR&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;value&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;value2&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;constraints&#34;&gt;Constraints&lt;/h2&gt;
&lt;p&gt;Uncontrolled extensibility hurts usability, so custom profiling is not allowed for certain fields such as command, image, lifecycle, volume devices and container name.
In the future, more fields can be added to the disallowed list if required.&lt;/p&gt;
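&lt;p&gt;For example, a custom profile that tries to set one of the disallowed fields, such as the illustrative sketch below, is not accepted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# invalid_profile.yaml (illustrative): &#39;command&#39; is a disallowed field
command: [&#34;sh&#34;, &#34;-c&#34;, &#34;sleep 1h&#34;]
env:
- name: REQUIRED_ENV_VAR
  value: value2
&lt;/code&gt;&lt;/pre&gt;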
&lt;h2 id=&#34;limitations&#34;&gt;Limitations&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;kubectl debug&lt;/code&gt; command has 3 aspects: debugging with ephemeral containers, pod copying, and node debugging. The largest intersection of these aspects is the container spec within a Pod.
That&#39;s why custom profiling only supports modifying the fields defined within &lt;code&gt;containers&lt;/code&gt;. This leads to a limitation: if a user needs to modify other fields in the Pod spec, that is not supported.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;Special thanks to all the contributors who reviewed and commented on this feature, from the initial conception to its actual implementation (alphabetical order):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/eddiezane&#34;&gt;Eddie Zaneski&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/soltysh&#34;&gt;Maciej Szulik&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/verb&#34;&gt;Lee Verberne&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Fine-grained SupplementalGroups control</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/fine-grained-supplementalgroups-control/</link>
      <pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/fine-grained-supplementalgroups-control/</guid>
      <description>
        
        
        &lt;p&gt;This blog discusses a new feature in Kubernetes 1.31 to improve the handling of supplementary groups in containers within Pods.&lt;/p&gt;
&lt;h2 id=&#34;motivation-implicit-group-memberships-defined-in-etc-group-in-the-container-image&#34;&gt;Motivation: Implicit group memberships defined in &lt;code&gt;/etc/group&lt;/code&gt; in the container image&lt;/h2&gt;
&lt;p&gt;Although this behavior may not be popular with many Kubernetes cluster users/admins, Kubernetes, by default, &lt;em&gt;merges&lt;/em&gt; group information from the Pod with information defined in &lt;code&gt;/etc/group&lt;/code&gt; in the container image.&lt;/p&gt;
&lt;p&gt;Let&#39;s look at an example: the Pod below specifies &lt;code&gt;runAsUser=1000&lt;/code&gt;, &lt;code&gt;runAsGroup=3000&lt;/code&gt; and &lt;code&gt;supplementalGroups=4000&lt;/code&gt; in the Pod&#39;s security context.&lt;/p&gt;
&lt;div class=&#34;highlight code-sample&#34;&gt;
    &lt;div class=&#34;copy-code-icon&#34;&gt;
    &lt;a href=&#34;https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/implicit-groups.yaml&#34; download=&#34;implicit-groups.yaml&#34;&gt;&lt;code&gt;implicit-groups.yaml&lt;/code&gt;
    &lt;/a&gt;&lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/copycode.svg&#34; class=&#34;icon-copycode&#34; onclick=&#34;copyCode(&#39;implicit-groups-yaml&#39;)&#34; title=&#34;Copy implicit-groups.yaml to clipboard&#34;&gt;&lt;/img&gt;&lt;/div&gt;
    &lt;div class=&#34;includecode&#34; id=&#34;implicit-groups-yaml&#34;&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;implicit-groups&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;runAsUser&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;runAsGroup&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;3000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;supplementalGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#666&#34;&gt;4000&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ctr&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;registry.k8s.io/e2e-test-images/agnhost:2.45&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;command&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sh&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;-c&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sleep 1h&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;allowPrivilegeEscalation&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;What is the result of the &lt;code&gt;id&lt;/code&gt; command in the &lt;code&gt;ctr&lt;/code&gt; container?&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Create the Pod:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl apply -f https://k8s.io/blog/2024-08-22-Fine-grained-SupplementalGroups-control/implicit-groups.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Verify that the Pod&lt;span style=&#34;&#34;&gt;&amp;#39;&lt;/span&gt;s Container is running:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl get pod implicit-groups
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Check the id &lt;span style=&#34;color:#a2f&#34;&gt;command&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl &lt;span style=&#34;color:#a2f&#34;&gt;exec&lt;/span&gt; implicit-groups -- id
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then, the output should be similar to this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-none&#34; data-lang=&#34;none&#34;&gt;uid=1000 gid=3000 groups=3000,4000,50000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Where does group ID &lt;code&gt;50000&lt;/code&gt; in the supplementary groups (&lt;code&gt;groups&lt;/code&gt; field) come from, even though &lt;code&gt;50000&lt;/code&gt; is not defined in the Pod&#39;s manifest at all? The answer is the &lt;code&gt;/etc/group&lt;/code&gt; file in the container image.&lt;/p&gt;
&lt;p&gt;Checking the contents of &lt;code&gt;/etc/group&lt;/code&gt; in the container image shows the following:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl &lt;span style=&#34;color:#a2f&#34;&gt;exec&lt;/span&gt; implicit-groups -- cat /etc/group
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;user-defined-in-image:x:1000:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;group-defined-in-image:x:50000:user-defined-in-image
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Aha! The container&#39;s primary user &lt;code&gt;1000&lt;/code&gt; belongs to the group &lt;code&gt;50000&lt;/code&gt; in the last entry.&lt;/p&gt;
&lt;p&gt;Thus, the group membership defined in &lt;code&gt;/etc/group&lt;/code&gt; in the container image for the container&#39;s primary user is &lt;em&gt;implicitly&lt;/em&gt; merged into the group information from the Pod. Please note that this was a design decision the current CRI implementations inherited from Docker, and the community never really reconsidered it until now.&lt;/p&gt;
&lt;h3 id=&#34;what-s-wrong-with-it&#34;&gt;What&#39;s wrong with it?&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;implicitly&lt;/em&gt; merged group information from &lt;code&gt;/etc/group&lt;/code&gt; in the container image may raise concerns, particularly when accessing volumes (see &lt;a href=&#34;https://issue.k8s.io/112879&#34;&gt;kubernetes/kubernetes#112879&lt;/a&gt; for details), because file permissions in Linux are controlled by uid/gid. Even worse, the implicit gids from &lt;code&gt;/etc/group&lt;/code&gt; cannot be detected or validated by any policy engine, because the manifest gives no clue about the implicit group information. This can also be a Kubernetes security concern.&lt;/p&gt;
&lt;h2 id=&#34;fine-grained-supplementalgroups-control-in-a-pod-supplementarygroupspolicy&#34;&gt;Fine-grained SupplementalGroups control in a Pod: &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;To tackle the above problem, Kubernetes 1.31 introduces a new field, &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt;, in the Pod&#39;s &lt;code&gt;.spec.securityContext&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This field provides a way to control how supplementary groups are calculated for the container processes in a Pod. The available policies are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Merge&lt;/em&gt;: The group membership defined in &lt;code&gt;/etc/group&lt;/code&gt; for the container&#39;s primary user will be merged. If the policy is not specified, &lt;em&gt;Merge&lt;/em&gt; is applied (i.e. the as-is behavior, for backward compatibility).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Strict&lt;/em&gt;: Only the group IDs specified in the &lt;code&gt;fsGroup&lt;/code&gt;, &lt;code&gt;supplementalGroups&lt;/code&gt;, or &lt;code&gt;runAsGroup&lt;/code&gt; fields are attached as supplementary groups of the container processes. No group membership defined in &lt;code&gt;/etc/group&lt;/code&gt; for the container&#39;s primary user will be merged.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
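&lt;p&gt;Using the values from the example above (&lt;code&gt;runAsGroup: 3000&lt;/code&gt;, &lt;code&gt;supplementalGroups: [4000]&lt;/code&gt;, and an image whose &lt;code&gt;/etc/group&lt;/code&gt; grants the primary user group &lt;code&gt;50000&lt;/code&gt;), the two policies yield:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-none&#34; data-lang=&#34;none&#34;&gt;Merge : groups=3000,4000,50000 (membership from /etc/group is merged)
Strict: groups=3000,4000       (only groups from the Pod manifest)
&lt;/code&gt;&lt;/pre&gt;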
&lt;p&gt;Let&#39;s see how the &lt;code&gt;Strict&lt;/code&gt; policy works.&lt;/p&gt;
&lt;div class=&#34;highlight code-sample&#34;&gt;
    &lt;div class=&#34;copy-code-icon&#34;&gt;
    &lt;a href=&#34;https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/strict-supplementalgroups-policy.yaml&#34; download=&#34;strict-supplementalgroups-policy.yaml&#34;&gt;&lt;code&gt;strict-supplementalgroups-policy.yaml&lt;/code&gt;
    &lt;/a&gt;&lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/copycode.svg&#34; class=&#34;icon-copycode&#34; onclick=&#34;copyCode(&#39;strict-supplementalgroups-policy-yaml&#39;)&#34; title=&#34;Copy strict-supplementalgroups-policy.yaml to clipboard&#34;&gt;&lt;/img&gt;&lt;/div&gt;
    &lt;div class=&#34;includecode&#34; id=&#34;strict-supplementalgroups-policy-yaml&#34;&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;strict-supplementalgroups-policy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;runAsUser&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;runAsGroup&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;3000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;supplementalGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#666&#34;&gt;4000&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;supplementalGroupsPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Strict&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ctr&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;registry.k8s.io/e2e-test-images/agnhost:2.45&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;command&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sh&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;-c&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sleep 1h&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;allowPrivilegeEscalation&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Create the Pod:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl apply -f https://k8s.io/blog/2024-08-22-Fine-grained-SupplementalGroups-control/strict-supplementalgroups-policy.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Verify that the Pod&lt;span style=&#34;&#34;&gt;&amp;#39;&lt;/span&gt;s Container is running:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl get pod strict-supplementalgroups-policy
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;#&lt;/span&gt; Check the process identity:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt;&lt;span style=&#34;&#34;&gt; kubectl exec -it strict-supplementalgroups-policy -- id
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The output should be similar to this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-none&#34; data-lang=&#34;none&#34;&gt;uid=1000 gid=3000 groups=3000,4000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can see that the &lt;code&gt;Strict&lt;/code&gt; policy excludes group &lt;code&gt;50000&lt;/code&gt; from &lt;code&gt;groups&lt;/code&gt;!&lt;/p&gt;
&lt;p&gt;Thus, ensuring &lt;code&gt;supplementalGroupsPolicy: Strict&lt;/code&gt; (enforced by some policy mechanism) helps prevent implicit supplementary groups from being attached in a Pod.&lt;/p&gt;

&lt;div class=&#34;alert alert-info&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;Note:&lt;/h4&gt;This alone is not enough, because a container with sufficient privileges or capabilities can still change its process identity. Please see the following section for details.&lt;/div&gt;
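&lt;p&gt;As a sketch of such a policy mechanism, a CEL-based ValidatingAdmissionPolicy could require the field. The policy below is a hypothetical example, not part of this release; it assumes &lt;code&gt;.spec.securityContext&lt;/code&gt; is set (a production policy should guard with &lt;code&gt;has()&lt;/code&gt;), and a matching ValidatingAdmissionPolicyBinding is also needed for it to take effect:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-strict-supplemental-groups  # hypothetical name
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [&amp;#34;&amp;#34;]
      apiVersions: [&amp;#34;v1&amp;#34;]
      operations: [&amp;#34;CREATE&amp;#34;, &amp;#34;UPDATE&amp;#34;]
      resources: [&amp;#34;pods&amp;#34;]
  validations:
  - expression: &amp;#34;object.spec.securityContext.supplementalGroupsPolicy == &amp;#39;Strict&amp;#39;&amp;#34;
    message: &amp;#34;supplementalGroupsPolicy must be Strict&amp;#34;
&lt;/code&gt;&lt;/pre&gt;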

&lt;h2 id=&#34;attached-process-identity-in-pod-status&#34;&gt;Attached process identity in Pod status&lt;/h2&gt;
&lt;p&gt;This feature also exposes the process identity attached to the first process of each container
via the &lt;code&gt;.status.containerStatuses[].user.linux&lt;/code&gt; field. This is helpful for checking whether implicit group IDs are attached.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containerStatuses&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ctr&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;user&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;linux&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;gid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;3000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;supplementalGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#666&#34;&gt;3000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#666&#34;&gt;4000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
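&lt;p&gt;For example, you can extract just this part of the status with kubectl&#39;s JSONPath output; for the &lt;code&gt;Strict&lt;/code&gt; Pod above, this prints the uid, gid, and supplementary groups attached when the container started:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-none&#34; data-lang=&#34;none&#34;&gt;kubectl get pod strict-supplementalgroups-policy -o jsonpath=&amp;#39;{.status.containerStatuses[0].user.linux}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;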
&lt;div class=&#34;alert alert-info&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;Note:&lt;/h4&gt;Please note that the value of the &lt;code&gt;status.containerStatuses[].user.linux&lt;/code&gt; field is the process identity
&lt;em&gt;initially attached&lt;/em&gt; to the first container process in the container. If the container has sufficient privilege
to call system calls related to process identity (e.g. &lt;a href=&#34;https://man7.org/linux/man-pages/man2/setuid.2.html&#34;&gt;&lt;code&gt;setuid(2)&lt;/code&gt;&lt;/a&gt;, &lt;a href=&#34;https://man7.org/linux/man-pages/man2/setgid.2.html&#34;&gt;&lt;code&gt;setgid(2)&lt;/code&gt;&lt;/a&gt;, or &lt;a href=&#34;https://man7.org/linux/man-pages/man2/setgroups.2.html&#34;&gt;&lt;code&gt;setgroups(2)&lt;/code&gt;&lt;/a&gt;), the container process can change its identity. Thus, the &lt;em&gt;actual&lt;/em&gt; process identity can change at runtime.&lt;/div&gt;

&lt;h2 id=&#34;feature-availability&#34;&gt;Feature availability&lt;/h2&gt;
&lt;p&gt;To use the &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt; field, the following components are required:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes: v1.31 or later, with the &lt;code&gt;SupplementalGroupsPolicy&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate&lt;/a&gt; enabled. As of v1.31, the gate is marked as alpha.&lt;/li&gt;
&lt;li&gt;CRI runtime:
&lt;ul&gt;
&lt;li&gt;containerd: v2.0 or later&lt;/li&gt;
&lt;li&gt;CRI-O: v1.31 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can check whether the feature is supported on a node via the Node&#39;s &lt;code&gt;.status.features.supplementalGroupsPolicy&lt;/code&gt; field.&lt;/p&gt;
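&lt;p&gt;For example, with a JSONPath query (replace &lt;code&gt;node-1&lt;/code&gt; with one of your node names), the following command prints &lt;code&gt;true&lt;/code&gt; when both the kubelet and the CRI runtime on that node support the feature:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-none&#34; data-lang=&#34;none&#34;&gt;kubectl get node node-1 -o jsonpath=&amp;#39;{.status.features.supplementalGroupsPolicy}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;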
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Node&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;features&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;supplementalGroupsPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h2&gt;
&lt;p&gt;Kubernetes SIG Node hopes, and expects, that the feature will be promoted to beta and eventually
to general availability (GA) in future releases of Kubernetes, so that users no longer need to enable
the feature gate manually.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;Merge&lt;/code&gt; policy is applied when &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt; is not specified, for backwards compatibility.&lt;/p&gt;
&lt;h2 id=&#34;how-can-i-learn-more&#34;&gt;How can I learn more?&lt;/h2&gt;
&lt;!-- https://github.com/kubernetes/website/pull/46920 --&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/configure-pod-container/security-context/&#34;&gt;Configure a Security Context for a Pod or Container&lt;/a&gt;
for the further details of &lt;code&gt;supplementalGroupsPolicy&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3619&#34;&gt;KEP-3619: Fine-grained SupplementalGroups control&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;how-to-get-involved&#34;&gt;How to get involved?&lt;/h2&gt;
&lt;p&gt;This feature is driven by the SIG Node community. Please join us to connect with
the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.31: New Kubernetes CPUManager Static Policy: Distribute CPUs Across Cores</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</link>
      <pubDate>Thu, 22 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/</guid>
      <description>
        
        
        &lt;p&gt;In Kubernetes v1.31, we are excited to introduce a significant enhancement to CPU management capabilities: the &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; option for the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options&#34;&gt;CPUManager static policy&lt;/a&gt;. This feature is currently in alpha and hidden by default, marking a strategic shift aimed at optimizing CPU utilization and improving system performance across multi-core processors.&lt;/p&gt;
&lt;h2 id=&#34;understanding-the-feature&#34;&gt;Understanding the feature&lt;/h2&gt;
&lt;p&gt;Traditionally, Kubernetes&#39; CPUManager tends to allocate CPUs as compactly as possible, typically packing them onto the fewest number of physical cores. However, the allocation strategy matters: CPUs (hardware threads) on the same physical core still share that core&#39;s resources, such as caches and execution units.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/cpu-cache-architecture.png&#34;
         alt=&#34;cpu-cache-architecture&#34;/&gt; 
&lt;/figure&gt;
&lt;p&gt;While the default approach minimizes inter-core communication and can be beneficial in certain scenarios, it also poses a challenge: CPUs sharing a physical core can contend for resources, which in turn may cause performance bottlenecks, particularly noticeable in CPU-intensive applications.&lt;/p&gt;
&lt;p&gt;The new &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; feature addresses this issue by modifying the allocation strategy. When enabled, this policy option instructs the CPUManager to spread out the CPUs (hardware threads) across as many physical cores as possible. This distribution is designed to minimize contention among CPUs sharing the same physical core, potentially enhancing the performance of applications by providing them dedicated core resources.&lt;/p&gt;
&lt;p&gt;Technically, within this static policy, the free CPU list is reordered in the manner depicted in the diagram, aiming to allocate CPUs from separate physical cores.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/22/cpumanager-static-policy-distributed-cpu-across-cores/cpu-ordering.png&#34;
         alt=&#34;cpu-ordering&#34;/&gt; 
&lt;/figure&gt;
&lt;h2 id=&#34;enabling-the-feature&#34;&gt;Enabling the feature&lt;/h2&gt;
&lt;p&gt;To enable this feature, users first need to set the &lt;code&gt;--cpu-manager-policy=static&lt;/code&gt; kubelet flag or the &lt;code&gt;cpuManagerPolicy: static&lt;/code&gt; field in the KubeletConfiguration. Then, they can add &lt;code&gt;distribute-cpus-across-cores=true&lt;/code&gt; to the &lt;code&gt;--cpu-manager-policy-options&lt;/code&gt; kubelet flag, or the equivalent entry to the policy options in the KubeletConfiguration. This setting directs the CPUManager to adopt the new distribution strategy. It is important to note that this policy option cannot currently be used in conjunction with the &lt;code&gt;full-pcpus-only&lt;/code&gt; or &lt;code&gt;distribute-cpus-across-numa&lt;/code&gt; options.&lt;/p&gt;
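&lt;p&gt;A minimal sketch of the corresponding KubeletConfiguration is shown below (field names follow the kubelet configuration API; because this policy option is alpha, the &lt;code&gt;CPUManagerPolicyAlphaOptions&lt;/code&gt; feature gate must also be enabled, and the static policy additionally requires a non-empty CPU reservation, which is omitted here):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-cores: &amp;#34;true&amp;#34;
&lt;/code&gt;&lt;/pre&gt;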
&lt;h2 id=&#34;current-limitations-and-future-directions&#34;&gt;Current limitations and future directions&lt;/h2&gt;
&lt;p&gt;As with any new feature, especially one in alpha, there are limitations and areas for future improvement. One significant current limitation is that &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; cannot be combined with other policy options that might conflict in terms of CPU allocation strategies. This restriction can affect compatibility with certain workloads and deployment scenarios that rely on more specialized resource management.&lt;/p&gt;
&lt;p&gt;Looking forward, we are committed to enhancing the compatibility and functionality of the &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; option. Future updates will focus on resolving these compatibility issues, allowing this policy to be combined with other CPUManager policies seamlessly. Our goal is to provide a more flexible and robust CPU allocation framework that can adapt to a variety of workloads and performance demands.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The introduction of the &lt;code&gt;distribute-cpus-across-cores&lt;/code&gt; policy in Kubernetes CPUManager is a step forward in our ongoing efforts to refine resource management and improve application performance. By reducing the contention on physical cores, this feature offers a more balanced approach to CPU resource allocation, particularly beneficial for environments running heterogeneous workloads. We encourage Kubernetes users to test this new feature and provide feedback, which will be invaluable in shaping its future development.&lt;/p&gt;
&lt;h2 id=&#34;further-reading&#34;&gt;Further reading&lt;/h2&gt;
&lt;p&gt;Please check out the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/administer-cluster/cpu-management-policies/&#34;&gt;Control CPU Management Policies on the Node&lt;/a&gt;
task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.&lt;/p&gt;
&lt;h2 id=&#34;getting-involved&#34;&gt;Getting involved&lt;/h2&gt;
&lt;p&gt;This feature is driven by the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-node/README.md&#34;&gt;SIG Node&lt;/a&gt;. If you are interested in helping develop this feature, sharing feedback, or participating in any other ongoing SIG Node projects, please attend the SIG Node meeting for more details.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Autoconfiguration For Node Cgroup Driver (beta)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</link>
      <pubDate>Wed, 21 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/21/cri-cgroup-driver-lookup-now-beta/</guid>
      <description>
        
        
        &lt;p&gt;Historically, configuring the correct cgroup driver has been a pain point for users running new
Kubernetes clusters. On Linux systems, there are two different cgroup drivers:
&lt;code&gt;cgroupfs&lt;/code&gt; and &lt;code&gt;systemd&lt;/code&gt;. In the past, both the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/kubelet/&#34;&gt;kubelet&lt;/a&gt;
and CRI implementation (like CRI-O or containerd) needed to be configured to use
the same cgroup driver, or else the kubelet would exit with an error. This was a
source of headaches for many cluster admins. However, there is light at the end of the tunnel!&lt;/p&gt;
&lt;h2 id=&#34;automated-cgroup-driver-detection&#34;&gt;Automated cgroup driver detection&lt;/h2&gt;
&lt;p&gt;In v1.28.0, the SIG Node community introduced the feature gate
&lt;code&gt;KubeletCgroupDriverFromCRI&lt;/code&gt;, which instructs the kubelet to ask the CRI
implementation which cgroup driver to use. A few minor releases of Kubernetes
happened whilst we waited for support to land in the two major CRI implementations
(containerd and CRI-O), but as of v1.31.0, this feature is now beta!&lt;/p&gt;
&lt;p&gt;In addition to setting the feature gate, a cluster admin needs to ensure their
CRI implementation is new enough:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;containerd: Support was added in v2.0.0&lt;/li&gt;
&lt;li&gt;CRI-O: Support was added in v1.28.0&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then, they should ensure their CRI implementation is configured to use the
cgroup driver they would like.&lt;/p&gt;
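&lt;p&gt;For example, on CRI-O this is a one-line setting. The drop-in fragment below (file path illustrative) selects the &lt;code&gt;systemd&lt;/code&gt; cgroup driver, which a kubelet with the feature gate enabled will then adopt automatically:&lt;/p&gt;

```toml
# /etc/crio/crio.conf.d/10-cgroup-manager.conf (illustrative path)
# Selects the systemd cgroup driver; with KubeletCgroupDriverFromCRI enabled,
# the kubelet learns this from CRI-O instead of its own cgroupDriver field.
[crio.runtime]
cgroup_manager = "systemd"
```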
&lt;h2 id=&#34;future-work&#34;&gt;Future work&lt;/h2&gt;
&lt;p&gt;Eventually, support for the kubelet&#39;s &lt;code&gt;cgroupDriver&lt;/code&gt; configuration field will be
dropped, and the kubelet will fail to start if the CRI implementation isn&#39;t new
enough to have support for this feature.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Streaming Transitions from SPDY to WebSockets</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/20/websockets-transition/</link>
      <pubDate>Tue, 20 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/20/websockets-transition/</guid>
      <description>
        
        
&lt;p&gt;In Kubernetes 1.31, kubectl now uses the WebSocket protocol, rather than
SPDY, for streaming by default.&lt;/p&gt;
&lt;p&gt;This post describes what these changes mean for you and why these streaming APIs
matter.&lt;/p&gt;
&lt;h2 id=&#34;streaming-apis-in-kubernetes&#34;&gt;Streaming APIs in Kubernetes&lt;/h2&gt;
&lt;p&gt;In Kubernetes, specific endpoints that are exposed as an HTTP or RESTful
interface are upgraded to streaming connections, which require a streaming
protocol. Unlike HTTP, which is a request-response protocol, a streaming
protocol provides a persistent connection that&#39;s bi-directional, low-latency,
and lets you interact in real-time. Streaming protocols support reading and
writing data between your client and the server, in both directions, over the
same connection. This type of connection is useful, for example, when you create
a shell in a running container from your local workstation and run commands in
the container.&lt;/p&gt;
&lt;h2 id=&#34;why-change-the-streaming-protocol&#34;&gt;Why change the streaming protocol?&lt;/h2&gt;
&lt;p&gt;Before the v1.31 release, Kubernetes used the SPDY/3.1 protocol by default when
upgrading streaming connections. SPDY/3.1 has been deprecated for eight years,
and it was never standardized. Many modern proxies, gateways, and load balancers
no longer support the protocol. As a result, you might notice that commands like
&lt;code&gt;kubectl cp&lt;/code&gt;, &lt;code&gt;kubectl attach&lt;/code&gt;, &lt;code&gt;kubectl exec&lt;/code&gt;, and &lt;code&gt;kubectl port-forward&lt;/code&gt;
stop working when you try to access your cluster through a proxy or gateway.&lt;/p&gt;
&lt;p&gt;As of Kubernetes v1.31, SIG API Machinery has modified the streaming
protocol that a Kubernetes client (such as &lt;code&gt;kubectl&lt;/code&gt;) uses for these commands
to the more modern &lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6455&#34;&gt;WebSocket streaming protocol&lt;/a&gt;.
The WebSocket protocol is a currently supported standardized streaming protocol
that guarantees compatibility and interoperability with different components and
programming languages. The WebSocket protocol is more widely supported by modern
proxies and gateways than SPDY.&lt;/p&gt;
&lt;h2 id=&#34;how-streaming-apis-work&#34;&gt;How streaming APIs work&lt;/h2&gt;
&lt;p&gt;Kubernetes upgrades HTTP connections to streaming connections by adding
specific upgrade headers to the originating HTTP request. For example, an HTTP
upgrade request for running the &lt;code&gt;date&lt;/code&gt; command on an &lt;code&gt;nginx&lt;/code&gt; container within
a cluster is similar to the following:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;$&lt;/span&gt; kubectl &lt;span style=&#34;color:#a2f&#34;&gt;exec&lt;/span&gt; -v&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;8&lt;/span&gt; nginx -- date
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;GET https://127.0.0.1:43251/api/v1/namespaces/default/pods/nginx/exec?command=date…
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Request Headers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Connection: Upgrade
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Upgrade: websocket
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Sec-Websocket-Protocol: v5.channel.k8s.io
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    User-Agent: kubectl/v1.31.0 (linux/amd64) kubernetes/6911225
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;If the container runtime supports the WebSocket streaming protocol and at least
one of the subprotocol versions (e.g. &lt;code&gt;v5.channel.k8s.io&lt;/code&gt;), the server responds
with a successful &lt;code&gt;101 Switching Protocols&lt;/code&gt; status, along with the negotiated
subprotocol version:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Response Status: 101 Switching Protocols in 3 milliseconds
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Response Headers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Upgrade: websocket
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Connection: Upgrade
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Sec-Websocket-Accept: j0/jHW9RpaUoGsUAv97EcKw8jFM=
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    Sec-Websocket-Protocol: v5.channel.k8s.io
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;At this point the TCP connection used for the HTTP protocol has changed to a
streaming connection. Subsequent STDIN, STDOUT, and STDERR data (as well as
terminal resizing data and process exit code data) for this shell interaction is
then streamed over this upgraded connection.&lt;/p&gt;
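&lt;p&gt;The &lt;code&gt;Sec-Websocket-Accept&lt;/code&gt; value in the response above is not arbitrary: per RFC 6455, the server appends a fixed GUID to the client&#39;s &lt;code&gt;Sec-WebSocket-Key&lt;/code&gt;, hashes the result with SHA-1, and base64-encodes the digest. A minimal Python sketch of that handshake step (not Kubernetes code, just the protocol rule):&lt;/p&gt;

```python
import base64
import hashlib

# Fixed GUID defined in RFC 6455, section 1.3.
WEBSOCKET_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header a server must return
    for a given client Sec-WebSocket-Key during the HTTP upgrade."""
    digest = hashlib.sha1((sec_websocket_key + WEBSOCKET_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The example key/accept pair from RFC 6455 itself:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```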
&lt;h2 id=&#34;how-to-use-the-new-websocket-streaming-protocol&#34;&gt;How to use the new WebSocket streaming protocol&lt;/h2&gt;
&lt;p&gt;If your cluster and kubectl are on version 1.29 or later, there are two
control plane feature gates and two kubectl environment variables that
govern the use of WebSockets rather than SPDY. In Kubernetes 1.31,
all of the following feature gates are in beta and are enabled by
default:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;Feature gates&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TranslateStreamCloseWebsocketRequests&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;.../exec&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;.../attach&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PortForwardWebsockets&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;.../port-forward&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;kubectl feature control environment variables
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;KUBECTL_REMOTE_COMMAND_WEBSOCKETS&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl exec&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl cp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl attach&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;KUBECTL_PORT_FORWARD_WEBSOCKETS&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl port-forward&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
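&lt;p&gt;The kubectl environment variables act as an escape hatch: setting either one to &lt;code&gt;false&lt;/code&gt; makes kubectl fall back to SPDY for the corresponding commands. For example (shell session is illustrative):&lt;/p&gt;

```shell
# Fall back to SPDY for kubectl exec / attach / cp in this shell session
export KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false
# Fall back to SPDY for kubectl port-forward
export KUBECTL_PORT_FORWARD_WEBSOCKETS=false
# Subsequent invocations, e.g. `kubectl exec -it my-pod -- sh`,
# will negotiate SPDY instead of WebSockets.
```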
&lt;p&gt;If you&#39;re connecting to an older cluster but can manage the feature gate
settings, turn on both &lt;code&gt;TranslateStreamCloseWebsocketRequests&lt;/code&gt; (added in
Kubernetes v1.29) and &lt;code&gt;PortForwardWebsockets&lt;/code&gt; (added in Kubernetes
v1.30) to try this new behavior. Version 1.31 of &lt;code&gt;kubectl&lt;/code&gt; can automatically use
the new behavior, but you do need to connect to a cluster where the server-side
features are explicitly enabled.&lt;/p&gt;
&lt;h2 id=&#34;learn-more-about-streaming-apis&#34;&gt;Learn more about streaming APIs&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4006-transition-spdy-to-websockets&#34;&gt;KEP 4006 - Transitioning from SPDY to WebSockets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://datatracker.ietf.org/doc/html/rfc6455&#34;&gt;RFC 6455 - The WebSockets Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kubernetes.io/blog/2024/05/01/cri-streaming-explained/&#34;&gt;Container Runtime Interface streaming explained&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</link>
      <pubDate>Mon, 19 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/</guid>
      <description>
        
        
        &lt;p&gt;This post describes &lt;em&gt;Pod failure policy&lt;/em&gt;, which graduates to stable in Kubernetes
1.31, and how to use it in your Jobs.&lt;/p&gt;
&lt;h2 id=&#34;about-pod-failure-policy&#34;&gt;About Pod failure policy&lt;/h2&gt;
&lt;p&gt;When you run workloads on Kubernetes, Pods might fail for a variety of reasons.
Ideally, workloads like Jobs should be able to ignore transient, retriable
failures and continue running to completion.&lt;/p&gt;
&lt;p&gt;To allow for these transient failures, Kubernetes Jobs include the &lt;code&gt;backoffLimit&lt;/code&gt;
field, which lets you specify a number of Pod failures that you&#39;re willing to tolerate
during Job execution. However, if you set a large value for the &lt;code&gt;backoffLimit&lt;/code&gt; field
and rely solely on this field, you might notice unnecessary increases in operating
costs as Pods restart excessively until the &lt;code&gt;backoffLimit&lt;/code&gt; is met.&lt;/p&gt;
&lt;p&gt;This becomes particularly problematic when running large-scale Jobs with
thousands of long-running Pods across thousands of nodes.&lt;/p&gt;
&lt;p&gt;The Pod failure policy extends the backoff limit mechanism to help you reduce
costs in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Gives you control to fail the Job as soon as a non-retriable Pod failure occurs.&lt;/li&gt;
&lt;li&gt;Allows you to ignore retriable errors without increasing the &lt;code&gt;backoffLimit&lt;/code&gt; field.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, you can use a Pod failure policy to run your workload on more affordable spot machines
by ignoring Pod failures caused by
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/cluster-administration/node-shutdown/#graceful-node-shutdown&#34;&gt;graceful node shutdown&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The policy allows you to distinguish between retriable and non-retriable Pod
failures based on container exit codes or Pod conditions in a failed Pod.&lt;/p&gt;
&lt;h2 id=&#34;how-it-works&#34;&gt;How it works&lt;/h2&gt;
&lt;p&gt;You specify a Pod failure policy in the Job specification, represented as a list
of rules.&lt;/p&gt;
&lt;p&gt;For each rule you define &lt;em&gt;match requirements&lt;/em&gt; based on one of the following properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Container exit codes: the &lt;code&gt;onExitCodes&lt;/code&gt; property.&lt;/li&gt;
&lt;li&gt;Pod conditions: the &lt;code&gt;onPodConditions&lt;/code&gt; property.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Additionally, for each rule, you specify one of the following actions to take
when a Pod matches the rule:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Ignore&lt;/code&gt;: Do not count the failure towards the &lt;code&gt;backoffLimit&lt;/code&gt; or &lt;code&gt;backoffLimitPerIndex&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FailJob&lt;/code&gt;: Fail the entire Job and terminate all running Pods.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FailIndex&lt;/code&gt;: Fail the index corresponding to the failed Pod.
This action works with the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/controllers/job/#backoff-limit-per-index&#34;&gt;Backoff limit per index&lt;/a&gt; feature.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Count&lt;/code&gt;: Count the failure towards the &lt;code&gt;backoffLimit&lt;/code&gt; or &lt;code&gt;backoffLimitPerIndex&lt;/code&gt;.
This is the default behavior.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When Pod failures occur in a running Job, Kubernetes matches the
failed Pod status against the list of Pod failure policy rules, in the specified
order, and takes the corresponding actions for the first matched rule.&lt;/p&gt;
&lt;p&gt;Note that when specifying the Pod failure policy, you must also set the Job&#39;s
Pod template with &lt;code&gt;restartPolicy: Never&lt;/code&gt;. This prevents race conditions between
the kubelet and the Job controller when counting Pod failures.&lt;/p&gt;
&lt;h3 id=&#34;kubernetes-initiated-pod-disruptions&#34;&gt;Kubernetes-initiated Pod disruptions&lt;/h3&gt;
&lt;p&gt;To allow matching Pod failure policy rules against failures caused by
disruptions initiated by Kubernetes, this feature introduces the &lt;code&gt;DisruptionTarget&lt;/code&gt;
Pod condition.&lt;/p&gt;
&lt;p&gt;Kubernetes adds this condition to any Pod that fails because of a retriable
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions&#34;&gt;disruption scenario&lt;/a&gt;,
regardless of whether the Pod is managed by a Job controller.
The &lt;code&gt;DisruptionTarget&lt;/code&gt; condition contains one of the following reasons that
corresponds to these disruption scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PreemptionByKubeScheduler&lt;/code&gt;: &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/pod-priority-preemption&#34;&gt;Preemption&lt;/a&gt;
by &lt;code&gt;kube-scheduler&lt;/code&gt; to accommodate a new Pod that has a higher priority.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DeletionByTaintManager&lt;/code&gt;: the Pod is due to be deleted by
&lt;code&gt;kube-controller-manager&lt;/code&gt; because of a &lt;code&gt;NoExecute&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/taint-and-toleration/&#34;&gt;taint&lt;/a&gt;
that the Pod doesn&#39;t tolerate.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EvictionByEvictionAPI&lt;/code&gt;: the Pod is due to be deleted by an
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/api-eviction/&#34;&gt;API-initiated eviction&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DeletionByPodGC&lt;/code&gt;: the Pod is bound to a node that no longer exists, and is due to
be deleted by &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection&#34;&gt;Pod garbage collection&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TerminationByKubelet&lt;/code&gt;: the Pod was terminated by
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/cluster-administration/node-shutdown/#graceful-node-shutdown&#34;&gt;graceful node shutdown&lt;/a&gt;,
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/node-pressure-eviction/&#34;&gt;node pressure eviction&lt;/a&gt;,
or preemption for &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/&#34;&gt;system critical pods&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In all other disruption scenarios, like eviction due to exceeding
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/configuration/manage-resources-containers/&#34;&gt;Pod container limits&lt;/a&gt;,
Pods don&#39;t receive the &lt;code&gt;DisruptionTarget&lt;/code&gt; condition because the disruptions were
likely caused by the Pod and would reoccur on retry.&lt;/p&gt;
&lt;h3 id=&#34;example&#34;&gt;Example&lt;/h3&gt;
&lt;p&gt;The Pod failure policy snippet below demonstrates an example use:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podFailurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;rules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;action&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Ignore&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;onPodConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;DisruptionTarget&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;action&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;FailJob&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;onPodConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ConfigIssue&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;action&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;FailJob&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;onExitCodes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;In&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;42&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;In this example, the Pod failure policy does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ignores any failed Pods that have the built-in &lt;code&gt;DisruptionTarget&lt;/code&gt;
condition. These Pods don&#39;t count towards Job backoff limits.&lt;/li&gt;
&lt;li&gt;Fails the Job if any failed Pods have the custom user-supplied
&lt;code&gt;ConfigIssue&lt;/code&gt; condition, which was added either by a custom controller or webhook.&lt;/li&gt;
&lt;li&gt;Fails the Job if any containers exited with the exit code 42.&lt;/li&gt;
&lt;li&gt;Counts all other Pod failures towards the default &lt;code&gt;backoffLimit&lt;/code&gt; (or
&lt;code&gt;backoffLimitPerIndex&lt;/code&gt; if used).&lt;/li&gt;
&lt;/ul&gt;
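&lt;p&gt;Embedded in a full Job manifest, the policy sits under &lt;code&gt;spec&lt;/code&gt; alongside the Pod template; note &lt;code&gt;restartPolicy: Never&lt;/code&gt;, which is required as described above. The following sketch uses placeholder names and a placeholder image:&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job            # placeholder name
spec:
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never     # required when podFailurePolicy is set
      containers:
      - name: main
        image: example.com/batch-worker:latest   # placeholder image
  podFailurePolicy:
    rules:
    - action: Ignore           # don't count Kubernetes-initiated disruptions
      onPodConditions:
      - type: DisruptionTarget
    - action: FailJob          # exit code 42 signals a non-retriable failure
      onExitCodes:
        operator: In
        values: [ 42 ]
```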
&lt;h2 id=&#34;learn-more&#34;&gt;Learn more&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;For a hands-on guide to using Pod failure policy, see
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/job/pod-failure-policy/&#34;&gt;Handling retriable and non-retriable pod failures with Pod failure policy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read the documentation for
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/controllers/job/#pod-failure-policy&#34;&gt;Pod failure policy&lt;/a&gt; and
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/controllers/job/#backoff-limit-per-index&#34;&gt;Backoff limit per index&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read the documentation for
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions&#34;&gt;Pod disruption conditions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read the KEP for &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3329-retriable-and-non-retriable-failures&#34;&gt;Pod failure policy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;related-work&#34;&gt;Related work&lt;/h2&gt;
&lt;p&gt;Based on the concepts introduced by Pod failure policy, the following additional work is in progress:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;JobSet integration: &lt;a href=&#34;https://github.com/kubernetes-sigs/jobset/issues/262&#34;&gt;Configurable Failure Policy API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4443&#34;&gt;Pod failure policy extension to add more granular failure reasons&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Support for Pod failure policy via JobSet in &lt;a href=&#34;https://github.com/kubeflow/training-operator/pull/2171&#34;&gt;Kubeflow Training v2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Proposal: &lt;a href=&#34;https://docs.google.com/document/d/1t25jgO_-LRHhjRXf4KJ5xY_t8BZYdapv7MDAxVGY6R8&#34;&gt;Disrupted Pods should be removed from endpoints&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;Get involved&lt;/h2&gt;
&lt;p&gt;This work was sponsored by the
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-batch&#34;&gt;batch working group&lt;/a&gt;
in close collaboration with the
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-apps&#34;&gt;SIG Apps&lt;/a&gt;,
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;,
and &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;
communities.&lt;/p&gt;
&lt;p&gt;If you are interested in working on new features in this space, we recommend
subscribing to our &lt;a href=&#34;https://kubernetes.slack.com/messages/wg-batch&#34;&gt;Slack&lt;/a&gt;
channel and attending the regular community meetings.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;I would love to thank everyone who was involved in this project over the years -
it&#39;s been a journey and a joint community effort! The list below is
my best-effort attempt to remember and recognize people who made an impact.
Thank you!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/alculquicondor/&#34;&gt;Aldo Culquicondor&lt;/a&gt; for guidance and reviews throughout the process&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/liggitt&#34;&gt;Jordan Liggitt&lt;/a&gt; for KEP and API reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/deads2k&#34;&gt;David Eads&lt;/a&gt; for API reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/soltysh&#34;&gt;Maciej Szulik&lt;/a&gt; for KEP reviews from SIG Apps PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/smarterclayton&#34;&gt;Clayton Coleman&lt;/a&gt; for guidance and SIG Node reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/SergeyKanzhelev&#34;&gt;Sergey Kanzhelev&lt;/a&gt; for KEP reviews from SIG Node PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/dchen1107&#34;&gt;Dawn Chen&lt;/a&gt; for KEP reviews from SIG Node PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/lavalamp&#34;&gt;Daniel Smith&lt;/a&gt; for reviews from SIG API machinery PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/apelisse&#34;&gt;Antoine Pelisse&lt;/a&gt; for reviews from SIG API machinery PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/johnbelamaric&#34;&gt;John Belamaric&lt;/a&gt; for PRR reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/atiratree&#34;&gt;Filip Křepinský&lt;/a&gt; for thorough reviews from SIG Apps PoV and bug-fixing&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/bobbypage&#34;&gt;David Porter&lt;/a&gt; for thorough reviews from SIG Node PoV&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/jensentanlo&#34;&gt;Jensen Lo&lt;/a&gt; for early requirements discussions, testing and reporting issues&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/danielvegamyhre&#34;&gt;Daniel Vega-Myhre&lt;/a&gt; for advancing JobSet integration and reporting issues&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/ahg-g&#34;&gt;Abdullah Gharaibeh&lt;/a&gt; for early design discussions and guidance&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/aojea&#34;&gt;Antonio Ojea&lt;/a&gt; for test reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/tenzen-y&#34;&gt;Yuki Iwai&lt;/a&gt; for reviews and aligning implementation of the closely related Job features&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kannon92&#34;&gt;Kevin Hannon&lt;/a&gt; for reviews and aligning implementation of the closely related Job features&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/sftim&#34;&gt;Tim Bannister&lt;/a&gt; for docs reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/shannonxtreme&#34;&gt;Shannon Kularathna&lt;/a&gt; for docs reviews&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/cortespao&#34;&gt;Paola Cortés&lt;/a&gt; for docs reviews&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: MatchLabelKeys in PodAffinity graduates to beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/matchlabelkeys-podaffinity/</link>
      <pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/matchlabelkeys-podaffinity/</guid>
      <description>
        
        
        &lt;p&gt;Kubernetes 1.29 introduced new fields &lt;code&gt;MatchLabelKeys&lt;/code&gt; and &lt;code&gt;MismatchLabelKeys&lt;/code&gt; in PodAffinity and PodAntiAffinity.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.31, this feature moves to beta and the corresponding feature gate (&lt;code&gt;MatchLabelKeysInPodAffinity&lt;/code&gt;) gets enabled by default.&lt;/p&gt;
&lt;h2 id=&#34;matchlabelkeys-enhanced-scheduling-for-versatile-rolling-updates&#34;&gt;&lt;code&gt;MatchLabelKeys&lt;/code&gt; - Enhanced scheduling for versatile rolling updates&lt;/h2&gt;
&lt;p&gt;During a workload&#39;s (e.g., Deployment) rolling update, a cluster may have Pods from multiple versions at the same time.
However, the scheduler cannot distinguish between old and new versions based on the &lt;code&gt;LabelSelector&lt;/code&gt; specified in PodAffinity or PodAntiAffinity. As a result, it will co-locate or disperse Pods regardless of their versions.&lt;/p&gt;
&lt;p&gt;This can lead to sub-optimal scheduling outcomes, for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;New-version Pods are co-located with old-version Pods (PodAffinity), even though the old Pods will be removed once the rolling update completes.&lt;/li&gt;
&lt;li&gt;Old-version Pods are spread across all available topology domains, preventing new-version Pods from finding nodes due to PodAntiAffinity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;MatchLabelKeys&lt;/code&gt;, a set of Pod label keys, addresses this problem.
The scheduler looks up the values of these keys from the new Pod&#39;s labels and combines them with the &lt;code&gt;LabelSelector&lt;/code&gt;
so that PodAffinity only matches Pods that have the same key-value pairs in their labels.&lt;/p&gt;
&lt;p&gt;By using the label &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label&#34;&gt;pod-template-hash&lt;/a&gt; in &lt;code&gt;MatchLabelKeys&lt;/code&gt;,
you can ensure that only Pods of the same version are evaluated for PodAffinity or PodAntiAffinity.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apps/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Deployment&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;application-server&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;affinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labelSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchExpressions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;app&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;In&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;- database&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;topology.kubernetes.io/zone&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- pod-template-hash&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The &lt;code&gt;matchLabelKeys&lt;/code&gt; above is translated into the following within each Pod:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;application-server&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;pod-template-hash&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;xyz&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;affinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labelSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchExpressions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;app&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;In&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;- database&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod-template-hash&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Added from matchLabelKeys; Only Pods from the same replicaset will match this affinity.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;In&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;- xyz &lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;topology.kubernetes.io/zone&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- pod-template-hash&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;mismatchlabelkeys-service-isolation&#34;&gt;&lt;code&gt;MismatchLabelKeys&lt;/code&gt; - Service isolation&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;MismatchLabelKeys&lt;/code&gt; is, like &lt;code&gt;MatchLabelKeys&lt;/code&gt;, a set of Pod label keys.
The scheduler looks up the values of these keys from the new Pod&#39;s labels and merges them with the &lt;code&gt;LabelSelector&lt;/code&gt; as &lt;code&gt;key notin (value)&lt;/code&gt;
so that PodAffinity does &lt;em&gt;not&lt;/em&gt; match Pods that have the same key-value pairs in their labels.&lt;/p&gt;
&lt;p&gt;Suppose all Pods for each tenant get a &lt;code&gt;tenant&lt;/code&gt; label via a controller or a manifest management tool such as Helm.&lt;/p&gt;
&lt;p&gt;Although the value of the &lt;code&gt;tenant&lt;/code&gt; label is unknown when composing each workload&#39;s manifest,
the cluster admin wants to achieve exclusive 1:1 tenant-to-domain placement for tenant isolation.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;MismatchLabelKeys&lt;/code&gt; works for this use case:
by applying the following affinity globally using a mutating webhook,
the cluster admin can ensure that Pods from the same tenant land exclusively on the same domain,
meaning Pods from other tenants won&#39;t land on that domain.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;affinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# ensures the pods of this tenant land on the same node pool&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;node-pool&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAntiAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# ensures only Pods from this tenant lands on the same node pool&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mismatchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labelSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchExpressions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Exists&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;node-pool&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The &lt;code&gt;matchLabelKeys&lt;/code&gt; and &lt;code&gt;mismatchLabelKeys&lt;/code&gt; above are translated into the following:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;application-server&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;tenant&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;service-a&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;affinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# ensures the pods of this tenant land on the same node pool&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;node-pool&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labelSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchExpressions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;In&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;- service-a &lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podAntiAffinity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# ensures only Pods from this tenant lands on the same node pool&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mismatchLabelKeys&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labelSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchExpressions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Exists&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;tenant&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operator&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;NotIn&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;values&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;            &lt;/span&gt;- service-a &lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;topologyKey&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;node-pool&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;getting-involved&#34;&gt;Getting involved&lt;/h2&gt;
&lt;p&gt;These features are managed by Kubernetes &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please join us and share your feedback. We look forward to hearing from you!&lt;/p&gt;
&lt;h2 id=&#34;how-can-i-learn-more&#34;&gt;How can I learn more?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity&#34;&gt;The official document of PodAffinity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/3633-matchlabelkeys-to-podaffinity/README.md#story-2&#34;&gt;KEP-3633: Introduce MatchLabelKeys and MismatchLabelKeys to PodAffinity and PodAntiAffinity&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</link>
      <pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order/</guid>
      <description>
        
        
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolumes&lt;/a&gt; (PVs for short) are
associated with a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#reclaim-policy&#34;&gt;reclaim policy&lt;/a&gt;.
The reclaim policy determines the actions that the storage backend needs to take
when the PVC bound to a PV is deleted.
When the reclaim policy is &lt;code&gt;Delete&lt;/code&gt;, the expectation is that the storage backend
releases the storage resource allocated for the PV. In essence, the reclaim
policy needs to be honored on PV deletion.&lt;/p&gt;
&lt;p&gt;With the recent Kubernetes v1.31 release, a beta feature lets you configure your
cluster to behave that way and honor the configured reclaim policy.&lt;/p&gt;
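&lt;p&gt;For reference, the reclaim policy is a field on the PV object itself. A minimal sketch of a PV with a &lt;code&gt;Delete&lt;/code&gt; reclaim policy follows; the names, CSI driver, and volume handle here are illustrative, not from a real cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: csi.example.com      # illustrative driver
    volumeHandle: vol-0123       # illustrative volume ID
&lt;/code&gt;&lt;/pre&gt;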
&lt;h2 id=&#34;how-did-reclaim-work-in-previous-kubernetes-releases&#34;&gt;How did reclaim work in previous Kubernetes releases?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#Introduction&#34;&gt;PersistentVolumeClaim&lt;/a&gt; (or PVC for short) is
a user&#39;s request for storage. A PV and a PVC are considered &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#Binding&#34;&gt;Bound&lt;/a&gt;
when a newly created or matching existing PV is found for the claim. The PVs themselves are
backed by volumes allocated by the storage backend.&lt;/p&gt;
&lt;p&gt;Normally, to delete the volume of a bound PV-PVC pair, you are expected to delete the
PVC. However, there are no restrictions on deleting a PV before deleting its PVC.&lt;/p&gt;
&lt;p&gt;First, I&#39;ll demonstrate the behavior for clusters running an older version of Kubernetes.&lt;/p&gt;
&lt;h4 id=&#34;retrieve-a-pvc-that-is-bound-to-a-pv&#34;&gt;Retrieve a PVC that is bound to a PV&lt;/h4&gt;
&lt;p&gt;Retrieve an existing PVC &lt;code&gt;example-vanilla-block-pvc&lt;/code&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pvc example-vanilla-block-pvc
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following output shows the PVC and its bound PV; the PV is shown under the &lt;code&gt;VOLUME&lt;/code&gt; column:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
example-vanilla-block-pvc   Bound    pvc-6791fdd4-5fad-438e-a7fb-16410363e3da   5Gi        RWO            example-vanilla-block-sc   19s
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;delete-pv&#34;&gt;Delete PV&lt;/h4&gt;
&lt;p&gt;When I try to delete a bound PV, the &lt;code&gt;kubectl&lt;/code&gt; session blocks and
control does not return to the shell; for example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl delete pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;persistentvolume &amp;#34;pvc-6791fdd4-5fad-438e-a7fb-16410363e3da&amp;#34; deleted
^C
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;retrieving-the-pv&#34;&gt;Retrieving the PV&lt;/h4&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can see that the PV is in a &lt;code&gt;Terminating&lt;/code&gt; state:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                               STORAGECLASS               REASON   AGE
pvc-6791fdd4-5fad-438e-a7fb-16410363e3da   5Gi        RWO            Delete           Terminating   default/example-vanilla-block-pvc   example-vanilla-block-sc            2m23s
&lt;/code&gt;&lt;/pre&gt;&lt;h4 id=&#34;delete-pvc&#34;&gt;Delete PVC&lt;/h4&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl delete pvc example-vanilla-block-pvc
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following output is seen if the PVC gets successfully deleted:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;persistentvolumeclaim &amp;#34;example-vanilla-block-pvc&amp;#34; deleted
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The PV object is also removed from the cluster. Attempting to retrieve the PV
now shows that it is no longer found:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Error from server (NotFound): persistentvolumes &amp;#34;pvc-6791fdd4-5fad-438e-a7fb-16410363e3da&amp;#34; not found
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Although the PV is deleted, the underlying storage resource is not deleted and
needs to be removed manually.&lt;/p&gt;
&lt;p&gt;To sum up, the reclaim policy associated with the PersistentVolume is currently
ignored under certain circumstances. For a &lt;code&gt;Bound&lt;/code&gt; PV-PVC pair, the ordering of PV-PVC
deletion determines whether the PV reclaim policy is honored. The reclaim policy
is honored if the PVC is deleted first; however, if the PV is deleted prior to
deleting the PVC, then the reclaim policy is not exercised. As a result of this behavior,
the associated storage asset in the external infrastructure is not removed.&lt;/p&gt;
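The ordering dependency described above can be sketched in a few lines of Python. This is purely illustrative pseudologic with made-up names, not Kubernetes controller code: the reclaim step only runs as part of PVC deletion while the PV still exists, so deleting the PV first leaks the backend volume.

```python
# Illustrative sketch only (assumed names, not Kubernetes source code):
# models why the pre-v1.31 deletion ordering can leak storage.

class FakeBackend:
    """Stands in for the external storage backend holding the real volume."""
    def __init__(self):
        self.volumes = {"vol-1"}

def delete_pvc(backend, pv_exists, reclaim_policy, volume):
    # The reclaim policy is honored only if the PV still exists when
    # the PVC is deleted.
    if pv_exists and reclaim_policy == "Delete":
        backend.volumes.discard(volume)

# Out-of-order deletion: the PV object was removed first, so by the time
# the PVC goes away there is no PV left to trigger the reclaim.
backend = FakeBackend()
delete_pvc(backend, pv_exists=False, reclaim_policy="Delete", volume="vol-1")
print(backend.volumes)  # {'vol-1'} -> the backend volume leaked

# In-order deletion: PVC first, reclaim policy honored, no leak.
backend2 = FakeBackend()
delete_pvc(backend2, pv_exists=True, reclaim_policy="Delete", volume="vol-1")
print(backend2.volumes)  # set() -> backend volume released
```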
&lt;h2 id=&#34;pv-reclaim-policy-with-kubernetes-v1-31&#34;&gt;PV reclaim policy with Kubernetes v1.31&lt;/h2&gt;
&lt;p&gt;The new behavior ensures that the underlying storage object is deleted from the backend when users attempt to delete a PV manually.&lt;/p&gt;
&lt;h4 id=&#34;how-to-enable-new-behavior&#34;&gt;How to enable the new behavior?&lt;/h4&gt;
&lt;p&gt;To take advantage of the new behavior, you must have upgraded your cluster to the v1.31 release of Kubernetes
and run the CSI &lt;a href=&#34;https://github.com/kubernetes-csi/external-provisioner&#34;&gt;&lt;code&gt;external-provisioner&lt;/code&gt;&lt;/a&gt; version &lt;code&gt;5.0.1&lt;/code&gt; or later.&lt;/p&gt;
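The external-provisioner runs as a sidecar container in the CSI driver's controller workload. As a rough sketch (the Deployment layout, container name, and image path below are illustrative assumptions and vary by driver), the relevant fragment looks like:

```yaml
# Illustrative fragment of a CSI controller Deployment; the container name
# and image path are examples only and differ per CSI driver.
spec:
  template:
    spec:
      containers:
        - name: csi-provisioner
          # v5.0.1 or later is needed for the new reclaim-policy behavior
          image: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
```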
&lt;h4 id=&#34;how-does-it-work&#34;&gt;How does it work?&lt;/h4&gt;
&lt;p&gt;For CSI volumes, the new behavior is achieved by adding a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/overview/working-with-objects/finalizers/&#34;&gt;finalizer&lt;/a&gt; &lt;code&gt;external-provisioner.volume.kubernetes.io/finalizer&lt;/code&gt;
on new and existing PVs. The finalizer is only removed after the storage volume is deleted from the backend.&lt;/p&gt;
&lt;p&gt;Here is an example of a PV with the finalizer; notice the new entry in the finalizers list:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pv pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53 -o yaml
&lt;/code&gt;&lt;/pre&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;pv.kubernetes.io/provisioned-by&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi.vsphere.vmware.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;creationTimestamp&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2021-11-17T19:28:56Z&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;finalizers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- kubernetes.io/pv-protection&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- external-provisioner.volume.kubernetes.io/finalizer&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;194711&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;087f14f2-4157-4e95-8a70-8294b039d30e&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;capacity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;1Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;example-vanilla-block-pvc&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;default&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;194677&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;a7b7e3ba-f837-45ba-b243-dec7d8aaed53&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;csi&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driver&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi.vsphere.vmware.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;fsType&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ext4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage.kubernetes.io/csiProvisionerIdentity&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1637110610497-8081&lt;/span&gt;-csi.vsphere.vmware.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;vSphere CNS Block Volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeHandle&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;2dacf297-803f-4ccc-afc7-3d3c3f02051e&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;persistentVolumeReclaimPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Delete&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;example-vanilla-block-sc&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Filesystem&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;phase&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Bound&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/overview/working-with-objects/finalizers/&#34;&gt;finalizer&lt;/a&gt; prevents this
PersistentVolume from being removed from the
cluster. As stated previously, the finalizer is only removed from the PV object
after it is successfully deleted from the storage backend. To learn more about
finalizers, please refer to &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/05/14/using-finalizers-to-control-deletion/&#34;&gt;Using Finalizers to Control Deletion&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Similarly, the finalizer &lt;code&gt;kubernetes.io/pv-controller&lt;/code&gt; is added to dynamically provisioned in-tree plugin volumes.&lt;/p&gt;
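Finalizer semantics can be sketched as follows. This is a hypothetical, simplified model (not the real Kubernetes API machinery): deleting an object only marks it for deletion, and the object actually disappears from the store once its finalizers list is emptied.

```python
# Minimal sketch of finalizer semantics with hypothetical names; the real
# behavior lives in the Kubernetes API server and controllers.

FINALIZER = "external-provisioner.volume.kubernetes.io/finalizer"

class PV:
    def __init__(self, name, finalizers):
        self.name = name
        self.finalizers = list(finalizers)
        self.deletion_requested = False

store = {}

def delete(pv):
    # Deletion only marks the object while finalizers remain.
    pv.deletion_requested = True
    _maybe_remove(pv)

def remove_finalizer(pv, finalizer):
    # Called by the provisioner after the backend volume is gone.
    pv.finalizers.remove(finalizer)
    _maybe_remove(pv)

def _maybe_remove(pv):
    if pv.deletion_requested and not pv.finalizers:
        store.pop(pv.name, None)

pv = PV("pvc-a7b7e3ba", [FINALIZER])
store[pv.name] = pv

delete(pv)
print(pv.name in store)   # True: the finalizer blocks removal

remove_finalizer(pv, FINALIZER)
print(pv.name in store)   # False: removed once finalizers are cleared
```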
&lt;h4 id=&#34;what-about-csi-migrated-volumes&#34;&gt;What about CSI migrated volumes?&lt;/h4&gt;
&lt;p&gt;The fix applies to CSI migrated volumes as well.&lt;/p&gt;
&lt;h3 id=&#34;some-caveats&#34;&gt;Some caveats&lt;/h3&gt;
&lt;p&gt;The fix does not apply to statically provisioned in-tree plugin volumes.&lt;/p&gt;
&lt;h3 id=&#34;references&#34;&gt;References&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy&#34;&gt;KEP-2644&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-csi/external-provisioner/issues/546&#34;&gt;Volume leak issue&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;how-do-i-get-involved&#34;&gt;How do I get involved?&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact&#34;&gt;SIG Storage communication channels&lt;/a&gt;, including the Kubernetes Slack, are a great way to reach the SIG Storage and migration working group teams.&lt;/p&gt;
&lt;p&gt;Special thanks to the following people for their insightful reviews, thorough consideration, and valuable contributions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fan Baofa (carlory)&lt;/li&gt;
&lt;li&gt;Jan Šafránek (jsafrane)&lt;/li&gt;
&lt;li&gt;Xing Yang (xing-yang)&lt;/li&gt;
&lt;li&gt;Matthew Wong (wongma7)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Join the &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes Storage Special Interest Group (SIG)&lt;/a&gt; if you&#39;re interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system. We’re rapidly growing and always welcome new contributors.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/kubernetes-1-31-image-volume-source/</link>
      <pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/16/kubernetes-1-31-image-volume-source/</guid>
      <description>
        
        
        &lt;p&gt;The Kubernetes community is moving towards fulfilling more Artificial
Intelligence (AI) and Machine Learning (ML) use cases. While the
project was designed around microservice architectures in the past,
it’s now time to listen to end users and introduce features with a
stronger focus on AI/ML.&lt;/p&gt;
&lt;p&gt;One of these requirements is to support &lt;a href=&#34;https://opencontainers.org&#34;&gt;Open Container Initiative (OCI)&lt;/a&gt;
compatible images and artifacts (referred to as OCI objects) directly as a native
volume source. This lets users build on OCI standards and enables
them to store and distribute any content using OCI registries. A feature like
this gives the Kubernetes project a chance to grow into use cases which go
beyond running particular images.&lt;/p&gt;
&lt;p&gt;Given that, the Kubernetes community is proud to present a new alpha feature
introduced in v1.31: The Image Volume Source
(&lt;a href=&#34;https://kep.k8s.io/4639&#34;&gt;KEP-4639&lt;/a&gt;). This feature allows users to specify an
image reference as a volume in a pod and reuse it as a volume mount within
containers:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;…&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- …&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;my-volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/path/to/directory&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;my-volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;reference&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;my-image:tag&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The above example would result in mounting &lt;code&gt;my-image:tag&lt;/code&gt; to
&lt;code&gt;/path/to/directory&lt;/code&gt; in the pod’s container.&lt;/p&gt;
&lt;h2 id=&#34;use-cases&#34;&gt;Use cases&lt;/h2&gt;
&lt;p&gt;The goal of this enhancement is to stick as closely as possible to the existing
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/containers/images/&#34;&gt;container image&lt;/a&gt; implementation within the
kubelet, while introducing a new API surface to allow more extended use cases.&lt;/p&gt;
&lt;p&gt;For example, users could share a configuration file among multiple containers in
a pod without including the file in the main image, so that they can minimize
security risks and the overall image size. They can also package and distribute
binary artifacts using OCI images and mount them directly into Kubernetes pods,
streamlining their CI/CD pipelines, for example.&lt;/p&gt;
&lt;p&gt;Data scientists, MLOps engineers, or AI developers can mount large language
model weights or machine learning model weights in a pod alongside a
model-server, so that they can efficiently serve them without including them in
the model-server container image. They can package these in an OCI object to
take advantage of OCI distribution and ensure efficient model deployment. This
allows them to separate the model specifications/content from the executables
that process them.&lt;/p&gt;
&lt;p&gt;Another use case is that security engineers can use a public image for a malware
scanner and mount a volume of private (commercial) malware signatures, so
that they can load those signatures without baking their own combined image
(which might not be allowed by the copyright on the public image). Those files
work regardless of the OS or version of the scanner software.&lt;/p&gt;
&lt;p&gt;But in the long term it will be up to &lt;strong&gt;you&lt;/strong&gt; as an end user of this project to
outline further important use cases for the new feature.
&lt;a href=&#34;https://github.com/kubernetes/community/blob/54a67f5/sig-node/README.md&#34;&gt;SIG Node&lt;/a&gt;
is happy to receive any feedback or suggestions for further enhancements to
allow more advanced usage scenarios. Feel free to provide feedback by either
using the &lt;a href=&#34;https://kubernetes.slack.com/messages/sig-node&#34;&gt;Kubernetes Slack (#sig-node)&lt;/a&gt;
channel or the &lt;a href=&#34;https://groups.google.com/g/kubernetes-sig-node&#34;&gt;SIG Node mailing list&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;example&#34;&gt;Detailed example&lt;/h2&gt;
&lt;p&gt;The Kubernetes alpha feature gate &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates&#34;&gt;&lt;code&gt;ImageVolume&lt;/code&gt;&lt;/a&gt;
needs to be enabled on the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/kube-apiserver&#34;&gt;API Server&lt;/a&gt;
as well as the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/kubelet&#34;&gt;kubelet&lt;/a&gt;
to make it functional. If that’s the case and the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/container-runtimes&#34;&gt;container runtime&lt;/a&gt;
has support for the feature (like CRI-O ≥ v1.31), then an example &lt;code&gt;pod.yaml&lt;/code&gt;
like this can be created:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;registry.k8s.io/e2e-test-images/echoserver:2.3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;volume&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;reference&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/artifact:v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;pullPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;IfNotPresent&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The pod declares a new volume using the &lt;code&gt;image.reference&lt;/code&gt; of
&lt;code&gt;quay.io/crio/artifact:v1&lt;/code&gt;, which refers to an OCI object containing two files.
The &lt;code&gt;pullPolicy&lt;/code&gt; behaves in the same way as for container images and allows the
following values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Always&lt;/code&gt;: the kubelet always attempts to pull the reference and the container
creation will fail if the pull fails.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Never&lt;/code&gt;: the kubelet never pulls the reference and only uses a local image or
artifact. The container creation will fail if the reference isn’t present.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;IfNotPresent&lt;/code&gt;: the kubelet pulls if the reference isn’t already present on
disk. The container creation will fail if the reference isn’t present and the
pull fails.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;volumeMounts&lt;/code&gt; field indicates that the container named &lt;code&gt;test&lt;/code&gt;
should mount the volume at the path &lt;code&gt;/volume&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you now create the pod:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl apply -f pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And exec into it:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl &lt;span style=&#34;color:#a2f&#34;&gt;exec&lt;/span&gt; -it pod -- sh
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then you’re able to investigate what has been mounted:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;/ # ls /volume
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;dir   file
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;/ # cat /volume/file
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;2
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;/ # ls /volume/dir
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;file
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;/ # cat /volume/dir/file
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;You managed to consume an OCI artifact using Kubernetes!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The container runtime pulls the image (or artifact), mounts it into the
container, and finally makes it available for direct use. Many implementation
details closely align with the kubelet&#39;s existing image pull behavior. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If a &lt;code&gt;:latest&lt;/code&gt; tag as &lt;code&gt;reference&lt;/code&gt; is provided, then the &lt;code&gt;pullPolicy&lt;/code&gt; will
default to &lt;code&gt;Always&lt;/code&gt;, while in any other case it will default to &lt;code&gt;IfNotPresent&lt;/code&gt;
if unset.&lt;/li&gt;
&lt;li&gt;The volume gets re-resolved if the pod gets deleted and recreated, which means
that new remote content will become available on pod recreation. A failure to
resolve or pull the image during pod startup will block containers from
starting and may add significant latency. Failures will be retried using
normal volume backoff and will be reported on the pod reason and message.&lt;/li&gt;
&lt;li&gt;Pull secrets will be assembled in the same way as for the container image by
looking up node credentials, service account image pull secrets, and pod spec
image pull secrets.&lt;/li&gt;
&lt;li&gt;The OCI object gets mounted in a single directory by merging the manifest
layers in the same way as for container images.&lt;/li&gt;
&lt;li&gt;The volume is mounted as read-only (&lt;code&gt;ro&lt;/code&gt;) and non-executable files
(&lt;code&gt;noexec&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Sub-path mounts for containers are not supported
(&lt;code&gt;spec.containers[*].volumeMounts.subPath&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;The field &lt;code&gt;spec.securityContext.fsGroupChangePolicy&lt;/code&gt; has no effect on this
volume type.&lt;/li&gt;
&lt;li&gt;The feature will also work with the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages&#34;&gt;&lt;code&gt;AlwaysPullImages&lt;/code&gt; admission plugin&lt;/a&gt;
if enabled.&lt;/li&gt;
&lt;/ul&gt;
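&lt;p&gt;As a small sketch of the default pull policy behavior from the list above (the artifact reference below is hypothetical): if the &lt;code&gt;reference&lt;/code&gt; uses a &lt;code&gt;:latest&lt;/code&gt; tag and &lt;code&gt;pullPolicy&lt;/code&gt; is left unset, the kubelet behaves as if &lt;code&gt;Always&lt;/code&gt; had been specified:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;volumes:
  - name: volume
    image:
      # pullPolicy is unset: a :latest tag implies Always,
      # any other tag or a digest implies IfNotPresent
      reference: example.com/my-artifact:latest
&lt;/code&gt;&lt;/pre&gt;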
&lt;p&gt;Thank you for reading through to the end of this blog post! SIG Node is proud and
happy to deliver this feature as part of Kubernetes v1.31.&lt;/p&gt;
&lt;p&gt;As the writer of this blog post, I would like to extend my special thanks to
&lt;strong&gt;all&lt;/strong&gt; of the involved individuals! You all rock, let’s keep on hacking!&lt;/p&gt;
&lt;h2 id=&#34;further-reading&#34;&gt;Further reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/configure-pod-container/image-volumes&#34;&gt;Use an Image Volume With a Pod&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#image&#34;&gt;&lt;code&gt;image&lt;/code&gt; volume overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</link>
      <pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</guid>
      <description>
        
        
        &lt;p&gt;Volumes in Kubernetes have been described by two attributes: their storage class, and
their capacity. The storage class is an immutable property of the volume, while the
capacity can be changed dynamically with &lt;a href=&#34;https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims&#34;&gt;volume
resize&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This complicates vertical scaling of workloads with volumes. While cloud providers and
storage vendors often offer volumes that allow specifying IO quality-of-service
(performance) parameters like IOPS or throughput, and tuning them as workloads operate,
Kubernetes has had no API for changing them.&lt;/p&gt;
&lt;p&gt;We are pleased to announce that the &lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;VolumeAttributesClass
KEP&lt;/a&gt;,
alpha since Kubernetes 1.29, will be beta in 1.31. This provides a generic,
Kubernetes-native API for modifying volume parameters like provisioned IO.&lt;/p&gt;
&lt;p&gt;Like all new volume features in Kubernetes, this API is implemented via the &lt;a href=&#34;https://kubernetes-csi.github.io/docs/&#34;&gt;container
storage interface (CSI)&lt;/a&gt;. In addition to the
VolumeAttributesClass feature gate, your provisioner-specific CSI driver must support the
new ModifyVolume API which is the CSI side of this feature.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/&#34;&gt;full
documentation&lt;/a&gt;
for all details. Here we show the common workflow.&lt;/p&gt;
&lt;h3 id=&#34;dynamically-modifying-volume-attributes&#34;&gt;Dynamically modifying volume attributes&lt;/h3&gt;
&lt;p&gt;A &lt;code&gt;VolumeAttributesClass&lt;/code&gt; is a cluster-scoped resource that specifies provisioner-specific
attributes. These are created by the cluster administrator in the same way as storage
classes. For example, a series of gold, silver and bronze volume attribute classes can be
created for volumes with greater or lesser amounts of provisioned IO.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;your-csi-driver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;500&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;50MiB/s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;your-csi-driver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;500MiB/s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;An attribute class is added to a PVC in much the same way as a storage class.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;any-storage-class&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Unlike a storage class, the volume attributes class can be changed:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl patch pvc test-pv-claim -p &amp;#39;{&amp;#34;spec&amp;#34;: {&amp;#34;volumeAttributesClassName&amp;#34;: &amp;#34;gold&amp;#34;}}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Kubernetes works with the CSI driver to update the attributes of the
volume. The PVC status tracks both the current and the desired attributes
class, and the PV resource is updated with the volume attributes class that is
currently active for the volume.&lt;/p&gt;
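&lt;p&gt;For example, you can watch the modification converge by reading the PVC status (a minimal sketch, assuming the beta &lt;code&gt;status.currentVolumeAttributesClassName&lt;/code&gt; field is populated by your driver):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pvc test-pv-claim -o jsonpath=&amp;#39;{.status.currentVolumeAttributesClassName}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the value reads &lt;code&gt;gold&lt;/code&gt;, the driver has applied the modification.&lt;/p&gt;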
&lt;h3 id=&#34;limitations-with-the-beta&#34;&gt;Limitations with the beta&lt;/h3&gt;
&lt;p&gt;As a beta feature, some capabilities planned for GA are not yet
present. The largest is quota support; see the
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;KEP&lt;/a&gt;
and the discussion in
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;sig-storage&lt;/a&gt; for details.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&#34;https://kubernetes-csi.github.io/docs/drivers.html&#34;&gt;Kubernetes CSI driver
list&lt;/a&gt; for up-to-date
information about support for this feature in CSI drivers.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/15/consistent-read-from-cache-beta/</link>
      <pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/15/consistent-read-from-cache-beta/</guid>
      <description>
        
        
        &lt;p&gt;Kubernetes is renowned for its robust orchestration of containerized applications,
but as clusters grow, the demands on the control plane can become a bottleneck.
A key challenge has been ensuring strongly consistent reads from the etcd datastore,
requiring resource-intensive quorum reads.&lt;/p&gt;
&lt;p&gt;Today, the Kubernetes community is excited to announce a major improvement:
&lt;em&gt;consistent reads from cache&lt;/em&gt;, graduating to Beta in Kubernetes v1.31.&lt;/p&gt;
&lt;h3 id=&#34;why-consistent-reads-matter&#34;&gt;Why consistent reads matter&lt;/h3&gt;
&lt;p&gt;Consistent reads are essential for ensuring that Kubernetes components have an accurate view of the latest cluster state.
Guaranteeing consistent reads is crucial for maintaining the accuracy and reliability of Kubernetes operations,
enabling components to make informed decisions based on up-to-date information.
In large-scale clusters, fetching and processing this data can be a performance bottleneck,
especially for requests that involve filtering results.
While Kubernetes can filter data by namespace directly within etcd,
any other filtering by labels or field selectors requires the entire dataset to be fetched from etcd and then filtered in-memory by the Kubernetes API server.
This is particularly impactful for components like the kubelet,
which only needs to list the pods scheduled to its node, yet previously required the API server and etcd to process every pod in the cluster.&lt;/p&gt;
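&lt;p&gt;For instance, a filtered request of the kind described above looks like this (the node name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pods --all-namespaces --field-selector spec.nodeName=node-1
&lt;/code&gt;&lt;/pre&gt;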
&lt;h3 id=&#34;the-breakthrough-caching-with-confidence&#34;&gt;The breakthrough: Caching with confidence&lt;/h3&gt;
&lt;p&gt;Kubernetes has long used a watch cache to optimize read operations.
The watch cache stores a snapshot of the cluster state and receives updates through etcd watches.
However, until now, it couldn&#39;t serve consistent reads directly, as there was no guarantee the cache was sufficiently up-to-date.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;consistent reads from cache&lt;/em&gt; feature addresses this by leveraging etcd&#39;s
&lt;a href=&#34;https://etcd.io/docs/v3.5/dev-guide/interacting_v3/#watch-progress&#34;&gt;progress notifications&lt;/a&gt;
mechanism.
These notifications inform the watch cache about how current its data is compared to etcd.
When a consistent read is requested, the system first checks if the watch cache is up-to-date.
If the cache is not up-to-date, the system queries etcd for progress notifications until it&#39;s confirmed that the cache is sufficiently fresh.
Once ready, the read is efficiently served directly from the cache,
which can significantly improve performance,
particularly in cases where it would require fetching a lot of data from etcd.
This enables requests that filter data to be served from the cache,
with only minimal metadata needing to be read from etcd.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt; To benefit from this feature, your Kubernetes cluster must be running etcd version 3.4.31+ or 3.5.13+.
For older etcd versions, Kubernetes will automatically fall back to serving consistent reads directly from etcd.&lt;/p&gt;
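&lt;p&gt;This behavior is controlled by the &lt;code&gt;ConsistentListFromCache&lt;/code&gt; feature gate on the kube-apiserver, which is enabled by default with the beta graduation. As a sketch, it can be turned off if needed:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kube-apiserver --feature-gates=ConsistentListFromCache=false
&lt;/code&gt;&lt;/pre&gt;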
&lt;h3 id=&#34;performance-gains-you-ll-notice&#34;&gt;Performance gains you&#39;ll notice&lt;/h3&gt;
&lt;p&gt;This seemingly simple change has a profound impact on Kubernetes performance and scalability:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reduced etcd Load:&lt;/strong&gt; Kubernetes v1.31 can offload work from etcd,
freeing up resources for other critical operations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lower Latency:&lt;/strong&gt; Serving reads from cache is significantly faster than fetching
and processing data from etcd. This translates to quicker responses for components,
improving overall cluster responsiveness.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Improved Scalability:&lt;/strong&gt; Large clusters with thousands of nodes and pods will
see the most significant gains, as the reduction in etcd load allows the
control plane to handle more requests without sacrificing performance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;5k Node Scalability Test Results:&lt;/strong&gt; In recent scalability tests on 5,000 node
clusters, enabling consistent reads from cache delivered impressive improvements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;30% reduction&lt;/strong&gt; in kube-apiserver CPU usage&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;25% reduction&lt;/strong&gt; in etcd CPU usage&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Up to 3x reduction&lt;/strong&gt; (from 5 seconds to 1.5 seconds) in 99th percentile pod LIST request latency&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h3&gt;
&lt;p&gt;With the graduation to beta, consistent reads from cache are enabled by default,
offering a seamless performance boost to all Kubernetes users running a supported
etcd version.&lt;/p&gt;
&lt;p&gt;Our journey doesn&#39;t end here. The Kubernetes community is actively exploring
pagination support in the watch cache, which will unlock even more performance
optimizations in the future.&lt;/p&gt;
&lt;h3 id=&#34;getting-started&#34;&gt;Getting started&lt;/h3&gt;
&lt;p&gt;Upgrading to Kubernetes v1.31 and ensuring you are using etcd version 3.4.31+ or
3.5.13+ is the easiest way to experience the benefits of consistent reads from
cache.
If you have any questions or feedback, don&#39;t hesitate to reach out to the Kubernetes community.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Let us know how&lt;/strong&gt; &lt;em&gt;consistent reads from cache&lt;/em&gt; &lt;strong&gt;transforms your Kubernetes experience!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Special thanks to @ah8ad3 and @p0lyn0mial for their contributions to this feature!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</link>
      <pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/</guid>
      <description>
        
        
        &lt;p&gt;As Kubernetes continues to evolve and adapt to the changing landscape of
container orchestration, the community has decided to move cgroup v1 support
into &lt;a href=&#34;#what-does-maintenance-mode-mean&#34;&gt;maintenance mode&lt;/a&gt; in v1.31.
This shift aligns with the broader industry&#39;s move towards cgroup v2, which offers
improved functionality, including better scalability and a more consistent interface.
Before we dive into the consequences for Kubernetes, let&#39;s take a step back to
understand what cgroups are and their significance in Linux.&lt;/p&gt;
&lt;h2 id=&#34;understanding-cgroups&#34;&gt;Understanding cgroups&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://man7.org/linux/man-pages/man7/cgroups.7.html&#34;&gt;Control groups&lt;/a&gt;, or
cgroups, are a Linux kernel feature that allows the allocation, prioritization,
denial, and management of system resources (such as CPU, memory, disk I/O,
and network bandwidth) among processes. This functionality is crucial for
maintaining system performance and ensuring that no single process can
monopolize system resources, which is especially important in multi-tenant
environments.&lt;/p&gt;
&lt;p&gt;There are two versions of cgroups:
&lt;a href=&#34;https://docs.kernel.org/admin-guide/cgroup-v1/index.html&#34;&gt;v1&lt;/a&gt; and
&lt;a href=&#34;https://docs.kernel.org/admin-guide/cgroup-v2.html&#34;&gt;v2&lt;/a&gt;. While cgroup v1
provided sufficient capabilities for resource management, it had limitations
that led to the development of cgroup v2. Cgroup v2 offers a more unified and
consistent interface, on top of better resource control features.&lt;/p&gt;
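&lt;p&gt;You can check which cgroup version a Linux node is using by inspecting the filesystem type mounted at &lt;code&gt;/sys/fs/cgroup/&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;stat -fc %T /sys/fs/cgroup/
# cgroup2fs means cgroup v2
# tmpfs means cgroup v1
&lt;/code&gt;&lt;/pre&gt;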
&lt;h2 id=&#34;cgroups-in-kubernetes&#34;&gt;Cgroups in Kubernetes&lt;/h2&gt;
&lt;p&gt;For Linux nodes, Kubernetes relies heavily on cgroups to manage and isolate the
resources consumed by containers running in pods. Each container in Kubernetes
is placed in its own cgroup, which allows Kubernetes to enforce resource limits,
monitor usage, and ensure fair resource distribution among all containers.&lt;/p&gt;
&lt;h3 id=&#34;how-kubernetes-uses-cgroups&#34;&gt;How Kubernetes uses cgroups&lt;/h3&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;strong&gt;Resource Allocation&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;Ensures that containers do not exceed their allocated CPU and memory limits.&lt;/dd&gt;
&lt;dt&gt;&lt;strong&gt;Isolation&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;Isolates containers from each other to prevent resource contention.&lt;/dd&gt;
&lt;dt&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/dt&gt;
&lt;dd&gt;Tracks resource usage for each container to provide insights and metrics.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;transitioning-to-cgroup-v2&#34;&gt;Transitioning to Cgroup v2&lt;/h2&gt;
&lt;p&gt;The Linux community has been focusing on cgroup v2 for new features and
improvements. Major Linux distributions and projects like
&lt;a href=&#34;https://systemd.io/&#34;&gt;systemd&lt;/a&gt; are
&lt;a href=&#34;https://github.com/systemd/systemd/issues/30852&#34;&gt;transitioning&lt;/a&gt; towards cgroup v2.
Using cgroup v2 provides several benefits over cgroup v1, such as a unified hierarchy,
an improved interface, better resource control,
a &lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/117793&#34;&gt;cgroup aware OOM killer&lt;/a&gt;, and
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2033-kubelet-in-userns-aka-rootless/README.md#cgroup&#34;&gt;rootless support&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Given these advantages, Kubernetes is also making the move to embrace cgroup
v2 more fully. However, this transition needs to be handled carefully to avoid
disrupting existing workloads and to provide a smooth migration path for users.&lt;/p&gt;
&lt;h2 id=&#34;moving-cgroup-v1-support-into-maintenance-mode&#34;&gt;Moving cgroup v1 support into maintenance mode&lt;/h2&gt;
&lt;h3 id=&#34;what-does-maintenance-mode-mean&#34;&gt;What does maintenance mode mean?&lt;/h3&gt;
&lt;p&gt;When cgroup v1 is placed into maintenance mode in Kubernetes, it means that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Feature Freeze&lt;/strong&gt;: No new features will be added to cgroup v1 support.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security Fixes&lt;/strong&gt;: Critical security fixes will still be provided.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Best-Effort Bug Fixes&lt;/strong&gt;: Major bugs may be fixed if feasible, but some
issues might remain unresolved.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;why-move-to-maintenance-mode&#34;&gt;Why move to maintenance mode?&lt;/h3&gt;
&lt;p&gt;The move to maintenance mode is driven by the need to stay in line with the
broader ecosystem and to encourage the adoption of cgroup v2, which offers
better performance, security, and usability. By transitioning cgroup v1 to
maintenance mode, Kubernetes can focus on enhancing support for cgroup v2
and ensure it meets the needs of modern workloads. It&#39;s important to note
that maintenance mode does not mean deprecation; cgroup v1 will continue to
receive critical security fixes and major bug fixes as needed.&lt;/p&gt;
&lt;h2 id=&#34;what-this-means-for-cluster-administrators&#34;&gt;What this means for cluster administrators&lt;/h2&gt;
&lt;p&gt;Users currently relying on cgroup v1 are highly encouraged to plan for the
transition to cgroup v2. This transition involves:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Upgrading Systems&lt;/strong&gt;: Ensuring that the underlying operating systems and
container runtimes support cgroup v2.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Testing Workloads&lt;/strong&gt;: Verifying that workloads and applications function
correctly with cgroup v2.&lt;/li&gt;
&lt;/ol&gt;
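&lt;p&gt;To check which cgroup version a node currently uses, you can inspect the filesystem type mounted at &lt;code&gt;/sys/fs/cgroup/&lt;/code&gt;:&lt;/p&gt;

```shell
stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1
```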
&lt;h2 id=&#34;further-reading&#34;&gt;Further reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://man7.org/linux/man-pages/man7/cgroups.7.html&#34;&gt;Linux cgroups&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cgroups/&#34;&gt;Cgroup v2 in Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/08/31/cgroupv2-ga-1-25/&#34;&gt;Kubernetes 1.25: cgroup v2 graduates to GA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/14/last-phase-transition-time-ga/</link>
      <pubDate>Wed, 14 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/14/last-phase-transition-time-ga/</guid>
      <description>
        
        
        &lt;p&gt;Announcing the graduation to General Availability (GA) of the PersistentVolume &lt;code&gt;lastTransitionTime&lt;/code&gt; status
field, in Kubernetes v1.31!&lt;/p&gt;
&lt;p&gt;The Kubernetes SIG Storage team is excited to announce that the &amp;quot;PersistentVolumeLastPhaseTransitionTime&amp;quot; feature, introduced
as an alpha in Kubernetes v1.28, has now reached GA status and is officially part of the Kubernetes v1.31 release. This enhancement
helps Kubernetes users understand when a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolume&lt;/a&gt; transitions between
different phases, allowing for more efficient and informed resource management.&lt;/p&gt;
&lt;p&gt;For a v1.31 cluster, you can now assume that every PersistentVolume object has a
&lt;code&gt;.status.lastTransitionTime&lt;/code&gt; field that holds a timestamp of
when the volume last transitioned its phase. This change is not immediate; the new field is populated the first time a PersistentVolume
is updated and transitions between phases (&lt;code&gt;Pending&lt;/code&gt;, &lt;code&gt;Bound&lt;/code&gt;, or &lt;code&gt;Released&lt;/code&gt;) after upgrading to Kubernetes v1.31.&lt;/p&gt;
&lt;h2 id=&#34;what-changed&#34;&gt;What changed?&lt;/h2&gt;
&lt;p&gt;The API strategy for updating PersistentVolume objects has been modified to populate the &lt;code&gt;.status.lastTransitionTime&lt;/code&gt; field with the
current timestamp whenever a PersistentVolume transitions phases. Users are allowed to set this field manually if needed, but it will
be overwritten when the PersistentVolume transitions phases again.&lt;/p&gt;
&lt;p&gt;For more details, read about
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#phase-transition-timestamp&#34;&gt;Phase transition timestamp&lt;/a&gt; in the Kubernetes documentation.
You can also read the previous &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/23/persistent-volume-last-phase-transition-time&#34;&gt;blog post&lt;/a&gt; announcing the feature as alpha in v1.28.&lt;/p&gt;
&lt;p&gt;To provide feedback, join our &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes Storage Special Interest Group&lt;/a&gt; (SIG)
or participate in discussions on our &lt;a href=&#34;https://app.slack.com/client/T09NY5SBT/C09QZFCE5&#34;&gt;public Slack channel&lt;/a&gt;.&lt;/p&gt;
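&lt;p&gt;You can read the new field with &lt;code&gt;kubectl&lt;/code&gt;; the PersistentVolume name below is hypothetical:&lt;/p&gt;

```shell
# Print the timestamp of the volume's last phase transition
kubectl get pv my-pv -o jsonpath='{.status.lastTransitionTime}'
```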

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.31: Elli</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/13/kubernetes-v1-31-release/</link>
      <pubDate>Tue, 13 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/13/kubernetes-v1-31-release/</guid>
      <description>
        
        
        &lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.31: Elli!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.31 introduces new
stable, beta, and alpha features.
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 45 enhancements.
Of those enhancements, 11 have graduated to Stable, 22 are entering Beta,
and 12 are new in Alpha.&lt;/p&gt;
&lt;h2 id=&#34;release-theme-and-logo&#34;&gt;Release theme and logo&lt;/h2&gt;


&lt;figure class=&#34;release-logo &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2024-08-13-kubernetes-1.31-release/k8s-1.31.png&#34;
         alt=&#34;Kubernetes v1.31 Elli logo&#34;/&gt; 
&lt;/figure&gt;
&lt;p&gt;The Kubernetes v1.31 Release Theme is &amp;quot;Elli&amp;quot;.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.31&#39;s Elli is a cute and joyful dog, with a heart of gold and a nice sailor&#39;s cap, as a playful wink to the huge and diverse family of Kubernetes contributors.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.31 marks the first release after the project has successfully celebrated &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/06/10-years-of-kubernetes/&#34;&gt;its first 10 years&lt;/a&gt;.
Kubernetes has come a very long way since its inception, and it&#39;s still moving towards exciting new directions with each release.
After 10 years, it is awe-inspiring to reflect on the effort, dedication, skill, wit and tiring work of the countless Kubernetes contributors who have made this a reality.&lt;/p&gt;
&lt;p&gt;And yet, despite the herculean effort needed to run the project, there is no shortage of people who show up, time and again, with enthusiasm, smiles and a sense of pride for contributing and being part of the community.
This &amp;quot;spirit&amp;quot; that we see from new and old contributors alike is the sign of a vibrant community, a &amp;quot;joyful&amp;quot; community, if we might call it that.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.31&#39;s Elli is all about celebrating this wonderful spirit! Here&#39;s to the next decade of Kubernetes!&lt;/p&gt;
&lt;h2 id=&#34;highlights-of-features-graduating-to-stable&#34;&gt;Highlights of features graduating to Stable&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now stable following the v1.31 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;apparmor-support-is-now-stable&#34;&gt;AppArmor support is now stable&lt;/h3&gt;
&lt;p&gt;Kubernetes support for AppArmor is now GA. Protect your containers using AppArmor by setting the &lt;code&gt;appArmorProfile.type&lt;/code&gt; field in the container&#39;s &lt;code&gt;securityContext&lt;/code&gt;.
Note that before Kubernetes v1.30, AppArmor was controlled via annotations; starting in v1.30 it is controlled using fields.
It is recommended that you migrate away from using annotations and start using the &lt;code&gt;appArmorProfile.type&lt;/code&gt; field.&lt;/p&gt;
&lt;p&gt;To learn more read the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tutorials/security/apparmor/&#34;&gt;AppArmor tutorial&lt;/a&gt;.
This work was done as a part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/24&#34;&gt;KEP #24&lt;/a&gt;, by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;.&lt;/p&gt;
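&lt;p&gt;As a minimal sketch (the Pod and container names here are hypothetical), a Pod that runs its containers under the container runtime's default AppArmor profile looks like this:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
spec:
  securityContext:
    appArmorProfile:
      type: RuntimeDefault   # other types: Localhost (with localhostProfile), Unconfined
  containers:
  - name: hello
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
```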
&lt;h3 id=&#34;improved-ingress-connectivity-reliability-for-kube-proxy&#34;&gt;Improved ingress connectivity reliability for kube-proxy&lt;/h3&gt;
&lt;p&gt;Kube-proxy improved ingress connectivity reliability is stable in v1.31.
A common problem with load balancers in Kubernetes is keeping the different components involved synchronized so that traffic is not dropped.
This feature implements a mechanism in kube-proxy for load balancers to do connection draining for terminating Nodes exposed by services of &lt;code&gt;type: LoadBalancer&lt;/code&gt; and &lt;code&gt;externalTrafficPolicy: Cluster&lt;/code&gt;, and establishes best practices for cloud providers and Kubernetes load balancer implementations.&lt;/p&gt;
&lt;p&gt;To use this feature, kube-proxy needs to run as the default service proxy on the cluster and the load balancer needs to support connection draining.
No specific changes are required to use it; it has been enabled by default in kube-proxy since v1.30 and was promoted to stable in v1.31.&lt;/p&gt;
&lt;p&gt;For more details about this feature please visit the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/networking/virtual-ips/#external-traffic-policy&#34;&gt;Virtual IPs and Service Proxies documentation page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3836&#34;&gt;KEP #3836&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;persistent-volume-last-phase-transition-time&#34;&gt;Persistent Volume last phase transition time&lt;/h3&gt;
&lt;p&gt;The PersistentVolume last phase transition time feature moved to GA in v1.31.
This feature adds a field to &lt;code&gt;PersistentVolumeStatus&lt;/code&gt; that holds a timestamp of when a PersistentVolume last transitioned to a different phase.
With this feature enabled, every PersistentVolume object has a new field, &lt;code&gt;.status.lastTransitionTime&lt;/code&gt;, that holds a timestamp of
when the volume last transitioned its phase.
This change is not immediate; the new field is populated the first time a PersistentVolume is updated and transitions between phases (&lt;code&gt;Pending&lt;/code&gt;, &lt;code&gt;Bound&lt;/code&gt;, or &lt;code&gt;Released&lt;/code&gt;) after upgrading to Kubernetes v1.31.
This allows you to measure, for example, the time it takes a PersistentVolume to move from &lt;code&gt;Pending&lt;/code&gt; to &lt;code&gt;Bound&lt;/code&gt;, which can also be useful for providing metrics and SLOs.&lt;/p&gt;
&lt;p&gt;For more details about this feature please visit the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolume documentation page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This work was done as a part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3762&#34;&gt;KEP #3762&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;highlights-of-features-graduating-to-beta&#34;&gt;Highlights of features graduating to Beta&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now beta following the v1.31 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;nftables-backend-for-kube-proxy&#34;&gt;nftables backend for kube-proxy&lt;/h3&gt;
&lt;p&gt;The nftables backend moves to beta in v1.31, behind the &lt;code&gt;NFTablesProxyMode&lt;/code&gt; feature gate, which is now enabled by default.&lt;/p&gt;
&lt;p&gt;The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables.
The &lt;code&gt;nftables&lt;/code&gt; proxy mode is able to process changes to service endpoints faster and more efficiently than the &lt;code&gt;iptables&lt;/code&gt; mode, and is also able to more efficiently process packets in the kernel (though this only
becomes noticeable in clusters with tens of thousands of services).&lt;/p&gt;
&lt;p&gt;As of Kubernetes v1.31, the &lt;code&gt;nftables&lt;/code&gt; mode is still relatively new, and may not be compatible with all network plugins; consult the documentation for your network plugin.
This proxy mode is only available on Linux nodes, and requires kernel 5.13 or later.
Before migrating, note that some features, especially around NodePort services, are not implemented exactly the same in nftables mode as they are in iptables mode.
Check the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/networking/virtual-ips/#migrating-from-iptables-mode-to-nftables&#34;&gt;migration guide&lt;/a&gt; to see if you need to override the default configuration.&lt;/p&gt;
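&lt;p&gt;Assuming you configure kube-proxy via a configuration file, opting into the new backend is a one-line change:&lt;/p&gt;

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```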
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3866&#34;&gt;KEP #3866&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;changes-to-reclaim-policy-for-persistentvolumes&#34;&gt;Changes to reclaim policy for PersistentVolumes&lt;/h3&gt;
&lt;p&gt;The Always Honor PersistentVolume Reclaim Policy feature has advanced to beta in Kubernetes v1.31.
This enhancement ensures that the PersistentVolume (PV) reclaim policy is respected even after the associated PersistentVolumeClaim (PVC) is deleted, thereby preventing the leakage of volumes.&lt;/p&gt;
&lt;p&gt;Prior to this feature, the reclaim policy linked to a PV could be disregarded under specific conditions, depending on whether the PV or PVC was deleted first.
Consequently, the corresponding storage resource in the external infrastructure might not be removed, even if the reclaim policy was set to &amp;quot;Delete&amp;quot;.
This led to potential inconsistencies and resource leaks.&lt;/p&gt;
&lt;p&gt;With the introduction of this feature, Kubernetes now guarantees that the &amp;quot;Delete&amp;quot; reclaim policy will be enforced, ensuring the deletion of the underlying storage object from the backend infrastructure, regardless of the deletion sequence of the PV and PVC.&lt;/p&gt;
&lt;p&gt;This work was done as a part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/2644&#34;&gt;KEP #2644&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;bound-service-account-token-improvements&#34;&gt;Bound service account token improvements&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;ServiceAccountTokenNodeBinding&lt;/code&gt; feature is promoted to beta in v1.31.
This feature allows requesting a token bound only to a node, not to a pod, which includes node information in claims in the token and validates the existence of the node when the token is used.
For more information, read the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-tokens&#34;&gt;bound service account tokens documentation&lt;/a&gt;.&lt;/p&gt;
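&lt;p&gt;For example, you can request a node-bound token with &lt;code&gt;kubectl&lt;/code&gt;; the ServiceAccount and Node names here are hypothetical:&lt;/p&gt;

```shell
# Issue a token bound to a Node object rather than a Pod
kubectl create token build-robot --bound-object-kind Node --bound-object-name node-001
```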
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4193&#34;&gt;KEP #4193&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;multiple-service-cidrs&#34;&gt;Multiple Service CIDRs&lt;/h3&gt;
&lt;p&gt;Support for clusters with multiple Service CIDRs moves to beta in v1.31 (disabled by default).&lt;/p&gt;
&lt;p&gt;There are multiple components in a Kubernetes cluster that consume IP addresses: Nodes, Pods, and Services.
Node and Pod IP ranges can change dynamically because they depend on the infrastructure or the network plugin, respectively.
However, Service IP ranges are defined during cluster creation as a hardcoded flag on the kube-apiserver.
IP exhaustion has been a problem for long-lived or large clusters, as admins needed to expand, shrink, or even entirely replace the assigned Service CIDR range.
These operations were never supported natively and were performed via complex and delicate maintenance procedures, often causing cluster downtime. This new feature allows users and cluster admins to dynamically modify Service CIDR ranges with zero downtime.&lt;/p&gt;
&lt;p&gt;For more details about this feature please visit the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/networking/virtual-ips/#ip-address-objects&#34;&gt;Virtual IPs and Service Proxies&lt;/a&gt; documentation page.&lt;/p&gt;
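&lt;p&gt;With the feature enabled, adding a new Service IP range is done by creating a ServiceCIDR object; the name and CIDR below are examples:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.96.100.0/24
```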
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/1880&#34;&gt;KEP #1880&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;traffic-distribution-for-services&#34;&gt;Traffic distribution for Services&lt;/h3&gt;
&lt;p&gt;Traffic distribution for Services moves to beta in v1.31 and is enabled by default.&lt;/p&gt;
&lt;p&gt;After several iterations on finding the best user experience and traffic engineering capabilities for Services networking, SIG Networking implemented the &lt;code&gt;trafficDistribution&lt;/code&gt; field in the Service specification, which serves as a guideline for the underlying implementation to consider while making routing decisions.&lt;/p&gt;
&lt;p&gt;For more details about this feature please read the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/17/kubernetes-v1-30-release/#traffic-distribution-for-services-sig-network-https-github-com-kubernetes-community-tree-master-sig-network&#34;&gt;1.30 Release Blog&lt;/a&gt;
or visit the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/services-networking/service/#traffic-distribution&#34;&gt;Service&lt;/a&gt; documentation page.&lt;/p&gt;
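&lt;p&gt;A minimal sketch of a Service that prefers topologically close endpoints (the Service name and selector are hypothetical):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # hint to route to nearby endpoints when possible
```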
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4444&#34;&gt;KEP #4444&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;kubernetes-volumeattributesclass-modifyvolume&#34;&gt;Kubernetes VolumeAttributesClass ModifyVolume&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volume-attributes-classes/&#34;&gt;VolumeAttributesClass&lt;/a&gt; API is moving to beta in v1.31.
The VolumeAttributesClass provides a generic,
Kubernetes-native API for dynamically modifying volume parameters, such as provisioned IO.
This allows workloads to vertically scale their volumes online to balance cost and performance, if supported by their provider.
This feature has been alpha since Kubernetes v1.29.&lt;/p&gt;
&lt;p&gt;This work was done as a part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3751&#34;&gt;KEP #3751&lt;/a&gt; and led by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;.&lt;/p&gt;
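&lt;p&gt;A hedged sketch of a VolumeAttributesClass; the driver name and parameter keys are driver-specific and hypothetical here:&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: silver
driverName: example.csi.k8s.io   # hypothetical CSI driver
parameters:                      # keys are interpreted by the driver
  provisioned-iops: "3000"
```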
&lt;h2 id=&#34;new-features-in-alpha&#34;&gt;New features in Alpha&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now alpha following the v1.31 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;new-dra-apis-for-better-accelerators-and-other-hardware-management&#34;&gt;New DRA APIs for better accelerators and other hardware management&lt;/h3&gt;
&lt;p&gt;Kubernetes v1.31 brings an updated dynamic resource allocation (DRA) API and design.
The main focus of the update is on structured parameters, because they make resource information and requests transparent to Kubernetes and clients, enabling features like cluster autoscaling.
DRA support in the kubelet was updated such that version skew between kubelet and the control plane is possible. With structured parameters, the scheduler allocates ResourceClaims while scheduling a pod.
Allocation by a DRA driver controller is still supported through what is now called &amp;quot;classic DRA&amp;quot;.&lt;/p&gt;
&lt;p&gt;With Kubernetes v1.31, classic DRA has a separate feature gate named &lt;code&gt;DRAControlPlaneController&lt;/code&gt;, which you need to enable explicitly.
With such a control plane controller, a DRA driver can implement allocation policies that are not supported yet through structured parameters.&lt;/p&gt;
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3063&#34;&gt;KEP #3063&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;support-for-image-volumes&#34;&gt;Support for image volumes&lt;/h3&gt;
&lt;p&gt;The Kubernetes community is moving towards fulfilling more Artificial Intelligence (AI) and Machine Learning (ML) use cases in the future.&lt;/p&gt;
&lt;p&gt;One of the requirements to fulfill these use cases is to support Open Container Initiative (OCI) compatible images and artifacts (referred to as OCI objects) directly as a native volume source.
This allows users to focus on OCI standards as well as enables them to store and distribute any content using OCI registries.&lt;/p&gt;
&lt;p&gt;Given that, v1.31 adds a new alpha feature that allows using an OCI image as a volume in a Pod.
This feature lets users specify an image reference as a volume in a Pod and mount it as a volume
within containers. You need to enable the &lt;code&gt;ImageVolume&lt;/code&gt; feature gate to try this out.&lt;/p&gt;
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4639&#34;&gt;KEP #4639&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt; and &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;.&lt;/p&gt;
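&lt;p&gt;A minimal sketch of a Pod mounting an OCI image as a volume; the names and the image reference are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo
spec:
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: content
      mountPath: /content   # the image's contents appear read-only here
  volumes:
  - name: content
    image:
      reference: quay.io/crio/artifact:v1   # hypothetical OCI object
      pullPolicy: IfNotPresent
```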
&lt;h3 id=&#34;exposing-device-health-information-through-pod-status&#34;&gt;Exposing device health information through Pod status&lt;/h3&gt;
&lt;p&gt;Exposing device health information through Pod status is added as a new alpha feature in v1.31, disabled by default.&lt;/p&gt;
&lt;p&gt;Before Kubernetes v1.31, the way to know whether a Pod is associated with a failed device is to use the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources&#34;&gt;PodResources API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;By enabling this feature, the field &lt;code&gt;allocatedResourcesStatus&lt;/code&gt; will be added to each container status, within the &lt;code&gt;.status&lt;/code&gt; for each Pod. The &lt;code&gt;allocatedResourcesStatus&lt;/code&gt; field reports health information for each device assigned to the container.&lt;/p&gt;
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4680&#34;&gt;KEP #4680&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;finer-grained-authorization-based-on-selectors&#34;&gt;Finer-grained authorization based on selectors&lt;/h3&gt;
&lt;p&gt;This feature allows webhook authorizers and future (but not currently designed) in-tree authorizers to
allow &lt;strong&gt;list&lt;/strong&gt; and &lt;strong&gt;watch&lt;/strong&gt; requests, provided those requests use label and/or field selectors.
For example, it is now possible for an authorizer to express: this user cannot list all pods, but can list all pods where &lt;code&gt;.spec.nodeName&lt;/code&gt; matches some specific value. Or to allow a user to watch all Secrets in a namespace
that are &lt;em&gt;not&lt;/em&gt; labelled as &lt;code&gt;confidential: true&lt;/code&gt;.
Combined with CRD field selectors (also moving to beta in v1.31), it is possible to write more secure
per-node extensions.&lt;/p&gt;
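&lt;p&gt;For example, a request an authorizer could now permit while denying an unrestricted list (the node name is hypothetical):&lt;/p&gt;

```shell
# A list request scoped by a field selector, which the authorizer can
# now take into account when making its decision
kubectl get pods --all-namespaces --field-selector spec.nodeName=node-001
```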
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4601&#34;&gt;KEP #4601&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;restrictions-on-anonymous-api-access&#34;&gt;Restrictions on anonymous API access&lt;/h3&gt;
&lt;p&gt;By enabling the feature gate &lt;code&gt;AnonymousAuthConfigurableEndpoints&lt;/code&gt;, users can now use the authentication configuration file to configure the endpoints that can be accessed by anonymous requests.
This allows users to protect themselves against RBAC misconfigurations that can give anonymous users broad access to the cluster.&lt;/p&gt;
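&lt;p&gt;A hedged sketch of such an authentication configuration, restricting anonymous access to the health endpoints only:&lt;/p&gt;

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:       # anonymous requests are allowed only for these paths
  - path: /livez
  - path: /readyz
  - path: /healthz
```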
&lt;p&gt;This work was done as a part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4633&#34;&gt;KEP #4633&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;graduations-deprecations-and-removals-in-1-31&#34;&gt;Graduations, deprecations, and removals in 1.31&lt;/h2&gt;
&lt;h3 id=&#34;graduations-to-stable&#34;&gt;Graduations to Stable&lt;/h3&gt;
&lt;p&gt;This lists all the features that graduated to stable (also known as &lt;em&gt;general availability&lt;/em&gt;). For a full list of updates including new features and graduations from alpha to beta, see the release notes.&lt;/p&gt;
&lt;p&gt;This release includes a total of 11 enhancements promoted to Stable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3762&#34;&gt;PersistentVolume last phase transition time&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/2305&#34;&gt;Metric cardinality enforcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3836&#34;&gt;Kube-proxy improved ingress connectivity reliability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4009&#34;&gt;Add CDI devices to device plugin API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4569&#34;&gt;Move cgroup v1 support into maintenance mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/24&#34;&gt;AppArmor support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3017&#34;&gt;PodHealthyPolicy for PodDisruptionBudget&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3329&#34;&gt;Retriable and non-retriable Pod failures for Jobs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3715&#34;&gt;Elastic Indexed Jobs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3335&#34;&gt;Allow StatefulSet to control start replica ordinal numbering&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/2185&#34;&gt;Random Pod selection on ReplicaSet downscaling&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;deprecations-and-removals&#34;&gt;Deprecations and Removals&lt;/h3&gt;
&lt;p&gt;As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project&#39;s overall health.
See the Kubernetes &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-policy/&#34;&gt;deprecation and removal policy&lt;/a&gt; for more details on this process.&lt;/p&gt;
&lt;h4 id=&#34;cgroup-v1-enters-the-maintenance-mode&#34;&gt;Cgroup v1 enters the maintenance mode&lt;/h4&gt;
&lt;p&gt;As Kubernetes continues to evolve and adapt to the changing landscape of container orchestration, the community has decided to move cgroup v1 support into maintenance mode in v1.31.
This shift aligns with the broader industry&#39;s move towards &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cgroups/&#34;&gt;cgroup v2&lt;/a&gt;, offering improved functionality, scalability, and a more consistent interface.
In Kubernetes, maintenance mode means that no new features will be added to cgroup v1 support.
Critical security fixes will still be provided; however, bug fixing is now best-effort, meaning major bugs may be fixed if feasible, but some issues might remain unresolved.&lt;/p&gt;
&lt;p&gt;It is recommended that you start switching to cgroup v2 as soon as possible.
This transition depends on your architecture, and includes ensuring that the underlying operating systems and container runtimes support cgroup v2, and testing that workloads and applications function correctly with it.&lt;/p&gt;
&lt;p&gt;Please report any problems you encounter by filing an &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/new/choose&#34;&gt;issue&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This work was done as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4569&#34;&gt;KEP #4569&lt;/a&gt; by &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&#34;a-note-about-sha-1-signature-support&#34;&gt;A note about SHA-1 signature support&lt;/h4&gt;
&lt;p&gt;In &lt;a href=&#34;https://go.dev/doc/go1.18#sha1&#34;&gt;go1.18&lt;/a&gt; (released in March 2022), the crypto/x509 library started to reject certificates signed with a SHA-1 hash function.
While SHA-1 is established to be unsafe and publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015, there might still be cases in the context of Kubernetes where user-provided certificates, signed with a SHA-1 hash function by private authorities, are used for aggregated API servers or webhooks.
If you have relied on SHA-1 based certificates, you must explicitly opt back into its support by setting &lt;code&gt;GODEBUG=x509sha1=1&lt;/code&gt; in your environment.&lt;/p&gt;
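&lt;p&gt;For example, to temporarily re-enable SHA-1 certificate support for a single process while you migrate, you could set the variable when launching the affected component (the &lt;code&gt;x509sha1&lt;/code&gt; GODEBUG is real; the component invocation here is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Temporary workaround only; this stops working with binaries built with go1.24
GODEBUG=x509sha1=1 kube-apiserver [your existing flags]
&lt;/code&gt;&lt;/pre&gt;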
&lt;p&gt;Given Go&#39;s &lt;a href=&#34;https://go.dev/blog/compat&#34;&gt;compatibility policy for GODEBUGs&lt;/a&gt;, the &lt;code&gt;x509sha1&lt;/code&gt; GODEBUG and the support for SHA-1 certificates will &lt;a href=&#34;https://tip.golang.org/doc/go1.23&#34;&gt;fully go away in go1.24&lt;/a&gt; which will be released in the first half of 2025.
If you rely on SHA-1 certificates, please start moving off them.&lt;/p&gt;
&lt;p&gt;Please see &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/125689&#34;&gt;Kubernetes issue #125689&lt;/a&gt; to get a better idea of timelines around the support for SHA-1 going away, when Kubernetes releases plans to adopt go1.24, and for more details on how to detect usage of SHA-1 certificates via metrics and audit logging.&lt;/p&gt;
&lt;h4 id=&#34;deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004&#34;&gt;Deprecation of &lt;code&gt;status.nodeInfo.kubeProxyVersion&lt;/code&gt; field for Nodes (&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4004&#34;&gt;KEP 4004&lt;/a&gt;)&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;.status.nodeInfo.kubeProxyVersion&lt;/code&gt; field of Nodes has been deprecated in Kubernetes v1.31,
and will be removed in a later release.
It&#39;s being deprecated because the value of this field wasn&#39;t (and isn&#39;t) accurate.
This field is set by the kubelet, which does not have reliable information about the kube-proxy version or whether kube-proxy is running.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;DisableNodeKubeProxyVersion&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate&lt;/a&gt; will be set to &lt;code&gt;true&lt;/code&gt; by default in v1.31, and the kubelet will no longer attempt to set the &lt;code&gt;.status.kubeProxyVersion&lt;/code&gt; field for its associated Node.&lt;/p&gt;
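&lt;p&gt;If something you run still reads this field, you can temporarily opt back into the old behaviour by disabling the feature gate on the kubelet. This is a sketch of the standard feature-gate syntax, not a recommendation to keep depending on the deprecated field:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Keep the kubelet populating .status.nodeInfo.kubeProxyVersion (deprecated)
kubelet --feature-gates=DisableNodeKubeProxyVersion=false [your existing flags]
&lt;/code&gt;&lt;/pre&gt;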
&lt;h4 id=&#34;removal-of-all-in-tree-integrations-with-cloud-providers&#34;&gt;Removal of all in-tree integrations with cloud providers&lt;/h4&gt;
&lt;p&gt;As highlighted in a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/&#34;&gt;previous article&lt;/a&gt;, the last remaining in-tree support for cloud provider integration has been removed as part of the v1.31 release.
This doesn&#39;t mean you can&#39;t integrate with a cloud provider; however, you now &lt;strong&gt;must&lt;/strong&gt; use the
recommended approach of an external integration. Some integrations are part of the Kubernetes
project and others are third-party software.&lt;/p&gt;
&lt;p&gt;This milestone marks the completion of the externalization process for all cloud providers&#39; integrations from the Kubernetes core (&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md&#34;&gt;KEP-2395&lt;/a&gt;), a process started with Kubernetes v1.26.
This change helps Kubernetes to get closer to being a truly vendor-neutral platform.&lt;/p&gt;
&lt;p&gt;For further details on the cloud provider integrations, read our &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/&#34;&gt;v1.29 Cloud Provider Integrations feature blog&lt;/a&gt;.
For additional context about the in-tree code removal, we invite you to check the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/11/16/kubernetes-1-29-upcoming-changes/#removal-of-in-tree-integrations-with-cloud-providers-kep-2395-https-kep-k8s-io-2395&#34;&gt;v1.29 deprecation blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The latter blog also contains useful information for users who need to migrate to version v1.29 and later.&lt;/p&gt;
&lt;h4 id=&#34;removal-of-in-tree-provider-feature-gates&#34;&gt;Removal of in-tree provider feature gates&lt;/h4&gt;
&lt;p&gt;In Kubernetes v1.31, the following alpha feature gates &lt;code&gt;InTreePluginAWSUnregister&lt;/code&gt;, &lt;code&gt;InTreePluginAzureDiskUnregister&lt;/code&gt;, &lt;code&gt;InTreePluginAzureFileUnregister&lt;/code&gt;, &lt;code&gt;InTreePluginGCEUnregister&lt;/code&gt;, &lt;code&gt;InTreePluginOpenStackUnregister&lt;/code&gt;, and &lt;code&gt;InTreePluginvSphereUnregister&lt;/code&gt; have been removed. These feature gates were introduced to facilitate the testing of scenarios where in-tree volume plugins were removed from the codebase, without actually removing them. Since Kubernetes 1.30 had deprecated these in-tree volume plugins, these feature gates were redundant and no longer served a purpose. The only CSI migration gate still standing is &lt;code&gt;InTreePluginPortworxUnregister&lt;/code&gt;, which will remain in alpha until the CSI migration for Portworx is completed and its in-tree volume plugin is ready for removal.&lt;/p&gt;
&lt;h4 id=&#34;removal-of-kubelet-keep-terminated-pod-volumes-command-line-flag&#34;&gt;Removal of kubelet &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt; command line flag&lt;/h4&gt;
&lt;p&gt;The kubelet flag &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt;, which was deprecated in 2017, has been removed as
part of the v1.31 release.&lt;/p&gt;
&lt;p&gt;You can find more details in the pull request &lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/122082&#34;&gt;#122082&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&#34;removal-of-cephfs-volume-plugin&#34;&gt;Removal of CephFS volume plugin&lt;/h4&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#cephfs&#34;&gt;CephFS volume plugin&lt;/a&gt; was removed in this release and the &lt;code&gt;cephfs&lt;/code&gt; volume type became non-functional.&lt;/p&gt;
&lt;p&gt;It is recommended that you use the &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;CephFS CSI driver&lt;/a&gt; as a third-party storage driver instead. If you were using the CephFS volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.&lt;/p&gt;
&lt;p&gt;The CephFS volume plugin was formally marked as deprecated in v1.28.&lt;/p&gt;
&lt;h4 id=&#34;removal-of-ceph-rbd-volume-plugin&#34;&gt;Removal of Ceph RBD volume plugin&lt;/h4&gt;
&lt;p&gt;The v1.31 release removes the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#rbd&#34;&gt;Ceph RBD volume plugin&lt;/a&gt; and its CSI migration support, making the &lt;code&gt;rbd&lt;/code&gt; volume type non-functional.&lt;/p&gt;
&lt;p&gt;It&#39;s recommended that you use the &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;RBD CSI driver&lt;/a&gt; in your clusters instead.
If you were using Ceph RBD volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.&lt;/p&gt;
&lt;p&gt;The Ceph RBD volume plugin was formally marked as deprecated in v1.28.&lt;/p&gt;
&lt;h4 id=&#34;deprecation-of-non-csi-volume-limit-plugins-in-kube-scheduler&#34;&gt;Deprecation of non-CSI volume limit plugins in kube-scheduler&lt;/h4&gt;
&lt;p&gt;The v1.31 release deprecates all non-CSI volume limit scheduler plugins, and removes some
already deprecated plugins from the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/scheduling/config/&#34;&gt;default plugins&lt;/a&gt;, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AzureDiskLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CinderLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EBSLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GCEPDLimits&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&#39;s recommended that you use the &lt;code&gt;NodeVolumeLimits&lt;/code&gt; plugin instead because it can handle the same functionality as the removed plugins since those volume types have been migrated to CSI.
Please replace the deprecated plugins with the &lt;code&gt;NodeVolumeLimits&lt;/code&gt; plugin if you explicitly use them in the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/scheduling/config/&#34;&gt;scheduler config&lt;/a&gt;.
The &lt;code&gt;AzureDiskLimits&lt;/code&gt;, &lt;code&gt;CinderLimits&lt;/code&gt;, &lt;code&gt;EBSLimits&lt;/code&gt;, and &lt;code&gt;GCEPDLimits&lt;/code&gt; plugins will be removed in a future release.&lt;/p&gt;
&lt;p&gt;These plugins will be removed from the default scheduler plugins list as they have been deprecated since Kubernetes v1.14.&lt;/p&gt;
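&lt;p&gt;If your scheduler configuration lists one of these plugins explicitly, the migration is a small change to the plugin set. Here is a minimal sketch of a &lt;code&gt;KubeSchedulerConfiguration&lt;/code&gt; that disables one of the deprecated plugins; &lt;code&gt;NodeVolumeLimits&lt;/code&gt; is already part of the default plugins, so it usually does not need to be enabled explicitly (the profile name below is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      filter:
        disabled:
          - name: EBSLimits  # deprecated non-CSI plugin; NodeVolumeLimits covers this
&lt;/code&gt;&lt;/pre&gt;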
&lt;h3 id=&#34;release-notes-and-upgrade-actions-required&#34;&gt;Release notes and upgrade actions required&lt;/h3&gt;
&lt;p&gt;Check out the full details of the Kubernetes v1.31 release in our &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md&#34;&gt;release notes&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&#34;scheduler-now-uses-queueinghint-when-schedulerqueueinghints-is-enabled&#34;&gt;Scheduler now uses QueueingHint when &lt;code&gt;SchedulerQueueingHints&lt;/code&gt; is enabled&lt;/h4&gt;
&lt;p&gt;The scheduler now supports using a QueueingHint registered for Pod/Update events
to determine whether updates to previously unschedulable Pods have made them schedulable.
The new support is active when the feature gate &lt;code&gt;SchedulerQueueingHints&lt;/code&gt; is enabled.&lt;/p&gt;
&lt;p&gt;Previously, when unschedulable Pods were updated, the scheduler always put them back into a queue
(&lt;code&gt;activeQ&lt;/code&gt; / &lt;code&gt;backoffQ&lt;/code&gt;). However, not all updates make Pods schedulable, especially since
many scheduling constraints are immutable. Under the new behaviour, once unschedulable Pods
are updated, the scheduling queue checks with QueueingHint(s) whether the update may make the
pod(s) schedulable, and requeues them to &lt;code&gt;activeQ&lt;/code&gt; or &lt;code&gt;backoffQ&lt;/code&gt; only when at least one
QueueingHint returns &lt;code&gt;Queue&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Action required for custom scheduler plugin developers&lt;/strong&gt;:
Plugins have to implement a QueueingHint for the Pod/Update event if a rejection they issued could be resolved by updating the unscheduled Pod itself. For example, suppose you develop a custom plugin that rejects Pods carrying a &lt;code&gt;schedulable=false&lt;/code&gt; label. Since such Pods become schedulable once the &lt;code&gt;schedulable=false&lt;/code&gt; label is removed, this plugin would implement a QueueingHint for the Pod/Update event that returns &lt;code&gt;Queue&lt;/code&gt; when such a label change is made to an unscheduled Pod. You can find more details in the pull request &lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/122234&#34;&gt;#122234&lt;/a&gt;.&lt;/p&gt;
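&lt;p&gt;To illustrate, here is a simplified sketch of such a hint for the hypothetical label-based plugin described above. The &lt;code&gt;framework.Queue&lt;/code&gt; and &lt;code&gt;framework.QueueSkip&lt;/code&gt; values come from the scheduler framework; the plugin and function names are illustrative, and exact signatures may differ between releases:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// isSchedulableAfterPodChange returns Queue only when a Pod update could
// resolve this plugin&#39;s earlier rejection: the blocking label was removed.
func (pl *LabelPlugin) isSchedulableAfterPodChange(
    logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{},
) (framework.QueueingHint, error) {
    oldPod, okOld := oldObj.(*v1.Pod)
    newPod, okNew := newObj.(*v1.Pod)
    if !okOld || !okNew {
        return framework.Queue, fmt.Errorf(&#34;unexpected object types %T, %T&#34;, oldObj, newObj)
    }
    // The rejection can only be resolved by removing the blocking label.
    if oldPod.Labels[&#34;schedulable&#34;] == &#34;false&#34; &amp;&amp; newPod.Labels[&#34;schedulable&#34;] != &#34;false&#34; {
        return framework.Queue, nil
    }
    // Any other update cannot make the Pod schedulable; skip requeueing.
    return framework.QueueSkip, nil
}
&lt;/code&gt;&lt;/pre&gt;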
&lt;h2 id=&#34;availability&#34;&gt;Availability&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.31 is available for download on &lt;a href=&#34;https://github.com/kubernetes/kubernetes/releases/tag/v1.31.0&#34;&gt;GitHub&lt;/a&gt; or on the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/releases/download/&#34;&gt;Kubernetes download page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To get started with Kubernetes, check out these &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tutorials/&#34;&gt;interactive tutorials&lt;/a&gt; or run local Kubernetes clusters using &lt;a href=&#34;https://minikube.sigs.k8s.io/&#34;&gt;minikube&lt;/a&gt;. You can also easily install v1.31 using &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/independent/create-cluster-kubeadm/&#34;&gt;kubeadm&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;release-team&#34;&gt;Release team&lt;/h2&gt;
&lt;p&gt;Kubernetes is only possible with the support, commitment, and hard work of its community.
Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on.
This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.&lt;/p&gt;
&lt;p&gt;We would like to thank the entire &lt;a href=&#34;https://github.com/kubernetes/sig-release/blob/master/releases/release-1.31/release-team.md&#34;&gt;release team&lt;/a&gt; for the hours spent hard at work to deliver the Kubernetes v1.31 release to our community.
The Release Team&#39;s membership ranges from first-time shadows to returning team leads with experience forged over several release cycles.
A very special thanks goes out to our release lead, Angelos Kolaitis, for supporting us through a successful release cycle, advocating for us, making sure that we could all contribute in the best way possible, and challenging us to improve the release process.&lt;/p&gt;
&lt;h2 id=&#34;project-velocity&#34;&gt;Project velocity&lt;/h2&gt;
&lt;p&gt;The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.&lt;/p&gt;
&lt;p&gt;In the v1.31 release cycle, which ran for 14 weeks (May 7th to August 13th), we saw contributions to Kubernetes from 113 different companies and 528 individuals.&lt;/p&gt;
&lt;p&gt;In the whole Cloud Native ecosystem we have 379 companies counting 2268 total contributors, which means that compared to the previous release cycle we experienced an astounding 63% increase in individuals contributing!&lt;/p&gt;
&lt;p&gt;Source for this data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&amp;amp;from=1715032800000&amp;amp;to=1723586399000&amp;amp;var-period=d28&amp;amp;var-repogroup_name=Kubernetes&amp;amp;var-repo_name=kubernetes%2Fkubernetes&#34;&gt;Companies contributing to Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&amp;amp;from=1715032800000&amp;amp;to=1723586399000&amp;amp;var-period=d28&amp;amp;var-repogroup_name=All&amp;amp;var-repo_name=kubernetes%2Fkubernetes&#34;&gt;Overall ecosystem contributions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By contribution we mean when someone makes a commit, creates an issue or PR, reviews a PR (including blogs and documentation), or comments on issues and PRs.&lt;/p&gt;
&lt;p&gt;If you are interested in contributing, visit &lt;a href=&#34;https://www.kubernetes.dev/docs/guide/#getting-started&#34;&gt;this page&lt;/a&gt; to get started.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&amp;amp;var-period=m&amp;amp;var-repogroup_name=All&#34;&gt;Check out DevStats&lt;/a&gt; to learn more about the overall velocity of the Kubernetes project and community.&lt;/p&gt;
&lt;h2 id=&#34;event-update&#34;&gt;Event update&lt;/h2&gt;
&lt;p&gt;Explore the upcoming Kubernetes and cloud-native events from August to November 2024, featuring KubeCon, KCD, and other notable conferences worldwide. Stay informed and engage with the Kubernetes community.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;August 2024&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-open-source-summit-ai-dev-china/&#34;&gt;&lt;strong&gt;KubeCon + CloudNativeCon + Open Source Summit China 2024&lt;/strong&gt;&lt;/a&gt;: August 21-23, 2024 | Hong Kong&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://events.linuxfoundation.org/kubeday-japan/&#34;&gt;&lt;strong&gt;KubeDay Japan&lt;/strong&gt;&lt;/a&gt;: August 27, 2024 | Tokyo, Japan&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;September 2024&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-lahore-presents-kcd-lahore-pakistan-2024/&#34;&gt;&lt;strong&gt;KCD Lahore - Pakistan 2024&lt;/strong&gt;&lt;/a&gt;: September 1, 2024 | Lahore, Pakistan&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-stockholm-presents-kubertenes-birthday-bash-stockholm-a-couple-of-months-late/&#34;&gt;&lt;strong&gt;KuberTENes Birthday Bash Stockholm&lt;/strong&gt;&lt;/a&gt;: September 5, 2024 | Stockholm, Sweden&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-australia-presents-kcd-sydney-24/&#34;&gt;&lt;strong&gt;KCD Sydney ’24&lt;/strong&gt;&lt;/a&gt;: September 5-6, 2024 | Sydney, Australia&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-washington-dc-presents-kcd-washington-dc-2024/&#34;&gt;&lt;strong&gt;KCD Washington DC 2024&lt;/strong&gt;&lt;/a&gt;: September 24, 2024 | Washington, DC, United States&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-porto-presents-kcd-porto-2024/&#34;&gt;&lt;strong&gt;KCD Porto 2024&lt;/strong&gt;&lt;/a&gt;: September 27-28, 2024 | Porto, Portugal&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;October 2024&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-austria-presents-kcd-austria-2024/&#34;&gt;&lt;strong&gt;KCD Austria 2024&lt;/strong&gt;&lt;/a&gt;: October 8-10, 2024 | Wien, Austria&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://events.linuxfoundation.org/kubeday-australia/&#34;&gt;&lt;strong&gt;KubeDay Australia&lt;/strong&gt;&lt;/a&gt;: October 15, 2024 | Melbourne, Australia&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://community.cncf.io/events/details/cncf-kcd-uk-presents-kubernetes-community-days-uk-london-2024/&#34;&gt;&lt;strong&gt;KCD UK - London 2024&lt;/strong&gt;&lt;/a&gt;: October 22-23, 2024 | Greater London, United Kingdom&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;November 2024&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&#34;&gt;&lt;strong&gt;KubeCon + CloudNativeCon North America 2024&lt;/strong&gt;&lt;/a&gt;: November 12-15, 2024 | Salt Lake City, United States&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/co-located-events/kubernetes-on-edge-day/&#34;&gt;&lt;strong&gt;Kubernetes on EDGE Day North America&lt;/strong&gt;&lt;/a&gt;: November 12, 2024 | Salt Lake City, United States&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;upcoming-release-webinar&#34;&gt;Upcoming release webinar&lt;/h2&gt;
&lt;p&gt;Join members of the Kubernetes v1.31 release team on Thursday, September 12th, 2024, at 10am PT to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades.
For more information and registration, visit the &lt;a href=&#34;https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-131-release/&#34;&gt;event page&lt;/a&gt; on the CNCF Online Programs site.&lt;/p&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;Get involved&lt;/h2&gt;
&lt;p&gt;The simplest way to get involved with Kubernetes is by joining one of the many &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-list.md&#34;&gt;Special Interest Groups&lt;/a&gt; (SIGs) that align with your interests.
Have something you’d like to broadcast to the Kubernetes community?
Share your voice at our weekly &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication&#34;&gt;community meeting&lt;/a&gt;, and through the channels below.
Thank you for your continued feedback and support.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Follow us on X &lt;a href=&#34;https://x.com/kubernetesio&#34;&gt;@Kubernetesio&lt;/a&gt; for the latest updates&lt;/li&gt;
&lt;li&gt;Join the community discussion on &lt;a href=&#34;https://discuss.kubernetes.io/&#34;&gt;Discuss&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Join the community on &lt;a href=&#34;http://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Post questions (or answer questions) on &lt;a href=&#34;http://stackoverflow.com/questions/tagged/kubernetes&#34;&gt;Stack Overflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Share your Kubernetes &lt;a href=&#34;https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform&#34;&gt;story&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read more about what’s happening with Kubernetes on the &lt;a href=&#34;https://kubernetes.io/blog/&#34;&gt;blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn more about the &lt;a href=&#34;https://github.com/kubernetes/sig-release/tree/master/release-team&#34;&gt;Kubernetes Release Team&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/12/feature-gates-in-client-go/</link>
      <pubDate>Mon, 12 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/12/feature-gates-in-client-go/</guid>
      <description>
        
        
        &lt;p&gt;Kubernetes components use on-off switches called &lt;em&gt;feature gates&lt;/em&gt; to manage the risk of adding a new feature.
The feature gate mechanism is what enables incremental graduation of a feature through the stages Alpha, Beta, and GA.&lt;/p&gt;
&lt;p&gt;Kubernetes components, such as kube-controller-manager and kube-scheduler, use the client-go library to interact with the API.
The same library is used across the Kubernetes ecosystem to build controllers, tools, webhooks, and more. client-go now includes
its own feature gating mechanism, giving developers and cluster administrators more control over how they adopt client features.&lt;/p&gt;
&lt;p&gt;To learn more about feature gates in Kubernetes, visit &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;Feature Gates&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;motivation&#34;&gt;Motivation&lt;/h2&gt;
&lt;p&gt;In the absence of client-go feature gates, each new feature separated feature availability from enablement in its own way, if at all.
Some features were enabled by updating to a newer version of client-go. Others needed to be actively configured in each program that used them.
A few were configurable at runtime using environment variables. Consuming a feature-gated functionality exposed by the kube-apiserver sometimes
required a client-side fallback mechanism to remain compatible with servers that don’t support the functionality due to their age or configuration.
In cases where issues were discovered in these fallback mechanisms, mitigation required updating to a fixed version of client-go or rolling back.&lt;/p&gt;
&lt;p&gt;None of these approaches offer good support for enabling a feature by default in some, but not all, programs that consume client-go.
Instead of enabling a new feature at first only for a single component, a change in the default setting immediately affects the default
for all Kubernetes components, which broadens the blast radius significantly.&lt;/p&gt;
&lt;h2 id=&#34;feature-gates-in-client-go&#34;&gt;Feature gates in client-go&lt;/h2&gt;
&lt;p&gt;To address these challenges, substantial client-go features will be phased in using the new feature gate mechanism.
It will allow developers and users to enable or disable features in a way that will be familiar to anyone who has experience
with feature gates  in the Kubernetes components.&lt;/p&gt;
&lt;p&gt;Out of the box, simply by using a recent version of client-go, this offers several benefits.&lt;/p&gt;
&lt;p&gt;For people who use software built with client-go:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Early adopters can enable a default-off client-go feature on a per-process basis.&lt;/li&gt;
&lt;li&gt;Misbehaving features can be disabled without building a new binary.&lt;/li&gt;
&lt;li&gt;The state of all known client-go feature gates is logged, allowing users to inspect it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For people who develop software built with client-go:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;By default, client-go feature gate overrides are read from environment variables.
If a bug is found in a client-go feature, users will be able to disable it without waiting for a new release.&lt;/li&gt;
&lt;li&gt;Developers can replace the default environment-variable-based overrides in a program to change defaults,
read overrides from another source, or disable runtime overrides completely.
The Kubernetes components use this customizability to integrate client-go feature gates with
the existing &lt;code&gt;--feature-gates&lt;/code&gt; command-line flag, feature enablement metrics, and logging.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;overriding-client-go-feature-gates&#34;&gt;Overriding client-go feature gates&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This describes the default method for overriding client-go feature gates at runtime.
It can be disabled or customized by the developer of a particular program.
In Kubernetes components, client-go feature gate overrides are controlled by the &lt;code&gt;--feature-gates&lt;/code&gt; flag.&lt;/p&gt;
&lt;p&gt;Features of client-go can be enabled or disabled by setting environment variables prefixed with &lt;code&gt;KUBE_FEATURE&lt;/code&gt;.
For example, to enable a feature named &lt;code&gt;MyFeature&lt;/code&gt;, set the environment variable as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; KUBE_FEATURE_MyFeature=true
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To disable the feature, set the environment variable to &lt;code&gt;false&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; KUBE_FEATURE_MyFeature=false
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Environment variables are case-sensitive on some operating systems.
Therefore, &lt;code&gt;KUBE_FEATURE_MyFeature&lt;/code&gt; and &lt;code&gt;KUBE_FEATURE_MYFEATURE&lt;/code&gt; would be considered two different variables.&lt;/p&gt;
&lt;h2 id=&#34;customizing-client-go-feature-gates&#34;&gt;Customizing client-go feature gates&lt;/h2&gt;
&lt;p&gt;The default environment-variable based mechanism for feature gate overrides can be sufficient for many programs in the Kubernetes ecosystem,
and requires no special integration. Programs that require different behavior can replace it with their own custom feature gate provider.
This allows a program to do things like force-disable a feature that is known to work poorly,
read feature gates directly from a remote configuration service, or accept feature gate overrides through command-line options.&lt;/p&gt;
&lt;p&gt;The Kubernetes components replace client-go’s default feature gate provider with a shim to the existing Kubernetes feature gate provider.
For all practical purposes, client-go feature gates are treated the same as other Kubernetes
feature gates: they are wired to the &lt;code&gt;--feature-gates&lt;/code&gt; command-line flag, included in feature enablement metrics, and logged on startup.&lt;/p&gt;
&lt;p&gt;To replace the default feature gate provider, implement the &lt;code&gt;Gates&lt;/code&gt; interface and call &lt;code&gt;ReplaceFeatureGates&lt;/code&gt;
at package initialization time, as in this simple example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;import&lt;/span&gt; (
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; &lt;span style=&#34;&#34;&gt;&#34;&lt;/span&gt;k8s.io&lt;span style=&#34;color:#666&#34;&gt;/&lt;/span&gt;client&lt;span style=&#34;color:#666&#34;&gt;-&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;go&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;/&lt;/span&gt;features&lt;span style=&#34;&#34;&gt;&#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;type&lt;/span&gt; AlwaysEnabledGates &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;struct&lt;/span&gt;{}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; (AlwaysEnabledGates) &lt;span style=&#34;color:#00a000&#34;&gt;Enabled&lt;/span&gt;(features.Feature) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;bool&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; &lt;span style=&#34;color:#00a000&#34;&gt;init&lt;/span&gt;() {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; features.&lt;span style=&#34;color:#00a000&#34;&gt;ReplaceFeatureGates&lt;/span&gt;(AlwaysEnabledGates{})
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Implementations that need the complete list of defined client-go features can get it by implementing the Registry interface
and calling &lt;code&gt;AddFeaturesToExistingFeatureGates&lt;/code&gt;.
For a complete example, refer to &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/64ba17c605a41700f7f4c4e27dca3684b593b2b9/pkg/features/kube_features.go#L990-L997&#34;&gt;the usage within Kubernetes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;With the introduction of feature gates in client-go v1.30, rolling out a new client-go feature has become safer and easier.
Users and developers can control the pace of their own adoption of client-go features.
The work of Kubernetes contributors is streamlined by having a common mechanism for graduating features that span both sides of the Kubernetes API boundary.&lt;/p&gt;
&lt;p&gt;Special shoutout to &lt;a href=&#34;https://github.com/sttts&#34;&gt;@sttts&lt;/a&gt; and &lt;a href=&#34;https://github.com/deads2k&#34;&gt;@deads2k&lt;/a&gt; for their help in shaping this feature.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Spotlight on SIG API Machinery</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</link>
      <pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</guid>
      <description>
        
        
        &lt;p&gt;We recently talked with &lt;a href=&#34;https://github.com/fedebongio&#34;&gt;Federico Bongiovanni&lt;/a&gt; (Google) and &lt;a href=&#34;https://github.com/deads2k&#34;&gt;David
Eads&lt;/a&gt; (Red Hat), Chairs of SIG API Machinery, to know a bit more about
this Kubernetes Special Interest Group.&lt;/p&gt;
&lt;h2 id=&#34;introductions&#34;&gt;Introductions&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: I started working on
&lt;a href=&#34;https://www.redhat.com/en/technologies/cloud-computing/openshift&#34;&gt;OpenShift&lt;/a&gt; (the Red Hat
distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My
first PRs were fixing kube-apiserver error messages and from there I branched out to &lt;code&gt;kubectl&lt;/code&gt;
(&lt;em&gt;kubeconfigs&lt;/em&gt; are my fault!), &lt;code&gt;auth&lt;/code&gt; (&lt;a href=&#34;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&#34;&gt;RBAC&lt;/a&gt; and &lt;code&gt;*Review&lt;/code&gt; APIs are ports
from OpenShift), &lt;code&gt;apps&lt;/code&gt; (&lt;em&gt;workqueues&lt;/em&gt; and &lt;em&gt;sharedinformers&lt;/em&gt; for example). Don’t tell the others,
but API Machinery is still my favorite :)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: I was not as early in Kubernetes as David, but now it&#39;s been more than six years. At
my previous company we were starting to use Kubernetes for our own products, and when I came across
the opportunity to work directly with Kubernetes I left everything and boarded the ship (no pun
intended). I joined Google and Kubernetes in early 2018, and have been involved since.&lt;/p&gt;
&lt;h2 id=&#34;sig-machinery-s-scope&#34;&gt;SIG API Machinery&#39;s scope&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM: It only takes a quick look at the SIG API Machinery charter to see that it has quite a
significant scope, nothing less than the Kubernetes control plane. Could you describe this scope in
your own words?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: We own the &lt;code&gt;kube-apiserver&lt;/code&gt; and how to efficiently use it. On the backend, that includes
its contract with backend storage and how it allows API schema evolution over time.  On the
frontend, that includes schema best practices, serialization, client patterns, and controller
patterns on top of all of it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: Kubernetes has a lot of different components, but the control plane has a really
critical mission: it&#39;s your communication layer with the cluster and also owns all the extensibility
mechanisms that make Kubernetes so powerful. We can&#39;t make mistakes like a regression, or an
incompatible change, because the blast radius is huge.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM: Given this breadth, how do you manage the different aspects of it?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: We try to organize the large amount of work into smaller areas. The working groups and
subprojects are part of it. Different people on the SIG have their own areas of expertise, and if
everything fails, we are really lucky to have people like David, Joe, and Stefan who really are &amp;quot;all
terrain&amp;quot;, in a way that keeps impressing me even after all these years.  But on the other hand this
is the reason why we need more people to help us carry the quality and excellence of Kubernetes from
release to release.&lt;/p&gt;
&lt;h2 id=&#34;an-evolving-collaboration-model&#34;&gt;An evolving collaboration model&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM: Was the existing model always like this, or did it evolve with time - and if so, what would
you consider the main changes and the reason behind them?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: API Machinery has evolved over time both growing and contracting in scope.  When trying
to satisfy client access patterns it’s very easy to add scope both in terms of features and applying
them.&lt;/p&gt;
&lt;p&gt;A good example of growing scope is the way that we identified a need to reduce memory utilization by
clients writing controllers and developed shared informers.  In developing shared informers and the
controller patterns that use them (workqueues, error handling, and listers), we greatly reduced memory
utilization and eliminated many expensive lists.  The downside: we grew a new set of capabilities to
support and effectively took ownership of that area from sig-apps.&lt;/p&gt;
&lt;p&gt;For an example of more shared ownership: building out cooperative resource management (the goal of
server-side apply), &lt;code&gt;kubectl&lt;/code&gt; expanded to take ownership of leveraging the server-side apply
capability.  The transition isn’t yet complete, but &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cli&#34;&gt;SIG
CLI&lt;/a&gt; manages that usage and owns it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM: And for the boundary between approaches, do you have any guidelines?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: I think much depends on the impact. If the impact is local in immediate effect, we advise
other SIGs and let them move at their own pace.  If the impact is global in immediate effect without
a natural incentive, we’ve found a need to press for adoption directly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM: Still on that note, SIG Architecture has an API Governance subproject, is it mostly
independent from SIG API Machinery or are there important connection points?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: The projects have similar sounding names and carry some impacts on each other, but have
different missions and scopes.  API Machinery owns the how and API Governance owns the what.  API
conventions, the API approval process, and the final say on individual k8s.io APIs belong to API
Governance.  API Machinery owns the REST semantics and non-API specific behaviors.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: I really like how David put it: &lt;em&gt;&amp;quot;API Machinery owns the how and API Governance owns
the what&amp;quot;&lt;/em&gt;: we don&#39;t own the actual APIs, but the actual APIs live through us.&lt;/p&gt;
&lt;h2 id=&#34;the-challenges-of-kubernetes-popularity&#34;&gt;The challenges of Kubernetes popularity&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM: With the growth in Kubernetes adoption we have certainly seen increased demands from the
Control Plane: how is this felt and how does it influence the work of the SIG?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;: It’s had a massive influence on API Machinery.  Over the years we have often responded to
and many times enabled the evolutionary stages of Kubernetes.  As the central orchestration hub of
nearly all capability on Kubernetes clusters, we both lead and follow the community.  In broad
strokes I see a few evolution stages for API Machinery over the years, with constantly high
activity.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Finding purpose&lt;/strong&gt;: &lt;code&gt;pre-1.0&lt;/code&gt; up until &lt;code&gt;v1.3&lt;/code&gt; (up to our first 1000+ nodes/namespaces) or
so. This time was characterized by rapid change.  We went through five different versions of our
schemas and rose to meet the need.  We optimized for quick, in-tree API evolution (sometimes to
the detriment of longer term goals), and defined patterns for the first time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scaling to meet the need&lt;/strong&gt;: &lt;code&gt;v1.3-1.9&lt;/code&gt; (up to shared informers in controllers) or so.  When we
started trying to meet customer needs as we gained adoption, we found severe scale limitations in
terms of CPU and memory. This was where we broadened API machinery to include access patterns, but
were still heavily focused on in-tree types.  We built the watch cache, protobuf serialization,
and shared caches.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fostering the ecosystem&lt;/strong&gt;: &lt;code&gt;v1.8-1.21&lt;/code&gt; (up to CRD v1) or so.  This was when we designed and wrote
CRDs (the considered replacement for third-party-resources), the immediate needs we knew were
coming (admission webhooks), and evolution to best practices we knew we needed (API schemas).
This enabled an explosion of early adopters willing to work very carefully within the constraints
to enable their use-cases for servicing pods.  The adoption was very fast, sometimes outpacing
our capability, and creating new problems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplifying deployments&lt;/strong&gt;: &lt;code&gt;v1.22+&lt;/code&gt;.  In the relatively recent past, we’ve been responding to
pressures of running kube clusters at scale with large numbers of sometimes-conflicting ecosystem
projects using our extensions mechanisms.  Lots of effort is now going into making platform
extensions easier to write and safer to manage by people who don&#39;t hold PhDs in Kubernetes.  This
started with things like server-side-apply and continues today with features like webhook match
conditions and validating admission policies.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Work in API Machinery has a broad impact across the project and the ecosystem.  It’s an exciting
area to work in for those able to make a significant time investment on a long time horizon.&lt;/p&gt;
&lt;h2 id=&#34;the-road-ahead&#34;&gt;The road ahead&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM: With those different evolutionary stages in mind, what would you pinpoint as the top
priorities for the SIG at this time?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David:&lt;/strong&gt; &lt;strong&gt;Reliability, efficiency, and capability&lt;/strong&gt; in roughly that order.&lt;/p&gt;
&lt;p&gt;With the increased usage of our &lt;code&gt;kube-apiserver&lt;/code&gt; and extensions mechanisms, we find that our first
set of extensions mechanisms, while fairly complete in terms of capability, carry significant risks
in terms of potential mis-use with large blast radius.  To mitigate these risks, we’re investing in
features that reduce the blast radius for accidents (webhook match conditions) and which provide
alternative mechanisms with lower risk profiles for most actions (validating admission policy).&lt;/p&gt;
&lt;p&gt;At the same time, the increased usage has made us more aware of scaling limitations that we can
improve both server and client-side.  Efforts here include more efficient serialization (CBOR),
reduced etcd load (consistent reads from cache), and reduced peak memory usage (streaming lists).&lt;/p&gt;
&lt;p&gt;And finally, the increased usage has highlighted some long existing
gaps that we’re closing.  Things like field selectors for CRDs which
the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/wg-batch/README.md&#34;&gt;Batch Working Group&lt;/a&gt;
is eager to leverage and will eventually form the basis for a new way
to prevent trampoline pod attacks from exploited nodes.&lt;/p&gt;
&lt;h2 id=&#34;joining-the-fun&#34;&gt;Joining the fun&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM: For anyone wanting to start contributing, what are your suggestions?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: SIG API Machinery is not an exception to the Kubernetes motto: &lt;strong&gt;Chop Wood and Carry
Water&lt;/strong&gt;. There are multiple weekly meetings that are open to everybody, and there is always more
work to be done than people to do it.&lt;/p&gt;
&lt;p&gt;I acknowledge that API Machinery is not easy, and the ramp up will be steep. The bar is high,
because of the reasons we&#39;ve been discussing: we carry a huge responsibility. But of course with
passion and perseverance many people have ramped up through the years, and we hope more will come.&lt;/p&gt;
&lt;p&gt;In terms of concrete opportunities, there is the SIG meeting every two weeks. Everyone is welcome to
attend and listen, see what the group talks about, see what&#39;s going on in this release, etc.&lt;/p&gt;
&lt;p&gt;Also two times a week, Tuesday and Thursday, we have the public Bug Triage, where we go through
everything new from the last meeting. We&#39;ve been keeping this practice for more than 7 years
now. It&#39;s a great opportunity to volunteer to review code, fix bugs, improve documentation,
etc. Tuesdays it&#39;s at 1 PM (PST) and Thursdays at an EMEA-friendly time (9:30 AM PST).  We are
always looking to improve, and we hope to be able to provide more concrete opportunities to join and
participate in the future.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM: Excellent, thank you! Any final comments you would like to share with our readers?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;: As I mentioned, the first steps might be hard, but the reward is also larger. Working
on API Machinery is working on an area of huge impact (millions of users?), and your contributions
will have a direct outcome in the way that Kubernetes works and the way that it&#39;s used. For me
that&#39;s enough reward and motivation!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes Removals and Major Changes In v1.31</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</link>
      <pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</guid>
      <description>
        
        
        &lt;p&gt;As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project&#39;s overall health.
This article outlines some planned changes for the Kubernetes v1.31 release that the release team feels you should be aware of for the continued maintenance of your Kubernetes environment.
The information listed below is based on the current status of the v1.31 release.
It may change before the actual release date.&lt;/p&gt;
&lt;h2 id=&#34;the-kubernetes-api-removal-and-deprecation-process&#34;&gt;The Kubernetes API removal and deprecation process&lt;/h2&gt;
&lt;p&gt;The Kubernetes project has a well-documented &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-policy/&#34;&gt;deprecation policy&lt;/a&gt; for features.
This policy states that stable APIs may only be deprecated when a newer, stable version of that API is available and that APIs have a minimum lifetime for each stability level.
A deprecated API has been marked for removal in a future Kubernetes release.
It will continue to function until removal (at least one year from the deprecation), but usage will display a warning.
Removed APIs are no longer available in the current version, so you must migrate to using the replacement.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Beta or pre-release API versions must be supported for 3 releases after the deprecation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alpha or experimental API versions may be removed in any release without prior deprecation notice.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Whether an API is removed because a feature graduated from beta to stable or because that API did not succeed, all removals comply with this deprecation policy.
Whenever an API is removed, migration options are communicated in the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-guide/&#34;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;a-note-about-sha-1-signature-support&#34;&gt;A note about SHA-1 signature support&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&#34;https://go.dev/doc/go1.18#sha1&#34;&gt;go1.18&lt;/a&gt; (released in March 2022), the crypto/x509 library started to reject certificates signed with a SHA-1 hash function.
While SHA-1 is known to be unsafe and publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015, there may still be cases in the context of Kubernetes where user-provided certificates signed with a SHA-1 hash function by private authorities are used for aggregated API servers or webhooks.
If you have relied on SHA-1 based certificates, you must explicitly opt back into its support by setting &lt;code&gt;GODEBUG=x509sha1=1&lt;/code&gt; in your environment.&lt;/p&gt;
&lt;p&gt;Given Go&#39;s &lt;a href=&#34;https://go.dev/blog/compat&#34;&gt;compatibility policy for GODEBUGs&lt;/a&gt;, the &lt;code&gt;x509sha1&lt;/code&gt; GODEBUG and the support for SHA-1 certificates will &lt;a href=&#34;https://tip.golang.org/doc/go1.23&#34;&gt;fully go away in go1.24&lt;/a&gt; which will be released in the first half of 2025.
If you rely on SHA-1 certificates, please start moving off them.&lt;/p&gt;
&lt;p&gt;Please see &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/125689&#34;&gt;Kubernetes issue #125689&lt;/a&gt; to get a better idea of timelines around the support for SHA-1 going away, when Kubernetes releases plans to adopt go1.24, and for more details on how to detect usage of SHA-1 certificates via metrics and audit logging.&lt;/p&gt;
&lt;h2 id=&#34;deprecations-and-removals-in-kubernetes-1-31&#34;&gt;Deprecations and removals in Kubernetes 1.31&lt;/h2&gt;
&lt;h3 id=&#34;deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004&#34;&gt;Deprecation of &lt;code&gt;status.nodeInfo.kubeProxyVersion&lt;/code&gt; field for Nodes (&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4004&#34;&gt;KEP 4004&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;.status.nodeInfo.kubeProxyVersion&lt;/code&gt; field of Nodes is being deprecated in Kubernetes v1.31,
and will be removed in a later release.
It&#39;s being deprecated because the value of this field wasn&#39;t (and isn&#39;t) accurate.
This field is set by the kubelet, which does not have reliable information about the kube-proxy version or whether kube-proxy is running.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;DisableNodeKubeProxyVersion&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate&lt;/a&gt; will be set to &lt;code&gt;true&lt;/code&gt; by default in v1.31, and the kubelet will no longer attempt to set the &lt;code&gt;.status.nodeInfo.kubeProxyVersion&lt;/code&gt; field for its associated Node.&lt;/p&gt;
&lt;h3 id=&#34;removal-of-all-in-tree-integrations-with-cloud-providers&#34;&gt;Removal of all in-tree integrations with cloud providers&lt;/h3&gt;
&lt;p&gt;As highlighted in a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/&#34;&gt;previous article&lt;/a&gt;, the last remaining in-tree support for cloud provider integration will be removed as part of the v1.31 release.
This doesn&#39;t mean you can&#39;t integrate with a cloud provider, however you now &lt;strong&gt;must&lt;/strong&gt; use the
recommended approach using an external integration. Some integrations are part of the Kubernetes
project and others are third party software.&lt;/p&gt;
&lt;p&gt;This milestone marks the completion of the externalization process for all cloud providers&#39; integrations from the Kubernetes core (&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md&#34;&gt;KEP-2395&lt;/a&gt;), a process started with Kubernetes v1.26.
This change helps Kubernetes to get closer to being a truly vendor-neutral platform.&lt;/p&gt;
&lt;p&gt;For further details on the cloud provider integrations, read our &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/&#34;&gt;v1.29 Cloud Provider Integrations feature blog&lt;/a&gt;.
For additional context about the in-tree code removal, we invite you to check the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/11/16/kubernetes-1-29-upcoming-changes/#removal-of-in-tree-integrations-with-cloud-providers-kep-2395-https-kep-k8s-io-2395&#34;&gt;v1.29 deprecation blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The latter blog also contains useful information for users who need to migrate to version v1.29 and later.&lt;/p&gt;
&lt;h3 id=&#34;removal-of-kubelet-keep-terminated-pod-volumes-command-line-flag&#34;&gt;Removal of kubelet &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt; command line flag&lt;/h3&gt;
&lt;p&gt;The kubelet flag &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt;, which was deprecated in 2017, will be removed as
part of the v1.31 release.&lt;/p&gt;
&lt;p&gt;You can find more details in the pull request &lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/122082&#34;&gt;#122082&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;removal-of-cephfs-volume-plugin&#34;&gt;Removal of CephFS volume plugin&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#cephfs&#34;&gt;CephFS volume plugin&lt;/a&gt; was removed in this release and the &lt;code&gt;cephfs&lt;/code&gt; volume type became non-functional.&lt;/p&gt;
&lt;p&gt;It is recommended that you use the &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;CephFS CSI driver&lt;/a&gt; as a third-party storage driver instead. If you were using the CephFS volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.&lt;/p&gt;
&lt;p&gt;CephFS volume plugin was formally marked as deprecated in v1.28.&lt;/p&gt;
&lt;h3 id=&#34;removal-of-ceph-rbd-volume-plugin&#34;&gt;Removal of Ceph RBD volume plugin&lt;/h3&gt;
&lt;p&gt;The v1.31 release will remove the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#rbd&#34;&gt;Ceph RBD volume plugin&lt;/a&gt; and its CSI migration support, making the &lt;code&gt;rbd&lt;/code&gt; volume type non-functional.&lt;/p&gt;
&lt;p&gt;It&#39;s recommended that you use the &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;RBD CSI driver&lt;/a&gt; in your clusters instead.
If you were using Ceph RBD volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.&lt;/p&gt;
&lt;p&gt;The Ceph RBD volume plugin was formally marked as deprecated in v1.28.&lt;/p&gt;
&lt;h3 id=&#34;deprecation-of-non-csi-volume-limit-plugins-in-kube-scheduler&#34;&gt;Deprecation of non-CSI volume limit plugins in kube-scheduler&lt;/h3&gt;
&lt;p&gt;The v1.31 release will deprecate all non-CSI volume limit scheduler plugins, and will remove some
already deprecated plugins from the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/scheduling/config/&#34;&gt;default plugins&lt;/a&gt;, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AzureDiskLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CinderLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EBSLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GCEPDLimits&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&#39;s recommended that you use the &lt;code&gt;NodeVolumeLimits&lt;/code&gt; plugin instead because it can handle the same functionality as the removed plugins since those volume types have been migrated to CSI.
Please replace the deprecated plugins with the &lt;code&gt;NodeVolumeLimits&lt;/code&gt; plugin if you explicitly use them in the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/scheduling/config/&#34;&gt;scheduler config&lt;/a&gt;.
The &lt;code&gt;AzureDiskLimits&lt;/code&gt;, &lt;code&gt;CinderLimits&lt;/code&gt;, &lt;code&gt;EBSLimits&lt;/code&gt;, and &lt;code&gt;GCEPDLimits&lt;/code&gt; plugins will be removed in a future release.&lt;/p&gt;
&lt;p&gt;These plugins will be removed from the default scheduler plugins list as they have been deprecated since Kubernetes v1.14.&lt;/p&gt;
&lt;h2 id=&#34;looking-ahead&#34;&gt;Looking ahead&lt;/h2&gt;
&lt;p&gt;The official list of API removals planned for &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-guide/#v1-32&#34;&gt;Kubernetes v1.32&lt;/a&gt; includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/code&gt; API version of FlowSchema and PriorityLevelConfiguration will be removed.
To prepare for this, you can edit your existing manifests and rewrite client software to use the &lt;code&gt;flowcontrol.apiserver.k8s.io/v1&lt;/code&gt; API version, available since v1.29.
All existing persisted objects are accessible via the new API. Notable changes in &lt;code&gt;flowcontrol.apiserver.k8s.io/v1&lt;/code&gt; include that the PriorityLevelConfiguration &lt;code&gt;spec.limited.nominalConcurrencyShares&lt;/code&gt; field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information, please refer to the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-guide/#v1-32&#34;&gt;API deprecation guide&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;want-to-know-more&#34;&gt;Want to know more?&lt;/h2&gt;
&lt;p&gt;The Kubernetes release notes announce deprecations.
We will formally announce the deprecations in &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#deprecation&#34;&gt;Kubernetes v1.31&lt;/a&gt; as part of the CHANGELOG for that release.&lt;/p&gt;
&lt;p&gt;You can see the announcements of pending deprecations in the release notes for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#deprecation&#34;&gt;Kubernetes v1.30&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation&#34;&gt;Kubernetes v1.29&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation&#34;&gt;Kubernetes v1.28&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation&#34;&gt;Kubernetes v1.27&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Spotlight on SIG Node</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</link>
      <pubDate>Thu, 20 Jun 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</guid>
      <description>
        
        
        &lt;p&gt;In the world of container orchestration, &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/&#34;&gt;Kubernetes&lt;/a&gt; reigns
supreme, powering some of the most complex and dynamic applications across the globe. Behind the
scenes, a network of Special Interest Groups (SIGs) drives Kubernetes&#39; innovation and stability.&lt;/p&gt;
&lt;p&gt;Today, I have the privilege of speaking with &lt;a href=&#34;https://www.linkedin.com/in/matthias-bertschy-b427b815/&#34;&gt;Matthias
Bertschy&lt;/a&gt;, &lt;a href=&#34;https://www.linkedin.com/in/gunju-kim-916b33190/&#34;&gt;Gunju
Kim&lt;/a&gt;, and &lt;a href=&#34;https://www.linkedin.com/in/sergeykanzhelev/&#34;&gt;Sergey
Kanzhelev&lt;/a&gt;, members of &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-node/README.md&#34;&gt;SIG
Node&lt;/a&gt;, who will shed some
light on their roles, challenges, and the exciting developments within SIG Node.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Answers given collectively by all interviewees will be marked by their initials.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id=&#34;introductions&#34;&gt;Introductions&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Thank you for joining us today. Could you please introduce yourselves and provide a brief
overview of your roles within SIG Node?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Matthias:&lt;/strong&gt; My name is Matthias Bertschy, I am French and live next to Lake Geneva, near the
French Alps. I have been a Kubernetes contributor since 2017, a reviewer for SIG Node and a
maintainer of &lt;a href=&#34;https://docs.prow.k8s.io/docs/overview/&#34;&gt;Prow&lt;/a&gt;. I work as a Senior Kubernetes
Developer for a security startup named &lt;a href=&#34;https://www.armosec.io/&#34;&gt;ARMO&lt;/a&gt;, which donated
&lt;a href=&#34;https://www.cncf.io/projects/kubescape/&#34;&gt;Kubescape&lt;/a&gt; to the CNCF.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;Lake_Geneva_and_the_Alps.jpg&#34; alt=&#34;Lake Geneva and the Alps&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gunju:&lt;/strong&gt; My name is Gunju Kim. I am a software engineer at
&lt;a href=&#34;https://www.navercorp.com/naver/naverMain&#34;&gt;NAVER&lt;/a&gt;, where I focus on developing a cloud platform for
search services. I have been contributing to the Kubernetes project in my free time since 2021.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sergey:&lt;/strong&gt; My name is Sergey Kanzhelev. I have worked on Kubernetes and &lt;a href=&#34;https://cloud.google.com/kubernetes-engine&#34;&gt;Google Kubernetes
Engine&lt;/a&gt; for 3 years and have worked on open-source
projects for many years now. I am a chair of SIG Node.&lt;/p&gt;
&lt;h2 id=&#34;understanding-sig-node&#34;&gt;Understanding SIG Node&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Thank you! Could you provide our readers with an overview of SIG Node&#39;s responsibilities
within the Kubernetes ecosystem?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; SIG Node is one of the first if not the very first SIG in Kubernetes. The SIG is
responsible for all interactions between Kubernetes and node resources, as well as node maintenance
itself. This is quite a large scope, and the SIG owns a large part of the Kubernetes codebase. Because
of this wide ownership, SIG Node is always in contact with other SIGs such as SIG Network, SIG
Storage, and SIG Security, and almost any new feature or development in Kubernetes involves SIG
Node in some way.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;: How does SIG Node contribute to Kubernetes&#39; performance and stability?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; Kubernetes works on nodes of many different sizes and shapes, from small physical VMs
with cheap hardware to large AI/ML-optimized GPU-enabled nodes. Nodes may stay online for months or
may be short-lived and preempted at any moment, as they run on the excess compute of a cloud
provider.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/overview/components/#kubelet&#34;&gt;&lt;code&gt;kubelet&lt;/code&gt;&lt;/a&gt; — the
Kubernetes agent on a node — must work in all these environments reliably. As for the performance
of kubelet operations, this is becoming increasingly important. On one hand, as Kubernetes is
increasingly used on very small nodes in telecom and retail environments, it needs to scale down to
the smallest possible footprint. On the other hand, with AI/ML workloads where every node
is extremely expensive, every second of delayed operations can visibly change the price of
computation.&lt;/p&gt;
&lt;h2 id=&#34;challenges-and-opportunities&#34;&gt;Challenges and Opportunities&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; What upcoming challenges and opportunities is SIG Node keeping an eye on?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; As Kubernetes enters the second decade of its life, we see a huge demand to support new
workload types, and SIG Node will play a big role in this. The Sidecar KEP, which we discuss
later, is one example of this increased emphasis on supporting new workload types.&lt;/p&gt;
&lt;p&gt;The key challenge for the next few years is how to keep innovating while maintaining the
high quality and backward compatibility of existing scenarios. SIG Node will continue to play a
central role in Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; And are there any ongoing research or development areas within SIG Node that excite you?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; Supporting new workload types is a fascinating area for us. Our recent exploration of
sidecar containers is a testament to this. Sidecars offer a versatile solution for enhancing
application functionality without altering the core codebase.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; What are some of the challenges you&#39;ve faced while maintaining SIG Node, and how have you
overcome them?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; The biggest challenge of SIG Node is its size and the many feature requests it
receives. We are encouraging more people to join as reviewers and are always open to improving
processes and addressing feedback. For every release, we run the feedback session at the SIG Node
meeting and identify problematic areas and action items.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Are there specific technologies or advancements that SIG Node is closely monitoring or
integrating into Kubernetes?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; Developments in components that the SIG depends on, like
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/container-runtimes/&#34;&gt;container runtimes&lt;/a&gt;
(e.g. &lt;a href=&#34;https://containerd.io/&#34;&gt;containerd&lt;/a&gt; and &lt;a href=&#34;https://cri-o.io/&#34;&gt;CRI-O&lt;/a&gt;), and OS features are
something we contribute to and monitor closely. For example, there is an upcoming &lt;em&gt;cgroup v1&lt;/em&gt;
deprecation and removal that Kubernetes and SIG Node will need to guide Kubernetes users
through. Containerd is also releasing version &lt;code&gt;2.0&lt;/code&gt;, which removes deprecated features and will
affect Kubernetes users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Could you share a memorable experience or achievement from your time as a SIG Node
maintainer that you&#39;re particularly proud of?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Matthias:&lt;/strong&gt; I think the best moment was when my first KEP (introducing the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#container-probes&#34;&gt;&lt;code&gt;startupProbe&lt;/code&gt;&lt;/a&gt;)
finally graduated to GA (General Availability). I also enjoy seeing my contributions being used
daily by contributors, such as the comment containing the GitHub tree hash, which retains an LGTM
even after commits are squashed.&lt;/p&gt;
&lt;h2 id=&#34;sidecar-containers&#34;&gt;Sidecar containers&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Can you provide more context on the concept of sidecar containers and their evolution in
the context of Kubernetes?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; The concept of
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/sidecar-containers/&#34;&gt;sidecar containers&lt;/a&gt; dates back to
2015 when Kubernetes introduced the idea of composite containers. These additional containers,
running alongside the main application container within the same pod, were seen as a way to extend
and enhance application functionality without modifying the core codebase. Early adopters of
sidecars employed custom scripts and configurations to manage them, but this approach presented
challenges in terms of consistency and scalability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Can you share specific use cases or examples where sidecar containers are particularly
beneficial?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; Sidecar containers are a versatile tool that can be used to enhance the functionality of
applications in a variety of ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Logging and monitoring:&lt;/strong&gt; collecting logs and metrics from the primary application
container and sending them to a centralized logging and monitoring system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Traffic filtering and routing:&lt;/strong&gt; filtering and routing traffic to and from the primary
application container.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Encryption and decryption:&lt;/strong&gt; encrypting and decrypting data as it flows between the
primary application container and external services.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data synchronization:&lt;/strong&gt; synchronizing data between the primary application container and
external databases or services.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fault injection:&lt;/strong&gt; injecting faults into the primary application container to test its
resilience to failures.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; The proposal mentions that some companies are using a fork of Kubernetes with sidecar
functionality added. Can you provide insights into the level of adoption and community interest in
this feature?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; While we lack concrete metrics to measure adoption rates, the KEP has garnered
significant interest from the community, particularly among service mesh vendors like Istio, who
actively participated in its alpha testing phase. The KEP&#39;s visibility through numerous blog posts,
interviews, talks, and workshops further demonstrates its widespread appeal. The KEP addresses the
growing demand for additional capabilities alongside main containers in Kubernetes pods, such as
network proxies, logging systems, and security measures. The community acknowledges the importance
of providing easy migration paths for existing workloads to facilitate widespread adoption of the
feature.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; Are there any notable examples or success stories from companies using sidecar containers
in production?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; It is still too early to expect widespread adoption in production environments. Kubernetes
1.29 has only been available on Google Kubernetes Engine (GKE) since January 11, 2024, and
comprehensive documentation on how to enable and use native sidecars effectively via a universal
injector is still missing. Istio, a popular service mesh platform, also lacks proper documentation for
enabling native sidecars, making it difficult for developers to get started with this new
feature. However, as native sidecar support matures and documentation improves, we can expect to see
wider adoption of this technology in production environments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; The proposal suggests introducing a &lt;code&gt;restartPolicy&lt;/code&gt; field for init containers to indicate
sidecar functionality. Can you explain how this solution addresses the outlined challenges?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; The proposal to introduce a &lt;code&gt;restartPolicy&lt;/code&gt; field for init containers addresses the
outlined challenges by utilizing existing infrastructure and simplifying sidecar management. This
approach avoids adding new fields to the pod specification, keeping it manageable and avoiding more
clutter. By leveraging the existing init container mechanism, sidecars can be run alongside regular
init containers during pod startup, ensuring a consistent ordering of initialization. Additionally,
setting the restart policy of sidecar init containers to &lt;code&gt;Always&lt;/code&gt; ensures that they continue
running even after the main application container terminates, enabling persistent services like
logging and monitoring until the end of the workload.&lt;/p&gt;
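&lt;p&gt;As a rough sketch of what this looks like in practice (the names, images, and commands below are
illustrative placeholders, not taken from the interview), a pod declares a sidecar by giving an init
container a &lt;code&gt;restartPolicy&lt;/code&gt; of &lt;code&gt;Always&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: logshipper          # sidecar: starts before the main container
    image: alpine:latest
    restartPolicy: Always     # keeps running for the pod&#39;s whole lifetime
    command: [&#34;sh&#34;, &#34;-c&#34;, &#34;tail -F /opt/logs.txt&#34;]
  containers:
  - name: myapp
    image: alpine:latest
    command: [&#34;sh&#34;, &#34;-c&#34;, &#34;while true; do echo logging &gt;&gt; /opt/logs.txt; sleep 1; done&#34;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Regular init containers (those without a &lt;code&gt;restartPolicy&lt;/code&gt;) still run to completion, in order,
before the main containers start, so the new field is purely additive.&lt;/p&gt;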
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; How will the introduction of the &lt;code&gt;restartPolicy&lt;/code&gt; field for init containers affect
backward compatibility with existing Kubernetes configurations?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; The introduction of the &lt;code&gt;restartPolicy&lt;/code&gt; field for init containers will maintain backward
compatibility with existing Kubernetes configurations. Existing init containers will continue to
function as they have before, and the new &lt;code&gt;restartPolicy&lt;/code&gt; field will only apply to init containers
explicitly marked as sidecars. This approach ensures that existing applications and deployments will
not be disrupted by the new feature, and provides a more streamlined way to define and manage
sidecars.&lt;/p&gt;
&lt;h2 id=&#34;contributing-to-sig-node&#34;&gt;Contributing to SIG Node&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit:&lt;/strong&gt; What is the best place for new members, and especially beginners, to contribute?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;M/G/S:&lt;/strong&gt; New members and beginners can contribute to the Sidecar KEP (Kubernetes Enhancement
Proposal) by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Raising awareness:&lt;/strong&gt; Create content that highlights the benefits and use cases of sidecars. This
can educate others about the feature and encourage its adoption.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Providing feedback:&lt;/strong&gt; Share your experiences with sidecars, both positive and negative. This
feedback can be used to improve the feature and make it more widely usable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sharing your use cases:&lt;/strong&gt; If you are using sidecars in production,
share your experiences with others. This can help to demonstrate the
real-world value of the feature and encourage others to adopt it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Improving the documentation:&lt;/strong&gt; Help to clarify and expand the documentation for the
feature. This can make it easier for others to understand and use sidecars.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In addition to the Sidecar KEP, there are many other areas where SIG Node needs more contributors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Test coverage:&lt;/strong&gt; SIG Node is always looking for ways to improve the test coverage of Kubernetes
components.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CI maintenance:&lt;/strong&gt; SIG Node maintains a suite of e2e tests ensuring Kubernetes components
function as intended across a variety of scenarios.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In conclusion, SIG Node stands as a cornerstone in Kubernetes&#39; journey, ensuring its reliability and
adaptability in the ever-changing landscape of cloud-native computing. With dedicated members like
Matthias, Gunju, and Sergey leading the charge, SIG Node remains at the forefront of innovation,
driving Kubernetes towards new horizons.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>10 Years of Kubernetes</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/06/10-years-of-kubernetes/</link>
      <pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/06/10-years-of-kubernetes/</guid>
      <description>
        
        
        &lt;p&gt;&lt;img src=&#34;kcseu2024.jpg&#34; alt=&#34;KCSEU 2024 group photo&#34;&gt;&lt;/p&gt;
&lt;p&gt;Ten years ago, on June 6th, 2014, the
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56&#34;&gt;first commit&lt;/a&gt;
of Kubernetes was pushed to GitHub. That first commit, with 250 files and 47,501 lines of Go, Bash,
and Markdown, kicked off the project we have today. Who could have predicted that 10 years later,
Kubernetes would grow to become one of the largest Open Source projects to date with over
&lt;a href=&#34;https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1&#34;&gt;88,000 contributors&lt;/a&gt; from
more than &lt;a href=&#34;https://www.cncf.io/reports/kubernetes-project-journey-report/&#34;&gt;8,000 companies&lt;/a&gt;, across
44 countries.&lt;/p&gt;
&lt;img src=&#34;kcscn2019.jpg&#34; alt=&#34;KCSCN 2019&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34; &gt;
&lt;p&gt;This milestone isn&#39;t just for Kubernetes but for the Cloud Native ecosystem that blossomed from
it. There are close to &lt;a href=&#34;https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1&#34;&gt;200 projects&lt;/a&gt;
within the CNCF itself, with contributions from
&lt;a href=&#34;https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1&#34;&gt;240,000+ individual contributors&lt;/a&gt; and
thousands more in the greater ecosystem. Kubernetes would not be where it is today without them, the
&lt;a href=&#34;https://www.cncf.io/blog/2022/05/18/slashdata-cloud-native-continues-to-grow-with-more-than-7-million-developers-worldwide/&#34;&gt;7M+ Developers&lt;/a&gt;,
and the even larger user community that have all helped shape the ecosystem that it is today.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-beginnings-a-converging-of-technologies&#34;&gt;Kubernetes&#39; beginnings - a converging of technologies&lt;/h2&gt;
&lt;p&gt;The ideas underlying Kubernetes started well before the first commit, or even the first prototype
(&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/&#34;&gt;which came about in 2013&lt;/a&gt;).
In the early 2000s, Moore&#39;s Law was well in effect. Computing hardware was becoming more and more
powerful at an incredibly fast rate. Correspondingly, applications were growing more and more
complex. This combination of hardware commoditization and application complexity pointed to a need
to further abstract software from hardware, and solutions started to emerge.&lt;/p&gt;
&lt;p&gt;Like many companies at the time, Google was scaling rapidly, and its engineers were interested in
the idea of creating a form of isolation in the Linux kernel. Google engineer Rohit Seth described
the concept in an &lt;a href=&#34;https://lwn.net/Articles/199643/&#34;&gt;email in 2006&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We use the term container to indicate a structure against which we track and charge utilization of
system resources like memory, tasks, etc. for a Workload.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;img src=&#34;future.png&#34; alt=&#34;The future of Linux containers&#34; class=&#34;right&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;In March of 2013, a 5-minute lightning talk called
&lt;a href=&#34;https://youtu.be/wW9CAH9nSLs?si=VtK_VFQHymOT7BIB&#34;&gt;&amp;quot;The future of Linux Containers,&amp;quot; presented by Solomon Hykes at PyCon&lt;/a&gt;,
introduced an upcoming open source tool called &amp;quot;Docker&amp;quot; for creating and using Linux
Containers. Docker introduced a level of usability to Linux Containers that made them accessible to
more users than ever before, and the popularity of Docker, and thus of Linux Containers,
skyrocketed. With Docker making the abstraction of Linux Containers accessible to all, running
applications in much more portable and repeatable ways was suddenly possible, but the question of
scale remained.&lt;/p&gt;
&lt;p&gt;Google&#39;s Borg system for managing application orchestration at scale had adopted Linux containers as
they were developed in the mid-2000s. Since then, the company had also started working on a new
version of the system called &amp;quot;Omega.&amp;quot; Engineers at Google who were familiar with the Borg and Omega
systems saw the popularity of containerization driven by Docker. They recognized not only the need
for an open source container orchestration system but its &amp;quot;inevitability,&amp;quot; as described by Brendan
Burns in
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/&#34;&gt;this blog post&lt;/a&gt;.
That realization in the fall of 2013 inspired a small team to start working on a project that would
later become &lt;strong&gt;Kubernetes&lt;/strong&gt;. That team included Joe Beda, Brendan Burns, Craig McLuckie, Ville
Aikas, Tim Hockin, Dawn Chen, Brian Grant, and Daniel Smith.&lt;/p&gt;
&lt;h2 id=&#34;a-decade-of-kubernetes&#34;&gt;A decade of Kubernetes&lt;/h2&gt;
&lt;img src=&#34;kubeconeu2017.jpg&#34; alt=&#34;KubeCon EU 2017&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;Kubernetes&#39; history begins with that historic commit on June 6th, 2014, and the subsequent
announcement of the project in a June 10th
&lt;a href=&#34;https://youtu.be/YrxnVKZeqK8?si=Q_wYBFn7dsS9H3k3&#34;&gt;keynote by Google engineer Eric Brewer at DockerCon 2014&lt;/a&gt;
(and its corresponding &lt;a href=&#34;https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html&#34;&gt;Google blog&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Over the next year, a small community of
&lt;a href=&#34;https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&amp;amp;var-period_name=Before%20joining%20CNCF&amp;amp;var-metric=contributors&#34;&gt;contributors, largely from Google and Red Hat&lt;/a&gt;,
worked hard on the project, culminating in a &lt;a href=&#34;https://cloudplatform.googleblog.com/2015/07/Kubernetes-V1-Released.html&#34;&gt;version 1.0 release on July 21st, 2015&lt;/a&gt;.
Alongside 1.0, Google announced that Kubernetes would be donated to a newly formed branch of the
Linux Foundation called the
&lt;a href=&#34;https://www.cncf.io/announcements/2015/06/21/new-cloud-native-computing-foundation-to-drive-alignment-among-container-technologies/&#34;&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Despite reaching 1.0, the Kubernetes project was still very challenging to use and
understand. Kubernetes contributor Kelsey Hightower took special note of the project&#39;s shortcomings
in ease of use and on July 7, 2016, he pushed the
&lt;a href=&#34;https://github.com/kelseyhightower/kubernetes-the-hard-way/commit/9d7ace8b186f6ebd2e93e08265f3530ec2fba81c&#34;&gt;first commit of his famed &amp;quot;Kubernetes the Hard Way&amp;quot; guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The project has changed enormously since its original 1.0 release, experiencing a number of big wins
such as
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/&#34;&gt;Custom Resource Definitions (CRD) going GA in 1.16&lt;/a&gt;
or &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/12/08/dual-stack-networking-ga/&#34;&gt;full dual stack support launching in 1.23&lt;/a&gt; and
community &amp;quot;lessons learned&amp;quot; from the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/&#34;&gt;removal of widely used beta APIs in 1.22&lt;/a&gt;
or the deprecation of &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/12/02/dockershim-faq/&#34;&gt;Dockershim&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Some notable updates, milestones and events since 1.0 include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;December 2016 - &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2016/12/kubernetes-1-5-supporting-production-workloads/&#34;&gt;Kubernetes 1.5&lt;/a&gt; introduces runtime pluggability with initial CRI support and alpha Windows node support. OpenAPI also appears for the first time, paving the way for clients to be able to discover extension APIs.
&lt;ul&gt;
&lt;li&gt;This release also introduced StatefulSets and PodDisruptionBudgets in Beta.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;April 2017 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/04/rbac-support-in-kubernetes/&#34;&gt;Introduction of Role-Based Access Controls or RBAC&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;June 2017 — In &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/&#34;&gt;Kubernetes 1.7&lt;/a&gt;, ThirdPartyResources or &amp;quot;TPRs&amp;quot; are replaced with CustomResourceDefinitions (CRDs).&lt;/li&gt;
&lt;li&gt;December 2017 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/&#34;&gt;Kubernetes 1.9&lt;/a&gt; sees the Workloads API becoming GA (Generally Available). The release blog states: &lt;em&gt;&amp;quot;Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback.&amp;quot;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;December 2018 — In 1.13, the Container Storage Interface (CSI) reaches GA, the kubeadm tool for bootstrapping minimum viable clusters reaches GA, and CoreDNS becomes the default DNS server.&lt;/li&gt;
&lt;li&gt;September 2019 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/&#34;&gt;Custom Resource Definitions go GA&lt;/a&gt; in Kubernetes 1.16.&lt;/li&gt;
&lt;li&gt;August 2020 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/&#34;&gt;Kubernetes 1.19&lt;/a&gt; increases the support window for releases to 1 year.&lt;/li&gt;
&lt;li&gt;December 2020 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/&#34;&gt;Dockershim is deprecated&lt;/a&gt; in 1.20.&lt;/li&gt;
&lt;li&gt;April 2021 — the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/20/new-kubernetes-release-cadence/#:~:text=On%20April%2023%2C%202021%2C%20the,Kubernetes%20community&#39;s%20contributors%20and%20maintainers.&#34;&gt;Kubernetes release cadence changes&lt;/a&gt; from 4 releases per year to 3 releases per year.&lt;/li&gt;
&lt;li&gt;July 2021 — Widely used beta APIs are &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/&#34;&gt;removed&lt;/a&gt; in Kubernetes 1.22.&lt;/li&gt;
&lt;li&gt;May 2022 — Kubernetes 1.24 sees &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/05/03/kubernetes-1-24-release-announcement/&#34;&gt;beta APIs become disabled by default&lt;/a&gt; to reduce upgrade conflicts, and the removal of &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/dockershim&#34;&gt;Dockershim&lt;/a&gt;, leading to &lt;a href=&#34;https://www.youtube.com/watch?v=a03Hh1kd6KE&#34;&gt;widespread user confusion&lt;/a&gt; (we&#39;ve since &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication/contributor-comms&#34;&gt;improved our communication!&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;December 2022 — In 1.26, there was a significant batch and &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/12/29/scalable-job-tracking-ga/&#34;&gt;Job API overhaul&lt;/a&gt; that paved the way for better support for AI/ML/batch workloads.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; Curious to see how far the project has come for yourself? Check out this &lt;a href=&#34;https://github.com/spurin/kubernetes-v1.0-lab&#34;&gt;tutorial for spinning up a Kubernetes 1.0 cluster&lt;/a&gt; created by community members Carlos Santana, Amim Moises Salum Knabben, and James Spurin.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Kubernetes offers more extension points than we can count. Originally designed to work with Docker
and only Docker, now you can plug in any container runtime that adheres to the CRI standard. There
are other similar interfaces: CSI for storage and CNI for networking. And that&#39;s far from all you
can do. In the last decade, whole new patterns have emerged, such as using
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/&#34;&gt;Custom Resource Definitions&lt;/a&gt;
(CRDs) to support third-party controllers - now a huge part of the Kubernetes ecosystem.&lt;/p&gt;
&lt;p&gt;The community building the project has also expanded immensely over the last decade. Using
&lt;a href=&#34;https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1&#34;&gt;DevStats&lt;/a&gt;, we can see the
incredible volume of contribution over the last decade that has made Kubernetes the
&lt;a href=&#34;https://www.cncf.io/reports/kubernetes-project-journey-report/&#34;&gt;second-largest open source project in the world&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;88,474&lt;/strong&gt; contributors&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;15,121&lt;/strong&gt; code committers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4,228,347&lt;/strong&gt; contributions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;158,530&lt;/strong&gt; issues&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;311,787&lt;/strong&gt; pull requests&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;kubernetes-today&#34;&gt;Kubernetes today&lt;/h2&gt;
&lt;img src=&#34;welcome.jpg&#34; alt=&#34;KubeCon NA 2023&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;Since its early days, the project has seen enormous growth in technical capability, usage, and
contribution. The project is still actively working to improve and better serve its users.&lt;/p&gt;
&lt;p&gt;In the upcoming 1.31 release, the project will celebrate the culmination of an important long-term
project: the removal of in-tree cloud provider code. In this
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/&#34;&gt;largest migration in Kubernetes history&lt;/a&gt;,
roughly 1.5 million lines of code have been removed, reducing the binary sizes of core components
by approximately 40%. In the project&#39;s early days, it was clear that extensibility would be key to
success. However, it wasn&#39;t always clear how that extensibility should be achieved. This migration
removes a variety of vendor-specific capabilities from the core Kubernetes code
base. Vendor-specific capabilities can now be better served by other pluggable extensibility
features or patterns, such as
&lt;a href=&#34;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/&#34;&gt;Custom Resource Definitions (CRDs)&lt;/a&gt;
or API standards like the &lt;a href=&#34;https://gateway-api.sigs.k8s.io/&#34;&gt;Gateway API&lt;/a&gt;.
Kubernetes also faces new challenges in serving its vast user base, and the community is adapting
accordingly. One example of this is the migration of image hosting to the new, community-owned
registry.k8s.io. The egress bandwidth and costs of providing pre-compiled binary images for user
consumption have become immense. This new registry change enables the community to continue
providing these convenient images in more cost- and performance-efficient ways. Make sure you check
out the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/&#34;&gt;blog post&lt;/a&gt; and
update any automation you have to use registry.k8s.io!&lt;/p&gt;
&lt;h2 id=&#34;the-future-of-kubernetes&#34;&gt;The future of Kubernetes&lt;/h2&gt;
&lt;img src=&#34;lts.jpg&#34; alt=&#34;&#34; class=&#34;right&#34; width=&#34;300px&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;A decade in, the future of Kubernetes still looks bright. The community is prioritizing changes that
both improve the user experience and enhance the sustainability of the project. The world of
application development continues to evolve, and Kubernetes is poised to change along with it.&lt;/p&gt;
&lt;p&gt;In 2024, the advent of AI changed a once-niche workload type into one of prominent
importance. Distributed computing and workload scheduling have always gone hand-in-hand with the
resource-intensive needs of Artificial Intelligence, Machine Learning, and High Performance
Computing workloads. Contributors are paying close attention to the needs of newly developed
workloads and how Kubernetes can best serve them. The new
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-serving&#34;&gt;Serving Working Group&lt;/a&gt; is one
example of how the community is organizing to address these workloads&#39; needs. It&#39;s likely that the
next few years will see improvements to Kubernetes&#39; ability to manage various types of hardware, and
its ability to manage the scheduling of large batch-style workloads which are run across hardware in
chunks.&lt;/p&gt;
&lt;p&gt;The ecosystem around Kubernetes will continue to grow and evolve. In the future, initiatives to
maintain the sustainability of the project, like the migration of in-tree vendor code and the
registry change, will be ever more important.&lt;/p&gt;
&lt;p&gt;The next 10 years of Kubernetes will be guided by its users and the ecosystem, but most of all, by
the people who contribute to it. The community remains open to new contributors. You can find more
information about contributing in our New Contributor Course at
&lt;a href=&#34;https://k8s.dev/docs/onboarding&#34;&gt;https://k8s.dev/docs/onboarding&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We look forward to building the future of Kubernetes with you!&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/06/06/10-years-of-kubernetes/kcsna2023.jpg&#34;
         alt=&#34;KCSNA 2023&#34;/&gt; 
&lt;/figure&gt;

      </description>
    </item>
    
    <item>
      <title>Completing the largest migration in Kubernetes history</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/</link>
      <pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/</guid>
      <description>
        
        
        &lt;p&gt;Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations (&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md&#34;&gt;KEP-2395&lt;/a&gt;).
While these integrations were instrumental in Kubernetes&#39; early development and growth, their removal was driven by two key factors:
the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish
Kubernetes as a truly vendor-neutral platform.&lt;/p&gt;
&lt;p&gt;After many releases, we&#39;re thrilled to announce that all cloud provider integrations have been successfully migrated from the core Kubernetes repository to external plugins.
In addition to achieving our initial objectives, we&#39;ve also significantly streamlined Kubernetes by removing roughly 1.5 million lines of code and reducing the binary sizes of core components by approximately 40%.&lt;/p&gt;
&lt;p&gt;This migration was a complex and long-running effort due to the numerous impacted components and the critical code paths that relied on the built-in integrations for the
five initial cloud providers: Google Cloud, AWS, Azure, OpenStack, and vSphere. To successfully complete this migration, we had to build four new subsystems from the ground up:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Cloud controller manager&lt;/strong&gt; (&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2392-cloud-controller-manager/README.md&#34;&gt;KEP-2392&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API server network proxy&lt;/strong&gt; (&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1281-network-proxy&#34;&gt;KEP-1281&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kubelet credential provider plugins&lt;/strong&gt; (&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2133-kubelet-credential-providers&#34;&gt;KEP-2133&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Storage migration to use &lt;a href=&#34;https://github.com/container-storage-interface/spec?tab=readme-ov-file#container-storage-interface-csi-specification-&#34;&gt;CSI&lt;/a&gt;&lt;/strong&gt; (&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/625-csi-migration/README.md&#34;&gt;KEP-625&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each subsystem was critical to achieving full feature parity with the built-in capabilities, and each required several releases to reach GA-level maturity with a safe and
reliable migration path. More on each subsystem below.&lt;/p&gt;
&lt;h3 id=&#34;cloud-controller-manager&#34;&gt;Cloud controller manager&lt;/h3&gt;
&lt;p&gt;The cloud controller manager was the first external component introduced in this effort, replacing functionality within the kube-controller-manager and kubelet that directly interacted with cloud APIs.
This essential component is responsible for initializing nodes by applying metadata labels that indicate the cloud region and zone a Node is running on, as well as IP addresses that are only known to the cloud provider.
Additionally, it runs the service controller, which is responsible for provisioning cloud load balancers for Services of type LoadBalancer.&lt;/p&gt;
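&lt;p&gt;For instance, creating a Service like the one below (the names are illustrative) is what triggers the service controller in the cloud controller manager to provision a cloud load balancer:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  # The cloud controller manager watches for this type and calls the cloud API
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;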
&lt;p&gt;&lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/docs/components-of-kubernetes.svg&#34; alt=&#34;Kubernetes components&#34;&gt;&lt;/p&gt;
&lt;p&gt;To learn more, read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cloud-controller/&#34;&gt;Cloud Controller Manager&lt;/a&gt; in the Kubernetes documentation.&lt;/p&gt;
&lt;h3 id=&#34;api-server-network-proxy&#34;&gt;API server network proxy&lt;/h3&gt;
&lt;p&gt;The API Server Network Proxy project, initiated in 2018 in collaboration with SIG API Machinery, aimed to replace the SSH tunneler functionality within the kube-apiserver.
This tunneler had been used to securely proxy traffic between the Kubernetes control plane and nodes, but it heavily relied on provider-specific implementation details embedded in the kube-apiserver to establish these SSH tunnels.&lt;/p&gt;
&lt;p&gt;Now, the API Server Network Proxy is a GA-level extension point within the kube-apiserver. It offers a generic proxying mechanism that can route traffic from the API server to nodes through a secure proxy,
eliminating the need for the API server to have any knowledge of the specific cloud provider it is running on. This project also introduced the Konnectivity project, which has seen growing adoption in production environments.&lt;/p&gt;
&lt;p&gt;You can learn more about the API Server Network Proxy from its &lt;a href=&#34;https://github.com/kubernetes-sigs/apiserver-network-proxy#readme&#34;&gt;README&lt;/a&gt;.&lt;/p&gt;
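&lt;p&gt;As a minimal sketch (the socket path below is illustrative), the kube-apiserver is pointed at a Konnectivity server through an egress selector configuration passed via &lt;code&gt;--egress-selector-config-file&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
  # Route &#34;cluster&#34; traffic (API server to nodes) through the Konnectivity proxy
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
&lt;/code&gt;&lt;/pre&gt;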
&lt;h3 id=&#34;credential-provider-plugins-for-the-kubelet&#34;&gt;Credential provider plugins for the kubelet&lt;/h3&gt;
&lt;p&gt;The Kubelet credential provider plugin was developed to replace the kubelet&#39;s built-in functionality for dynamically fetching credentials for image registries hosted on Google Cloud, AWS, or Azure.
The legacy capability was convenient as it allowed the kubelet to seamlessly retrieve short-lived tokens for pulling images from GCR, ECR, or ACR. However, like other areas of Kubernetes, supporting
this required the kubelet to have specific knowledge of different cloud environments and APIs.&lt;/p&gt;
&lt;p&gt;Introduced in 2019, the credential provider plugin mechanism offers a generic extension point for the kubelet to execute plugin binaries that dynamically provide credentials for images hosted on various clouds.
This extensibility expands the kubelet&#39;s capabilities to fetch short-lived tokens beyond the initial three cloud providers.&lt;/p&gt;
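&lt;p&gt;As an illustrative sketch, the kubelet is pointed at such plugins through a &lt;code&gt;CredentialProviderConfig&lt;/code&gt; file (the binary name and image pattern below are examples for an ECR-style setup, not prescriptive values):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # Plugin binary, found in the directory passed to --image-credential-provider-bin-dir
- name: ecr-credential-provider
  matchImages:
  - &#34;*.dkr.ecr.*.amazonaws.com&#34;
  defaultCacheDuration: &#34;12h&#34;
  apiVersion: credentialprovider.kubelet.k8s.io/v1
&lt;/code&gt;&lt;/pre&gt;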
&lt;p&gt;To learn more, read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/containers/images/#kubelet-credential-provider&#34;&gt;kubelet credential provider for authenticated image pulls&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;storage-plugin-migration-from-in-tree-to-csi&#34;&gt;Storage plugin migration from in-tree to CSI&lt;/h3&gt;
&lt;p&gt;The Container Storage Interface (CSI), which went GA in Kubernetes 1.13, is a control plane standard for managing block and file storage systems in Kubernetes and other container orchestrators.
It was designed to replace the in-tree volume plugins built directly into Kubernetes with drivers that can run as Pods within the Kubernetes cluster.
These drivers communicate with kube-controller-manager storage controllers via the Kubernetes API, and with kubelet through a local gRPC endpoint.
Now there are over 100 CSI drivers available across all major cloud and storage vendors, making stateful workloads in Kubernetes a reality.&lt;/p&gt;
&lt;p&gt;However, a major challenge remained: how to handle all the existing users of the in-tree volume APIs. To retain API backwards compatibility,
we built an API translation layer into our controllers that will convert the in-tree volume API into the equivalent CSI API. This allowed us to redirect all storage operations to the CSI driver,
paving the way for us to remove the code for the built-in volume plugins without removing the API.&lt;/p&gt;
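&lt;p&gt;In practice, this means an existing PersistentVolume that references an in-tree plugin such as &lt;code&gt;kubernetes.io/aws-ebs&lt;/code&gt; is transparently served by the corresponding CSI driver, while new StorageClasses reference the CSI driver directly. A sketch of the CSI-native form (the names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi
# CSI driver name; the legacy in-tree equivalent was kubernetes.io/aws-ebs
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
&lt;/code&gt;&lt;/pre&gt;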
&lt;p&gt;You can learn more about In-tree Storage migration in &lt;a href=&#34;https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/&#34;&gt;Kubernetes In-Tree to CSI Volume Migration Moves to Beta&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h2&gt;
&lt;p&gt;This migration has been the primary focus for SIG Cloud Provider over the past few years. With this significant milestone achieved, we will be shifting our efforts towards exploring new
and innovative ways for Kubernetes to better integrate with cloud providers, leveraging the external subsystems we&#39;ve built over the years. This includes making Kubernetes smarter in
hybrid environments where nodes in the cluster can run on both public and private clouds, as well as providing better tools and frameworks for developers of external providers to simplify and streamline their integration efforts.&lt;/p&gt;
&lt;p&gt;With all the new features, tools, and frameworks being planned, SIG Cloud Provider is not forgetting about the other side of the equation: testing. Another area of focus for the SIG&#39;s future activities is the improvement of
cloud controller testing to include more providers. The ultimate goal of this effort is to create a testing framework that includes as many providers as possible, so that we can give the Kubernetes community the highest
levels of confidence about their Kubernetes environments.&lt;/p&gt;
&lt;p&gt;If you&#39;re using a version of Kubernetes older than v1.29 and haven&#39;t migrated to an external cloud provider yet, we recommend checking out our previous blog post &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/&#34;&gt;Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components&lt;/a&gt;. It provides detailed information on the changes we&#39;ve made and offers guidance on how to migrate to an external provider. Starting in v1.31, in-tree cloud providers will be permanently disabled and removed from core Kubernetes components.&lt;/p&gt;
&lt;p&gt;If you’re interested in contributing, come join our &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cloud-provider#meetings&#34;&gt;bi-weekly SIG meetings&lt;/a&gt;!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Gateway API v1.1: Service mesh, GRPCRoute, and a whole lot more</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/09/gateway-api-v1-1/</link>
      <pubDate>Thu, 09 May 2024 09:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/09/gateway-api-v1-1/</guid>
      <description>
        
        
        &lt;p&gt;&lt;img src=&#34;gateway-api-logo.svg&#34; alt=&#34;Gateway API logo&#34;&gt;&lt;/p&gt;
&lt;p&gt;Following the GA release of Gateway API last October, Kubernetes
SIG Network is pleased to announce the v1.1 release of
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/&#34;&gt;Gateway API&lt;/a&gt;. In this release, several features are graduating to
&lt;em&gt;Standard Channel&lt;/em&gt; (GA), notably including support for service mesh and
GRPCRoute. We&#39;re also introducing some new experimental features, including
session persistence and client certificate verification.&lt;/p&gt;
&lt;h2 id=&#34;what-s-new&#34;&gt;What&#39;s new&lt;/h2&gt;
&lt;h3 id=&#34;graduation-to-standard&#34;&gt;Graduation to Standard&lt;/h3&gt;
&lt;p&gt;This release includes the graduation to Standard of four eagerly awaited features.
This means they are no longer experimental concepts; inclusion in the Standard
release channel denotes a high level of confidence in the API surface and
provides guarantees of backward compatibility. Of course, as with any other
Kubernetes API, Standard Channel features can continue to evolve with
backward-compatible additions over time, and we certainly expect further
refinements and improvements to these new features in the future.
For more information on how all of this works, refer to the
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/versioning/&#34;&gt;Gateway API Versioning Policy&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&#34;service-mesh-support-https-gateway-api-sigs-k8s-io-mesh&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/mesh/&#34;&gt;Service Mesh Support&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;Service mesh support in Gateway API allows service mesh users to use the same
API to manage ingress traffic and mesh traffic, reusing the same policy and
routing interfaces. In Gateway API v1.1, routes (such as HTTPRoute) can now have
a Service as a &lt;code&gt;parentRef&lt;/code&gt;, to control how traffic to specific services behaves.
For more information, read the
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/mesh/&#34;&gt;Gateway API service mesh documentation&lt;/a&gt;
or see the
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/implementations/#service-mesh-implementation-status&#34;&gt;list of Gateway API implementations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As an example, one could do a canary deployment of a workload deep in an
application&#39;s call graph with an HTTPRoute as follows:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;HTTPRoute&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color-canary&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;faces&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parentRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Service&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;rules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;backendRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;weight&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;50&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;weight&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;50&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This would split traffic sent to the &lt;code&gt;color&lt;/code&gt; Service in the &lt;code&gt;faces&lt;/code&gt; namespace
50/50 between the original &lt;code&gt;color&lt;/code&gt; Service and the &lt;code&gt;color2&lt;/code&gt; Service, using a
portable configuration that&#39;s easy to move from one mesh to another.&lt;/p&gt;
&lt;h4 id=&#34;grpcroute-https-gateway-api-sigs-k8s-io-guides-grpc-routing&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/grpc-routing/&#34;&gt;GRPCRoute&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;If you are already using the experimental version of GRPCRoute, we recommend holding
off on upgrading to the standard channel version of GRPCRoute until the
controllers you&#39;re using have been updated to support GRPCRoute v1. Until then,
it is safe to upgrade to the experimental channel version of GRPCRoute in v1.1
that includes both v1alpha2 and v1 API versions.&lt;/p&gt;
&lt;h4 id=&#34;parentreference-port-https-gateway-api-sigs-k8s-io-reference-spec-gateway-networking-k8s-io-2fv1-parentreference&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.ParentReference&#34;&gt;ParentReference Port&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;port&lt;/code&gt; field was added to ParentReference, allowing you to attach resources
to Gateway Listeners, Services, or other parent resources
(depending on the implementation). Binding to a port also allows you to attach
to multiple Listeners at once.&lt;/p&gt;
&lt;p&gt;For example, you can attach an HTTPRoute to one or more specific Listeners of a
Gateway as specified by the Listener &lt;code&gt;port&lt;/code&gt;, instead of the Listener &lt;code&gt;name&lt;/code&gt; field.&lt;/p&gt;
&lt;p&gt;For more information, see
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways&#34;&gt;Attaching to Gateways&lt;/a&gt;.&lt;/p&gt;
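&lt;p&gt;A minimal sketch (resource names are illustrative) of attaching by port rather than by Listener name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
    # Attaches to every Listener on port 443, regardless of Listener name
    port: 443
  rules:
  - backendRefs:
    - name: example-svc
      port: 8080
&lt;/code&gt;&lt;/pre&gt;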
&lt;h4 id=&#34;conformance-profiles-and-reports-https-gateway-api-sigs-k8s-io-concepts-conformance-conformance-profiles&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/conformance/#conformance-profiles&#34;&gt;Conformance Profiles and Reports&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;The conformance report API has been expanded with the &lt;code&gt;mode&lt;/code&gt; field (intended to
specify the working mode of the implementation) and the &lt;code&gt;gatewayAPIChannel&lt;/code&gt; field
(standard or experimental). The &lt;code&gt;gatewayAPIVersion&lt;/code&gt; and &lt;code&gt;gatewayAPIChannel&lt;/code&gt; are
now filled in automatically by the suite machinery, along with a brief
description of the testing outcome. The Reports have been reorganized in a more
structured way, and implementations can now add information on how the tests
were run, along with reproduction steps.&lt;/p&gt;
&lt;h3 id=&#34;new-additions-to-experimental-channel&#34;&gt;New additions to Experimental channel&lt;/h3&gt;
&lt;h4 id=&#34;gateway-client-certificate-verification-https-gateway-api-sigs-k8s-io-geps-gep-91&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-91/&#34;&gt;Gateway Client Certificate Verification&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;Gateways can now configure client cert verification for each Gateway Listener by
introducing a new &lt;code&gt;frontendValidation&lt;/code&gt; field within &lt;code&gt;tls&lt;/code&gt;. This field
supports configuring a list of CA Certificates that can be used as a trust
anchor to validate the certificates presented by the client.&lt;/p&gt;
&lt;p&gt;The following example shows how the CA certificate stored in
the &lt;code&gt;foo-example-com-ca-cert&lt;/code&gt; ConfigMap can be used to validate the certificates
presented by clients connecting to the &lt;code&gt;foo-https&lt;/code&gt; Gateway Listener.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Gateway&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;client-validation-basic&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;gatewayClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;acme-lb&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;listeners&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-https&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;protocol&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;HTTPS&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;443&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostname&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;tls&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;certificateRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-example-com-cert&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;frontendValidation&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;caCertificateRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ConfigMap&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-example-com-ca-cert&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h4 id=&#34;session-persistence-and-backendlbpolicy-https-gateway-api-sigs-k8s-io-geps-gep-1619&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1619/&#34;&gt;Session Persistence and BackendLBPolicy&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.SessionPersistence&#34;&gt;Session Persistence&lt;/a&gt;
is being introduced to Gateway API via a new policy
(&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.BackendLBPolicy&#34;&gt;BackendLBPolicy&lt;/a&gt;)
for Service-level configuration and as fields within HTTPRoute
and GRPCRoute for route-level configuration. The BackendLBPolicy and route-level
APIs provide the same session persistence configuration, including session
timeouts, session name, session type, and cookie lifetime type.&lt;/p&gt;
&lt;p&gt;Below is an example configuration of &lt;code&gt;BackendLBPolicy&lt;/code&gt; that enables cookie-based
session persistence for the &lt;code&gt;foo&lt;/code&gt; service. It sets the session name to
&lt;code&gt;foo-session&lt;/code&gt;, defines absolute and idle timeouts, and configures the cookie to
be a session cookie:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1alpha2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;BackendLBPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;lb-policy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-ns&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;targetRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;core&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Service&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sessionPersistence&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sessionName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-session&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;absoluteTimeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;1h&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;idleTimeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30m&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Cookie&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;cookieConfig&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;lifetimeType&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Session&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;everything-else&#34;&gt;Everything else&lt;/h3&gt;
&lt;h4 id=&#34;tls-terminology-clarifications-https-gateway-api-sigs-k8s-io-geps-gep-2907&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-2907/&#34;&gt;TLS Terminology Clarifications&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;As part of a broader goal of making our TLS terminology more consistent
throughout the API, we&#39;ve introduced some breaking changes to BackendTLSPolicy.
This has resulted in a new API version (v1alpha3) and will require any existing
implementations of this policy to properly handle the version upgrade, e.g.
by backing up data and uninstalling the v1alpha2 version before installing this
newer version.&lt;/p&gt;
&lt;p&gt;Any references to v1alpha2 BackendTLSPolicy fields will need to be updated to
v1alpha3. Specific changes to fields include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;targetRef&lt;/code&gt; becomes &lt;code&gt;targetRefs&lt;/code&gt; to allow a BackendTLSPolicy to attach to
multiple targets&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls&lt;/code&gt; becomes &lt;code&gt;validation&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls.caCertRefs&lt;/code&gt; becomes &lt;code&gt;validation.caCertificateRefs&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls.wellKnownCACerts&lt;/code&gt; becomes &lt;code&gt;validation.wellKnownCACertificates&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
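&lt;p&gt;As a sketch (the policy name and hostname below are hypothetical, not taken from the
release), a minimal v1alpha3 &lt;code&gt;BackendTLSPolicy&lt;/code&gt; using the renamed fields might look like this:&lt;/p&gt;

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: backend-tls            # hypothetical name
  namespace: foo-ns
spec:
  targetRefs:                  # was the singular targetRef in v1alpha2
  - group: ""
    kind: Service
    name: foo
  validation:                  # was tls
    wellKnownCACertificates: System   # was tls.wellKnownCACerts
    hostname: foo.example.com
```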
&lt;p&gt;For a full list of the changes included in this release, please refer to the
&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.1.0&#34;&gt;v1.1.0 release notes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;gateway-api-background&#34;&gt;Gateway API background&lt;/h2&gt;
&lt;p&gt;The idea of Gateway API was initially &lt;a href=&#34;https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2&#34;&gt;proposed&lt;/a&gt;
at KubeCon San Diego in 2019 as the next generation
of the Ingress API. Since then, an incredible community has formed to develop what
has likely become the
&lt;a href=&#34;https://www.youtube.com/watch?v=V3Vu_FWb4l4&#34;&gt;most collaborative API in Kubernetes history&lt;/a&gt;.
Over 200 people have contributed to this API so far, and that number continues to grow.&lt;/p&gt;
&lt;p&gt;The maintainers would like to thank &lt;em&gt;everyone&lt;/em&gt; who&#39;s contributed to Gateway API, whether in the
form of commits to the repo, discussion, ideas, or general support. We literally
couldn&#39;t have gotten this far without the support of this dedicated and active
community.&lt;/p&gt;
&lt;h2 id=&#34;try-it-out&#34;&gt;Try it out&lt;/h2&gt;
&lt;p&gt;Unlike other Kubernetes APIs, you don&#39;t need to upgrade to the latest version of
Kubernetes to get the latest version of Gateway API. As long as you&#39;re running
Kubernetes 1.26 or later, you&#39;ll be able to get up and running with this
version of Gateway API.&lt;/p&gt;
&lt;p&gt;To try out the API, follow our &lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/&#34;&gt;Getting Started Guide&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;Get involved&lt;/h2&gt;
&lt;p&gt;There are lots of opportunities to get involved and help define the future of
Kubernetes routing APIs for both ingress and service mesh.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check out the &lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides&#34;&gt;user guides&lt;/a&gt; to see what use-cases can be addressed.&lt;/li&gt;
&lt;li&gt;Try out one of the &lt;a href=&#34;https://gateway-api.sigs.k8s.io/implementations/&#34;&gt;existing Gateway controllers&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Or &lt;a href=&#34;https://gateway-api.sigs.k8s.io/contributing/&#34;&gt;join us in the community&lt;/a&gt;
and help us build the future of Gateway API together!&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;related-kubernetes-blog-articles&#34;&gt;Related Kubernetes blog articles&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/11/28/gateway-api-ga/&#34;&gt;New Experimental Features in Gateway API v1.0&lt;/a&gt;
11/2023&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/31/gateway-api-ga/&#34;&gt;Gateway API v1.0: GA Release&lt;/a&gt;
10/2023&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/25/introducing-ingress2gateway/&#34;&gt;Introducing ingress2gateway; Simplifying Upgrades to Gateway API&lt;/a&gt;
10/2023&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/29/gateway-api-v0-8/&#34;&gt;Gateway API v0.8.0: Introducing Service Mesh Support&lt;/a&gt;
08/2023&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Container Runtime Interface streaming explained</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/01/cri-streaming-explained/</link>
      <pubDate>Wed, 01 May 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/01/cri-streaming-explained/</guid>
      <description>
        
        
        &lt;p&gt;The Kubernetes &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cri&#34;&gt;Container Runtime Interface (CRI)&lt;/a&gt;
acts as the main connection between the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/kubelet&#34;&gt;kubelet&lt;/a&gt;
and the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/container-runtimes&#34;&gt;Container Runtime&lt;/a&gt;.
Those runtimes have to provide a &lt;a href=&#34;https://grpc.io&#34;&gt;gRPC&lt;/a&gt; server that
fulfills a Kubernetes-defined &lt;a href=&#34;https://protobuf.dev&#34;&gt;Protocol Buffer&lt;/a&gt; interface.
&lt;a href=&#34;https://github.com/kubernetes/cri-api/blob/63929b3/pkg/apis/runtime/v1/api.proto&#34;&gt;This API definition&lt;/a&gt;
evolves over time, for example when contributors add new features or when fields
become deprecated.&lt;/p&gt;
&lt;p&gt;In this blog post, I&#39;d like to dive into the functionality and history of three
special Remote Procedure Calls (RPCs), which stand out in
terms of how they work: &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt; and &lt;code&gt;PortForward&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Exec&lt;/strong&gt; can be used to run dedicated commands within the container and stream
the output to a client like &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/kubectl&#34;&gt;kubectl&lt;/a&gt; or
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/debug/debug-cluster/crictl&#34;&gt;crictl&lt;/a&gt;. It also allows interaction with
that process using standard input (stdin), for example if users want to run a
new shell instance within an existing workload.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Attach&lt;/strong&gt; streams the output of the currently running process via &lt;a href=&#34;https://en.wikipedia.org/wiki/Standard_streams&#34;&gt;standard I/O&lt;/a&gt;
from the container to the client and also allows interaction with that process. This is
particularly useful if users want to see what is going on in the container and
be able to interact with the process.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PortForward&lt;/strong&gt; can be used to forward a port from the host to the container,
making it possible to interact with it using third-party network tools. This allows
bypassing &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/services-networking/service&#34;&gt;Kubernetes services&lt;/a&gt;
for a certain workload and interacting with its network interface directly.&lt;/p&gt;
&lt;h2 id=&#34;what-is-so-special-about-them&#34;&gt;What is so special about them?&lt;/h2&gt;
&lt;p&gt;All RPCs of the CRI either use &lt;a href=&#34;https://grpc.io/docs/what-is-grpc/core-concepts/#unary-rpc&#34;&gt;gRPC unary calls&lt;/a&gt;
for communication or the &lt;a href=&#34;https://grpc.io/docs/what-is-grpc/core-concepts/#server-streaming-rpc&#34;&gt;server side streaming&lt;/a&gt;
feature (only &lt;code&gt;GetContainerEvents&lt;/code&gt; right now). This means that almost all RPCs
receive a single client request and have to return a single server response.
The same applies to &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt;, and &lt;code&gt;PortForward&lt;/code&gt;, where their &lt;a href=&#34;https://github.com/kubernetes/cri-api/blob/63929b3/pkg/apis/runtime/v1/api.proto#L94-L99&#34;&gt;protocol definition&lt;/a&gt;
looks like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Exec prepares a streaming endpoint to execute a command in the container.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;rpc&lt;/span&gt; Exec(ExecRequest) &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;returns&lt;/span&gt; (ExecResponse) {}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Attach prepares a streaming endpoint to attach to a running container.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;rpc&lt;/span&gt; Attach(AttachRequest) &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;returns&lt;/span&gt; (AttachResponse) {}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;rpc&lt;/span&gt; PortForward(PortForwardRequest) &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;returns&lt;/span&gt; (PortForwardResponse) {}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The requests carry everything required to allow the server to do the work,
for example, the &lt;code&gt;ContainerId&lt;/code&gt; or command (&lt;code&gt;Cmd&lt;/code&gt;) to be run in case of &lt;code&gt;Exec&lt;/code&gt;.
More interestingly, all of their responses only contain a &lt;code&gt;url&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;message&lt;/span&gt; &lt;span style=&#34;color:#00f&#34;&gt;ExecResponse&lt;/span&gt; {&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Fully qualified URL of the exec streaming server.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt; url &lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;1&lt;/span&gt;;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;message&lt;/span&gt; &lt;span style=&#34;color:#00f&#34;&gt;AttachResponse&lt;/span&gt; {&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Fully qualified URL of the attach streaming server.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt; url &lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;1&lt;/span&gt;;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-protobuf&#34; data-lang=&#34;protobuf&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;message&lt;/span&gt; &lt;span style=&#34;color:#00f&#34;&gt;PortForwardResponse&lt;/span&gt; {&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Fully qualified URL of the port-forward streaming server.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt; url &lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;1&lt;/span&gt;;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;}&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Why is it implemented like that? Well, &lt;a href=&#34;https://docs.google.com/document/d/1MreuHzNvkBW6q7o_zehm1CBOBof3shbtMTGtUpjpRmY&#34;&gt;the original design document&lt;/a&gt;
for those RPCs even predates &lt;a href=&#34;https://github.com/kubernetes/enhancements&#34;&gt;Kubernetes Enhancements Proposals (KEPs)&lt;/a&gt;
and was originally outlined back in 2016. The kubelet had a native
implementation for &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt;, and &lt;code&gt;PortForward&lt;/code&gt; before the
initiative to bring the functionality to the CRI started. Before that,
everything was bound to &lt;a href=&#34;https://www.docker.com&#34;&gt;Docker&lt;/a&gt; or the later abandoned
container runtime &lt;a href=&#34;https://github.com/rkt/rkt&#34;&gt;rkt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The CRI related design document also elaborates on the option to use native RPC
streaming for exec, attach, and port forward. The downsides of that approach
outweighed its benefits: the kubelet would become a network bottleneck, and future
runtimes would not be free to choose their server implementation details. Another
option, having the kubelet implement a portable, runtime-agnostic solution, was
abandoned in favor of the final design as well, because it would have meant yet
another project to maintain which would nevertheless be runtime dependent.&lt;/p&gt;
&lt;p&gt;This means that the basic flow for &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt; and &lt;code&gt;PortForward&lt;/code&gt;
was proposed to look like this:&lt;/p&gt;


&lt;figure class=&#34;diagram-large &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/01/cri-streaming-explained/flow.svg&#34;
         alt=&#34;CRI Streaming flow&#34;/&gt; 
&lt;/figure&gt;
&lt;p&gt;Clients like crictl or the kubelet (via kubectl) request a new exec, attach or
port forward session from the runtime using the gRPC interface. The runtime
implements a streaming server that also manages the active sessions. This
streaming server provides an HTTP endpoint for the client to connect to. The
client upgrades the connection to use the &lt;a href=&#34;https://en.wikipedia.org/wiki/SPDY&#34;&gt;SPDY&lt;/a&gt;
streaming protocol or (in the future) to a &lt;a href=&#34;https://en.wikipedia.org/wiki/WebSocket&#34;&gt;WebSocket&lt;/a&gt;
connection and starts to stream the data back and forth.&lt;/p&gt;
&lt;p&gt;This implementation gives runtimes the flexibility to implement
&lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt; and &lt;code&gt;PortForward&lt;/code&gt; the way they want, and also keeps the
testing path simple. Runtimes can change the underlying implementation to support
any kind of feature without needing to modify the CRI at all.&lt;/p&gt;
&lt;p&gt;Many smaller enhancements to this overall approach have been merged into
Kubernetes in the past years, but the general pattern has always stayed the
same. The kubelet source code was transformed into &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/db9fcfe/staging/src/k8s.io/kubelet/pkg/cri/streaming&#34;&gt;a reusable library&lt;/a&gt;,
which container runtimes can nowadays use to implement the basic
streaming capability.&lt;/p&gt;
&lt;h2 id=&#34;how-does-the-streaming-actually-work&#34;&gt;How does the streaming actually work?&lt;/h2&gt;
&lt;p&gt;At first glance, it looks like all three RPCs work the same way, but that&#39;s
not the case. It&#39;s possible to group the functionality of &lt;strong&gt;Exec&lt;/strong&gt; and
&lt;strong&gt;Attach&lt;/strong&gt;, while &lt;strong&gt;PortForward&lt;/strong&gt; follows a distinct internal protocol
definition.&lt;/p&gt;
&lt;h3 id=&#34;exec-and-attach&#34;&gt;Exec and Attach&lt;/h3&gt;
&lt;p&gt;Kubernetes defines &lt;strong&gt;Exec&lt;/strong&gt; and &lt;strong&gt;Attach&lt;/strong&gt; as &lt;em&gt;remote commands&lt;/em&gt;, whose
protocol definition exists in &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/9791f0d/staging/src/k8s.io/apimachinery/pkg/util/remotecommand/constants.go#L28-L52&#34;&gt;five different versions&lt;/a&gt;:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;code&gt;channel.k8s.io&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Initial (unversioned) SPDY sub protocol (&lt;a href=&#34;https://issues.k8s.io/13394&#34;&gt;#13394&lt;/a&gt;, &lt;a href=&#34;https://issues.k8s.io/13395&#34;&gt;#13395&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v2.channel.k8s.io&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Resolves the issues present in the first version (&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/15961&#34;&gt;#15961&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v3.channel.k8s.io&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Adds support for resizing container terminals (&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/25273&#34;&gt;#25273&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v4.channel.k8s.io&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Adds support for exit codes using JSON errors (&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/26541&#34;&gt;#26541&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;code&gt;v5.channel.k8s.io&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Adds support for a CLOSE signal (&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/119157&#34;&gt;#119157&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
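&lt;p&gt;Conceptually, both sides agree on the newest sub-protocol from this table that
they both support. The following Go snippet is a simplified sketch of that
selection only; the real negotiation happens via the SPDY or WebSocket connection
upgrade headers, not via a function like this:&lt;/p&gt;

```go
package main

import "fmt"

// The sub-protocol names from the table above, newest first.
var serverSupported = []string{
	"v5.channel.k8s.io",
	"v4.channel.k8s.io",
	"v3.channel.k8s.io",
	"v2.channel.k8s.io",
	"channel.k8s.io",
}

// negotiate returns the newest server-supported protocol that the
// client also offers, or the empty string if there is no overlap.
func negotiate(clientOffers []string) string {
	for _, s := range serverSupported {
		for _, c := range clientOffers {
			if c == s {
				return s
			}
		}
	}
	return ""
}

func main() {
	// A client that only speaks v3 and v4 ends up on v4.
	fmt.Println(negotiate([]string{"v4.channel.k8s.io", "v3.channel.k8s.io"}))
}
```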
&lt;p&gt;On top of that, there is an overall effort to replace the SPDY transport
protocol with WebSockets as part of &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4006&#34;&gt;KEP #4006&lt;/a&gt;.
Runtimes have to satisfy those protocols over their life cycle to stay up to
date with the Kubernetes implementation.&lt;/p&gt;
&lt;p&gt;Let&#39;s assume that a client uses the latest (&lt;code&gt;v5&lt;/code&gt;) version of the protocol and
communicates over WebSockets. In that case, the general flow would be:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The client requests a URL endpoint for &lt;strong&gt;Exec&lt;/strong&gt; or &lt;strong&gt;Attach&lt;/strong&gt; using the CRI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The server (runtime) validates the request, inserts it into a connection
tracking cache, and provides the HTTP endpoint URL for that request.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The client connects to that URL, upgrades the connection to establish
a WebSocket, and starts to stream data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the case of &lt;strong&gt;Attach&lt;/strong&gt;, the server has to stream the main container process
data to the client.&lt;/li&gt;
&lt;li&gt;In the case of &lt;strong&gt;Exec&lt;/strong&gt;, the server has to create the subprocess command within
the container and then stream the output to the client.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If stdin is required, then the server needs to listen for that as well and
redirect it to the corresponding process.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Interpreting data for the defined protocol is fairly simple: the first
byte of every input and output packet &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/9791f0d/staging/src/k8s.io/apimachinery/pkg/util/remotecommand/constants.go#L57-L64&#34;&gt;defines&lt;/a&gt;
the actual stream:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;First Byte&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;standard input&lt;/td&gt;
&lt;td&gt;Data streamed from stdin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;standard output&lt;/td&gt;
&lt;td&gt;Data streamed to stdout&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;standard error&lt;/td&gt;
&lt;td&gt;Data streamed to stderr&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;stream error&lt;/td&gt;
&lt;td&gt;A streaming error occurred&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;4&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;stream resize&lt;/td&gt;
&lt;td&gt;A terminal resize event&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;255&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;stream close&lt;/td&gt;
&lt;td&gt;Stream should be closed (for WebSockets)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
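&lt;p&gt;As an illustration of that framing, demultiplexing a single frame boils down to
inspecting its first byte. This is a simplified sketch of the idea, not the
kubelet&#39;s actual implementation:&lt;/p&gt;

```go
package main

import "fmt"

// streamName maps the first byte of a remote-command frame to its
// stream, following the table above.
func streamName(b byte) string {
	switch b {
	case 0:
		return "stdin"
	case 1:
		return "stdout"
	case 2:
		return "stderr"
	case 3:
		return "error"
	case 4:
		return "resize"
	case 255:
		return "close"
	}
	return "unknown"
}

// demux splits a raw frame into its stream name and payload.
func demux(frame []byte) (string, []byte) {
	if len(frame) == 0 {
		return "unknown", nil
	}
	return streamName(frame[0]), frame[1:]
}

func main() {
	// A frame starting with byte 1 carries stdout data.
	stream, payload := demux([]byte{1, 'h', 'i'})
	fmt.Printf("%s: %q\n", stream, payload) // prints: stdout: "hi"
}
```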
&lt;p&gt;How should runtimes now implement the streaming server methods for &lt;strong&gt;Exec&lt;/strong&gt; and
&lt;strong&gt;Attach&lt;/strong&gt; by using the provided kubelet library? The key is that the streaming
server implementation in the kubelet &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/db9fcfe/staging/src/k8s.io/kubelet/pkg/cri/streaming/server.go#L63-L68&#34;&gt;outlines an interface&lt;/a&gt;
called &lt;code&gt;Runtime&lt;/code&gt; which has to be fulfilled by the actual container runtime if it
wants to use that library:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Runtime is the interface to execute the commands and provide the streams.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;type&lt;/span&gt; Runtime &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;interface&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#00a000&#34;&gt;Exec&lt;/span&gt;(ctx context.Context, containerID &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;, cmd []&lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;, in io.Reader, out, err io.WriteCloser, tty &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;bool&lt;/span&gt;, resize &lt;span style=&#34;color:#666&#34;&gt;&amp;lt;-&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;chan&lt;/span&gt; remotecommand.TerminalSize) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#00a000&#34;&gt;Attach&lt;/span&gt;(ctx context.Context, containerID &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;, in io.Reader, out, err io.WriteCloser, tty &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;bool&lt;/span&gt;, resize &lt;span style=&#34;color:#666&#34;&gt;&amp;lt;-&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;chan&lt;/span&gt; remotecommand.TerminalSize) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#00a000&#34;&gt;PortForward&lt;/span&gt;(ctx context.Context, podSandboxID &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;, port &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;int32&lt;/span&gt;, stream io.ReadWriteCloser) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Everything related to the protocol interpretation is
already in place and runtimes only have to implement the actual &lt;code&gt;Exec&lt;/code&gt; and
&lt;code&gt;Attach&lt;/code&gt; logic. For example, the container runtime &lt;a href=&#34;https://github.com/cri-o/cri-o&#34;&gt;CRI-O&lt;/a&gt;
does it &lt;a href=&#34;https://github.com/cri-o/cri-o/blob/2a0867/server/container_exec.go#L27-L46&#34;&gt;like this&lt;/a&gt; (shown as pseudo code):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; (s StreamService) &lt;span style=&#34;color:#00a000&#34;&gt;Exec&lt;/span&gt;(
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    ctx context.Context,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    containerID &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    cmd []&lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    stdin io.Reader, stdout, stderr io.WriteCloser,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    tty &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;bool&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    resizeChan &lt;span style=&#34;color:#666&#34;&gt;&amp;lt;-&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;chan&lt;/span&gt; remotecommand.TerminalSize,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Retrieve the container by the provided containerID
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// …
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Update the container status and verify that the workload is running
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// …
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Execute the command and stream the data
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; s.runtimeServer.&lt;span style=&#34;color:#00a000&#34;&gt;Runtime&lt;/span&gt;().&lt;span style=&#34;color:#00a000&#34;&gt;ExecContainer&lt;/span&gt;(
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        s.ctx, c, cmd, stdin, stdout, stderr, tty, resizeChan,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    )
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;portforward&#34;&gt;PortForward&lt;/h3&gt;
&lt;p&gt;Forwarding ports to a container works a bit differently from streaming IO
data to or from a workload. The server still has to provide a URL endpoint for
the client to connect to, but then the container runtime has to enter the
network namespace of the container, allocate the port, and stream the data back
and forth. There is no simple protocol definition available like for
&lt;strong&gt;Exec&lt;/strong&gt; or &lt;strong&gt;Attach&lt;/strong&gt;. This means that the client streams
plain SPDY frames (with or without an additional WebSocket connection), which can
be interpreted using libraries like &lt;a href=&#34;https://github.com/moby/spdystream&#34;&gt;moby/spdystream&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Luckily, the kubelet library already provides the &lt;code&gt;PortForward&lt;/code&gt; interface method
which has to be implemented by the runtime. CRI-O does that roughly as follows (simplified):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; (s StreamService) &lt;span style=&#34;color:#00a000&#34;&gt;PortForward&lt;/span&gt;(
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    ctx context.Context,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    podSandboxID &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;string&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    port &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;int32&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    stream io.ReadWriteCloser,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Retrieve the pod sandbox by the provided podSandboxID
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    sandboxID, err &lt;span style=&#34;color:#666&#34;&gt;:=&lt;/span&gt; s.runtimeServer.&lt;span style=&#34;color:#00a000&#34;&gt;PodIDIndex&lt;/span&gt;().&lt;span style=&#34;color:#00a000&#34;&gt;Get&lt;/span&gt;(podSandboxID)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    sb &lt;span style=&#34;color:#666&#34;&gt;:=&lt;/span&gt; s.runtimeServer.&lt;span style=&#34;color:#00a000&#34;&gt;GetSandbox&lt;/span&gt;(sandboxID)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// …
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Get the network namespace path on disk for that sandbox
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    netNsPath &lt;span style=&#34;color:#666&#34;&gt;:=&lt;/span&gt; sb.&lt;span style=&#34;color:#00a000&#34;&gt;NetNsPath&lt;/span&gt;()
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// …
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;// Enter the network namespace and stream the data
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;&lt;/span&gt;    &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; s.runtimeServer.&lt;span style=&#34;color:#00a000&#34;&gt;Runtime&lt;/span&gt;().&lt;span style=&#34;color:#00a000&#34;&gt;PortForwardContainer&lt;/span&gt;(
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        ctx, sb.&lt;span style=&#34;color:#00a000&#34;&gt;InfraContainer&lt;/span&gt;(), netNsPath, port, stream,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    )
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;future-work&#34;&gt;Future work&lt;/h2&gt;
&lt;p&gt;The flexibility Kubernetes provides for the RPCs &lt;code&gt;Exec&lt;/code&gt;, &lt;code&gt;Attach&lt;/code&gt; and
&lt;code&gt;PortForward&lt;/code&gt; is truly outstanding compared to other methods. Nevertheless,
container runtimes have to keep up with the latest and greatest implementations
to support those features in a meaningful way. Supporting WebSockets is not just
a Kubernetes concern; container runtimes and clients like &lt;code&gt;crictl&lt;/code&gt;
have to adopt it as well.&lt;/p&gt;
&lt;p&gt;For example, &lt;code&gt;crictl&lt;/code&gt; v1.30 features a new &lt;code&gt;--transport&lt;/code&gt; flag for the
subcommands &lt;code&gt;exec&lt;/code&gt;, &lt;code&gt;attach&lt;/code&gt; and &lt;code&gt;port-forward&lt;/code&gt;
(&lt;a href=&#34;https://github.com/kubernetes-sigs/cri-tools/pull/1383&#34;&gt;#1383&lt;/a&gt;,
&lt;a href=&#34;https://github.com/kubernetes-sigs/cri-tools/pull/1385&#34;&gt;#1385&lt;/a&gt;)
to allow choosing between &lt;code&gt;websocket&lt;/code&gt; and &lt;code&gt;spdy&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;CRI-O is taking an experimental path by moving the streaming server
implementation into &lt;a href=&#34;https://github.com/containers/conmon-rs&#34;&gt;conmon-rs&lt;/a&gt;
(a substitute for the container monitor &lt;a href=&#34;https://github.com/containers/conmon&#34;&gt;conmon&lt;/a&gt;). conmon-rs is
a &lt;a href=&#34;https://www.rust-lang.org&#34;&gt;Rust&lt;/a&gt; implementation of the original container
monitor and allows streaming WebSockets directly using supported libraries
(&lt;a href=&#34;https://github.com/containers/conmon-rs/pull/2070&#34;&gt;#2070&lt;/a&gt;). The major benefit
of this approach is that conmon-rs can keep active &lt;strong&gt;Exec&lt;/strong&gt;,
&lt;strong&gt;Attach&lt;/strong&gt; and &lt;strong&gt;PortForward&lt;/strong&gt; sessions open even
while CRI-O itself is not running. The
simplified flow when using crictl directly will then look like this:&lt;/p&gt;
&lt;figure&gt;
&lt;div class=&#34;mermaid&#34;&gt;
    
sequenceDiagram
    autonumber
    participant crictl
    participant runtime as Container Runtime
    participant conmon-rs
    Note over crictl,runtime: Container Runtime Interface (CRI)
    crictl-&gt;&gt;runtime: Exec, Attach, PortForward
    Note over runtime,conmon-rs: Cap’n Proto
    runtime-&gt;&gt;conmon-rs: Serve Exec, Attach, PortForward
    conmon-rs-&gt;&gt;runtime: HTTP endpoint (URL)
    runtime-&gt;&gt;crictl: Response URL
    crictl--&gt;&gt;conmon-rs: Connection upgrade to WebSocket
    conmon-rs-)crictl: Stream data

&lt;/div&gt;
&lt;/figure&gt;

&lt;noscript&gt;
  &lt;div class=&#34;alert alert-secondary callout&#34; role=&#34;alert&#34;&gt;
    &lt;em class=&#34;javascript-required&#34;&gt;JavaScript must be &lt;a href=&#34;https://www.enable-javascript.com/&#34;&gt;enabled&lt;/a&gt; to view this content&lt;/em&gt;
  &lt;/div&gt;
&lt;/noscript&gt;
&lt;p&gt;All of those enhancements require iterative design decisions, while the original
well-conceived implementation acts as the foundation for those. I really hope
you&#39;ve enjoyed this compact journey through the history of CRI RPCs. Feel free
to reach out to me anytime for suggestions or feedback using the
&lt;a href=&#34;https://kubernetes.slack.com/team/U53SUDBD4&#34;&gt;official Kubernetes Slack&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</link>
      <pubDate>Tue, 30 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</guid>
      <description>
        
        
        &lt;p&gt;With the release of Kubernetes 1.30, the feature to prevent the modification of the volume mode
of a &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolumeClaim&lt;/a&gt; that was created from
an existing VolumeSnapshot in a Kubernetes cluster has moved to GA!&lt;/p&gt;
&lt;h2 id=&#34;the-problem&#34;&gt;The problem&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#volume-mode&#34;&gt;Volume Mode&lt;/a&gt; of a PersistentVolumeClaim
refers to whether the underlying volume on the storage device is formatted into a filesystem or
presented as a raw block device to the Pod that uses it.&lt;/p&gt;
&lt;p&gt;Users can leverage the VolumeSnapshot feature, which has been stable since Kubernetes v1.20,
to create a PersistentVolumeClaim (shortened as PVC) from an existing VolumeSnapshot in
the Kubernetes cluster. The PVC spec includes a dataSource field, which can point to an
existing VolumeSnapshot instance.
Visit &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot&#34;&gt;Create a PersistentVolumeClaim from a Volume Snapshot&lt;/a&gt;
for more details on how to create a PVC from an existing VolumeSnapshot in a Kubernetes cluster.&lt;/p&gt;
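&lt;p&gt;For illustration, such a PVC points its &lt;code&gt;dataSource&lt;/code&gt; at the VolumeSnapshot; a minimal sketch follows (all names are placeholders, not taken from this article):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem        # must match the snapshot's source volume mode
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot           # illustrative snapshot name
```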
&lt;p&gt;When leveraging the above capability, there is no logic that validates whether the mode of the
original volume, whose snapshot was taken, matches the mode of the newly created volume.&lt;/p&gt;
&lt;p&gt;This presents a security gap that allows malicious users to potentially exploit an
as-yet-unknown vulnerability in the host operating system.&lt;/p&gt;
&lt;p&gt;There is a valid use case to allow some users to perform such conversions. Typically, storage backup
vendors convert the volume mode during the course of a backup operation, to retrieve changed blocks
for greater efficiency of operations. This legitimate use case prevents Kubernetes
from blocking such conversions outright, and creates the challenge of distinguishing
trusted users from malicious ones.&lt;/p&gt;
&lt;h2 id=&#34;preventing-unauthorized-users-from-converting-the-volume-mode&#34;&gt;Preventing unauthorized users from converting the volume mode&lt;/h2&gt;
&lt;p&gt;In this context, an authorized user is one who has access rights to perform &lt;strong&gt;update&lt;/strong&gt;
or &lt;strong&gt;patch&lt;/strong&gt; operations on VolumeSnapshotContent objects, which are cluster-scoped resources.&lt;br&gt;
It is up to the cluster administrator to provide these rights only to trusted users
or applications, like backup vendors.
Users other than these authorized ones are never allowed to modify the volume mode
of a PVC when it is being created from a VolumeSnapshot.&lt;/p&gt;
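&lt;p&gt;As a sketch, a cluster administrator could grant these rights to a trusted backup application with a ClusterRole similar to the following (the role name is illustrative):&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: snapshot-content-modifier   # illustrative name
rules:
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list", "update", "patch"]
```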
&lt;p&gt;To convert the volume mode, an authorized user must do the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Identify the VolumeSnapshot that is to be used as the data source for a newly
created PVC in the given namespace.&lt;/li&gt;
&lt;li&gt;Identify the VolumeSnapshotContent bound to the above VolumeSnapshot.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl describe volumesnapshot -n &amp;lt;namespace&amp;gt; &amp;lt;name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Add the annotation &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/labels-annotations-taints/#snapshot-storage-kubernetes-io-allowvolumemodechange&#34;&gt;&lt;code&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change: &amp;quot;true&amp;quot;&lt;/code&gt;&lt;/a&gt;
to the above VolumeSnapshotContent. The annotations on the VolumeSnapshotContent must then include an entry similar to the following manifest fragment:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeSnapshotContent&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: For pre-provisioned VolumeSnapshotContents, you must take an extra
step of setting the &lt;code&gt;spec.sourceVolumeMode&lt;/code&gt; field to either &lt;code&gt;Filesystem&lt;/code&gt; or &lt;code&gt;Block&lt;/code&gt;,
depending on the mode of the volume from which this snapshot was taken.&lt;/p&gt;
&lt;p&gt;An example is shown below:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;snapshot.storage.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeSnapshotContent&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;volume-snapshot-content-name&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;deletionPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Delete&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driver&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;hostpath.csi.k8s.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;source&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshotHandle&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;snapshot-handle&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sourceVolumeMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Filesystem&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeSnapshotRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;volume-snapshot-name&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;namespace&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Repeat steps 1 to 3 for all VolumeSnapshotContents whose volume mode needs to be
converted during a backup or restore operation. This can be done either via software
with credentials of an authorized user or manually by the authorized user(s).&lt;/p&gt;
&lt;p&gt;If the annotation shown above is present on a VolumeSnapshotContent object,
Kubernetes will not prevent the volume mode from being converted.
Users should keep this in mind before they attempt to add the annotation
to any VolumeSnapshotContent.&lt;/p&gt;
&lt;h2 id=&#34;action-required&#34;&gt;Action required&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;prevent-volume-mode-conversion&lt;/code&gt; feature flag is enabled by default in the
external-provisioner &lt;code&gt;v4.0.0&lt;/code&gt; and external-snapshotter &lt;code&gt;v7.0.0&lt;/code&gt;. Volume mode change
will be rejected when creating a PVC from a VolumeSnapshot unless the steps
described above have been performed.&lt;/p&gt;
&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next&lt;/h2&gt;
&lt;p&gt;To determine which CSI external sidecar versions support this feature, please head
over to the &lt;a href=&#34;https://kubernetes-csi.github.io/docs/&#34;&gt;CSI docs page&lt;/a&gt;.
For any queries or issues, join &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Kubernetes on Slack&lt;/a&gt; and
create a thread in the #csi or #sig-storage channel. Alternatively, create an issue in the
CSI external-snapshotter &lt;a href=&#34;https://github.com/kubernetes-csi/external-snapshotter&#34;&gt;repository&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/</link>
      <pubDate>Fri, 26 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/</guid>
      <description>
        
        
        &lt;p&gt;With Kubernetes 1.30, we (SIG Auth) are moving Structured Authorization
Configuration to beta.&lt;/p&gt;
&lt;p&gt;Today&#39;s article is about &lt;em&gt;authorization&lt;/em&gt;: deciding what someone can and cannot
access. Check yesterday&#39;s article to find out what&#39;s new in
Kubernetes v1.30 around &lt;em&gt;authentication&lt;/em&gt; (finding out who&#39;s performing a task,
and checking that they are who they say they are).&lt;/p&gt;
&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Kubernetes continues to evolve to meet the intricate requirements of system
administrators and developers alike. A critical aspect of Kubernetes that
ensures the security and integrity of the cluster is API server
authorization. Until recently, the configuration of the authorization chain in
kube-apiserver was somewhat rigid, limited to a set of command-line flags and
allowing only a single webhook in the authorization chain. This approach, while
functional, restricted the flexibility needed by cluster administrators to
define complex, fine-grained authorization policies. The latest Structured
Authorization Configuration feature (&lt;a href=&#34;https://kep.k8s.io/3221&#34;&gt;KEP-3221&lt;/a&gt;) aims
to revolutionize this aspect by introducing a more structured and versatile way
to configure the authorization chain, focusing on enabling multiple webhooks and
providing explicit control mechanisms.&lt;/p&gt;
&lt;h2 id=&#34;the-need-for-improvement&#34;&gt;The Need for Improvement&lt;/h2&gt;
&lt;p&gt;Cluster administrators have long sought the ability to specify multiple
authorization webhooks within the API Server handler chain and have control over
detailed behavior like timeout and failure policy for each webhook. This need
arises from the desire to create layered security policies, where requests can
be validated against multiple criteria or sets of rules in a specific order. The
previous limitations also made it difficult to dynamically configure the
authorizer chain, leaving no room to manage complex authorization scenarios
efficiently.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;Structured Authorization Configuration
feature&lt;/a&gt;
addresses these limitations by introducing a configuration file format to
configure the Kubernetes API Server Authorization chain. This format allows
specifying multiple webhooks in the authorization chain (all other authorization
types are specified no more than once). Each webhook authorizer has well-defined
parameters, including timeout settings, failure policies, and conditions for
invocation with &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt; rules to pre-filter
requests before they are dispatched to webhooks, helping you prevent unnecessary
invocations. The configuration also supports automatic reloading, ensuring
changes can be applied dynamically without restarting the kube-apiserver. This
feature addresses current limitations and opens up new possibilities for
securing and managing Kubernetes clusters more effectively.&lt;/p&gt;
&lt;h2 id=&#34;sample-configurations&#34;&gt;Sample Configurations&lt;/h2&gt;
&lt;p&gt;Here is a sample structured authorization configuration along with descriptions
for all fields, their defaults, and possible values.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthorizationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;authorizers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Name used to describe the authorizer&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# This is explicitly used in monitoring machinery for metrics&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Note:&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   - Validation for this field is similar to how K8s labels are validated today.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, with no default&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;webhook&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# The duration to cache &amp;#39;authorized&amp;#39; responses from the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# authorizer.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Same as setting `--authorization-webhook-cache-authorized-ttl` flag&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Default: 5m0s&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;authorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# The duration to cache &amp;#39;unauthorized&amp;#39; responses from the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# authorizer.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Same as setting `--authorization-webhook-cache-unauthorized-ttl` flag&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Default: 30s&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;unauthorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Timeout for the webhook request&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Maximum allowed is 30s.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, with no default.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;timeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;3s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# The API version of the authorization.k8s.io SubjectAccessReview to&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# send to and expect from the webhook.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Same as setting `--authorization-webhook-version` flag&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, with no default&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Valid values: v1beta1, v1&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;subjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# version the CEL expressions are evaluated against&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Valid values: v1&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, no default value&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditionSubjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Controls the authorization decision when a webhook request fails to&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# complete or returns a malformed response or errors evaluating&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# matchConditions.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Valid values:&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   - NoOpinion: continue to subsequent authorizers to see if one of&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#     them allows the request&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   - Deny: reject the request without consulting subsequent authorizers&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, with no default.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Deny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;connectionInfo&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Controls how the webhook should communicate with the server.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Valid values:&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# - KubeConfig: use the file specified in kubeConfigFile to locate the&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   server.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# - InClusterConfig: use the in-cluster configuration to call the&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   SubjectAccessReview API hosted by kube-apiserver. This mode is not&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   allowed for kube-apiserver.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeConfig&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Path to KubeConfigFile for connection info&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Required, if connectionInfo.Type is KubeConfig&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kubeConfigFile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/kube-system-authz-webhook.yaml&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# matchConditions is a list of conditions that must be met for a request to be sent to this&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# webhook. An empty list of matchConditions matches all requests.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# There are a maximum of 64 match conditions allowed.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# The exact matching logic is (in order):&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   1. If at least one matchCondition evaluates to FALSE, then the webhook is skipped.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   2. If ALL matchConditions evaluate to TRUE, then the webhook is called.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#   3. If at least one matchCondition evaluates to an error (but none are FALSE):&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#      - If failurePolicy=Deny, then the webhook rejects the request&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#      - If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# expression represents the expression which will be evaluated by CEL. Must evaluate to bool.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# CEL expressions have access to the contents of the SubjectAccessReview in v1 version.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# If version specified by subjectAccessReviewVersion in the request variable is v1beta1,&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# the contents would be converted to the v1 version before evaluating the CEL expression.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;#&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only send resource requests to the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;has(request.resourceAttributes)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept requests to kube-system&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.namespace == &amp;#39;kube-system&amp;#39;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# don&amp;#39;t intercept requests from kube-system service accounts&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;!(&amp;#39;system:serviceaccounts:kube-system&amp;#39; in request.user.groups)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Node&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;node&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;RBAC&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;rbac&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;in-cluster-authorizer&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;webhook&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;authorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;5m&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;unauthorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;timeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;3s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;subjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;NoOpinion&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;connectionInfo&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;InClusterConfig&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The following configuration examples illustrate real-world scenarios that need
the ability to specify multiple webhooks with distinct settings, precedence
order, and failure modes.&lt;/p&gt;
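&lt;p&gt;To use a configuration like the one above, point the kube-apiserver at the file with the &lt;code&gt;--authorization-config&lt;/code&gt; flag, which is used instead of the &lt;code&gt;--authorization-mode&lt;/code&gt; and &lt;code&gt;--authorization-webhook-*&lt;/code&gt; command line flags. As a sketch (the file path here is just an example):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;kube-apiserver --authorization-config=/etc/kubernetes/authorization-config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;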
&lt;h3 id=&#34;protecting-installed-crds&#34;&gt;Protecting Installed CRDs&lt;/h3&gt;
&lt;p&gt;Ensuring the availability of Custom Resource Definitions (CRDs) at cluster startup
has been a key demand. One of the blockers to having a controller reconcile
those CRDs is the lack of a protection mechanism for them, which can be achieved
through multiple authorization webhooks. Previously, this could not be done,
because the Kubernetes API server&#39;s authorization chain only supported a single
webhook. Now, with the Structured
Authorization Configuration feature, administrators can specify multiple
webhooks, offering a solution where RBAC falls short, especially when denying
permissions to &#39;non-system&#39; users for certain CRDs.&lt;/p&gt;
&lt;p&gt;Assuming the following for this scenario:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &amp;quot;protected&amp;quot; CRDs are installed.&lt;/li&gt;
&lt;li&gt;They can only be modified by users in the group &lt;code&gt;admin&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthorizationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;authorizers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;system-crd-protector&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;webhook&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;unauthorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;timeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;3s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;subjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditionSubjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Deny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;connectionInfo&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeConfig&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kubeConfigFile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/files/kube-system-authz-webhook.yaml&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only send resource requests to the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;has(request.resourceAttributes)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept requests for CRDs&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.resource == &amp;#34;customresourcedefinitions&amp;#34;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.group == &amp;#34;apiextensions.k8s.io&amp;#34;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept update, patch, delete, or deletecollection requests&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.verb in [&amp;#39;update&amp;#39;, &amp;#39;patch&amp;#39;, &amp;#39;delete&amp;#39;,&amp;#39;deletecollection&amp;#39;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Node&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;RBAC&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;preventing-unnecessarily-nested-webhooks&#34;&gt;Preventing unnecessarily nested webhooks&lt;/h3&gt;
&lt;p&gt;A system administrator may want to apply specific validations to requests
before handing them off to webhooks built with frameworks like Open Policy
Agent. In the past, achieving this required running nested webhooks within the
single webhook added to the authorization chain. The Structured Authorization
Configuration feature simplifies this process, offering a structured API to
selectively trigger additional webhooks only when needed. It also enables
administrators to set distinct failure policies for each webhook, ensuring more
consistent and predictable responses.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthorizationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;authorizers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;system-crd-protector&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;webhook&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;unauthorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;timeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;3s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;subjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditionSubjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Deny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;connectionInfo&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeConfig&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kubeConfigFile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/files/kube-system-authz-webhook.yaml&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only send resource requests to the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;has(request.resourceAttributes)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept requests for CRDs&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.resource == &amp;#34;customresourcedefinitions&amp;#34;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.group == &amp;#34;apiextensions.k8s.io&amp;#34;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept update, patch, delete, or deletecollection requests&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.verb in [&amp;#39;update&amp;#39;, &amp;#39;patch&amp;#39;, &amp;#39;delete&amp;#39;,&amp;#39;deletecollection&amp;#39;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Node&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;RBAC&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;opa&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Webhook&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;webhook&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;unauthorizedTTL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;timeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;3s&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;subjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditionSubjectAccessReviewVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Deny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;connectionInfo&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeConfig&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kubeConfigFile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/files/opa-default-authz-webhook.yaml&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only send resource requests to the webhook&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;has(request.resourceAttributes)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# only intercept requests to default namespace&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;request.resourceAttributes.namespace == &amp;#39;default&amp;#39;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# don&amp;#39;t intercept requests from default service accounts&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;!(&amp;#39;system:serviceaccounts:default&amp;#39; in request.groups)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h2&gt;
&lt;p&gt;From Kubernetes 1.30, the feature is in beta and enabled by default. For
Kubernetes v1.31, we expect the feature to stay in beta while we get more
feedback from users. Once it is ready for GA, the feature flag will be removed,
and the configuration file version will be promoted to v1.&lt;/p&gt;
&lt;p&gt;Learn more about this feature on the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;structured authorization
configuration&lt;/a&gt;
page of the Kubernetes documentation. You can also follow along with
&lt;a href=&#34;https://kep.k8s.io/3221&#34;&gt;KEP-3221&lt;/a&gt; to track progress in upcoming Kubernetes
releases.&lt;/p&gt;
&lt;h2 id=&#34;call-to-action&#34;&gt;Call to action&lt;/h2&gt;
&lt;p&gt;In this post, we have covered the benefits of the Structured Authorization
Configuration feature in Kubernetes v1.30 and a few sample configurations for
real-world scenarios. To use this feature, you must specify the path to the
authorization configuration using the &lt;code&gt;--authorization-config&lt;/code&gt; command line
argument. From Kubernetes 1.30, the feature is in beta and enabled by default.
If you want to keep using command line flags instead of a configuration file,
those will continue to work as-is. However, specifying both &lt;code&gt;--authorization-config&lt;/code&gt; and
&lt;code&gt;--authorization-mode&lt;/code&gt;/&lt;code&gt;--authorization-webhook-*&lt;/code&gt; is not allowed; you need to drop
the older flags from your kube-apiserver command.&lt;/p&gt;
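&lt;p&gt;As a sketch, the migration changes the kube-apiserver invocation roughly as follows (the file paths shown here are illustrative):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;# Before: legacy command line flags
kube-apiserver --authorization-mode=Node,RBAC,Webhook \
  --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.yaml ...

# After: a single flag pointing at the configuration file
kube-apiserver --authorization-config=/etc/kubernetes/authorization_config.yaml ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;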
&lt;p&gt;The following kind Cluster configuration sets that command line argument on the
API server so that it loads an AuthorizationConfiguration from a file
(&lt;code&gt;authorization_config.yaml&lt;/code&gt;) in the files directory. Any kubeconfig and
certificate files the webhooks need can also be placed in that directory.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Cluster&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;kind.x-k8s.io/v1alpha4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;featureGates&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;StructuredAuthorizationConfiguration&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# enabled by default in v1.30&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kubeadmConfigPatches&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- |&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;    kind: ClusterConfiguration
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;    metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;      name: config
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;    apiServer:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;      extraArgs:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;        authorization-config: &amp;#34;/files/authorization_config.yaml&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;      extraVolumes:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;      - name: files
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;        hostPath: &amp;#34;/files&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;        mountPath: &amp;#34;/files&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;        readOnly: true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;role&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;control-plane&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;extraMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;files&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containerPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/files&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;We would love to hear your feedback on this feature. In particular, we would
like feedback from Kubernetes cluster administrators and authorization webhook
implementors as they build their integrations with this new API. Please reach
out to us on the
&lt;a href=&#34;https://kubernetes.slack.com/archives/C05EZFX1Z2L&#34;&gt;#sig-auth-authorizers-dev&lt;/a&gt;
channel on Kubernetes Slack.&lt;/p&gt;
&lt;h2 id=&#34;how-to-get-involved&#34;&gt;How to get involved&lt;/h2&gt;
&lt;p&gt;If you are interested in helping develop this feature, sharing feedback, or
participating in any other ongoing SIG Auth projects, please reach out on the
&lt;a href=&#34;https://kubernetes.slack.com/archives/C0EN96KUY&#34;&gt;#sig-auth&lt;/a&gt; channel on
Kubernetes Slack.&lt;/p&gt;
&lt;p&gt;You are also welcome to join the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-auth/README.md#meetings&#34;&gt;SIG Auth
meetings&lt;/a&gt;,
held every other Wednesday.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;This feature was driven by contributors from several different companies. We
would like to extend a huge thank you to everyone who contributed their time and
effort to make this possible.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Structured Authentication Configuration Moves to Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/25/structured-authentication-moves-to-beta/</link>
      <pubDate>Thu, 25 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/25/structured-authentication-moves-to-beta/</guid>
      <description>
        
        
        &lt;p&gt;With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta.&lt;/p&gt;
&lt;p&gt;Today&#39;s article is about &lt;em&gt;authentication&lt;/em&gt;: finding out who&#39;s performing a task, and checking
that they are who they say they are. Check back tomorrow to find out what&#39;s new in
Kubernetes v1.30 around &lt;em&gt;authorization&lt;/em&gt; (deciding what someone can and can&#39;t access).&lt;/p&gt;
&lt;h2 id=&#34;motivation&#34;&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Kubernetes has had a long-standing need for a more flexible and extensible
authentication system. The current system, while powerful, has some limitations
that make it difficult to use in certain scenarios. For example, it is not
possible to use multiple authenticators of the same type (e.g., multiple JWT
authenticators) or to change the configuration without restarting the API server. The
Structured Authentication Configuration feature is the first step towards
addressing these limitations and providing a more flexible and extensible way
to configure authentication in Kubernetes.&lt;/p&gt;
&lt;h2 id=&#34;what-is-structured-authentication-configuration&#34;&gt;What is structured authentication configuration?&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.30 builds on the experimental support for configuring authentication based on
a file, which was added as alpha in Kubernetes v1.29. At this beta stage, Kubernetes only supports configuring JWT
authenticators, which serve as the next iteration of the existing OIDC
authenticator. A JWT authenticator authenticates
Kubernetes users using JWT-compliant tokens. The authenticator
attempts to parse a raw ID token and verify that it was signed by the configured
issuer.&lt;/p&gt;
&lt;p&gt;The Kubernetes project added configuration from a file so that it can provide more
flexibility than using command line options (which continue to work, and are still supported).
Supporting a configuration file also makes it easy to deliver further improvements in upcoming
releases.&lt;/p&gt;
&lt;h3 id=&#34;benefits-of-structured-authentication-configuration&#34;&gt;Benefits of structured authentication configuration&lt;/h3&gt;
&lt;p&gt;Here are the benefits of using a configuration file to configure cluster authentication:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Multiple JWT authenticators&lt;/strong&gt;: You can configure multiple JWT authenticators
simultaneously. This allows you to use multiple identity providers (e.g.,
Okta, Keycloak, GitLab) without needing to use an intermediary like Dex
that handles multiplexing between multiple identity providers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic configuration&lt;/strong&gt;: You can change the configuration without
restarting the API server. This allows you to add, remove, or modify
authenticators without disrupting the API server.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Any JWT-compliant token&lt;/strong&gt;: You can use any JWT-compliant token for
authentication. This allows you to use tokens from any identity provider that
supports JWT. The minimum valid JWT payload must contain the claims documented
in &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authentication/#using-authentication-configuration&#34;&gt;structured authentication configuration&lt;/a&gt;
page in the Kubernetes documentation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CEL (Common Expression Language) support&lt;/strong&gt;: You can use &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt;
to determine whether the token&#39;s claims match the user&#39;s attributes in Kubernetes (e.g.,
username, group). This allows you to use complex logic to determine whether a
token is valid.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multiple audiences&lt;/strong&gt;: You can configure multiple audiences for a single
authenticator. This allows you to use the same authenticator for multiple
audiences, such as using a different OAuth client for &lt;code&gt;kubectl&lt;/code&gt; and dashboard.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Using identity providers that don&#39;t support OpenID Connect discovery&lt;/strong&gt;: You
can use identity providers that don&#39;t support &lt;a href=&#34;https://openid.net/specs/openid-connect-discovery-1_0.html&#34;&gt;OpenID Connect
discovery&lt;/a&gt;. The only
requirement is to host the discovery document at a different location than the
issuer (such as locally in the cluster) and specify the &lt;code&gt;issuer.discoveryURL&lt;/code&gt; in
the configuration file.&lt;/li&gt;
&lt;/ol&gt;
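&lt;p&gt;As a small illustration of the CEL support described above, a &lt;code&gt;claimMappings&lt;/code&gt; fragment could derive the Kubernetes username from a token claim while adding a prefix to avoid name clashes (this is a sketch; the &lt;code&gt;sub&lt;/code&gt; claim and the &lt;code&gt;oidc:&lt;/code&gt; prefix are illustrative choices, not requirements):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;claimMappings:
  username:
    # CEL string concatenation: prefix the &#34;sub&#34; claim so that usernames
    # from this authenticator cannot collide with other identities
    expression: &#34;&#39;oidc:&#39; + claims.sub&#34;
&lt;/code&gt;&lt;/pre&gt;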
&lt;h2 id=&#34;how-to-use-structured-authentication-configuration&#34;&gt;How to use Structured Authentication Configuration&lt;/h2&gt;
&lt;p&gt;To use structured authentication configuration, you specify
the path to the authentication configuration using the &lt;code&gt;--authentication-config&lt;/code&gt;
command line argument in the API server. The configuration file is a YAML file
that specifies the authenticators and their configuration. Here is an example
configuration file that configures two JWT authenticators:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthenticationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Someone with a valid token from either of these issuers could authenticate&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# against this cluster.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;jwt&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer1.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audienceMatchPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;MatchAny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.hd == &amp;#34;example.com&amp;#34;&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;the hosted domain name must be example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.username&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.groups&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.uid&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;extra&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;example.com/tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;userValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;!user.username.startsWith(&amp;#39;system:&amp;#39;)&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;username cannot use reserved system: prefix&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# second authenticator that exposes the discovery document at a different location&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# than the issuer&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer2.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;discoveryURL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://discovery.example.com/.well-known/openid-configuration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audienceMatchPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;MatchAny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.hd == &amp;#34;example.com&amp;#34;&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;the hosted domain name must be example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.username&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.groups&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.uid&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;extra&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;example.com/tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;userValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;!user.username.startsWith(&amp;#39;system:&amp;#39;)&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;username cannot use reserved system: prefix&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;migration-from-command-line-arguments-to-configuration-file&#34;&gt;Migration from command line arguments to configuration file&lt;/h2&gt;
&lt;p&gt;The Structured Authentication Configuration feature is designed to be
backwards-compatible with the existing command line option-based approach for
configuring the JWT authenticator. This means that you can continue to use the existing
command line options to configure the JWT authenticator. However, we (Kubernetes SIG Auth)
recommend migrating to the new configuration file-based approach, as it provides more
flexibility and extensibility.&lt;/p&gt;


&lt;div class=&#34;alert alert-primary&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;alert-heading&#34;&gt;Note&lt;/h4&gt;

    &lt;p&gt;If you specify &lt;code&gt;--authentication-config&lt;/code&gt; along with any of the &lt;code&gt;--oidc-*&lt;/code&gt; command line arguments, this is
a misconfiguration. In this situation, the API server reports an error and then immediately exits.&lt;/p&gt;
&lt;p&gt;If you want to switch to using structured authentication configuration, you have to remove the &lt;code&gt;--oidc-*&lt;/code&gt;
command line arguments, and use the configuration file instead.&lt;/p&gt;


&lt;/div&gt;

&lt;p&gt;Here is an example of how to migrate from the command-line flags to the
configuration file:&lt;/p&gt;
&lt;h3 id=&#34;command-line-arguments&#34;&gt;Command-line arguments&lt;/h3&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-issuer-url&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;https://issuer.example.com
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-client-id&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;example-client-id
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-username-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;username
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-groups-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;groups
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-username-prefix&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;oidc:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-groups-prefix&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;oidc:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-required-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;hd=example.com&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-required-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;admin=true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-ca-file&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/path/to/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;There is no equivalent in the configuration file for the &lt;code&gt;--oidc-signing-algs&lt;/code&gt; flag.
For Kubernetes v1.30, the authenticator supports all the asymmetric algorithms listed in
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/b4935d910dcf256288694391ef675acfbdb8e7a3/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L222-L233&#34;&gt;&lt;code&gt;oidc.go&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;configuration-file&#34;&gt;Configuration file&lt;/h3&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthenticationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;jwt&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- example-client-id&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;certificateAuthority&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;value is the content of file /path/to/ca.pem&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;username&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;prefix&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;oidc:&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;groups&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;prefix&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;oidc:&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;hd&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredValue&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admin&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredValue&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h2&gt;
&lt;p&gt;For Kubernetes v1.31, we expect the feature to stay in beta while we get more
feedback. In the coming releases, we want to investigate:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Making distributed claims work via CEL expressions.&lt;/li&gt;
&lt;li&gt;Egress selector configuration support for calls to &lt;code&gt;issuer.url&lt;/code&gt; and
&lt;code&gt;issuer.discoveryURL&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can learn more about this feature on the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authentication/#using-authentication-configuration&#34;&gt;structured authentication
configuration&lt;/a&gt;
page in the Kubernetes documentation. You can also follow
&lt;a href=&#34;https://kep.k8s.io/3331&#34;&gt;KEP-3331&lt;/a&gt; to track progress across upcoming
Kubernetes releases.&lt;/p&gt;
&lt;h2 id=&#34;try-it-out&#34;&gt;Try it out&lt;/h2&gt;
&lt;p&gt;In this post, I have covered the benefits the Structured Authentication
Configuration feature brings in Kubernetes v1.30. To use this feature, you must specify the path to the
authentication configuration using the &lt;code&gt;--authentication-config&lt;/code&gt; command line
argument. From Kubernetes v1.30, the feature is in beta and enabled by default.
If you want to keep using command line arguments instead of a configuration file,
those will continue to work as-is.&lt;/p&gt;
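&lt;p&gt;For example, assuming the configuration file is stored at a hypothetical path such as &lt;code&gt;/etc/kubernetes/authentication-config.yaml&lt;/code&gt;, the API server would be started with something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kube-apiserver --authentication-config=/etc/kubernetes/authentication-config.yaml&lt;/code&gt;&lt;/pre&gt;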
&lt;p&gt;We would love to hear your feedback on this feature. Please reach out to us on the
&lt;a href=&#34;https://kubernetes.slack.com/archives/C04UMAUC4UA&#34;&gt;#sig-auth-authenticators-dev&lt;/a&gt;
channel on Kubernetes Slack (for an invitation, visit &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt;).&lt;/p&gt;
&lt;h2 id=&#34;how-to-get-involved&#34;&gt;How to get involved&lt;/h2&gt;
&lt;p&gt;If you are interested in getting involved in the development of this feature,
sharing feedback, or participating in any other ongoing SIG Auth projects, please
reach out on the &lt;a href=&#34;https://kubernetes.slack.com/archives/C0EN96KUY&#34;&gt;#sig-auth&lt;/a&gt;
channel on Kubernetes Slack.&lt;/p&gt;
&lt;p&gt;You are also welcome to join the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-auth/README.md#meetings&#34;&gt;SIG Auth
meetings&lt;/a&gt;,
held every other Wednesday.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Validating Admission Policy Is Generally Available</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/24/validating-admission-policy-ga/</link>
      <pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/24/validating-admission-policy-ga/</guid>
      <description>
        
        
        &lt;p&gt;On behalf of the Kubernetes project, I am excited to announce that ValidatingAdmissionPolicy has reached
&lt;strong&gt;general availability&lt;/strong&gt;
as part of the Kubernetes 1.30 release. If you have not yet read about this new declarative alternative to
validating admission webhooks, you may want to read our
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/&#34;&gt;previous post&lt;/a&gt; about the new feature.
If you have already heard about ValidatingAdmissionPolicies and you are eager to try them out,
there is no better time to do it than now.&lt;/p&gt;
&lt;p&gt;Let&#39;s get a taste of ValidatingAdmissionPolicy by replacing a simple webhook.&lt;/p&gt;
&lt;h2 id=&#34;example-admission-webhook&#34;&gt;Example admission webhook&lt;/h2&gt;
&lt;p&gt;First, let&#39;s take a look at an example of a simple webhook. Here is an excerpt from a webhook that
enforces &lt;code&gt;runAsNonRoot&lt;/code&gt;, &lt;code&gt;readOnlyRootFilesystem&lt;/code&gt;, &lt;code&gt;allowPrivilegeEscalation&lt;/code&gt;, and &lt;code&gt;privileged&lt;/code&gt; to be set to the least permissive values.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; &lt;span style=&#34;color:#00a000&#34;&gt;verifyDeployment&lt;/span&gt;(deploy &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;appsv1.Deployment) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;var&lt;/span&gt; errs []&lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;for&lt;/span&gt; i, c &lt;span style=&#34;color:#666&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;range&lt;/span&gt; deploy.Spec.Template.Spec.Containers {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.Name &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %d has no name&amp;#34;&lt;/span&gt;, i)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q does not have SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.RunAsNonRoot &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;||&lt;/span&gt; !&lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.RunAsNonRoot {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must set RunAsNonRoot to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.ReadOnlyRootFilesystem &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;||&lt;/span&gt; !&lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.ReadOnlyRootFilesystem {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must set ReadOnlyRootFilesystem to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.AllowPrivilegeEscalation &lt;span style=&#34;color:#666&#34;&gt;!=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.AllowPrivilegeEscalation {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must NOT set AllowPrivilegeEscalation to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.Privileged &lt;span style=&#34;color:#666&#34;&gt;!=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.Privileged {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must NOT set Privileged to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; errors.&lt;span style=&#34;color:#00a000&#34;&gt;NewAggregate&lt;/span&gt;(errs)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Check out &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks&#34;&gt;What are admission webhooks?&lt;/a&gt;
Or see the &lt;a href=&#34;webhook.go&#34;&gt;full code&lt;/a&gt; of this webhook to follow along with this walkthrough.&lt;/p&gt;
&lt;h2 id=&#34;the-policy&#34;&gt;The policy&lt;/h2&gt;
&lt;p&gt;Now let&#39;s try to recreate the validation faithfully with a ValidatingAdmissionPolicy.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.runAsNonRoot) &amp;amp;&amp;amp; c.securityContext.runAsNonRoot)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.readOnlyRootFilesystem) &amp;amp;&amp;amp; c.securityContext.readOnlyRootFilesystem)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Create the policy with &lt;code&gt;kubectl&lt;/code&gt;. Great, no complaints so far. But let&#39;s fetch the policy object and take a look at its status.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get -oyaml validatingadmissionpolicies/pod-security.policy.example.com
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;typeChecking&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expressionWarnings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;fieldRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;spec.validations[3].expression&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;warning&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;|&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          apps/v1, Kind=Deployment: ERROR: &amp;lt;input&amp;gt;:1:76: undefined field &amp;#39;Privileged&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | ...........................................................................^
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          ERROR: &amp;lt;input&amp;gt;:1:128: undefined field &amp;#39;Privileged&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | ...............................................................................................................................^&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The policy was checked against its matched type, which is &lt;code&gt;apps/v1.Deployment&lt;/code&gt;.
Looking at the &lt;code&gt;fieldRef&lt;/code&gt;, the problem was with the expression at index 3 (indexes start at 0).
The expression in question accessed an undefined &lt;code&gt;Privileged&lt;/code&gt; field.
Ah, it looks like a copy-and-paste error. The field name should be lowercase.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.runAsNonRoot) &amp;amp;&amp;amp; c.securityContext.runAsNonRoot)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.readOnlyRootFilesystem) &amp;amp;&amp;amp; c.securityContext.readOnlyRootFilesystem)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.privileged) || !c.securityContext.privileged)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Check its status again, and you should see all warnings cleared.&lt;/p&gt;
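&lt;p&gt;To check the status, you can fetch the policy object and inspect its &lt;code&gt;status&lt;/code&gt; field, for example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;kubectl get validatingadmissionpolicy pod-security.policy.example.com -o yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;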
&lt;p&gt;Next, let&#39;s create a namespace for our tests.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create namespace policy-test
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then, bind the policy to the namespace. For now, set the action to &lt;code&gt;Warn&lt;/code&gt;
so that the policy prints out &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/09/03/warnings/&#34;&gt;warnings&lt;/a&gt; instead of rejecting the requests.
This is especially useful for collecting results from all expressions during development and automated testing.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicyBinding&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy-binding.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;policyName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validationActions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;Warn&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchResources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespaceSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;kubernetes.io/metadata.name&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;policy-test&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Now, test out the policy enforcement.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create -n policy-test -f- &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;apiVersion: apps/v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;kind: Deployment
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  selector:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    matchLabels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  template:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      containers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      - image: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        securityContext:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          privileged: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          allowPrivilegeEscalation: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must set runAsNonRoot to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must set readOnlyRootFilesystem to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must NOT set allowPrivilegeEscalation to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must NOT set privileged to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Error from server: error when creating &amp;#34;STDIN&amp;#34;: admission webhook &amp;#34;webhook.example.com&amp;#34; denied the request: [container &amp;#34;nginx&amp;#34; must set RunAsNonRoot to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must set ReadOnlyRootFilesystem to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must NOT set AllowPrivilegeEscalation to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must NOT set Privileged to true in its SecurityContext]
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Looks great! The policy and the webhook give equivalent results.
After testing a few more cases, once we are confident in our policy, it is time for some cleanup. Two issues stand out:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Every expression repeats the access to &lt;code&gt;object.spec.template.spec.containers&lt;/code&gt; and to each container&#39;s &lt;code&gt;securityContext&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;The pattern of checking that a field is present before accessing it is verbose.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Fortunately, since Kubernetes 1.28, we have solutions for both issues.
Variable composition allows us to extract repeated sub-expressions into their own variables.
Kubernetes also enables &lt;a href=&#34;https://github.com/google/cel-spec/wiki/proposal-246&#34;&gt;the optional library&lt;/a&gt; for CEL, which is excellent for working with fields that are, you guessed it, optional.&lt;/p&gt;
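&lt;p&gt;As a rough sketch of what the optional syntax buys us (the expressions below are illustrative only, not part of the policy), a presence-check chain and its optional-based equivalent look like:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;// verbose: guard every access with has()
has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.runAsNonRoot) &amp;amp;&amp;amp; c.securityContext.runAsNonRoot
// equivalent, with optional chaining: an absent field yields an empty optional,
// which never compares equal to optional.of(true)
c.?securityContext.?runAsNonRoot == optional.of(true)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;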
&lt;p&gt;With both features in mind, let&#39;s refactor the policy a bit.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;variables&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;containers&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;securityContexts&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;variables.containers.map(c, c.?securityContext)&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?runAsNonRoot == optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?readOnlyRootFilesystem == optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?allowPrivilegeEscalation != optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?privileged != optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The policy is now much cleaner and more readable. Update the policy, and you should see
it function the same as before.&lt;/p&gt;
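&lt;p&gt;For example, assuming the refactored policy is saved as &lt;code&gt;policy.yaml&lt;/code&gt; (the filename here is arbitrary), you can update it with:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;kubectl apply -f policy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;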
&lt;p&gt;Now let&#39;s change the policy binding from warning to actually denying requests that fail validation.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicyBinding&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy-binding.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;policyName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validationActions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;Deny&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchResources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespaceSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;kubernetes.io/metadata.name&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;policy-test&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And finally, remove the webhook. Now the result should include only messages from
the policy.&lt;/p&gt;
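&lt;p&gt;Assuming the webhook was registered through a ValidatingWebhookConfiguration named &lt;code&gt;webhook.example.com&lt;/code&gt; (the configuration name here is hypothetical), removing it would look like:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;kubectl delete validatingwebhookconfiguration webhook.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;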
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create -n policy-test -f- &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;apiVersion: apps/v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;kind: Deployment
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  selector:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    matchLabels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  template:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      containers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      - image: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        securityContext:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          privileged: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          allowPrivilegeEscalation: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;The deployments &amp;#34;nginx&amp;#34; is invalid: : ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39; denied request: all containers must set runAsNonRoot to true
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Note that, by design, the policy stops evaluating after the first expression that causes the request to be denied.
This is different from what happens when the expressions generate only warnings.&lt;/p&gt;
&lt;h2 id=&#34;set-up-monitoring&#34;&gt;Set up monitoring&lt;/h2&gt;
&lt;p&gt;Unlike a webhook, a policy does not run as a dedicated process that can expose its own metrics.
Instead, you can rely on the metrics that the API server exposes on its behalf.&lt;/p&gt;
&lt;p&gt;Here are some examples of common monitoring tasks, written in the Prometheus query language (PromQL).&lt;/p&gt;
&lt;p&gt;To find the 95th percentile execution duration of the policy shown above:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;histogram_quantile(0.95, sum(rate(apiserver_validating_admission_policy_check_duration_seconds_bucket{policy=&amp;#34;pod-security.policy.example.com&amp;#34;}[5m])) by (le)) 
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;To find the rate of policy evaluations:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;rate(apiserver_validating_admission_policy_check_total{policy=&amp;#34;pod-security.policy.example.com&amp;#34;}[5m])
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You can read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/instrumentation/metrics/&#34;&gt;the metrics reference&lt;/a&gt; to learn more about the metrics above.
The metrics of ValidatingAdmissionPolicy are currently in alpha,
and more and better metrics will be added as the feature graduates towards stability in future releases.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Read-only volume mounts can be finally literally read-only</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/23/recursive-read-only-mounts/</link>
      <pubDate>Tue, 23 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/23/recursive-read-only-mounts/</guid>
      <description>
        
        
        &lt;p&gt;Read-only volume mounts have been a feature of Kubernetes since the beginning.
Surprisingly, read-only mounts are not completely read-only under certain conditions on Linux.
As of the v1.30 release, they can be made completely read-only,
with alpha support for &lt;em&gt;recursive read-only mounts&lt;/em&gt;.&lt;/p&gt;
&lt;h2 id=&#34;read-only-volume-mounts-are-not-really-read-only-by-default&#34;&gt;Read-only volume mounts are not really read-only by default&lt;/h2&gt;
&lt;p&gt;Volume mounts can be deceptively complicated.&lt;/p&gt;
&lt;p&gt;You might expect that the following manifest makes everything under &lt;code&gt;/mnt&lt;/code&gt; in the containers read-only:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;readOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;However, any sub-mounts beneath &lt;code&gt;/mnt&lt;/code&gt; may still be writable!
For example, consider that &lt;code&gt;/mnt/my-nfs-server&lt;/code&gt; is writable on the host.
Inside the container, writes to &lt;code&gt;/mnt/*&lt;/code&gt; will be rejected but &lt;code&gt;/mnt/my-nfs-server/*&lt;/code&gt; will still be writable.&lt;/p&gt;
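&lt;p&gt;As an illustrative sketch (the paths here are hypothetical), you could observe this from a shell inside such a container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;# The top-level read-only mount rejects writes...
$ touch /mnt/file
touch: /mnt/file: Read-only file system
# ...but a submount beneath it may still accept them
$ touch /mnt/my-nfs-server/file
$ echo $?
0
&lt;/code&gt;&lt;/pre&gt;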
&lt;h2 id=&#34;new-mount-option-recursivereadonly&#34;&gt;New mount option: recursiveReadOnly&lt;/h2&gt;
&lt;p&gt;Kubernetes 1.30 added a new mount option, &lt;code&gt;recursiveReadOnly&lt;/code&gt;, to make submounts read-only recursively.&lt;/p&gt;
&lt;p&gt;The option can be enabled as follows:
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;display:grid;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;readOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# NEW&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Possible values are `Enabled`, `IfPossible`, and `Disabled`.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Needs to be specified in conjunction with `readOnly: true`.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;recursiveReadOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Enabled&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;This is implemented by applying the &lt;code&gt;MOUNT_ATTR_RDONLY&lt;/code&gt; attribute with the &lt;code&gt;AT_RECURSIVE&lt;/code&gt; flag
using &lt;a href=&#34;https://man7.org/linux/man-pages/man2/mount_setattr.2.html&#34;&gt;&lt;code&gt;mount_setattr(2)&lt;/code&gt;&lt;/a&gt; added in
Linux kernel v5.12.&lt;/p&gt;
&lt;p&gt;For backwards compatibility, the &lt;code&gt;recursiveReadOnly&lt;/code&gt; field is not a replacement for &lt;code&gt;readOnly&lt;/code&gt;,
but is used &lt;em&gt;in conjunction&lt;/em&gt; with it.
To get a properly recursive read-only mount, you must set both fields.&lt;/p&gt;
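&lt;p&gt;Once such a pod is running, the kubelet reports the effective per-mount state in the pod status, so you can check whether a mount really became recursively read-only. A sketch, assuming a pod named &lt;code&gt;my-pod&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;# Show the volume mount status (including recursive read-only state)
# reported by the kubelet for the first container
$ kubectl get pod my-pod \
    -o jsonpath=&#39;{.status.containerStatuses[0].volumeMounts}&#39;
&lt;/code&gt;&lt;/pre&gt;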
&lt;h2 id=&#34;availability&#34;&gt;Feature availability&lt;/h2&gt;
&lt;p&gt;To enable &lt;code&gt;recursiveReadOnly&lt;/code&gt; mounts, the following components have to be used:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes: v1.30 or later, with the &lt;code&gt;RecursiveReadOnlyMounts&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate&lt;/a&gt; enabled.
As of v1.30, the gate is marked as alpha.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CRI runtime:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;containerd: v2.0 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OCI runtime:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;runc: v1.1 or later&lt;/li&gt;
&lt;li&gt;crun: v1.8.6 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Linux kernel: v5.12 or later&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;what-s-next&#34;&gt;What&#39;s next?&lt;/h2&gt;
&lt;p&gt;Kubernetes SIG Node hopes - and expects - that the feature will be promoted to beta and eventually
general availability (GA) in future releases of Kubernetes, so that users no longer need to enable
the feature gate manually.&lt;/p&gt;
&lt;p&gt;The default value of &lt;code&gt;recursiveReadOnly&lt;/code&gt; will still remain &lt;code&gt;Disabled&lt;/code&gt;, for backwards compatibility.&lt;/p&gt;
&lt;h2 id=&#34;how-can-i-learn-more&#34;&gt;How can I learn more?&lt;/h2&gt;
&lt;!-- https://github.com/kubernetes/website/pull/45159 --&gt;
&lt;p&gt;Please check out the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volumes/#read-only-mounts&#34;&gt;documentation&lt;/a&gt;
for further details on &lt;code&gt;recursiveReadOnly&lt;/code&gt; mounts.&lt;/p&gt;
&lt;h2 id=&#34;how-to-get-involved&#34;&gt;How to get involved?&lt;/h2&gt;
&lt;p&gt;This feature is driven by the SIG Node community. Please join us to connect with
the community and share your ideas and feedback on this feature and
beyond. We look forward to hearing from you!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30: Beta Support For Pods With User Namespaces</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/22/userns-beta/</link>
      <pubDate>Mon, 22 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/22/userns-beta/</guid>
      <description>
        
        
        &lt;p&gt;Linux provides different namespaces to isolate processes from each other. For
example, a typical Kubernetes pod runs within a network namespace to isolate the
network identity and a PID namespace to isolate the processes.&lt;/p&gt;
&lt;p&gt;One Linux namespace that was left behind is the &lt;a href=&#34;https://man7.org/linux/man-pages/man7/user_namespaces.7.html&#34;&gt;user
namespace&lt;/a&gt;. This
namespace allows us to isolate the user and group identifiers (UIDs and GIDs) we
use inside the container from the ones on the host.&lt;/p&gt;
&lt;p&gt;This is a powerful abstraction that allows us to run containers as &amp;quot;root&amp;quot;: we
are root inside the container and can do everything root can inside the pod,
but our interactions with the host are limited to what a non-privileged user can
do. This is great for limiting the impact of a container breakout.&lt;/p&gt;
&lt;p&gt;A container breakout is when a process inside a container can break out
onto the host using some unpatched vulnerability in the container runtime or the
kernel and can access/modify files on the host or other containers. If we
run our pods with user namespaces, the privileges the container has over the
rest of the host are reduced, and the files outside the container it can access
are limited too.&lt;/p&gt;
&lt;p&gt;In Kubernetes v1.25, we introduced support for user namespaces only for stateless
pods. Kubernetes 1.28 lifted that restriction, and now, with Kubernetes 1.30, we
are moving to beta!&lt;/p&gt;
&lt;h2 id=&#34;what-is-a-user-namespace&#34;&gt;What is a user namespace?&lt;/h2&gt;
&lt;p&gt;Note: Linux user namespaces are a different concept from &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/overview/working-with-objects/namespaces/&#34;&gt;Kubernetes
namespaces&lt;/a&gt;.
The former is a Linux kernel feature; the latter is a Kubernetes feature.&lt;/p&gt;
&lt;p&gt;User namespaces are a Linux feature that isolates the UIDs and GIDs of the
containers from the ones on the host. The identifiers in the container can be
mapped to identifiers on the host in a way where the host UID/GIDs used for
different containers never overlap. Furthermore, the identifiers can be mapped
to unprivileged, non-overlapping UIDs and GIDs on the host. This brings two key
benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Prevention of lateral movement&lt;/em&gt;: As the UIDs and GIDs for different
containers are mapped to different UIDs and GIDs on the host, containers have a
harder time attacking each other, even if they escape the container boundaries.
For example, suppose container A runs with different UIDs and GIDs on the host
than container B. In that case, the operations it can do on container B&#39;s files and processes
are limited: it can only read/write what a file allows to others, as it will never
have owner or group permission (the UIDs/GIDs on the host are
guaranteed to be different for different containers).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Increased host isolation&lt;/em&gt;: As the UIDs and GIDs are mapped to unprivileged
users on the host, if a container escapes the container boundaries, even if it
runs as root inside the container, it has no privileges on the host. This
greatly protects what host files it can read/write, which process it can send
signals to, etc. Furthermore, capabilities granted are only valid inside the
user namespace and not on the host, limiting the impact a container
escape can have.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;figure class=&#34;diagram-medium &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2024-04-22-userns-beta/image.svg&#34;
         alt=&#34;Image showing IDs 0-65535 are reserved to the host, pods use higher IDs&#34;/&gt; &lt;figcaption&gt;
            &lt;h4&gt;User namespace IDs allocation&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Without using a user namespace, a container running as root in the case of a
container breakout has root privileges on the node. If some capabilities
were granted to the container, the capabilities are valid on the host too. None
of this is true when using user namespaces (modulo bugs, of course 🙂).&lt;/p&gt;
&lt;h2 id=&#34;changes-in-1-30&#34;&gt;Changes in 1.30&lt;/h2&gt;
&lt;p&gt;In Kubernetes 1.30, besides moving user namespaces to beta, the contributors
working on this feature:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Introduced a way for the kubelet to use custom ranges for the UIDs/GIDs mapping&lt;/li&gt;
&lt;li&gt;Added a way for Kubernetes to enforce that the runtime supports all the features
needed for user namespaces. If they are not supported, Kubernetes will show a
clear error when trying to create a pod with user namespaces. Before 1.30, if
the container runtime didn&#39;t support user namespaces, the pod could be created
without a user namespace.&lt;/li&gt;
&lt;li&gt;Added more tests, including &lt;a href=&#34;https://github.com/kubernetes-sigs/cri-tools/pull/1354&#34;&gt;tests in the
cri-tools&lt;/a&gt;
repository.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can check the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/user-namespaces/#set-up-a-node-to-support-user-namespaces&#34;&gt;documentation&lt;/a&gt;
on user namespaces for how to configure custom ranges for the mapping.&lt;/p&gt;
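&lt;p&gt;Opting a pod into a user namespace takes a single field, &lt;code&gt;hostUsers: false&lt;/code&gt;, in the pod spec. A minimal sketch (the pod name and image are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-pod
spec:
  # Run this pod in its own user namespace instead of the host&#39;s
  hostUsers: false
  containers:
    - name: app
      image: nginx
&lt;/code&gt;&lt;/pre&gt;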
&lt;h2 id=&#34;demo&#34;&gt;Demo&lt;/h2&gt;
&lt;p&gt;A few months ago, &lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE-2024-21626&lt;/a&gt; was disclosed. This &lt;strong&gt;vulnerability
score is 8.6 (HIGH)&lt;/strong&gt;. It allows an attacker to escape a container and
&lt;strong&gt;read/write to any path on the node and other pods hosted on the same node&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Rodrigo created a demo that exploits &lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE-2024-21626&lt;/a&gt; and shows how
the exploit, which works without user namespaces, &lt;strong&gt;is mitigated when user
namespaces are in use.&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&#34;youtube-quote-sm&#34;&gt;
  &lt;iframe src=&#34;https://www.youtube.com/embed/07y5bl5UDdA&#34; allowfullscreen title=&#34;Mitigation of CVE-2024-21626 on Kubernetes by enabling User Namespace support&#34;&gt;&lt;/iframe&gt;
&lt;/div&gt;

&lt;p&gt;Please note that even with user namespaces, an attacker can still do on the host file system
what the permission bits for &amp;quot;others&amp;quot; allow. Therefore, the CVE is not
completely prevented, but the impact is greatly reduced.&lt;/p&gt;
&lt;h2 id=&#34;node-system-requirements&#34;&gt;Node system requirements&lt;/h2&gt;
&lt;p&gt;There are requirements on the Linux kernel version and the container
runtime to use this feature.&lt;/p&gt;
&lt;p&gt;You need Linux 6.3 or greater. This is because the feature relies on a
kernel feature named idmap mounts, and support for using idmap mounts with tmpfs
was merged in Linux 6.3.&lt;/p&gt;
&lt;p&gt;If you are using &lt;a href=&#34;https://cri-o.io/&#34;&gt;CRI-O&lt;/a&gt; with crun then, as always, you can expect support for
Kubernetes 1.30 with CRI-O 1.30. Please note you also need &lt;a href=&#34;https://github.com/containers/crun&#34;&gt;crun&lt;/a&gt; 1.9 or
greater. If you are using CRI-O with &lt;a href=&#34;https://github.com/opencontainers/runc/&#34;&gt;runc&lt;/a&gt;, this is still not supported.&lt;/p&gt;
&lt;p&gt;Containerd support is currently targeted for &lt;a href=&#34;https://containerd.io/&#34;&gt;containerd&lt;/a&gt; 2.0, and
the same crun version requirements apply. If you are using containerd with runc,
this is still not supported.&lt;/p&gt;
&lt;p&gt;Please note that containerd 1.7 added &lt;em&gt;experimental&lt;/em&gt; support for user
namespaces, as implemented in Kubernetes 1.25 and 1.26. We did a redesign in
Kubernetes 1.27, which requires changes in the container runtime. Those changes
are not present in containerd 1.7, so it only works with user namespaces
support in Kubernetes 1.25 and 1.26.&lt;/p&gt;
&lt;p&gt;Another limitation of containerd 1.7 is that it needs to change the
ownership of every file and directory inside the container image during Pod
startup. This has a storage overhead and can significantly impact the
container startup latency. Containerd 2.0 will probably include an implementation
that will eliminate the added startup latency and storage overhead. Consider
this if you plan to use containerd 1.7 with user namespaces in
production.&lt;/p&gt;
&lt;p&gt;None of these containerd 1.7 limitations apply to CRI-O.&lt;/p&gt;
&lt;h2 id=&#34;how-do-i-get-involved&#34;&gt;How do I get involved?&lt;/h2&gt;
&lt;p&gt;You can reach SIG Node by several means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Slack: &lt;a href=&#34;https://kubernetes.slack.com/messages/sig-node&#34;&gt;#sig-node&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-node&#34;&gt;Mailing list&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/labels/sig%2Fnode&#34;&gt;Open Community Issues/PRs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can also contact us directly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GitHub: @rata @giuseppe @saschagrunert&lt;/li&gt;
&lt;li&gt;Slack: @rata @giuseppe @sascha&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.30: Uwubernetes</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/17/kubernetes-v1-30-release/</link>
      <pubDate>Wed, 17 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/17/kubernetes-v1-30-release/</guid>
      <description>
        
        
        &lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.30: Uwubernetes, the cutest release!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.30 introduces new stable, beta, and alpha
features. The consistent delivery of top-notch releases underscores the strength of our development
cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 45 enhancements. Of those enhancements, 17 have graduated to Stable, 18 are
entering Beta, and 10 are entering Alpha.&lt;/p&gt;
&lt;h2 id=&#34;release-theme-and-logo&#34;&gt;Release theme and logo&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.30: &lt;em&gt;Uwubernetes&lt;/em&gt;&lt;/p&gt;


&lt;figure class=&#34;release-logo &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2024-04-17-kubernetes-1.30-release/k8s-1.30.png&#34;
         alt=&#34;Kubernetes v1.30 Uwubernetes logo&#34;/&gt; 
&lt;/figure&gt;
&lt;p&gt;Kubernetes v1.30 makes your clusters cuter!&lt;/p&gt;
&lt;p&gt;Kubernetes is built and released by thousands of people from all over the world and all walks of
life. Most contributors are not being paid to do this; we build it for fun, to solve a problem, to
learn something, or for the simple love of the community. Many of us found our homes, our friends,
and our careers here. The Release Team is honored to be a part of the continued growth of
Kubernetes.&lt;/p&gt;
&lt;p&gt;For the people who built it, for the people who release it, and for the furries who keep all of our
clusters online, we present to you Kubernetes v1.30: Uwubernetes, the cutest release to date. The
name is a portmanteau of “kubernetes” and “UwU,” an emoticon used to indicate happiness or cuteness.
We’ve found joy here, but we’ve also brought joy from our outside lives that helps to make this
community as weird and wonderful and welcoming as it is. We’re so happy to share our work with you.&lt;/p&gt;
&lt;p&gt;UwU ♥️&lt;/p&gt;
&lt;h2 id=&#34;graduations-to-stable&#34;&gt;Improvements that graduated to stable in Kubernetes v1.30&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now stable following the v1.30 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;robust-volumemanager-reconstruction-after-kubelet-restart-sig-storage-https-github-com-kubernetes-community-tree-master-sig-storage&#34;&gt;Robust VolumeManager reconstruction after kubelet restart (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;This is a volume manager refactoring that allows the kubelet to populate additional information
about how existing volumes are mounted during the kubelet startup. In general, this makes volume
cleanup after kubelet restart or machine reboot more robust.&lt;/p&gt;
&lt;p&gt;This does not bring any changes for users or cluster administrators. We used the feature process and
feature gate &lt;code&gt;NewVolumeManagerReconstruction&lt;/code&gt; to be able to fall back to the previous behavior in
case something goes wrong. Now that the feature is stable, the feature gate is locked and cannot be
disabled.&lt;/p&gt;
&lt;h3 id=&#34;prevent-unauthorized-volume-mode-conversion-during-volume-restore-sig-storage-https-github-com-kubernetes-community-tree-master-sig-storage&#34;&gt;Prevent unauthorized volume mode conversion during volume restore (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;For Kubernetes v1.30, the control plane always prevents unauthorized changes to volume modes when
restoring a snapshot into a PersistentVolume. As a cluster administrator, you&#39;ll need to grant
permissions to the appropriate identity principals (for example: ServiceAccounts representing a
storage integration) if you need to allow that kind of change at restore time.&lt;/p&gt;
&lt;div class=&#34;alert alert-danger&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;Warning:&lt;/h4&gt;Action required before upgrading. The &lt;code&gt;prevent-volume-mode-conversion&lt;/code&gt; feature flag is enabled by
default in the external-provisioner &lt;code&gt;v4.0.0&lt;/code&gt; and external-snapshotter &lt;code&gt;v7.0.0&lt;/code&gt;. Volume mode change
will be rejected when creating a PVC from a VolumeSnapshot unless you perform the steps described in
the &amp;quot;Urgent Upgrade Notes&amp;quot; sections for the &lt;a href=&#34;https://github.com/kubernetes-csi/external-provisioner/releases/tag/v4.0.0&#34;&gt;external-provisioner
4.0.0&lt;/a&gt; and the
&lt;a href=&#34;https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v7.0.0&#34;&gt;external-snapshotter
v7.0.0&lt;/a&gt;.&lt;/div&gt;

&lt;p&gt;For more information on this feature also read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/volume-snapshots/#convert-volume-mode&#34;&gt;converting the volume mode of a
Snapshot&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;pod-scheduling-readiness-sig-scheduling-https-github-com-kubernetes-community-tree-master-sig-scheduling&#34;&gt;Pod Scheduling Readiness (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Pod scheduling readiness&lt;/em&gt; graduates to stable this release, after being promoted to beta in
Kubernetes v1.27.&lt;/p&gt;
&lt;p&gt;This now-stable feature lets Kubernetes avoid trying to schedule a Pod that has been defined, when
the cluster doesn&#39;t yet have the resources provisioned to allow actually binding that Pod to a node.
That&#39;s not the only use case; the custom control on whether a Pod can be allowed to schedule also
lets you implement quota mechanisms, security controls, and more.&lt;/p&gt;
&lt;p&gt;Crucially, marking these Pods as exempt from scheduling cuts the work that the scheduler would
otherwise do, churning through Pods that can&#39;t or won&#39;t schedule onto the nodes your cluster
currently has. If you have &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/cluster-administration/cluster-autoscaling/&#34;&gt;cluster
autoscaling&lt;/a&gt; active, using scheduling
gates doesn&#39;t just cut the load on the scheduler, it can also save money. Without scheduling gates,
the autoscaler might otherwise launch a node that doesn&#39;t need to be started.&lt;/p&gt;
&lt;p&gt;In Kubernetes v1.30, by specifying (or removing) a Pod&#39;s &lt;code&gt;.spec.schedulingGates&lt;/code&gt;, you can control
when a Pod is ready to be considered for scheduling. This is a stable feature and is now formally
part of the Kubernetes API definition for Pod.&lt;/p&gt;
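&lt;p&gt;As a minimal sketch (the Pod name, gate name, and image are placeholders), a Pod created with a
scheduling gate is held back by the scheduler until the gate is removed from
&lt;code&gt;.spec.schedulingGates&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/quota-check   # scheduler skips this Pod until the gate is removed
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;While the gate is present, the Pod reports a &lt;code&gt;SchedulingGated&lt;/code&gt; status; removing the
entry makes it eligible for scheduling.&lt;/p&gt;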
&lt;h3 id=&#34;min-domains-in-podtopologyspread-sig-scheduling-https-github-com-kubernetes-community-tree-master-sig-scheduling&#34;&gt;Min domains in PodTopologySpread (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;minDomains&lt;/code&gt; parameter for PodTopologySpread constraints graduates to stable this release. It
lets you define the minimum number of domains that Pods must be spread across. This feature is
designed to be used with the Cluster Autoscaler.&lt;/p&gt;
&lt;p&gt;If you use this and there aren&#39;t enough domains already present, Pods are marked as
unschedulable. The Cluster Autoscaler then provisions node(s) in new domain(s), and
you eventually get Pods spread over enough domains.&lt;/p&gt;
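&lt;p&gt;A minimal sketch of a spread constraint using &lt;code&gt;minDomains&lt;/code&gt; (the label selector is a
placeholder; note that &lt;code&gt;minDomains&lt;/code&gt; only takes effect with
&lt;code&gt;whenUnsatisfiable: DoNotSchedule&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;topologySpreadConstraints:
- maxSkew: 1
  minDomains: 3                      # treat fewer than 3 zones as a spread violation
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule   # required for minDomains to apply
  labelSelector:
    matchLabels:
      app: example
&lt;/code&gt;&lt;/pre&gt;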
&lt;h3 id=&#34;go-workspaces-for-k-k-sig-architecture-https-github-com-kubernetes-community-tree-master-sig-architecture&#34;&gt;Go workspaces for k/k (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-architecture&#34;&gt;SIG Architecture&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;The Kubernetes repo now uses Go workspaces. This should not impact end users at all, but does have an
impact on developers of downstream projects. Switching to workspaces caused some breaking changes
in the flags to the various &lt;a href=&#34;https://github.com/kubernetes/code-generator&#34;&gt;k8s.io/code-generator&lt;/a&gt;
tools. Downstream consumers should look at
&lt;a href=&#34;https://github.com/kubernetes/code-generator/blob/master/kube_codegen.sh&#34;&gt;&lt;code&gt;staging/src/k8s.io/code-generator/kube_codegen.sh&lt;/code&gt;&lt;/a&gt;
to see the changes.&lt;/p&gt;
&lt;p&gt;For full details on the changes and the reasons why Go workspaces were introduced, read &lt;a href=&#34;https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/&#34;&gt;Using Go
workspaces in Kubernetes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;graduations-to-beta&#34;&gt;Improvements that graduated to beta in Kubernetes v1.30&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now beta following the v1.30 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;node-log-query-sig-windows-https-github-com-kubernetes-community-tree-master-sig-windows&#34;&gt;Node log query (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-windows&#34;&gt;SIG Windows&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching
logs of services running on the node. To use the feature, ensure that the &lt;code&gt;NodeLogQuery&lt;/code&gt; feature
gate is enabled for that node, and that the kubelet configuration options &lt;code&gt;enableSystemLogHandler&lt;/code&gt;
and &lt;code&gt;enableSystemLogQuery&lt;/code&gt; are both set to true.&lt;/p&gt;
&lt;p&gt;Following the v1.30 release, this is now beta (you still need to enable the feature to use it,
though).&lt;/p&gt;
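&lt;p&gt;A sketch of the relevant kubelet settings, assuming you manage the kubelet via a
&lt;code&gt;KubeletConfiguration&lt;/code&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true        # beta feature gate, still off by default
enableSystemLogHandler: true
enableSystemLogQuery: true
&lt;/code&gt;&lt;/pre&gt;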
&lt;p&gt;On Linux the assumption is that service logs are available via journald. On Windows the assumption
is that service logs are available in the application log provider. Logs are also available by
reading files within &lt;code&gt;/var/log/&lt;/code&gt; (Linux) or &lt;code&gt;C:\var\log\&lt;/code&gt; (Windows). For more information, see the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/cluster-administration/system-logs/#log-query&#34;&gt;log query&lt;/a&gt; documentation.&lt;/p&gt;
&lt;h3 id=&#34;crd-validation-ratcheting-sig-api-machinery-https-github-com-kubernetes-community-tree-master-sig-api-machinery&#34;&gt;CRD validation ratcheting (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-api-machinery&#34;&gt;SIG API Machinery&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;You need to enable the &lt;code&gt;CRDValidationRatcheting&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature
gate&lt;/a&gt; to use this behavior, which then
applies to all CustomResourceDefinitions in your cluster.&lt;/p&gt;
&lt;p&gt;Provided you enabled the feature gate, Kubernetes implements &lt;em&gt;validation ratcheting&lt;/em&gt; for
CustomResourceDefinitions. The API server is willing to accept updates to resources that are not valid
after the update, provided that each part of the resource that failed to validate was not changed by
the update operation. In other words, any invalid part of the resource that remains invalid must
have already been wrong. You cannot use this mechanism to update a valid resource so that it becomes
invalid.&lt;/p&gt;
&lt;p&gt;This feature allows authors of CRDs to confidently add new validations to the OpenAPIV3 schema under
certain conditions. Users can update to the new schema safely without bumping the version of the
object or breaking workflows.&lt;/p&gt;
&lt;h3 id=&#34;contextual-logging-sig-instrumentation-https-github-com-kubernetes-community-tree-master-sig-instrumentation&#34;&gt;Contextual logging (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-instrumentation&#34;&gt;SIG Instrumentation&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Contextual Logging advances to beta in this release, empowering developers and operators to inject
customizable, correlatable contextual details like service names and transaction IDs into logs
through &lt;code&gt;WithValues&lt;/code&gt; and &lt;code&gt;WithName&lt;/code&gt;. This enhancement simplifies the correlation and analysis of log
data across distributed systems, significantly improving the efficiency of troubleshooting efforts.
By offering a clearer insight into the workings of your Kubernetes environments, Contextual Logging
ensures that operational challenges are more manageable, marking a notable step forward in
Kubernetes observability.&lt;/p&gt;
&lt;h3 id=&#34;make-kubernetes-aware-of-the-loadbalancer-behaviour-sig-network-https-github-com-kubernetes-community-tree-master-sig-network&#34;&gt;Make Kubernetes aware of the LoadBalancer behaviour (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;LoadBalancerIPMode&lt;/code&gt; feature gate is now beta and enabled by default. This feature allows
you to set the &lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt; for a Service with &lt;code&gt;type&lt;/code&gt; set to
&lt;code&gt;LoadBalancer&lt;/code&gt;. The &lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt; specifies how the load-balancer IP
behaves. It may be specified only when the &lt;code&gt;.status.loadBalancer.ingress.ip&lt;/code&gt; field is also
specified. See more details about &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/services-networking/service/#load-balancer-ip-mode&#34;&gt;specifying IPMode of load balancer
status&lt;/a&gt;.&lt;/p&gt;
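&lt;p&gt;For illustration, the status of a &lt;code&gt;LoadBalancer&lt;/code&gt; Service might then look like the
following sketch (the IP address is an example; this status is populated by your cloud provider&#39;s
controller, not set by hand):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.10
      ipMode: Proxy   # traffic is delivered via the load balancer, not sent to the IP directly
&lt;/code&gt;&lt;/pre&gt;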
&lt;h3 id=&#34;structured-authentication-configuration-sig-auth-https-github-com-kubernetes-community-tree-master-sig-auth&#34;&gt;Structured Authentication Configuration (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Structured Authentication Configuration&lt;/em&gt; graduates to beta in this release.&lt;/p&gt;
&lt;p&gt;Kubernetes has had a long-standing need for a more flexible and extensible
authentication system. The current system, while powerful, has some limitations
that make it difficult to use in certain scenarios. For example, it is not
possible to use multiple authenticators of the same type (e.g., multiple JWT
authenticators) or to change the configuration without restarting the API server. The
Structured Authentication Configuration feature is the first step towards
addressing these limitations and providing a more flexible and extensible way
to configure authentication in Kubernetes. See more details about &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authentication/#using-authentication-configuration&#34;&gt;structured
authentication configuration&lt;/a&gt;.&lt;/p&gt;
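&lt;p&gt;A minimal sketch of an &lt;code&gt;AuthenticationConfiguration&lt;/code&gt; file (the issuer URL and audience
are placeholders; the file is passed to the kube-apiserver via its authentication configuration
flag):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com
    audiences:
    - my-app
  claimMappings:
    username:
      claim: sub
      prefix: &#34;&#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because &lt;code&gt;jwt&lt;/code&gt; is a list, you can configure multiple JWT authenticators side by side,
something the flag-based configuration could not express.&lt;/p&gt;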
&lt;h3 id=&#34;structured-authorization-configuration-sig-auth-https-github-com-kubernetes-community-tree-master-sig-auth&#34;&gt;Structured Authorization Configuration (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Structured Authorization Configuration&lt;/em&gt; graduates to beta in this release.&lt;/p&gt;
&lt;p&gt;Kubernetes continues to evolve to meet the intricate requirements of system
administrators and developers alike. A critical aspect of Kubernetes that
ensures the security and integrity of the cluster is the API server
authorization. Until recently, the configuration of the authorization chain in
kube-apiserver was somewhat rigid, limited to a set of command-line flags and
allowing only a single webhook in the authorization chain. This approach, while
functional, restricted the flexibility needed by cluster administrators to
define complex, fine-grained authorization policies. The latest Structured
Authorization Configuration feature aims to revolutionize this aspect by introducing
a more structured and versatile way to configure the authorization chain, focusing
on enabling multiple webhooks and providing explicit control mechanisms. See more
details about &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;structured authorization configuration&lt;/a&gt;.&lt;/p&gt;
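&lt;p&gt;As a sketch, an &lt;code&gt;AuthorizationConfiguration&lt;/code&gt; file can place a webhook ahead of the
built-in authorizers (the webhook name and kubeconfig path are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy
  webhook:
    timeout: 3s
    subjectAccessReviewVersion: v1
    failurePolicy: NoOpinion       # fall through to the next authorizer on webhook failure
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig
- type: Node
  name: node
- type: RBAC
  name: rbac
&lt;/code&gt;&lt;/pre&gt;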
&lt;h2 id=&#34;new-alpha-features&#34;&gt;New alpha features&lt;/h2&gt;
&lt;h3 id=&#34;speed-up-recursive-selinux-label-change-sig-storage-https-github-com-kubernetes-community-tree-master-sig-storage&#34;&gt;Speed up recursive SELinux label change (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Since the v1.27 release, Kubernetes has included an optimization that sets SELinux labels on the
contents of volumes in constant time. Kubernetes achieves that speed-up using a mount
option. The slower legacy behavior requires the container runtime to recursively walk through the
whole volume and apply SELinux labelling to each file and directory individually; this is
especially noticeable for volumes with a large number of files and directories.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.27 graduated this feature as beta, but limited it to ReadWriteOncePod volumes. The
corresponding feature gate is &lt;code&gt;SELinuxMountReadWriteOncePod&lt;/code&gt;. It&#39;s still enabled by default and
remains beta in v1.30.&lt;/p&gt;
&lt;p&gt;Kubernetes v1.30 extends support for the SELinux mount option to &lt;strong&gt;all&lt;/strong&gt; volumes as alpha, with a
separate feature gate: &lt;code&gt;SELinuxMount&lt;/code&gt;. This feature gate introduces a behavioral change when
multiple Pods with different SELinux labels share the same volume. See the
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling#behavioral-changes&#34;&gt;KEP&lt;/a&gt;
for details.&lt;/p&gt;
&lt;p&gt;We strongly encourage users that run Kubernetes with SELinux enabled to test this feature and
provide any feedback on the &lt;a href=&#34;https://kep.k8s.io/1710&#34;&gt;KEP issue&lt;/a&gt;.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature gate&lt;/th&gt;
&lt;th&gt;Stage in v1.30&lt;/th&gt;
&lt;th&gt;Behavior change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SELinuxMountReadWriteOncePod&lt;/td&gt;
&lt;td&gt;Beta&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SELinuxMount&lt;/td&gt;
&lt;td&gt;Alpha&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Both feature gates &lt;code&gt;SELinuxMountReadWriteOncePod&lt;/code&gt; and &lt;code&gt;SELinuxMount&lt;/code&gt; must be enabled to test this
feature on all volumes.&lt;/p&gt;
&lt;p&gt;This feature has no effect on Windows nodes or on Linux nodes without SELinux support.&lt;/p&gt;
&lt;h3 id=&#34;recursive-read-only-rro-mounts-sig-node-https-github-com-kubernetes-community-tree-master-sig-node&#34;&gt;Recursive Read-only (RRO) mounts (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;SIG Node&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Recursive Read-Only (RRO) mounts, introduced in alpha this release, add a new layer of
security for your data. This feature lets you mark volumes and their submounts as read-only,
preventing accidental modifications. Imagine deploying a critical application where data integrity
is key—RRO Mounts ensure that your data stays untouched, reinforcing your cluster&#39;s security with an
extra safeguard. This is especially crucial in tightly controlled environments, where even the
slightest change can have significant implications.&lt;/p&gt;
&lt;h3 id=&#34;job-success-completion-policy-sig-apps-https-github-com-kubernetes-community-tree-master-sig-apps&#34;&gt;Job success/completion policy (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-apps&#34;&gt;SIG Apps&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;From Kubernetes v1.30, indexed Jobs support &lt;code&gt;.spec.successPolicy&lt;/code&gt; to define when a Job can be
declared succeeded based on succeeded Pods. This allows you to define two types of criteria:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;succeededIndexes&lt;/code&gt; indicates that the Job can be declared succeeded when these indexes succeeded,
even if other indexes failed.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;succeededCount&lt;/code&gt; indicates that the Job can be declared succeeded when the number of succeeded
indexes reaches this threshold.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After the Job meets the success policy, the Job controller terminates the lingering Pods.&lt;/p&gt;
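&lt;p&gt;A sketch of an Indexed Job using a success policy (the Job name and image are placeholders;
&lt;code&gt;successPolicy&lt;/code&gt; requires &lt;code&gt;completionMode: Indexed&lt;/code&gt; and, in v1.30, the
&lt;code&gt;JobSuccessPolicy&lt;/code&gt; feature gate):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  completions: 10
  parallelism: 10
  completionMode: Indexed
  successPolicy:
    rules:
    - succeededIndexes: &#34;0,2-4&#34;   # succeed once indexes 0, 2, 3 and 4 succeed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
&lt;/code&gt;&lt;/pre&gt;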
&lt;h3 id=&#34;traffic-distribution-for-services-sig-network-https-github-com-kubernetes-community-tree-master-sig-network&#34;&gt;Traffic distribution for services (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Kubernetes v1.30 introduces the &lt;code&gt;spec.trafficDistribution&lt;/code&gt; field within a Kubernetes Service as
alpha. This allows you to express preferences for how traffic should be routed to Service endpoints.
While &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/networking/virtual-ips/#traffic-policies&#34;&gt;traffic policies&lt;/a&gt; focus on strict
semantic guarantees, traffic distribution allows you to express &lt;em&gt;preferences&lt;/em&gt; (such as routing to
topologically closer endpoints). This can help optimize for performance, cost, or reliability. You
can use this field by enabling the &lt;code&gt;ServiceTrafficDistribution&lt;/code&gt; feature gate for your cluster and
all of its nodes. In Kubernetes v1.30, the following field value is supported:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PreferClose&lt;/code&gt;: Indicates a preference for routing traffic to endpoints that are topologically
proximate to the client. The interpretation of &amp;quot;topologically proximate&amp;quot; may vary across
implementations and could encompass endpoints within the same node, rack, zone, or even region.
Setting this value gives implementations permission to make different tradeoffs, for example
optimizing for proximity rather than equal distribution of load. You should not set this value if
such tradeoffs are not acceptable.&lt;/p&gt;
&lt;p&gt;If the field is not set, the implementation (like kube-proxy) will apply its default routing
strategy.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/networking/virtual-ips/#traffic-distribution&#34;&gt;Traffic Distribution&lt;/a&gt; for more
details.&lt;/p&gt;
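&lt;p&gt;A minimal Service sketch using this field (the Service name, selector, and port are
placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 80
  trafficDistribution: PreferClose   # prefer topologically closer endpoints
&lt;/code&gt;&lt;/pre&gt;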
&lt;h3 id=&#34;storage-version-migration-sig-api-machinery-https-github-com-kubernetes-community-tree-master-sig-api-machinery&#34;&gt;Storage Version Migration (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-api-machinery&#34;&gt;SIG API Machinery&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Kubernetes v1.30 introduces a new built-in API for &lt;em&gt;StorageVersionMigration&lt;/em&gt;.
Kubernetes relies on API data being actively re-written to support some
maintenance activities related to at-rest storage. Two prominent examples are
the versioned schema of stored resources (that is, the preferred storage schema
changing from v1 to v2 for a given resource) and encryption at rest (that is,
rewriting stale data based on a change in how the data should be encrypted).&lt;/p&gt;
&lt;p&gt;StorageVersionMigration is an alpha API that was previously available &lt;a href=&#34;https://github.com/kubernetes-sigs/kube-storage-version-migrator&#34;&gt;out of tree&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/manage-kubernetes-objects/storage-version-migration&#34;&gt;storage version migration&lt;/a&gt; for more details.&lt;/p&gt;
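&lt;p&gt;A sketch of a &lt;code&gt;StorageVersionMigration&lt;/code&gt; object requesting a rewrite of the stored
objects for one resource (the group and resource names here are placeholders for your own API):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: storagemigration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: example-migration
spec:
  resource:
    group: example.com
    version: v1
    resource: widgets
&lt;/code&gt;&lt;/pre&gt;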
&lt;h2 id=&#34;graduations-deprecations-and-removals-for-kubernetes-v1-30&#34;&gt;Graduations, deprecations and removals for Kubernetes v1.30&lt;/h2&gt;
&lt;h3 id=&#34;graduated-to-stable&#34;&gt;Graduated to stable&lt;/h3&gt;
&lt;p&gt;This lists all the features that graduated to stable (also known as &lt;em&gt;general availability&lt;/em&gt;). For a
full list of updates including new features and graduations from alpha to beta, see the &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md&#34;&gt;release
notes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This release includes a total of 17 enhancements promoted to Stable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/1610&#34;&gt;Container Resource based Pod Autoscaling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3458&#34;&gt;Remove transient node predicates from KCCM&#39;s service controller&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/4402&#34;&gt;Go workspaces for k/k&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2799&#34;&gt;Reduction of Secret-based Service Account Tokens&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3488&#34;&gt;CEL for Admission Control&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3716&#34;&gt;CEL-based admission webhook match conditions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3521&#34;&gt;Pod Scheduling Readiness&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3022&#34;&gt;Min domains in PodTopologySpread&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3141&#34;&gt;Prevent unauthorised volume mode conversion during volume restore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/647&#34;&gt;API Server Tracing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3705&#34;&gt;Cloud Dual-Stack --node-ip Handling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/24&#34;&gt;AppArmor support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3756&#34;&gt;Robust VolumeManager reconstruction after kubelet restart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3895&#34;&gt;kubectl delete: Add interactive(-i) flag&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2305&#34;&gt;Metric cardinality enforcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2681&#34;&gt;Field &lt;code&gt;status.hostIPs&lt;/code&gt; added for Pod&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3352&#34;&gt;Aggregated Discovery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;deprecations-and-removals&#34;&gt;Deprecations and removals&lt;/h3&gt;
&lt;h4 id=&#34;removed-the-securitycontextdeny-admission-plugin-deprecated-since-v1-27&#34;&gt;Removed the SecurityContextDeny admission plugin, deprecated since v1.27&lt;/h4&gt;
&lt;p&gt;(&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;, &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-security&#34;&gt;SIG Security&lt;/a&gt;, and &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-testing&#34;&gt;SIG Testing&lt;/a&gt;)
With the removal of the SecurityContextDeny admission plugin, the Pod Security Admission plugin,
available since v1.25, is recommended instead.&lt;/p&gt;
&lt;h2 id=&#34;release-notes&#34;&gt;Release notes&lt;/h2&gt;
&lt;p&gt;Check out the full details of the Kubernetes v1.30 release in our &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md&#34;&gt;release
notes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;availability&#34;&gt;Availability&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.30 is available for download on
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/releases/tag/v1.30.0&#34;&gt;GitHub&lt;/a&gt;. To get started with
Kubernetes, check out these &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tutorials/&#34;&gt;interactive tutorials&lt;/a&gt; or run
local Kubernetes clusters using &lt;a href=&#34;https://minikube.sigs.k8s.io/&#34;&gt;minikube&lt;/a&gt;. You can also easily
install v1.30 using &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/independent/create-cluster-kubeadm/&#34;&gt;kubeadm&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;release-team&#34;&gt;Release team&lt;/h2&gt;
&lt;p&gt;Kubernetes is only possible with the support, commitment, and hard work of its community. Each
release team is made up of dedicated community volunteers who work together to build the many pieces
that make up the Kubernetes releases you rely on. This requires the specialized skills of people
from all corners of our community, from the code itself to its documentation and project management.&lt;/p&gt;
&lt;p&gt;We would like to thank the entire &lt;a href=&#34;https://github.com/kubernetes/sig-release/blob/master/releases/release-1.30/release-team.md&#34;&gt;release team&lt;/a&gt;
for the hours spent hard at work to deliver the Kubernetes v1.30 release to our community. The
Release Team&#39;s membership ranges from first-time shadows to returning team leads with experience
forged over several release cycles. A very special thanks goes out to our release lead, Kat Cosgrove,
for supporting us through a successful release cycle, advocating for us, making sure that we could
all contribute in the best way possible, and challenging us to improve the release process.&lt;/p&gt;
&lt;h2 id=&#34;project-velocity&#34;&gt;Project velocity&lt;/h2&gt;
&lt;p&gt;The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity
of Kubernetes and various sub-projects. This includes everything from individual contributions to
the number of companies that are contributing and is an illustration of the depth and breadth of
effort that goes into evolving this ecosystem.&lt;/p&gt;
&lt;p&gt;In the v1.30 release cycle, which ran for 14 weeks (January 8 to April 17), we saw contributions
from &lt;a href=&#34;https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&amp;amp;var-period_name=v1.29.0%20-%20now&amp;amp;var-metric=contributions&#34;&gt;863 companies&lt;/a&gt; and &lt;a href=&#34;https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&amp;amp;var-period_name=v1.29.0%20-%20now&amp;amp;var-metric=contributions&amp;amp;var-repogroup_name=Kubernetes&amp;amp;var-repo_name=kubernetes%2Fkubernetes&amp;amp;var-country_name=All&amp;amp;var-companies=All&#34;&gt;1391 individuals&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;event-update&#34;&gt;Event update&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;KubeCon + CloudNativeCon China 2024 will take place in Hong Kong, from 21 – 23 August 2024! You
can find more information about the conference and registration on the &lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-open-source-summit-ai-dev-china/&#34;&gt;event
site&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;KubeCon + CloudNativeCon North America 2024 will take place in Salt Lake City, Utah, the United
States of America, from 12 – 15 November 2024! You can find more information about the conference
and registration on the &lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&#34;&gt;event site&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;upcoming-release-webinar&#34;&gt;Upcoming release webinar&lt;/h2&gt;
&lt;p&gt;Join members of the Kubernetes v1.30 release team on Thursday, May 23rd, 2024, at 9 A.M. PT to learn
about the major features of this release, as well as deprecations and removals to help plan for
upgrades. For more information and registration, visit the &lt;a href=&#34;https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-130-release/&#34;&gt;event
page&lt;/a&gt;
on the CNCF Online Programs site.&lt;/p&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;Get involved&lt;/h2&gt;
&lt;p&gt;The simplest way to get involved
with Kubernetes is by joining one of the many &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-list.md&#34;&gt;Special Interest
Groups&lt;/a&gt; (SIGs) that align with your
interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at
our weekly &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication&#34;&gt;community meeting&lt;/a&gt;,
and through the channels below. Thank you for your continued feedback and support.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Follow us on 𝕏 &lt;a href=&#34;https://twitter.com/kubernetesio&#34;&gt;@Kubernetesio&lt;/a&gt; for latest updates&lt;/li&gt;
&lt;li&gt;Join the community discussion on &lt;a href=&#34;https://discuss.kubernetes.io/&#34;&gt;Discuss&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Join the community on &lt;a href=&#34;http://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Post questions (or answer questions) on &lt;a href=&#34;http://stackoverflow.com/questions/tagged/kubernetes&#34;&gt;Stack
Overflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Share your Kubernetes
&lt;a href=&#34;https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform&#34;&gt;story&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read more about what’s happening with Kubernetes on the &lt;a href=&#34;https://kubernetes.io/blog/&#34;&gt;blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn more about the &lt;a href=&#34;https://github.com/kubernetes/sig-release/tree/master/release-team&#34;&gt;Kubernetes Release
Team&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;This blog was updated on April 19th, 2024 to highlight two additional changes not originally included in the release blog.&lt;/em&gt;&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Spotlight on SIG Architecture: Code Organization</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</link>
      <pubDate>Thu, 11 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</guid>
      <description>
        
        
        &lt;p&gt;&lt;em&gt;This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. We will cover &lt;a href=&#34;https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization&#34;&gt;SIG Architecture: Code Organization&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this SIG Architecture spotlight I talked with &lt;a href=&#34;https://github.com/MadhavJivrajani&#34;&gt;Madhav Jivrajani&lt;/a&gt;
(VMware), a member of the Code Organization subproject.&lt;/p&gt;
&lt;h2 id=&#34;introducing-the-code-organization-subproject&#34;&gt;Introducing the Code Organization subproject&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;: Hello Madhav, thank you for your availability. Could you start by telling us a
bit about yourself, your role and how you got involved in Kubernetes?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Madhav Jivrajani (MJ)&lt;/strong&gt;: Hello! My name is Madhav Jivrajani, I serve as a technical lead for SIG
Contributor Experience and a GitHub Admin for the Kubernetes project. Apart from that I also
contribute to SIG API Machinery and SIG Etcd, but more recently, I’ve been helping out with the work
that is needed to help Kubernetes &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions&#34;&gt;stay on supported versions of
Go&lt;/a&gt;,
and it is through this that I am involved with the Code Organization subproject of SIG Architecture.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: A project the size of Kubernetes must have unique challenges in terms of code organization
-- is this a fair assumption?  If so, what would you pick as some of the main challenges that are
specific to Kubernetes?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: That’s a fair assumption! The first interesting challenge comes from the sheer size of the
Kubernetes codebase. We have ≅2.2 million lines of Go code (which is steadily decreasing thanks to
&lt;a href=&#34;https://github.com/dims&#34;&gt;dims&lt;/a&gt; and other folks in this sub-project!), and a little over 240
dependencies that we rely on either directly or indirectly, which is why having a sub-project
dedicated to helping out with dependency management is crucial: we need to know what dependencies
we’re pulling in, what versions these dependencies are at, and tooling to help make sure we are
managing these dependencies across different parts of the codebase in a consistent manner.&lt;/p&gt;
&lt;p&gt;Another interesting challenge with Kubernetes is that we publish a lot of Go modules as part of the
Kubernetes release cycles, one example of this is
&lt;a href=&#34;https://github.com/kubernetes/client-go&#34;&gt;&lt;code&gt;client-go&lt;/code&gt;&lt;/a&gt;. However, we as a project would also like the
benefits of having everything in one repository to get the advantages of using a monorepo, like
atomic commits... so, because of this, code organization works with other SIGs (like SIG Release) to
automate the process of publishing code from the monorepo to downstream individual repositories
which are much easier to consume, and this way you won’t have to import the entire Kubernetes
codebase!&lt;/p&gt;
&lt;h2 id=&#34;code-organization-and-kubernetes&#34;&gt;Code organization and Kubernetes&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: For someone just starting contributing to Kubernetes code-wise, what are the main things
they should consider in terms of code organization? How would you sum up the key concepts?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: I think one of the key things to keep in mind at least as you’re starting off is the concept
of staging directories. In the &lt;a href=&#34;https://github.com/kubernetes/kubernetes&#34;&gt;&lt;code&gt;kubernetes/kubernetes&lt;/code&gt;&lt;/a&gt;
repository, you will come across a directory called
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/tree/master/staging&#34;&gt;&lt;code&gt;staging/&lt;/code&gt;&lt;/a&gt;. The sub-folders in this
directory serve as a bunch of pseudo-repositories. For example, the
&lt;a href=&#34;https://github.com/kubernetes/client-go&#34;&gt;&lt;code&gt;kubernetes/client-go&lt;/code&gt;&lt;/a&gt; repository that publishes releases
for &lt;code&gt;client-go&lt;/code&gt; is actually a &lt;a href=&#34;https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go&#34;&gt;staging
repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: So the concept of staging directories fundamentally impacts contributions?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: Precisely, because if you’d like to contribute to any of the staging repos, you will need to
send in a PR to its corresponding staging directory in &lt;code&gt;kubernetes/kubernetes&lt;/code&gt;. Once the code merges
there, we have a bot called the &lt;a href=&#34;https://github.com/kubernetes/publishing-bot&#34;&gt;&lt;code&gt;publishing-bot&lt;/code&gt;&lt;/a&gt;
that will sync the merged commits to the required staging repositories (like
&lt;code&gt;kubernetes/client-go&lt;/code&gt;). This way we get the benefits of a monorepo but we also can modularly
publish code for downstream consumption. PS: The &lt;code&gt;publishing-bot&lt;/code&gt; needs more folks to help out!&lt;/p&gt;
&lt;p&gt;For more information on staging repositories, please see the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/staging.md&#34;&gt;contributor
documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Speaking of contributions, the very high number of contributors, both individuals and
companies, must also be a challenge: how does the subproject operate in terms of making sure that
standards are being followed?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: When it comes to dependency management in the project, there is a &lt;a href=&#34;https://github.com/kubernetes/org/blob/a106af09b8c345c301d072bfb7106b309c0ad8e9/config/kubernetes/org.yaml#L1329&#34;&gt;dedicated
team&lt;/a&gt;
that helps review and approve dependency changes. These are folks who have helped lay the foundation
of much of the
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/vendor.md&#34;&gt;tooling&lt;/a&gt;
that Kubernetes uses today for dependency management. This tooling helps ensure there is a
consistent way that contributors can make changes to dependencies. The project has also worked on
additional tooling to report statistics on dependencies that are being added or removed:
&lt;a href=&#34;https://github.com/kubernetes-sigs/depstat&#34;&gt;&lt;code&gt;depstat&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Apart from dependency management, another crucial task that the project does is management of the
staging repositories. The tooling for achieving this (&lt;code&gt;publishing-bot&lt;/code&gt;) is completely transparent to
contributors and helps ensure that the staging repos get a consistent view of contributions that are
submitted to &lt;code&gt;kubernetes/kubernetes&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Code Organization also works towards making sure that Kubernetes &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions&#34;&gt;stays on supported versions of
Go&lt;/a&gt;. The
linked KEP provides more context on why we need to do this. We collaborate with SIG Release to
ensure that we are testing Kubernetes as rigorously and as early as we can on new Go releases, and
fixing any changes that break our CI as a part of this. An example of how we track this process can
be found &lt;a href=&#34;https://github.com/kubernetes/release/issues/3076&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;release-cycle-and-current-priorities&#34;&gt;Release cycle and current priorities&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Is there anything that changes during the release cycle?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: During the release cycle, specifically before code freeze, there are often changes that go in
that add/update/delete dependencies, and fix code that needs fixing as part of our effort to stay on
supported versions of Go.&lt;/p&gt;
&lt;p&gt;Furthermore, some of these changes are also candidates for
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md&#34;&gt;backporting&lt;/a&gt;
to our supported release branches.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Is there any major project or theme the subproject is working on right now that you would
like to highlight?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: I think one very interesting and immensely useful change that
has been recently added (and I take the opportunity to specifically
highlight the work of &lt;a href=&#34;https://github.com/thockin&#34;&gt;Tim Hockin&lt;/a&gt; on
this) is the introduction of &lt;a href=&#34;https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/&#34;&gt;Go workspaces to the Kubernetes
repo&lt;/a&gt;. A lot of our
current tooling for dependency management and code publishing, as well
as the experience of editing code in the Kubernetes repo, can be
significantly improved by this change.&lt;/p&gt;
&lt;h2 id=&#34;wrapping-up&#34;&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: How would someone interested in the topic start helping the subproject?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: The first step, as with any project in Kubernetes, is to join our Slack:
&lt;a href=&#34;https://slack.k8s.io&#34;&gt;slack.k8s.io&lt;/a&gt;, and after that join the &lt;code&gt;#k8s-code-organization&lt;/code&gt; channel. There are also
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-architecture#meetings&#34;&gt;code-organization office
hours&lt;/a&gt; that you can
choose to attend. Timezones are hard, so feel free to also look at the recordings
or meeting notes and follow up on Slack!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Excellent, thank you! Any final comments you would like to share?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;: The Code Organization subproject always needs help, especially in areas like the publishing
bot, so don’t hesitate to get involved in the &lt;code&gt;#k8s-code-organization&lt;/code&gt; Slack channel.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>DIY: Create Your Own Cloud with Kubernetes (Part 3)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/</link>
      <pubDate>Fri, 05 Apr 2024 07:40:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/</guid>
      <description>
        
        
        &lt;p&gt;Approaching the most interesting phase, this article delves into running Kubernetes within
Kubernetes. Technologies such as Kamaji and Cluster API are highlighted, along with their
integration with KubeVirt.&lt;/p&gt;
&lt;p&gt;Previous discussions have covered
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/&#34;&gt;preparing Kubernetes on bare metal&lt;/a&gt;
and
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2&#34;&gt;how to turn Kubernetes into a virtual machine management system&lt;/a&gt;.
This article concludes the series by explaining how, using all of the above, you can build a
full-fledged managed Kubernetes and run virtual Kubernetes clusters with just a click.&lt;/p&gt;
&lt;p&gt;First up, let&#39;s dive into the Cluster API.&lt;/p&gt;
&lt;h2 id=&#34;cluster-api&#34;&gt;Cluster API&lt;/h2&gt;
&lt;p&gt;Cluster API is an extension for Kubernetes that allows the management of Kubernetes clusters as
custom resources within another Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;The main goal of the Cluster API is to provide a unified interface for describing the basic
entities of a Kubernetes cluster and managing their lifecycle. This enables the automation of
processes for creating, updating, and deleting clusters, simplifying scaling, and infrastructure
management.&lt;/p&gt;
&lt;p&gt;Within the context of Cluster API, there are two terms: &lt;strong&gt;management cluster&lt;/strong&gt; and
&lt;strong&gt;tenant clusters&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Management cluster&lt;/strong&gt; is a Kubernetes cluster used to deploy and manage other clusters.
This cluster contains all the necessary Cluster API components and is responsible for describing,
creating, and updating tenant clusters. It is often used just for this purpose.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tenant clusters&lt;/strong&gt; are the user clusters or clusters deployed using the Cluster API. They are
created by describing the relevant resources in the management cluster. They are then used for
deploying applications and services by end-users.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&#39;s important to understand that, physically, tenant clusters do not necessarily have to run on
the same infrastructure as the management cluster; more often, they run elsewhere.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi1.svg&#34;
         alt=&#34;A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;For its operation, Cluster API utilizes the concept of &lt;em&gt;providers&lt;/em&gt; which are separate controllers
responsible for specific components of the cluster being created. Within Cluster API, there are
several types of providers. The major ones are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure Provider&lt;/strong&gt;, which is responsible for providing the computing infrastructure, such as virtual machines or physical servers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Control Plane Provider&lt;/strong&gt;, which provides the Kubernetes control plane, namely the components kube-apiserver, kube-scheduler, and kube-controller-manager.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bootstrap Provider&lt;/strong&gt;, which is used for generating cloud-init configuration for the virtual machines and servers being created.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get started, you will need to install the Cluster API itself and one provider of each type.
You can find a complete list of supported providers in the project&#39;s
&lt;a href=&#34;https://cluster-api.sigs.k8s.io/reference/providers.html&#34;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For installation, you can use the &lt;code&gt;clusterctl&lt;/code&gt; utility, or
&lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api-operator&#34;&gt;Cluster API Operator&lt;/a&gt;
as the more declarative method.&lt;/p&gt;
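&lt;p&gt;As a rough sketch of the declarative method, the Cluster API Operator lets you declare the providers used later in this article as custom resources (the exact API version, provider names, and namespaces here are illustrative and depend on the operator release and your setup):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# Illustrative sketch: one resource per provider type
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: kubevirt
  namespace: capk-system
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: ControlPlaneProvider
metadata:
  name: kamaji
  namespace: kamaji-system
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: BootstrapProvider
metadata:
  name: kubeadm
  namespace: capi-kubeadm-bootstrap-system
&lt;/code&gt;&lt;/pre&gt;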
&lt;h2 id=&#34;choosing-providers&#34;&gt;Choosing providers&lt;/h2&gt;
&lt;h3 id=&#34;infrastructure-provider&#34;&gt;Infrastructure provider&lt;/h3&gt;
&lt;p&gt;To run Kubernetes clusters using KubeVirt, the
&lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt&#34;&gt;KubeVirt Infrastructure Provider&lt;/a&gt;
must be installed.
It enables the deployment of virtual machines for worker nodes in the same management cluster where
Cluster API operates.&lt;/p&gt;
&lt;h3 id=&#34;control-plane-provider&#34;&gt;Control plane provider&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/clastix/kamaji&#34;&gt;Kamaji&lt;/a&gt; project offers a ready solution for running the
Kubernetes control plane for tenant clusters as containers within the management cluster.
This approach has several significant advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cost-effectiveness&lt;/strong&gt;: Running the control plane in containers avoids the use of separate control
plane nodes for each cluster, thereby significantly reducing infrastructure costs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stability&lt;/strong&gt;: A simpler architecture that eliminates complex multi-layered deployment schemes.
Instead of sequentially launching a virtual machine and then installing etcd and Kubernetes components
inside it, there&#39;s a simple control plane that is deployed and run as a regular application inside
Kubernetes and managed by an operator.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security&lt;/strong&gt;: The cluster&#39;s control plane is hidden from the end user, reducing the possibility
of its components being compromised, and also eliminating user access to the cluster&#39;s certificate
store. This approach to organizing a control plane invisible to the user is often used by cloud providers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;bootstrap-provider&#34;&gt;Bootstrap provider&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api/tree/main/bootstrap&#34;&gt;Kubeadm&lt;/a&gt; as the Bootstrap
Provider is the standard method for preparing clusters in Cluster API. This provider is developed
as part of the Cluster API itself. It requires only a prepared system image with kubelet and kubeadm
installed and allows generating configs in the cloud-init and Ignition formats.&lt;/p&gt;
&lt;p&gt;It&#39;s worth noting that Talos Linux also supports provisioning via the Cluster API and
&lt;a href=&#34;https://github.com/siderolabs/cluster-api-bootstrap-provider-talos&#34;&gt;has providers&lt;/a&gt; for this.
Although &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/&#34;&gt;previous articles&lt;/a&gt;
discussed using Talos Linux to set up a management cluster on bare-metal nodes, to provision tenant
clusters the Kamaji+Kubeadm approach has more advantages.
It facilitates the deployment of Kubernetes control planes in containers, thus removing the need for
separate virtual machines for control plane instances. This simplifies the management and reduces costs.&lt;/p&gt;
&lt;h2 id=&#34;how-it-works&#34;&gt;How it works&lt;/h2&gt;
&lt;p&gt;The primary object in Cluster API is the Cluster resource, which acts as the parent for all the others.
Typically, this resource references two others: a resource describing the &lt;strong&gt;control plane&lt;/strong&gt; and a
resource describing the &lt;strong&gt;infrastructure&lt;/strong&gt;, each managed by a separate provider.&lt;/p&gt;
&lt;p&gt;Unlike the Cluster, these two resources are not standardized, and their kind depends on the specific
provider you are using:&lt;/p&gt;
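&lt;p&gt;For example, with the providers chosen later in this article, a Cluster resource might reference a Kamaji control plane and a KubeVirt infrastructure object roughly like this (a sketch; the names are arbitrary and the referenced API versions depend on the provider releases you install):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-1
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: tenant-1
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: KubevirtCluster
    name: tenant-1
&lt;/code&gt;&lt;/pre&gt;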


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi2.svg&#34;
         alt=&#34;A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Within Cluster API, there is also a resource named MachineDeployment, which describes a group of nodes,
whether they are physical servers or virtual machines. This resource functions similarly to standard
Kubernetes resources such as Deployment, ReplicaSet, and Pod, providing a mechanism for the
declarative description of a group of nodes and automatic scaling.&lt;/p&gt;
&lt;p&gt;In other words, the MachineDeployment resource allows you to declaratively describe nodes for your
cluster, automating their creation, deletion, and updating according to specified parameters and
the requested number of replicas.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/machinedeploymentres.svg&#34;
         alt=&#34;A diagram showing the relationship of a MachineDeployment resource and its children in Cluster API&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the relationship of a MachineDeployment resource and its children in Cluster API&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;To create machines, MachineDeployment refers to a template for generating the machine itself and a
template for generating its cloud-init config:&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi3.svg&#34;
         alt=&#34;A diagram showing the relationship of a MachineDeployment resource and the resources it links to in Cluster API&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the relationship of a MachineDeployment resource and the resources it links to in Cluster API&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;To deploy a new Kubernetes cluster using Cluster API, you will need to prepare the following set of resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A general Cluster resource&lt;/li&gt;
&lt;li&gt;A KamajiControlPlane resource, responsible for the control plane operated by Kamaji&lt;/li&gt;
&lt;li&gt;A KubevirtCluster resource, describing the cluster configuration in KubeVirt&lt;/li&gt;
&lt;li&gt;A KubevirtMachineTemplate resource, responsible for the virtual machine template&lt;/li&gt;
&lt;li&gt;A KubeadmConfigTemplate resource, responsible for generating tokens and cloud-init&lt;/li&gt;
&lt;li&gt;At least one MachineDeployment to create some workers&lt;/li&gt;
&lt;/ul&gt;
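&lt;p&gt;To give an idea of how these pieces fit together, here is a rough sketch of a MachineDeployment that ties the bootstrap and infrastructure templates to the cluster (all names and the Kubernetes version are illustrative; the Kamaji project documentation contains complete, working examples):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: tenant-1-md-0
spec:
  clusterName: tenant-1
  replicas: 3
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: tenant-1
      version: v1.30.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: tenant-1-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: KubevirtMachineTemplate
        name: tenant-1-md-0
&lt;/code&gt;&lt;/pre&gt;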
&lt;h2 id=&#34;polishing-the-cluster&#34;&gt;Polishing the cluster&lt;/h2&gt;
&lt;p&gt;In most cases, this is sufficient, but depending on the providers used, you may need other resources
as well. You can find examples of the resources created for each type of provider in the
&lt;a href=&#34;https://github.com/clastix/cluster-api-control-plane-provider-kamaji?tab=readme-ov-file#-supported-capi-infrastructure-providers&#34;&gt;Kamaji project documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;At this stage, you already have a ready tenant Kubernetes cluster, but so far it contains nothing
but an API server, worker nodes, and a few core plugins that are included by default in any
Kubernetes installation: &lt;strong&gt;kube-proxy&lt;/strong&gt; and &lt;strong&gt;CoreDNS&lt;/strong&gt;. For full integration, you will need to install
several more components.&lt;/p&gt;
&lt;p&gt;To install additional components, you can use a separate
&lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm&#34;&gt;Cluster API Add-on Provider for Helm&lt;/a&gt;,
or the &lt;a href=&#34;https://fluxcd.io/&#34;&gt;FluxCD&lt;/a&gt; discussed in
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/&#34;&gt;previous articles&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When creating resources in FluxCD, it&#39;s possible to specify the target cluster by referring to the
kubeconfig generated by Cluster API. Then, the installation will be performed directly into it.
Thus, FluxCD becomes a universal tool for managing resources both in the management cluster and
in the user tenant clusters.&lt;/p&gt;
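&lt;p&gt;As a sketch, a Flux HelmRelease can point at the kubeconfig Secret that Cluster API generates for a tenant cluster (by convention the Secret is named &lt;code&gt;&amp;lt;cluster-name&amp;gt;-kubeconfig&lt;/code&gt; with the key &lt;code&gt;value&lt;/code&gt;; the chart shown here is purely illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# API version of HelmRelease depends on your Flux release
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: cilium
  namespace: default
spec:
  interval: 5m
  kubeConfig:
    secretRef:
      name: tenant-1-kubeconfig
      key: value
  targetNamespace: kube-system
  chart:
    spec:
      chart: cilium
      sourceRef:
        kind: HelmRepository
        name: cilium
&lt;/code&gt;&lt;/pre&gt;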


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/fluxcd.svg&#34;
         alt=&#34;A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;What components are being discussed here? Generally, the set includes the following:&lt;/p&gt;
&lt;h3 id=&#34;cni-plugin&#34;&gt;CNI Plugin&lt;/h3&gt;
&lt;p&gt;To ensure communication between pods in a tenant Kubernetes cluster, it&#39;s necessary to deploy a
CNI plugin. This plugin creates a virtual network that allows pods to interact with each other
and is traditionally deployed as a Daemonset on the cluster&#39;s worker nodes. You can choose and
install any CNI plugin that you find suitable.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/components1.svg&#34;
         alt=&#34;A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;cloud-controller-manager&#34;&gt;Cloud Controller Manager&lt;/h3&gt;
&lt;p&gt;The main task of the Cloud Controller Manager (CCM) is to integrate Kubernetes with the cloud
infrastructure provider&#39;s environment (in your case, it is the management Kubernetes cluster
in which all the workers of the tenant Kubernetes cluster are provisioned). Here are some tasks it performs:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;When a service of type LoadBalancer is created, the CCM initiates the process of creating a cloud load balancer, which directs traffic to your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;If a node is removed from the cloud infrastructure, the CCM ensures its removal from your cluster as well, maintaining the cluster&#39;s current state.&lt;/li&gt;
&lt;li&gt;When using the CCM, nodes are added to the cluster with a special taint, &lt;code&gt;node.cloudprovider.kubernetes.io/uninitialized&lt;/code&gt;,
which allows for the processing of additional business logic if necessary. After successful initialization, this taint is removed from the node.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Depending on the cloud provider, the CCM can operate both inside and outside the tenant cluster.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubevirt/cloud-provider-kubevirt&#34;&gt;The KubeVirt Cloud Provider&lt;/a&gt; is designed
to be installed in the external parent management cluster. Thus, creating services of type
LoadBalancer in the tenant cluster initiates the creation of LoadBalancer services in the parent
cluster, which direct traffic into the tenant cluster.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/components2.svg&#34;
         alt=&#34;A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;csi-driver&#34;&gt;CSI Driver&lt;/h3&gt;
&lt;p&gt;The Container Storage Interface (CSI) is divided into two main parts for interacting with storage
in Kubernetes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;csi-controller&lt;/strong&gt;: This component is responsible for interacting with the cloud provider&#39;s API
to create, delete, attach, detach, and resize volumes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;csi-node&lt;/strong&gt;: This component runs on each node and facilitates the mounting of volumes to pods
as requested by kubelet.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the context of using the &lt;a href=&#34;https://github.com/kubevirt/csi-driver&#34;&gt;KubeVirt CSI Driver&lt;/a&gt;, a unique
opportunity arises. Since virtual machines in KubeVirt run within the management Kubernetes cluster,
where a full-fledged Kubernetes API is available, this opens the path for running the csi-controller
outside of the user&#39;s tenant cluster. This approach is popular in the KubeVirt community and offers
several key advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Security&lt;/strong&gt;: This method hides the internal cloud API from the end-user, providing access to
resources exclusively through the Kubernetes interface. Thus, it reduces the risk of direct access
to the management cluster from user clusters.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simplicity and Convenience&lt;/strong&gt;: Users don&#39;t need to manage additional controllers in their clusters,
simplifying the architecture and reducing the management burden.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, the csi-node component must run inside the tenant cluster, as it directly interacts with
kubelet on each node. This component is responsible for mounting and unmounting volumes into pods,
requiring close integration with processes occurring directly on the cluster nodes.&lt;/p&gt;
&lt;p&gt;The KubeVirt CSI Driver acts as a proxy for ordering volumes. When a PVC is created inside the tenant
cluster, a PVC is created in the management cluster, and then the created PV is connected to the
virtual machine.&lt;/p&gt;
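&lt;p&gt;From the user&#39;s perspective this is transparent: an ordinary PVC is requested in the tenant cluster, for example (the storage class name here is hypothetical and depends on how the driver is deployed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: kubevirt-csi  # hypothetical class served by the KubeVirt CSI Driver
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;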


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/components3.svg&#34;
         alt=&#34;A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;cluster-autoscaler&#34;&gt;Cluster Autoscaler&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/kubernetes/autoscaler&#34;&gt;Cluster Autoscaler&lt;/a&gt; is a versatile component that
can work with various cloud APIs, and its integration with Cluster-API is just one of the available
functions. For proper configuration, it requires access to two clusters: the tenant cluster, to
track pods and determine the need for adding new nodes, and the management Kubernetes cluster,
where it interacts with the MachineDeployment resource and adjusts the number of replicas.&lt;/p&gt;
&lt;p&gt;Although Cluster Autoscaler usually runs inside the tenant Kubernetes cluster, in this situation,
it is suggested to install it outside for the same reasons described before. This approach is
simpler to maintain and more secure as it prevents users of tenant clusters from accessing the
management API of the management cluster.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/components4.svg&#34;
         alt=&#34;A diagram showing a Cluster Autoscaler installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing a Cluster Autoscaler installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;konnectivity&#34;&gt;Konnectivity&lt;/h3&gt;
&lt;p&gt;There&#39;s another additional component I&#39;d like to mention -
&lt;a href=&#34;https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/&#34;&gt;Konnectivity&lt;/a&gt;.
You will likely need it later on to get webhooks and the API aggregation layer working in your
tenant Kubernetes cluster. This topic is covered in detail in one of my
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/#webhooks-and-api-aggregation-layer&#34;&gt;previous articles&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unlike the components presented above, Kamaji allows you to easily enable Konnectivity and manage
it as one of the core components of your tenant cluster, alongside kube-proxy and CoreDNS.&lt;/p&gt;
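&lt;p&gt;As a sketch based on Kamaji&#39;s TenantControlPlane API (field names may differ between Kamaji releases, and the Cluster API KamajiControlPlane resource exposes a similar section), enabling Konnectivity alongside the other core addons looks roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-1
spec:
  addons:
    coreDNS: {}
    kubeProxy: {}
    konnectivity:
      server:
        port: 8132
&lt;/code&gt;&lt;/pre&gt;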
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Now you have a fully functional Kubernetes cluster with the capability for dynamic scaling, automatic
provisioning of volumes, and load balancers.&lt;/p&gt;
&lt;p&gt;Going forward, you might consider metrics and logs collection from your tenant clusters, but that
goes beyond the scope of this article.&lt;/p&gt;
&lt;p&gt;Of course, all the components necessary for deploying a Kubernetes cluster can be packaged into a
single Helm chart and deployed as a unified application. This is precisely how we organize the
deployment of managed Kubernetes clusters with the click of a button on our open PaaS platform,
&lt;a href=&#34;https://cozystack.io/&#34;&gt;Cozystack&lt;/a&gt;, where you can try all the technologies described in the article
for free.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>DIY: Create Your Own Cloud with Kubernetes (Part 2)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/</link>
      <pubDate>Fri, 05 Apr 2024 07:35:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/</guid>
      <description>
        
        
&lt;p&gt;This post continues our series on how to build your own cloud using just the Kubernetes ecosystem.
In the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/&#34;&gt;previous article&lt;/a&gt;, we
explained how we prepare a basic Kubernetes distribution based on Talos Linux and Flux CD.
In this article, we&#39;ll show you various virtualization technologies in Kubernetes and prepare
everything needed to run virtual machines in Kubernetes, primarily storage and networking.&lt;/p&gt;
&lt;p&gt;We will talk about technologies such as KubeVirt, LINSTOR, and Kube-OVN.&lt;/p&gt;
&lt;p&gt;But first, let&#39;s explain what virtual machines are needed for, and why you can&#39;t just use Docker
containers to build a cloud.
The reason is that containers do not provide a sufficient level of isolation.
Although the situation improves year by year, we often encounter vulnerabilities that allow
escaping the container sandbox and elevating privileges in the system.&lt;/p&gt;
&lt;p&gt;On the other hand, Kubernetes was not originally designed to be a multi-tenant system, meaning
the basic usage pattern involves creating a separate Kubernetes cluster for every independent
project and development team.&lt;/p&gt;
&lt;p&gt;Virtual machines are the primary means of isolating tenants from each other in a cloud environment.
In virtual machines, users can execute code and programs with administrative privileges, but this
doesn&#39;t affect other tenants or the environment itself. In other words, virtual machines make it
possible to achieve &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/security/multi-tenancy/#isolation&#34;&gt;hard multi-tenancy isolation&lt;/a&gt;, and run
in environments where tenants do not trust each other.&lt;/p&gt;
&lt;h2 id=&#34;virtualization-technologies-in-kubernetes&#34;&gt;Virtualization technologies in Kubernetes&lt;/h2&gt;
&lt;p&gt;There are several different technologies that bring virtualization into the Kubernetes world:
&lt;a href=&#34;https://kubevirt.io/&#34;&gt;KubeVirt&lt;/a&gt; and &lt;a href=&#34;https://katacontainers.io/&#34;&gt;Kata Containers&lt;/a&gt;
are the most popular ones. But you should know that they work differently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kata Containers&lt;/strong&gt; implements the CRI (Container Runtime Interface) and provides an additional
level of isolation for standard containers by running them in virtual machines.
However, these workloads still run within a single, shared Kubernetes cluster.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/kata-containers.svg&#34;
         alt=&#34;A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;strong&gt;KubeVirt&lt;/strong&gt; allows running traditional virtual machines using the Kubernetes API. KubeVirt virtual
machines run as regular Linux processes in containers. In other words, in KubeVirt, a container
is used as a sandbox for running virtual machine (QEMU) processes.
This can be clearly seen in the figure below, by looking at how live migration of virtual machines
is implemented in KubeVirt. When migration is needed, the virtual machine moves from one container
to another.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/kubevirt-migration.svg&#34;
         alt=&#34;A diagram showing live migration of a virtual machine from one container to another in KubeVirt&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing live migration of a virtual machine from one container to another in KubeVirt&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;There is also an alternative project - &lt;a href=&#34;https://github.com/smartxworks/virtink&#34;&gt;Virtink&lt;/a&gt;, which
implements lightweight virtualization using
&lt;a href=&#34;https://github.com/cloud-hypervisor/cloud-hypervisor&#34;&gt;Cloud-Hypervisor&lt;/a&gt; and is initially focused
on running virtual Kubernetes clusters using the Cluster API.&lt;/p&gt;
&lt;p&gt;Considering our goals, we decided to use KubeVirt, as it is the most popular project in this area.
Besides, we have extensive expertise with it and have already made many contributions to KubeVirt.&lt;/p&gt;
&lt;p&gt;KubeVirt is &lt;a href=&#34;https://kubevirt.io/user-guide/operations/installation/&#34;&gt;easy to install&lt;/a&gt; and lets
you run virtual machines out of the box using the
&lt;a href=&#34;https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk&#34;&gt;containerDisk&lt;/a&gt;
feature, which allows you to store and distribute VM images directly as OCI images from a container
image registry.
Virtual machines with containerDisk are well suited for creating Kubernetes worker nodes and other
VMs that do not require state persistence.&lt;/p&gt;
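&lt;p&gt;As a minimal sketch (the name, image reference, and memory size here are example values, not taken
from any particular setup), a VirtualMachine backed by a containerDisk looks roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm            # example name
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi       # example size
      volumes:
        - name: root
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04   # example OCI image containing a VM disk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the disk comes from an OCI image, any changes to it are discarded when the VM is recreated,
which is exactly why this approach suits stateless worker nodes.&lt;/p&gt;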
&lt;p&gt;For managing persistent data, KubeVirt offers a separate tool, Containerized Data Importer (CDI).
It allows for cloning PVCs and populating them with data from base images. The CDI is necessary
if you want to automatically provision persistent volumes for your virtual machines, and it is
also required for the KubeVirt CSI Driver, which is used to handle persistent volume claims
from tenant Kubernetes clusters.&lt;/p&gt;
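&lt;p&gt;For illustration (the URL and sizes below are placeholders), a DataVolume asks CDI to create a PVC
and populate it from a base image:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-root-disk
spec:
  source:
    http:
      url: https://example.com/images/base.qcow2   # placeholder base image URL
  pvc:
    accessModes:
      - ReadWriteMany          # needed for live migration
    volumeMode: Block
    resources:
      requests:
        storage: 10Gi          # example size
&lt;/code&gt;&lt;/pre&gt;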
&lt;p&gt;But first, you have to decide where and how you will store this data.&lt;/p&gt;
&lt;h2 id=&#34;storage-for-kubernetes-vms&#34;&gt;Storage for Kubernetes VMs&lt;/h2&gt;
&lt;p&gt;With the introduction of the CSI (Container Storage Interface), a wide range of technologies that
integrate with Kubernetes has become available.
In fact, KubeVirt fully utilizes the CSI interface, aligning the choice of storage for
virtualization closely with the choice of storage for Kubernetes itself.
However, there are nuances that you need to consider. Unlike containers, which typically use a
standard filesystem, block devices are more efficient for virtual machines.&lt;/p&gt;
&lt;p&gt;Although the CSI interface in Kubernetes allows requesting both types of volumes, filesystems
and block devices, it&#39;s important to verify that your storage backend supports this.&lt;/p&gt;
&lt;p&gt;Using block devices for virtual machines eliminates the need for an additional abstraction layer,
such as a filesystem, which makes it more performant and in most cases enables the use of the
&lt;em&gt;ReadWriteMany&lt;/em&gt; mode. This mode allows concurrent access to the volume from multiple nodes, which
is a critical feature for enabling the live migration of virtual machines in KubeVirt.&lt;/p&gt;
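&lt;p&gt;In PVC terms, requesting a raw block volume with concurrent access looks like this (the storage
class name is an example and must point to a backend that supports ReadWriteMany block volumes):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk
spec:
  accessModes:
    - ReadWriteMany            # allows attachment from two nodes during live migration
  volumeMode: Block            # raw block device instead of a filesystem
  resources:
    requests:
      storage: 20Gi
  storageClassName: replicated-block   # example name
&lt;/code&gt;&lt;/pre&gt;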
&lt;p&gt;The storage system can be external or internal (in the case of hyper-converged infrastructure).
Using external storage in many cases makes the whole system more stable, as your data is stored
separately from compute nodes.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/storage-external.svg&#34;
         alt=&#34;A diagram showing external data storage communication with the compute nodes&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing external data storage communication with the compute nodes&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;External storage solutions are often popular in enterprise systems because such storage is
frequently provided by an external vendor who takes care of its operation. The integration with
Kubernetes involves only a small component installed in the cluster - the CSI driver. This driver
is responsible for provisioning volumes in this storage and attaching them to pods run by Kubernetes.
However, such storage solutions can also be implemented using purely open-source technologies.
One of the popular solutions is &lt;a href=&#34;https://www.truenas.com/&#34;&gt;TrueNAS&lt;/a&gt; powered by
&lt;a href=&#34;https://github.com/democratic-csi/democratic-csi&#34;&gt;democratic-csi&lt;/a&gt; driver.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/storage-local.svg&#34;
         alt=&#34;A diagram showing local data storage running on the compute nodes&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing local data storage running on the compute nodes&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;On the other hand, hyper-converged systems are often implemented using local storage (when you do
not need replication) and with software-defined storage solutions, often installed directly in Kubernetes,
such as &lt;a href=&#34;https://rook.io/&#34;&gt;Rook/Ceph&lt;/a&gt;, &lt;a href=&#34;https://openebs.io/&#34;&gt;OpenEBS&lt;/a&gt;,
&lt;a href=&#34;https://longhorn.io/&#34;&gt;Longhorn&lt;/a&gt;, &lt;a href=&#34;https://linbit.com/linstor/&#34;&gt;LINSTOR&lt;/a&gt;, and others.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/storage-clustered.svg&#34;
         alt=&#34;A diagram showing clustered data storage running on the compute nodes&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing clustered data storage running on the compute nodes&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;A hyper-converged system has its advantages. For example, data locality: when your data is stored
locally, access to such data is faster. But there are also disadvantages: such a system is usually
more difficult to manage and maintain.&lt;/p&gt;
&lt;p&gt;At Ænix, we wanted to provide a ready-to-use solution that could be used without the need to
purchase and set up additional external storage, and that was optimal in terms of speed and
resource utilization. LINSTOR became that solution.
Time-tested and industry-popular technologies such as LVM and ZFS as the backend give confidence
that data is stored securely. DRBD-based replication is incredibly fast and consumes only a small
amount of computing resources.&lt;/p&gt;
&lt;p&gt;For installing LINSTOR in Kubernetes, there is the Piraeus project, which provides ready-made
block storage to use with KubeVirt.&lt;/p&gt;

&lt;div class=&#34;alert alert-info&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;Note:&lt;/h4&gt;In case you are using Talos Linux, as we described in the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/&#34;&gt;previous article&lt;/a&gt;, you will
need to enable the necessary kernel modules in advance and configure Piraeus as described in the
&lt;a href=&#34;https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/how-to/talos.md&#34;&gt;instructions&lt;/a&gt;.&lt;/div&gt;
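&lt;p&gt;Once the Piraeus operator is running, replicated block storage is exposed through a StorageClass.
The sketch below uses example values, and the exact parameter names depend on your Piraeus version,
so check its documentation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  linstor.csi.linbit.com/storagePool: lvm-thin   # example pool name
  linstor.csi.linbit.com/placementCount: &#34;2&#34;     # number of DRBD replicas
&lt;/code&gt;&lt;/pre&gt;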

&lt;h2 id=&#34;networking-for-kubernetes-vms&#34;&gt;Networking for Kubernetes VMs&lt;/h2&gt;
&lt;p&gt;Despite having a similar interface (CNI), the network architecture in Kubernetes is actually more
complex and typically consists of many independent components that are not directly connected to
each other. In fact, you can split Kubernetes networking into four layers, which are described below.&lt;/p&gt;
&lt;h3 id=&#34;node-network-data-center-network&#34;&gt;Node Network (Data Center Network)&lt;/h3&gt;
&lt;p&gt;The network through which nodes are interconnected with each other. This network is usually not
managed by Kubernetes, but it is an important one because, without it, nothing would work.
In practice, a bare metal infrastructure usually has more than one such network, e.g.
one for node-to-node communication, a second for storage replication, a third for external access, etc.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/net-nodes.svg&#34;
         alt=&#34;A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Configuring the physical network interaction between nodes goes beyond the scope of this article,
as in most situations, Kubernetes utilizes already existing network infrastructure.&lt;/p&gt;
&lt;h3 id=&#34;pod-network&#34;&gt;Pod Network&lt;/h3&gt;
&lt;p&gt;This is the network provided by your CNI plugin. The task of the CNI plugin is to ensure transparent
connectivity between all containers and nodes in the cluster. Most CNI plugins implement a flat
network from which separate blocks of IP addresses are allocated for use on each node.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/net-pods.svg&#34;
         alt=&#34;A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In practice, your cluster can have several CNI plugins managed by
&lt;a href=&#34;https://github.com/k8snetworkplumbingwg/multus-cni&#34;&gt;Multus&lt;/a&gt;. This approach is often used in
virtualization solutions based on KubeVirt, such as &lt;a href=&#34;https://www.rancher.com/&#34;&gt;Rancher&lt;/a&gt; and
&lt;a href=&#34;https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization&#34;&gt;OpenShift&lt;/a&gt;.
The primary CNI plugin is used for integration with Kubernetes services, while additional CNI
plugins are used to implement private networks (VPC) and integration with the physical networks
of your data center.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/containernetworking/plugins/tree/main/plugins&#34;&gt;default CNI-plugins&lt;/a&gt; can
be used to connect bridges or physical interfaces. Additionally, there are specialized plugins
such as &lt;a href=&#34;https://github.com/kubevirt/macvtap-cni&#34;&gt;macvtap-cni&lt;/a&gt;, which are designed to provide
better performance.&lt;/p&gt;
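&lt;p&gt;As an example of how such a secondary network is declared (the bridge name and IPAM type here are
illustrative), Multus consumes NetworkAttachmentDefinition resources with a plain CNI configuration
inside:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: external-bridge
spec:
  config: |
    {
      &#34;cniVersion&#34;: &#34;0.3.1&#34;,
      &#34;type&#34;: &#34;bridge&#34;,
      &#34;bridge&#34;: &#34;br-ext&#34;,
      &#34;ipam&#34;: { &#34;type&#34;: &#34;dhcp&#34; }
    }
&lt;/code&gt;&lt;/pre&gt;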
&lt;p&gt;One additional aspect to keep in mind when running virtual machines in Kubernetes is the need for
IPAM (IP Address Management), especially for secondary interfaces provided by Multus. This is
commonly managed by a DHCP server operating within your infrastructure. Additionally, the allocation
of MAC addresses for virtual machines can be managed by
&lt;a href=&#34;https://github.com/k8snetworkplumbingwg/kubemacpool&#34;&gt;Kubemacpool&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On our platform, however, we decided to go another way and rely fully on
&lt;a href=&#34;https://www.kube-ovn.io/&#34;&gt;Kube-OVN&lt;/a&gt;. This CNI plugin is based on OVN (Open Virtual Network), which
was originally developed for OpenStack. It provides a complete network solution for virtual
machines in Kubernetes, features Custom Resources for managing IPs and MAC addresses, supports
live migration while preserving IP addresses across nodes, and enables the creation of VPCs
for physical network separation between tenants.&lt;/p&gt;
&lt;p&gt;In Kube-OVN you can assign separate subnets to an entire namespace or connect them as additional
network interfaces using Multus.&lt;/p&gt;
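&lt;p&gt;A minimal example of such a Subnet resource (all values here are illustrative) binds a dedicated
CIDR to one namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: tenant-a
spec:
  cidrBlock: 10.100.0.0/24   # example CIDR
  gateway: 10.100.0.1
  namespaces:
    - tenant-a               # pods in this namespace get IPs from this subnet
&lt;/code&gt;&lt;/pre&gt;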
&lt;h3 id=&#34;services-network&#34;&gt;Services Network&lt;/h3&gt;
&lt;p&gt;In addition to the CNI plugin, Kubernetes also has a services network, which is primarily needed
for service discovery.
Unlike traditional virtual machines, Kubernetes was originally designed to run pods with
random addresses.
And the services network provides a convenient abstraction (stable IP addresses and DNS names)
that will always direct traffic to the correct pod.
The same approach is also commonly used with virtual machines in clouds despite the fact that
their IPs are usually static.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/net-services.svg&#34;
         alt=&#34;A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The implementation of the services network in Kubernetes is handled by the services network plugin.
The standard implementation is called &lt;strong&gt;kube-proxy&lt;/strong&gt; and is used in most clusters.
But nowadays, this functionality might be provided as part of the CNI plugin. The most advanced
implementation is offered by the &lt;a href=&#34;https://cilium.io/&#34;&gt;Cilium&lt;/a&gt; project, which can be run in kube-proxy replacement mode.&lt;/p&gt;
&lt;p&gt;Cilium is based on the eBPF technology, which allows for efficient offloading of the Linux
networking stack, thereby improving performance and security compared to traditional methods based
on iptables.&lt;/p&gt;
&lt;p&gt;In practice, Cilium and Kube-OVN can be easily
&lt;a href=&#34;https://kube-ovn.readthedocs.io/zh-cn/stable/en/advance/with-cilium/&#34;&gt;integrated&lt;/a&gt; to provide a
unified solution that offers seamless, multi-tenant networking for virtual machines, as well as
advanced network policies and combined services network functionality.&lt;/p&gt;
&lt;h3 id=&#34;external-traffic-load-balancer&#34;&gt;External Traffic Load Balancer&lt;/h3&gt;
&lt;p&gt;At this stage, you already have everything needed to run virtual machines in Kubernetes.
But there is actually one more thing.
You still need to access your services from outside your cluster, and an external load balancer
will help you with organizing this.&lt;/p&gt;
&lt;p&gt;For bare metal Kubernetes clusters, there are several load balancers available:
&lt;a href=&#34;https://metallb.universe.tf/&#34;&gt;MetalLB&lt;/a&gt;, &lt;a href=&#34;https://kube-vip.io/&#34;&gt;kube-vip&lt;/a&gt;,
&lt;a href=&#34;https://www.loxilb.io/&#34;&gt;LoxiLB&lt;/a&gt;; additionally, &lt;a href=&#34;https://docs.cilium.io/en/latest/network/lb-ipam/&#34;&gt;Cilium&lt;/a&gt; and
&lt;a href=&#34;https://kube-ovn.readthedocs.io/zh-cn/latest/en/guide/loadbalancer-service/&#34;&gt;Kube-OVN&lt;/a&gt;
provide built-in implementations.&lt;/p&gt;
&lt;p&gt;The role of an external load balancer is to provide a stable, externally available address and direct
external traffic to the services network.
The services network plugin will then direct it to your pods and virtual machines as usual.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/net-loadbalancer.svg&#34;
         alt=&#34;The role of the external load balancer on the Kubernetes network scheme&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing the role of the external load balancer on the Kubernetes network scheme&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In most cases, setting up a load balancer on bare metal is achieved by creating a floating IP address
on the nodes within the cluster and announcing it externally using the ARP/NDP or BGP protocols.&lt;/p&gt;
&lt;p&gt;After exploring various options, we decided that MetalLB is the simplest and most reliable solution,
although we do not strictly enforce its use.&lt;/p&gt;
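&lt;p&gt;A typical L2-mode setup (the address range below is an example and must come from your data center
network) consists of an address pool and an advertisement:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.200-192.168.100.250   # example range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: metallb-system
spec:
  ipAddressPools:
    - external
&lt;/code&gt;&lt;/pre&gt;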
&lt;p&gt;Another benefit is that in L2 mode, MetalLB speakers continuously check their neighbours&#39; state by
performing liveness checks using the memberlist protocol.
This enables failover that works independently of the Kubernetes control plane.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This concludes our overview of virtualization, storage, and networking in Kubernetes.
The technologies mentioned here are available and already pre-configured on the
&lt;a href=&#34;https://github.com/aenix-io/cozystack&#34;&gt;Cozystack&lt;/a&gt; platform, where you can try them with no limitations.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/&#34;&gt;next article&lt;/a&gt;,
I&#39;ll detail how, on top of this, you can implement the provisioning of fully functional Kubernetes
clusters with just the click of a button.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>DIY: Create Your Own Cloud with Kubernetes (Part 1)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/</link>
      <pubDate>Fri, 05 Apr 2024 07:30:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/</guid>
      <description>
        
        
        &lt;p&gt;At Ænix, we have a deep affection for Kubernetes and dream that all modern technologies will soon
start utilizing its remarkable patterns.&lt;/p&gt;
&lt;p&gt;Have you ever thought about building your own cloud? I bet you have. But is it possible to do this
using only modern technologies and approaches, without leaving the cozy Kubernetes ecosystem?
Our experience in developing Cozystack required us to delve deeply into it.&lt;/p&gt;
&lt;p&gt;You might argue that Kubernetes is not intended for this purpose, and that you could simply use OpenStack
on bare metal servers and run Kubernetes inside it as intended. But by doing so, you would simply
shift the responsibility from your hands to the hands of OpenStack administrators.
This would add at least one more huge and complex system to your ecosystem.&lt;/p&gt;
&lt;p&gt;Why complicate things? After all, Kubernetes already has everything needed to run tenant
Kubernetes clusters at this point.&lt;/p&gt;
&lt;p&gt;I want to share with you our experience in developing a cloud platform based on Kubernetes,
highlighting the open-source projects that we use ourselves and believe deserve your attention.&lt;/p&gt;
&lt;p&gt;In this series of articles, I will tell you our story of how we prepare managed Kubernetes
on bare metal using only open-source technologies, starting from basic data
center preparation, running virtual machines, isolating networks, and setting up fault-tolerant
storage, all the way to provisioning full-featured Kubernetes clusters with dynamic volume provisioning,
load balancers, and autoscaling.&lt;/p&gt;
&lt;p&gt;With this article, I start a series consisting of several parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Part 1&lt;/strong&gt;: Preparing the groundwork for your cloud. Challenges faced during the preparation
and operation of Kubernetes on bare metal and a ready-made recipe for provisioning infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Part 2&lt;/strong&gt;: Networking, storage, and virtualization. How to turn Kubernetes into a tool for
launching virtual machines and what is needed for this.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Part 3&lt;/strong&gt;: Cluster API and how to start provisioning Kubernetes clusters at the push of a
button. How autoscaling works, dynamic provisioning of volumes, and load balancers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will try to describe various technologies as independently as possible, but at the same time,
I will share our experience and why we came to one solution or another.&lt;/p&gt;
&lt;p&gt;To begin with, let&#39;s understand the main advantage of Kubernetes and how it has changed the
approach to using cloud resources.&lt;/p&gt;
&lt;p&gt;It is important to understand that the use of Kubernetes in the cloud and on bare metal differs.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-in-the-cloud&#34;&gt;Kubernetes in the cloud&lt;/h2&gt;
&lt;p&gt;When you operate Kubernetes in the cloud, you don&#39;t worry about persistent volumes,
cloud load balancers, or the process of provisioning nodes. All of this is handled by your cloud
provider, who accepts your requests in the form of Kubernetes objects. In other words, the server
side is completely hidden from you, and you don&#39;t really need to know how exactly the cloud
provider implements it, as it&#39;s not in your area of responsibility.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/cloud.svg&#34;
         alt=&#34;A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Kubernetes offers convenient abstractions that work the same everywhere, allowing you to deploy
your application on any Kubernetes in any cloud.&lt;/p&gt;
&lt;p&gt;In the cloud, you very commonly have several separate entities: the Kubernetes control plane,
virtual machines, persistent volumes, and load balancers as distinct entities. Using these entities, you can create highly dynamic environments.&lt;/p&gt;
&lt;p&gt;Thanks to Kubernetes, virtual machines are now only seen as a utility entity for utilizing
cloud resources. You no longer store data inside virtual machines. You can delete all your virtual
machines at any moment and recreate them without breaking your application. The Kubernetes control
plane will continue to hold information about what should run in your cluster. The load balancer
will keep sending traffic to your workload, simply changing the endpoint to send traffic to a new
node. And your data will be safely stored in external persistent volumes provided by cloud.&lt;/p&gt;
&lt;p&gt;This approach is fundamental when using Kubernetes in clouds. The reason for it is quite obvious:
the simpler the system, the more stable it is, and it is this simplicity that you are buying when
you choose Kubernetes in the cloud.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-on-bare-metal&#34;&gt;Kubernetes on bare metal&lt;/h2&gt;
&lt;p&gt;Using Kubernetes in the clouds is really simple and convenient, which cannot be said about bare
metal installations. In the bare metal world, Kubernetes, on the contrary, becomes unbearably
complex. Firstly, because the entire network, backend storage, load balancers, etc. are usually
run not outside, but inside your cluster. As a result, such a system is much more difficult to
update and maintain.&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/baremetal.svg&#34;
         alt=&#34;A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Judge for yourself: in the cloud, to update a node, you typically delete the virtual machine
(or even use &lt;code&gt;kubectl delete node&lt;/code&gt;) and let your node management tooling create a new
one, based on an immutable image. The new node will join the cluster and &#34;just work&#34; as a node,
following a very simple and commonly used pattern in the Kubernetes world.
Many clusters order new virtual machines every few minutes, simply because they can use
cheaper spot instances. However, when you have a physical server, you can&#39;t just delete and
recreate it: it often runs some cluster services and stores data, and its update process
is significantly more complicated.&lt;/p&gt;
&lt;p&gt;There are different approaches to solving this problem, ranging from in-place updates, as done by
kubeadm, kubespray, and k3s, to full automation of provisioning physical nodes through Cluster API
and Metal3.&lt;/p&gt;
&lt;p&gt;I like the hybrid approach offered by Talos Linux, where your entire system is described in a
single configuration file. Most parameters of this file can be applied without rebooting or
recreating the node, including the version of the Kubernetes control plane components, while
still preserving the declarative nature of Kubernetes.
This approach minimizes unnecessary impact on cluster services when updating bare metal nodes.
In most cases, you won&#39;t need to migrate your virtual machines and rebuild the cluster filesystem
on minor updates.&lt;/p&gt;
&lt;h2 id=&#34;preparing-a-base-for-your-future-cloud&#34;&gt;Preparing a base for your future cloud&lt;/h2&gt;
&lt;p&gt;So, suppose you&#39;ve decided to build your own cloud. To start somewhere, you need a base layer.
You need to think not only about how you will install Kubernetes on your servers but also about how
you will update and maintain it. Consider the fact that you will have to handle things like
updating the kernel and installing necessary modules, as well as packages and security patches.
You now have to deal with many things yourself that you don&#39;t have to worry about when using
a ready-made Kubernetes in the cloud.&lt;/p&gt;
&lt;p&gt;Of course, you can use standard distributions like Ubuntu or Debian, or you can consider specialized
ones like Flatcar Container Linux, Fedora CoreOS, and Talos Linux. Each has its advantages and
disadvantages.&lt;/p&gt;
&lt;p&gt;What about us? At Ænix, we use quite a few specific kernel modules like ZFS, DRBD, and Open vSwitch,
so we decided to go the route of forming a system image with all the necessary modules in advance.
In this case, Talos Linux turned out to be the most convenient for us.
For example, such a config is enough to build a system image with all the necessary kernel modules:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;arch&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;amd64&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;platform&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;metal&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;secureboot&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;version&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1.6.4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;input&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kernel&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/usr/install/amd64/vmlinuz&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;initramfs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/usr/install/amd64/initramfs.xz&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;baseInstaller&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/installer:v1.6.4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;systemExtensions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/amd-ucode:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/amdgpu-firmware:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/bnx2-bnx2x:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/i915-ucode:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/intel-ice-firmware:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/intel-ucode:20231114&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/qlogic-firmware:20240115&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/drbd:9.2.6-v1.6.4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imageRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ghcr.io/siderolabs/zfs:2.1.14-v1.6.4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;output&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;installer&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;outFormat&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;raw&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then we use the &lt;code&gt;docker&lt;/code&gt; command line tool to build an OS image:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cat config.yaml | docker run --rm -i -v /dev:/dev --privileged &amp;#34;ghcr.io/siderolabs/imager:v1.6.4&amp;#34; - 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And as a result, we get a Docker container image with everything we need, which we can use to
install Talos Linux on our servers. If you do the same, your image will contain all the necessary
firmware and kernel modules.&lt;/p&gt;
&lt;p&gt;But the question arises: how do you deliver the freshly built image to your nodes?&lt;/p&gt;
&lt;p&gt;I have been contemplating the idea of PXE booting for quite some time. For example, the
&lt;strong&gt;Kubefarm&lt;/strong&gt; project that I wrote an
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/&#34;&gt;article&lt;/a&gt; about
two years ago was entirely built using this approach. But unfortunately, it does not help you
deploy your very first parent cluster that will hold the others. So now we have prepared a
solution that will help you do this using the same PXE approach.&lt;/p&gt;
&lt;p&gt;Essentially, all you need to do is &lt;a href=&#34;https://cozystack.io/docs/get-started/&#34;&gt;run temporary&lt;/a&gt;
&lt;strong&gt;DHCP&lt;/strong&gt; and &lt;strong&gt;PXE&lt;/strong&gt; servers inside containers. Then your nodes will boot from your
image, and you can use a simple Debian-flavored script to help you bootstrap your nodes.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://asciinema.org/a/627123&#34;&gt;&lt;img src=&#34;asciicast.svg&#34; alt=&#34;asciicast&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/aenix-io/talos-bootstrap/&#34;&gt;source&lt;/a&gt; for that &lt;code&gt;talos-bootstrap&lt;/code&gt; script is
available on GitHub.&lt;/p&gt;
&lt;p&gt;This script allows you to deploy Kubernetes on bare metal in five minutes and obtain a kubeconfig
for accessing it. However, many unresolved issues still lie ahead.&lt;/p&gt;
&lt;h2 id=&#34;delivering-system-components&#34;&gt;Delivering system components&lt;/h2&gt;
&lt;p&gt;At this stage, you already have a Kubernetes cluster capable of running various workloads. However,
it is not fully functional yet. In other words, you need to set up networking and storage, as well
as install necessary cluster extensions, like KubeVirt to run virtual machines, as well as the
monitoring stack and other system-wide components.&lt;/p&gt;
&lt;p&gt;Traditionally, this is solved by installing &lt;strong&gt;Helm charts&lt;/strong&gt; into your cluster. You can do this by
running &lt;code&gt;helm install&lt;/code&gt; commands locally, but this approach becomes inconvenient when you want to
track updates, or when you have multiple clusters that you want to keep uniform. In fact, there
are plenty of ways to do this declaratively. To solve this, I recommend following GitOps best
practices, using tools like ArgoCD and FluxCD.&lt;/p&gt;
&lt;p&gt;While ArgoCD is more convenient for dev purposes with its graphical interface and a central control
plane, FluxCD, on the other hand, is better suited for creating Kubernetes distributions. With FluxCD,
you can specify which charts with what parameters should be launched and describe dependencies. Then,
FluxCD will take care of everything for you.&lt;/p&gt;
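&lt;p&gt;For illustration, here is a minimal sketch of what such a declarative definition might look like: a FluxCD &lt;code&gt;HelmRelease&lt;/code&gt; that installs a chart only after another release has reconciled. The chart, repository, and namespace names here are hypothetical placeholders, not the platform's actual manifests:&lt;/p&gt;

```yaml
# Hypothetical FluxCD HelmRelease: install the "monitoring" chart only
# after the "cilium" release has reconciled successfully.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: monitoring
  namespace: cozy-monitoring
spec:
  interval: 5m
  dependsOn:
    - name: cilium
      namespace: cozy-cilium
  chart:
    spec:
      chart: monitoring
      sourceRef:
        kind: HelmRepository
        name: cozystack
```

&lt;p&gt;FluxCD resolves the &lt;code&gt;dependsOn&lt;/code&gt; graph and reconciles each release in order, which is exactly what makes it suitable for assembling a distribution from many charts.&lt;/p&gt;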
&lt;p&gt;The suggested approach is to perform a one-time installation of FluxCD in your newly created
cluster and provide it with the configuration. FluxCD will then automatically deploy all the
essentials, reconciling the cluster into the expected state. For example, after installing our
platform you&#39;ll see the following pre-configured Helm charts with system components:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;NAMESPACE                        NAME                        AGE    READY   STATUS
cozy-cert-manager                cert-manager                4m1s   True    Release reconciliation succeeded
cozy-cert-manager                cert-manager-issuers        4m1s   True    Release reconciliation succeeded
cozy-cilium                      cilium                      4m1s   True    Release reconciliation succeeded
cozy-cluster-api                 capi-operator               4m1s   True    Release reconciliation succeeded
cozy-cluster-api                 capi-providers              4m1s   True    Release reconciliation succeeded
cozy-dashboard                   dashboard                   4m1s   True    Release reconciliation succeeded
cozy-fluxcd                      cozy-fluxcd                 4m1s   True    Release reconciliation succeeded
cozy-grafana-operator            grafana-operator            4m1s   True    Release reconciliation succeeded
cozy-kamaji                      kamaji                      4m1s   True    Release reconciliation succeeded
cozy-kubeovn                     kubeovn                     4m1s   True    Release reconciliation succeeded
cozy-kubevirt-cdi                kubevirt-cdi                4m1s   True    Release reconciliation succeeded
cozy-kubevirt-cdi                kubevirt-cdi-operator       4m1s   True    Release reconciliation succeeded
cozy-kubevirt                    kubevirt                    4m1s   True    Release reconciliation succeeded
cozy-kubevirt                    kubevirt-operator           4m1s   True    Release reconciliation succeeded
cozy-linstor                     linstor                     4m1s   True    Release reconciliation succeeded
cozy-linstor                     piraeus-operator            4m1s   True    Release reconciliation succeeded
cozy-mariadb-operator            mariadb-operator            4m1s   True    Release reconciliation succeeded
cozy-metallb                     metallb                     4m1s   True    Release reconciliation succeeded
cozy-monitoring                  monitoring                  4m1s   True    Release reconciliation succeeded
cozy-postgres-operator           postgres-operator           4m1s   True    Release reconciliation succeeded
cozy-rabbitmq-operator           rabbitmq-operator           4m1s   True    Release reconciliation succeeded
cozy-redis-operator              redis-operator              4m1s   True    Release reconciliation succeeded
cozy-telepresence                telepresence                4m1s   True    Release reconciliation succeeded
cozy-victoria-metrics-operator   victoria-metrics-operator   4m1s   True    Release reconciliation succeeded
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As a result, you achieve a highly repeatable environment that you can provide to anyone, knowing
that it operates exactly as intended.
This is actually what the &lt;a href=&#34;https://github.com/aenix-io/cozystack&#34;&gt;Cozystack&lt;/a&gt; project does, which
you can try out for yourself absolutely free.&lt;/p&gt;
&lt;p&gt;In the following articles, I will discuss
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/&#34;&gt;how to prepare Kubernetes for running virtual machines&lt;/a&gt;
and &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/&#34;&gt;how to run Kubernetes clusters with the click of a button&lt;/a&gt;.
Stay tuned, it&#39;ll be fun!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Introducing the Windows Operational Readiness Specification</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/03/intro-windows-ops-readiness/</link>
      <pubDate>Wed, 03 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/04/03/intro-windows-ops-readiness/</guid>
      <description>
        
        
        &lt;p&gt;Since Windows support &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2019/03/25/kubernetes-1-14-release-announcement/&#34;&gt;graduated to stable&lt;/a&gt;
with Kubernetes 1.14 in 2019, the capability to run Windows workloads has been much
appreciated by the end user community. The level and availability of Windows workload
support has consistently been a major differentiator for Kubernetes distributions used by
large enterprises. However, with more Windows workloads being migrated to Kubernetes
and new Windows features being continuously released, it became challenging to test
Windows worker nodes in an effective and standardized way.&lt;/p&gt;
&lt;p&gt;The Kubernetes project values the ability to certify conformance without requiring a
closed-source license for a certified distribution or service that has no intention
of offering Windows.&lt;/p&gt;
&lt;p&gt;Some notable examples brought to the attention of SIG Windows were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An issue with load balancer source address ranges functionality not operating correctly on
Windows nodes, detailed in a GitHub issue:
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/120033&#34;&gt;kubernetes/kubernetes#120033&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Reports of functionality issues with Windows features, such as
“&lt;a href=&#34;https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview&#34;&gt;GMSA&lt;/a&gt; not working with containerd”,
discussed in &lt;a href=&#34;https://github.com/microsoft/Windows-Containers/issues/44&#34;&gt;microsoft/Windows-Containers#44&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Challenges developing networking policy tests that could objectively evaluate
Container Network Interface (CNI) plugins across different operating system configurations,
as discussed in &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/97751&#34;&gt;kubernetes/kubernetes#97751&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;SIG Windows therefore recognized the need for a tailored solution to ensure Windows
nodes&#39; operational readiness &lt;em&gt;before&lt;/em&gt; their deployment into production environments.
Thus, the idea to develop a &lt;a href=&#34;https://kep.k8s.io/2578&#34;&gt;Windows Operational Readiness Specification&lt;/a&gt;
was born.&lt;/p&gt;
&lt;h2 id=&#34;can-t-we-just-run-the-official-conformance-tests&#34;&gt;Can’t we just run the official Conformance tests?&lt;/h2&gt;
&lt;p&gt;The Kubernetes project contains a set of &lt;a href=&#34;https://www.cncf.io/training/certification/software-conformance/#how&#34;&gt;conformance tests&lt;/a&gt;,
which are standardized tests designed to ensure that a Kubernetes cluster meets
the required Kubernetes specifications.&lt;/p&gt;
&lt;p&gt;However, these tests were originally defined at a time when Linux was the &lt;em&gt;only&lt;/em&gt;
operating system compatible with Kubernetes, and thus, they were not easily
extendable for use with Windows. Given that Windows workloads, despite their
importance, account for a smaller portion of the Kubernetes community, it was
important to ensure that the primary conformance suite relied upon by many
Kubernetes distributions to certify Linux conformance didn&#39;t become encumbered
with Windows specific features or enhancements such as GMSA or multi-operating
system kube-proxy behavior.&lt;/p&gt;
&lt;p&gt;Therefore, since there was a specialized need for Windows conformance testing,
SIG Windows went down the path of offering Windows specific conformance tests
through the Windows Operational Readiness Specification.&lt;/p&gt;
&lt;h2 id=&#34;can-t-we-just-run-the-kubernetes-end-to-end-test-suite&#34;&gt;Can’t we just run the Kubernetes end-to-end test suite?&lt;/h2&gt;
&lt;p&gt;In the Linux world, tools such as &lt;a href=&#34;https://sonobuoy.io/&#34;&gt;Sonobuoy&lt;/a&gt; simplify execution of the
conformance suite, relieving users from needing to be aware of Kubernetes&#39;
compilation paths or the semantics of &lt;a href=&#34;https://onsi.github.io/ginkgo&#34;&gt;Ginkgo&lt;/a&gt; tags.&lt;/p&gt;
&lt;p&gt;Regarding the need to compile the Kubernetes tests, we realized that Windows
users might find the process of compiling and running the Kubernetes
e2e suite from scratch similarly undesirable; hence, there was a clear need to
provide a user-friendly, &amp;quot;push-button&amp;quot; solution that is ready to go. Moreover,
regarding Ginkgo tags, applying conformance tests to Windows nodes through a set
of &lt;a href=&#34;https://onsi.github.io/ginkgo/&#34;&gt;Ginkgo&lt;/a&gt; tags would also be burdensome for
any user, including Linux enthusiasts or experienced Windows system admins alike.&lt;/p&gt;
&lt;p&gt;To bridge the gap and give users a straightforward way to confirm their clusters
support a variety of features, the Kubernetes SIG for Windows therefore found it necessary
to create the Windows Operational Readiness application. This application, written in Go,
simplifies the process of running the necessary Windows-specific tests while delivering
results in a clear, accessible format.&lt;/p&gt;
&lt;p&gt;This initiative has been a collaborative effort, with contributions from different
cloud providers and platforms, including Amazon, Microsoft, SUSE, and Broadcom.&lt;/p&gt;
&lt;h2 id=&#34;specification&#34;&gt;A closer look at the Windows Operational Readiness Specification&lt;/h2&gt;
&lt;p&gt;The Windows Operational Readiness specification specifically targets and executes
tests found within the Kubernetes repository in a more user-friendly way than
simply targeting &lt;a href=&#34;https://onsi.github.io/ginkgo/&#34;&gt;Ginkgo&lt;/a&gt; tags. It introduces a
structured test suite that is split into sets of core and extended tests, with
each set of tests containing categories directed at a specific area, such as
networking. Core tests target fundamental and critical
functionalities that Windows nodes should support as defined by the Kubernetes
specification. On the other hand, extended tests cover more complex features,
more aligned with diving deeper into Windows-specific capabilities such as
integrations with Active Directory. The goal of these tests is to be extensive,
covering a wide array of Windows-specific capabilities to ensure compatibility
with a diverse set of workloads and configurations, extending beyond basic
requirements. Below is the current list of categories.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category Name&lt;/th&gt;
&lt;th&gt;Category Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Core.Network&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests minimal networking functionality (ability to access a pod by its pod IP).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Core.Storage&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests minimal storage functionality (ability to mount a hostPath storage volume).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Core.Scheduling&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests minimal scheduling functionality (ability to schedule a pod with CPU limits).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Core.Concurrent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests minimal concurrent functionality (the ability of a node to handle traffic to multiple pods concurrently).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Extend.HostProcess&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests features related to Windows HostProcess pod functionality.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Extend.ActiveDirectory&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests features related to Active Directory functionality.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Extend.NetworkPolicy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests features related to Network Policy functionality.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Extend.Network&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests advanced networking functionality (ability to support IPv6).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Extend.Worker&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tests features related to Windows worker node functionality (ability for nodes to access TCP and UDP services in the same cluster).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;how-to-conduct-operational-readiness-tests-for-windows-nodes&#34;&gt;How to conduct operational readiness tests for Windows nodes&lt;/h2&gt;
&lt;p&gt;To run the Windows Operational Readiness test suite, refer to the test suite&#39;s
&lt;a href=&#34;https://github.com/kubernetes-sigs/windows-operational-readiness/blob/main/README.md&#34;&gt;&lt;code&gt;README&lt;/code&gt;&lt;/a&gt;, which explains how to set it up and run it. The test suite offers
flexibility in how you can execute tests, either using a compiled binary or a
Sonobuoy plugin. You also have the choice to run the tests against the entire
test suite or by specifying a list of categories. Cloud providers have the
choice of uploading their conformance results, enhancing transparency and reliability.&lt;/p&gt;
&lt;p&gt;Once you have checked out that code, you can run a test. For example, this sample
command runs the tests from the &lt;code&gt;Core.Concurrent&lt;/code&gt; category:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;./op-readiness --kubeconfig &lt;span style=&#34;color:#b8860b&#34;&gt;$KUBE_CONFIG&lt;/span&gt; --category Core.Concurrent
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;As a contributor to Kubernetes, if you want to test your changes against a specific pull
request using the Windows Operational Readiness Specification, use the following bot
command in the new pull request.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;/test operational-tests-capz-windows-2019
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;looking-ahead&#34;&gt;Looking ahead&lt;/h2&gt;
&lt;p&gt;We’re looking to improve our curated list of Windows-specific tests by adding
new tests to the Kubernetes repository and also identifying existing test cases
that can be targeted. The long-term goal for the specification is to continually
enhance test coverage for Windows worker nodes and improve the robustness of
Windows support, facilitating a seamless experience across diverse cloud
environments. We also have plans to integrate the Windows Operational Readiness
tests into the official Kubernetes conformance suite.&lt;/p&gt;
&lt;p&gt;If you are interested in helping us out, please reach out to us! We welcome help
in any form, from giving once-off feedback to making a code contribution,
to having long-term owners to help us drive changes. The Windows Operational
Readiness specification is owned by the SIG Windows team. You can reach out
to the team on the &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Kubernetes Slack workspace&lt;/a&gt; &lt;strong&gt;#sig-windows&lt;/strong&gt;
channel. You can also explore the &lt;a href=&#34;https://github.com/kubernetes-sigs/windows-operational-readiness/#readme&#34;&gt;Windows Operational Readiness test suite&lt;/a&gt;
and make contributions directly to the GitHub repository.&lt;/p&gt;
&lt;p&gt;Special thanks to Kulwant Singh (AWS), Pramita Gautam Rana (VMware), Xinqi Li
(Google) and Marcio Morales (AWS) for their help in making notable contributions to the specification. Additionally,
appreciation goes to James Sturtevant (Microsoft), Mark Rossetti (Microsoft),
Claudiu Belu (Cloudbase Solutions) and Aravindh Puthiyaparambil
(Softdrive Technologies Group Inc.) from the SIG Windows team for their guidance and support.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>A Peek at Kubernetes v1.30</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</link>
      <pubDate>Tue, 12 Mar 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</guid>
      <description>
        
        
        &lt;h2 id=&#34;a-quick-look-exciting-changes-in-kubernetes-v1-30&#34;&gt;A quick look: exciting changes in Kubernetes v1.30&lt;/h2&gt;
&lt;p&gt;It&#39;s a new year and a new Kubernetes release. We&#39;re halfway through the release cycle and
have quite a few interesting and exciting enhancements coming in v1.30. From brand new features
in alpha, to established features graduating to stable, to long-awaited improvements, this release
has something for everyone to pay attention to!&lt;/p&gt;
&lt;p&gt;To tide you over until the official release, here&#39;s a sneak peek of the enhancements we&#39;re most
excited about in this cycle!&lt;/p&gt;
&lt;h2 id=&#34;major-changes-for-kubernetes-v1-30&#34;&gt;Major changes for Kubernetes v1.30&lt;/h2&gt;
&lt;h3 id=&#34;structured-parameters-for-dynamic-resource-allocation-kep-4381-https-kep-k8s-io-4381&#34;&gt;Structured parameters for dynamic resource allocation (&lt;a href=&#34;https://kep.k8s.io/4381&#34;&gt;KEP-4381&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/dynamic-resource-allocation/&#34;&gt;Dynamic resource allocation&lt;/a&gt; was
added to Kubernetes as an alpha feature in v1.26. It defines an alternative to the traditional
device-plugin API for requesting access to third-party resources. By design, dynamic resource
allocation uses parameters for resources that are completely opaque to core Kubernetes. This
approach poses a problem for the Cluster Autoscaler (CA) or any higher-level controller that
needs to make decisions for a group of pods (e.g. a job scheduler). It cannot simulate the effect of
allocating or deallocating claims over time. Only the third-party DRA drivers have the information
available to do this.&lt;/p&gt;
&lt;p&gt;Structured Parameters for dynamic resource allocation is an extension to the original
implementation that addresses this problem by building a framework to support making these claim
parameters less opaque. Instead of handling the semantics of all claim parameters themselves,
drivers could manage resources and describe them using a specific &amp;quot;structured model&amp;quot; pre-defined by
Kubernetes. This would allow components aware of this &amp;quot;structured model&amp;quot; to make decisions about
these resources without outsourcing them to some third-party controller. For example, the scheduler
could allocate claims rapidly without back-and-forth communication with dynamic resource
allocation drivers. Work done for this release centers on defining the framework necessary to enable
different &amp;quot;structured models&amp;quot; and to implement the &amp;quot;named resources&amp;quot; model. This model allows
listing individual resource instances and, compared to the traditional device plugin API, adds the
ability to select those instances individually via attributes.&lt;/p&gt;
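&lt;p&gt;As a rough illustration of the API surface involved, a workload requests such a third-party resource through a claim. The following is only a sketch, assuming the alpha &lt;code&gt;resource.k8s.io/v1alpha2&lt;/code&gt; API group; the resource class name is a hypothetical placeholder that a real DRA driver would define:&lt;/p&gt;

```yaml
# Hypothetical sketch: a ResourceClaim requesting a device from a
# driver-provided resource class (names are placeholders).
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaim
metadata:
  name: example-gpu-claim
spec:
  resourceClassName: example.com-gpu
```

&lt;p&gt;With structured parameters, a component like the scheduler can reason about such a claim directly, instead of deferring every allocation decision to the driver.&lt;/p&gt;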
&lt;h3 id=&#34;node-memory-swap-support-kep-2400-https-kep-k8s-io-2400&#34;&gt;Node memory swap support (&lt;a href=&#34;https://kep.k8s.io/2400&#34;&gt;KEP-2400&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;In Kubernetes v1.30, memory swap support on Linux nodes gets a big change to how it works, with a
strong emphasis on improving system stability. In previous Kubernetes versions, the &lt;code&gt;NodeSwap&lt;/code&gt;
feature gate was disabled by default, and when enabled, it used &lt;code&gt;UnlimitedSwap&lt;/code&gt; behavior as the
default behavior. To achieve better stability, &lt;code&gt;UnlimitedSwap&lt;/code&gt; behavior (which might compromise node
stability) will be removed in v1.30.&lt;/p&gt;
&lt;p&gt;The updated, still-beta support for swap on Linux nodes will be available by default. However, the
default behavior will be to run the node set to &lt;code&gt;NoSwap&lt;/code&gt; (not &lt;code&gt;UnlimitedSwap&lt;/code&gt;) mode. In &lt;code&gt;NoSwap&lt;/code&gt;
mode, the kubelet supports running on a node where swap space is active, but Pods don&#39;t use any of
the page file. You&#39;ll still need to set &lt;code&gt;--fail-swap-on=false&lt;/code&gt; for the kubelet to run on that node.
However, the big change is the other mode: &lt;code&gt;LimitedSwap&lt;/code&gt;. In this mode, the kubelet actually uses
the page file on that node and allows Pods to have some of their virtual memory paged out.
Containers (and their parent pods) do not have access to swap beyond their memory limit, but the
system can still use the swap space if available.&lt;/p&gt;
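&lt;p&gt;As a sketch, the swap behavior described above is selected in the kubelet configuration file; a minimal example, assuming the &lt;code&gt;KubeletConfiguration&lt;/code&gt; fields used by the beta implementation:&lt;/p&gt;

```yaml
# Minimal kubelet configuration sketch opting in to LimitedSwap.
# failSwapOn must be false for the kubelet to start on a node with swap active
# (equivalent to passing --fail-swap-on=false on the command line).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap
```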
&lt;p&gt;Kubernetes&#39; Node special interest group (SIG Node) will also update the documentation to help you
understand how to use the revised implementation, based on feedback from end users, contributors,
and the wider Kubernetes community.&lt;/p&gt;
&lt;p&gt;Read the previous &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/24/swap-linux-beta/&#34;&gt;blog post&lt;/a&gt; or the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/nodes/#swap-memory&#34;&gt;node swap
documentation&lt;/a&gt; for more details on
Linux node swap support in Kubernetes.&lt;/p&gt;
&lt;h3 id=&#34;support-user-namespaces-in-pods-kep-127-https-kep-k8s-io-127&#34;&gt;Support user namespaces in pods (&lt;a href=&#34;https://kep.k8s.io/127&#34;&gt;KEP-127&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/user-namespaces&#34;&gt;User namespaces&lt;/a&gt; is a Linux-only feature that better
isolates pods to prevent or mitigate several CVEs rated high/critical, including
&lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE-2024-21626&lt;/a&gt;,
published in January 2024. In Kubernetes 1.30, support for user namespaces is migrating to beta and
now supports pods with and without volumes, custom UID/GID ranges, and more!&lt;/p&gt;
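&lt;p&gt;Opting a pod into a user namespace takes a single field in the pod spec; a minimal sketch (the pod name and image are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  # Run this pod in a new user namespace instead of the host&#39;s.
  hostUsers: false
  containers:
    - name: app
      image: nginx:1.25.3
&lt;/code&gt;&lt;/pre&gt;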
&lt;h3 id=&#34;structured-authorization-configuration-kep-3221-https-kep-k8s-io-3221&#34;&gt;Structured authorization configuration (&lt;a href=&#34;https://kep.k8s.io/3221&#34;&gt;KEP-3221&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Support for &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;structured authorization
configuration&lt;/a&gt;
is moving to beta and will be enabled by default. This feature enables the creation of
authorization chains with multiple webhooks with well-defined parameters that validate requests in a
particular order and allows fine-grained control – such as explicit Deny on failures. The
configuration file approach even allows you to specify &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt; rules
to pre-filter requests before they are dispatched to webhooks, helping you to prevent unnecessary
invocations. The API server also automatically reloads the authorizer chain when the configuration
file is modified.&lt;/p&gt;
&lt;p&gt;You must specify the path to that authorization configuration using the &lt;code&gt;--authorization-config&lt;/code&gt;
command line argument. If you want to keep using command line flags instead of a
configuration file, those will continue to work as-is. To gain access to new authorization webhook
capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options
in an &lt;code&gt;--authorization-config&lt;/code&gt; file. From Kubernetes 1.30, the configuration file format is
beta-level, and only requires specifying &lt;code&gt;--authorization-config&lt;/code&gt; since the feature gate is enabled by
default. An example configuration with all possible values, along with further details, is provided in the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;Authorization
docs&lt;/a&gt;.&lt;/p&gt;
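&lt;p&gt;As a hedged sketch (the webhook name, kubeconfig path, and CEL expression below are illustrative), an &lt;code&gt;--authorization-config&lt;/code&gt; file that chains a webhook with an explicit failure policy and a pre-filter rule ahead of the built-in authorizers might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: example-policy # illustrative name
    webhook:
      timeout: 3s
      authorizedTTL: 30s
      unauthorizedTTL: 30s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      # Treat webhook failures as an explicit Deny.
      failurePolicy: Deny
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig # illustrative path
      matchConditions:
        # CEL pre-filter: skip the webhook for kube-system service accounts.
        - expression: &#34;!(&#39;system:serviceaccounts:kube-system&#39; in request.groups)&#34;
  - type: Node
    name: node
  - type: RBAC
    name: rbac
&lt;/code&gt;&lt;/pre&gt;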
&lt;h3 id=&#34;container-resource-based-pod-autoscaling-kep-1610-https-kep-k8s-io-1610&#34;&gt;Container resource based pod autoscaling (&lt;a href=&#34;https://kep.k8s.io/1610&#34;&gt;KEP-1610&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Horizontal pod autoscaling based on &lt;code&gt;ContainerResource&lt;/code&gt; metrics will graduate to stable in v1.30.
This new behavior for HorizontalPodAutoscaler allows you to configure automatic scaling based on the
resource usage for individual containers, rather than the aggregate resource use over a Pod. See our
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/05/02/hpa-container-resource-metric/&#34;&gt;previous article&lt;/a&gt; for further details, or read
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics&#34;&gt;container resource metrics&lt;/a&gt;.&lt;/p&gt;
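&lt;p&gt;For illustration (the workload and container names are placeholders), a HorizontalPodAutoscaler that scales on the CPU usage of one container rather than the Pod as a whole:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app # illustrative target
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        # Only this container&#39;s usage drives scaling;
        # other containers (such as sidecars) in the Pod are ignored.
        container: application
        target:
          type: Utilization
          averageUtilization: 60
&lt;/code&gt;&lt;/pre&gt;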
&lt;h3 id=&#34;cel-for-admission-control-kep-3488-https-kep-k8s-io-3488&#34;&gt;CEL for admission control (&lt;a href=&#34;https://kep.k8s.io/3488&#34;&gt;KEP-3488&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Integrating Common Expression Language (CEL) for admission control in Kubernetes introduces a more
dynamic and expressive way of evaluating admission requests. This feature allows complex,
fine-grained policies to be defined and enforced directly through the Kubernetes API, enhancing
security and governance capabilities without compromising performance or flexibility.&lt;/p&gt;
&lt;p&gt;CEL&#39;s addition to Kubernetes admission control empowers cluster administrators to craft intricate
rules that can evaluate the content of API requests against the desired state and policies of the
cluster without resorting to Webhook-based access controllers. This level of control is crucial for
maintaining the integrity, security, and efficiency of cluster operations, making Kubernetes
environments more robust and adaptable to various use cases and requirements. For more information
on using CEL for admission control, see the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/access-authn-authz/validating-admission-policy/&#34;&gt;API
documentation&lt;/a&gt; for
ValidatingAdmissionPolicy.&lt;/p&gt;
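&lt;p&gt;As a brief sketch (the policy name and rule are illustrative), a ValidatingAdmissionPolicy expresses a CEL rule directly through the Kubernetes API, with no webhook involved:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [&#34;apps&#34;]
        apiVersions: [&#34;v1&#34;]
        operations: [&#34;CREATE&#34;, &#34;UPDATE&#34;]
        resources: [&#34;deployments&#34;]
  validations:
    # Reject Deployments that request more than 5 replicas.
    - expression: &#34;object.spec.replicas &lt;= 5&#34;
      message: &#34;Deployments may not exceed 5 replicas.&#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A matching ValidatingAdmissionPolicyBinding is still needed to put the policy into effect for a set of resources.&lt;/p&gt;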
&lt;p&gt;We hope you&#39;re as excited for this release as we are. Keep an eye out for the official release
blog in a few weeks for more highlights!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>CRI-O: Applying seccomp profiles from OCI registries</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/07/cri-o-seccomp-oci-artifacts/</link>
      <pubDate>Thu, 07 Mar 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/07/cri-o-seccomp-oci-artifacts/</guid>
      <description>
        
        
        &lt;p&gt;Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a
node to your Pods and containers.&lt;/p&gt;
&lt;p&gt;But distributing those seccomp profiles is a major challenge in Kubernetes,
because the JSON files have to be available on all nodes where a workload can
possibly run. Projects like the &lt;a href=&#34;https://sigs.k8s.io/security-profiles-operator&#34;&gt;Security Profiles
Operator&lt;/a&gt; solve that problem by
running as a daemon within the cluster, which makes me wonder which part of that
distribution could be done by the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/container-runtimes&#34;&gt;container
runtime&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Runtimes usually apply the profiles from a local path, for example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;nginx:1.25.3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;seccompProfile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Localhost&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;localhostProfile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;nginx-1.25.3.json&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The profile &lt;code&gt;nginx-1.25.3.json&lt;/code&gt; has to be available in the &lt;code&gt;seccomp&lt;/code&gt;
subdirectory of the kubelet&#39;s root directory. This means the default location
for the profile on disk would be &lt;code&gt;/var/lib/kubelet/seccomp/nginx-1.25.3.json&lt;/code&gt;.
If the profile is not available, then runtimes will fail on container creation
like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;NAME   READY   STATUS                 RESTARTS   AGE
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod    0/1     CreateContainerError   0          38s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl describe pod/pod | tail
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Events:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Type     Reason     Age                 From               Message
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  ----     ------     ----                ----               -------
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Scheduled  117s                default-scheduler  Successfully assigned default/pod to 127.0.0.1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Pulling    117s                kubelet            Pulling image &amp;#34;nginx:1.25.3&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Pulled     111s                kubelet            Successfully pulled image &amp;#34;nginx:1.25.3&amp;#34; in 5.948s (5.948s including waiting)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Warning  Failed     7s (x10 over 111s)  kubelet            Error: setup seccomp: unable to load local profile &amp;#34;/var/lib/kubelet/seccomp/nginx-1.25.3.json&amp;#34;: open /var/lib/kubelet/seccomp/nginx-1.25.3.json: no such file or directory
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Pulled     7s (x9 over 111s)   kubelet            Container image &amp;#34;nginx:1.25.3&amp;#34; already present on machine
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The burden of manually distributing &lt;code&gt;Localhost&lt;/code&gt; profiles leads many
end users to fall back to &lt;code&gt;RuntimeDefault&lt;/code&gt; or even to running their
workloads as &lt;code&gt;Unconfined&lt;/code&gt; (with seccomp disabled).&lt;/p&gt;
&lt;h2 id=&#34;cri-o-to-the-rescue&#34;&gt;CRI-O to the rescue&lt;/h2&gt;
&lt;p&gt;The Kubernetes container runtime &lt;a href=&#34;https://github.com/cri-o/cri-o&#34;&gt;CRI-O&lt;/a&gt;
provides various features using custom annotations. The v1.30 release
&lt;a href=&#34;https://github.com/cri-o/cri-o/pull/7719&#34;&gt;adds&lt;/a&gt; support for a new set of
annotations called &lt;code&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/code&gt; and
&lt;code&gt;seccomp-profile.kubernetes.cri-o.io/&amp;lt;CONTAINER&amp;gt;&lt;/code&gt;. Those annotations allow you
to specify:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a seccomp profile for a specific container, when used as:
&lt;code&gt;seccomp-profile.kubernetes.cri-o.io/&amp;lt;CONTAINER&amp;gt;&lt;/code&gt; (example:
&lt;code&gt;seccomp-profile.kubernetes.cri-o.io/webserver: &#39;registry.example/example/webserver:v1&#39;&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;a seccomp profile for every container within a pod, when used without the
container name suffix but the reserved name &lt;code&gt;POD&lt;/code&gt;:
&lt;code&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;a seccomp profile for a whole container image, if the image itself contains
the annotation &lt;code&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/code&gt; or
&lt;code&gt;seccomp-profile.kubernetes.cri-o.io/&amp;lt;CONTAINER&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;CRI-O only respects the annotation if the runtime is configured to allow it,
and only for workloads running as &lt;code&gt;Unconfined&lt;/code&gt;. All other workloads still
use the value from the &lt;code&gt;securityContext&lt;/code&gt;, which has higher priority.&lt;/p&gt;
&lt;p&gt;The annotations alone will not help much with the distribution of the profiles,
but the way they can be referenced will! For example, you can now specify
seccomp profiles like regular container images by using OCI artifacts:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/seccomp:v2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;…&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The image &lt;code&gt;quay.io/crio/seccomp:v2&lt;/code&gt; ships with a &lt;code&gt;seccomp.json&lt;/code&gt; file, which
contains the actual profile content. Tools like &lt;a href=&#34;https://oras.land&#34;&gt;ORAS&lt;/a&gt; or
&lt;a href=&#34;https://github.com/containers/skopeo&#34;&gt;Skopeo&lt;/a&gt; can be used to inspect the
contents of the image:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;oras pull quay.io/crio/seccomp:v2
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Downloading 92d8ebfa89aa seccomp.json
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Downloaded  92d8ebfa89aa seccomp.json
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Pulled [registry] quay.io/crio/seccomp:v2
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Digest: sha256:f0205dac8a24394d9ddf4e48c7ac201ca7dcfea4c554f7ca27777a7f8c43ec1b
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;jq . seccomp.json | head
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultAction&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ACT_ERRNO&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultErrnoRet&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;38&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultErrno&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;ENOSYS&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;archMap&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;architecture&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86_64&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;subArchitectures&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X32&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Inspect the plain manifest of the image&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq .
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;schemaVersion&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;2&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;mediaType&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;application/vnd.oci.image.manifest.v1+json&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;config&amp;#34;&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;mediaType&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;application/vnd.cncf.seccomp-profile.config.v1+json&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;digest&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sha256:ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;size&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;3&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;},&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;layers&amp;#34;&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;mediaType&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;application/vnd.oci.image.layer.v1.tar&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;digest&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sha256:92d8ebfa89aa6dd752c6443c27e412df1b568d62b4af129494d7364802b2d476&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;size&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;18853&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;annotations&amp;#34;: { &amp;#34;org.opencontainers.image.title&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;seccomp.json&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;},&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;},&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;],&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;annotations&amp;#34;: { &amp;#34;org.opencontainers.image.created&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2024-02-26T09:03:30Z&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;},&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;}&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The image manifest contains a reference to a specific required config media type
(&lt;code&gt;application/vnd.cncf.seccomp-profile.config.v1+json&lt;/code&gt;) and a single layer
(&lt;code&gt;application/vnd.oci.image.layer.v1.tar&lt;/code&gt;) pointing to the &lt;code&gt;seccomp.json&lt;/code&gt; file.
But now, let&#39;s give that new feature a try!&lt;/p&gt;
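&lt;p&gt;If you would like to verify the manifest contents yourself, then it can be retrieved
directly from the registry, for example by using &lt;code&gt;skopeo&lt;/code&gt; and pretty-printing
the raw JSON with &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq .
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;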
&lt;h3 id=&#34;using-the-annotation-for-a-specific-container-or-whole-pod&#34;&gt;Using the annotation for a specific container or whole pod&lt;/h3&gt;
&lt;p&gt;CRI-O needs to be configured adequately before it can utilize the annotation. To
do this, add the annotation to the &lt;code&gt;allowed_annotations&lt;/code&gt; array for the runtime.
This can be done by using a drop-in configuration
&lt;code&gt;/etc/crio/crio.conf.d/10-crun.conf&lt;/code&gt; like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-toml&#34; data-lang=&#34;toml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;[crio.runtime]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;default_runtime = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;crun&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;[crio.runtime.runtimes.crun]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;allowed_annotations = [
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;seccomp-profile.kubernetes.cri-o.io&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;]
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Now, let&#39;s run CRI-O from the latest &lt;code&gt;main&lt;/code&gt; commit. This can be done by either
building it from source, using the &lt;a href=&#34;https://github.com/cri-o/packaging?tab=readme-ov-file#using-the-static-binary-bundles-directly&#34;&gt;static binary bundles&lt;/a&gt;
or &lt;a href=&#34;https://github.com/cri-o/packaging?tab=readme-ov-file#usage&#34;&gt;the prerelease packages&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To demonstrate this, I ran the &lt;code&gt;crio&lt;/code&gt; binary from my command line using a single
node Kubernetes cluster via &lt;a href=&#34;https://github.com/cri-o/cri-o?tab=readme-ov-file#running-kubernetes-with-cri-o&#34;&gt;&lt;code&gt;local-up-cluster.sh&lt;/code&gt;&lt;/a&gt;.
Now that the cluster is up and running, let&#39;s try a pod without the annotation
running as seccomp &lt;code&gt;Unconfined&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;cat pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;nginx:1.25.3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;securityContext&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;seccompProfile&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Unconfined&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl apply -f pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The workload is up and running:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;NAME   READY   STATUS    RESTARTS   AGE
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod    1/1     Running   0          15s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And no seccomp profile got applied, which I can verify by inspecting the container using
&lt;a href=&#34;https://sigs.k8s.io/cri-tools&#34;&gt;&lt;code&gt;crictl&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;export&lt;/span&gt; &lt;span style=&#34;color:#b8860b&#34;&gt;CONTAINER_ID&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;$(&lt;/span&gt;sudo crictl ps --name container -q&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;sudo crictl inspect &lt;span style=&#34;color:#b8860b&#34;&gt;$CONTAINER_ID&lt;/span&gt; | jq .info.runtimeSpec.linux.seccomp
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;null
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Now, let&#39;s modify the pod to apply the profile &lt;code&gt;quay.io/crio/seccomp:v2&lt;/code&gt; to the
container:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;seccomp-profile.kubernetes.cri-o.io/container&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/seccomp:v2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;nginx:1.25.3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;I have to delete and recreate the Pod, because only recreation will apply a new
seccomp profile:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl delete pod/pod
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod &amp;#34;pod&amp;#34; deleted
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl apply -f pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod/pod created
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The CRI-O logs will now indicate that the runtime pulled the artifact:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;WARN[…] Allowed annotations are specified for workload [seccomp-profile.kubernetes.cri-o.io]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Found container specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io/container=quay.io/crio/seccomp:v2  id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2  id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Retrieved OCI artifact seccomp profile of len: 18853  id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And the container is finally using the profile:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;export&lt;/span&gt; &lt;span style=&#34;color:#b8860b&#34;&gt;CONTAINER_ID&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;$(&lt;/span&gt;sudo crictl ps --name container -q&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;sudo crictl inspect &lt;span style=&#34;color:#b8860b&#34;&gt;$CONTAINER_ID&lt;/span&gt; | jq .info.runtimeSpec.linux.seccomp | head
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultAction&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ACT_ERRNO&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultErrnoRet&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;38&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;architectures&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86_64&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X32&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;],&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;syscalls&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The same would work for every container in the pod if users replace the
&lt;code&gt;/container&lt;/code&gt; suffix with the reserved name &lt;code&gt;/POD&lt;/code&gt;, for example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/seccomp:v2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;nginx:1.25.3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;using-the-annotation-for-a-container-image&#34;&gt;Using the annotation for a container image&lt;/h3&gt;
&lt;p&gt;While specifying seccomp profiles as OCI artifacts on certain workloads is a
cool feature, the majority of end users would like to link seccomp profiles to
published container images. This can be done by using a container image
annotation; instead of being applied to a Kubernetes Pod, the annotation is
metadata applied to the container image itself. For example,
&lt;a href=&#34;https://podman.io&#34;&gt;Podman&lt;/a&gt; can be used to add the image annotation directly
during image build:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;podman build &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    --annotation seccomp-profile.kubernetes.cri-o.io&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;quay.io/crio/seccomp:v2 &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    -t quay.io/crio/nginx-seccomp:v2 .
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The pushed image then contains the annotation:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2 |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    jq &lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;.annotations.&amp;#34;seccomp-profile.kubernetes.cri-o.io&amp;#34;&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&amp;#34;quay.io/crio/seccomp:v2&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;If I now use that image in a CRI-O test pod definition:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# no Pod annotations set&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/nginx-seccomp:v2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then the CRI-O logs will indicate that the image annotation got evaluated and
the profile got applied:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl delete pod/pod
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod &amp;#34;pod&amp;#34; deleted
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl apply -f pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod/pod created
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Found image specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2  id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2  id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Retrieved OCI artifact seccomp profile of len: 18853  id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;INFO[…] Created container 116a316cd9a11fe861dd04c43b94f45046d1ff37e2ed05a4e4194fcaab29ee63: default/pod/container  id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;export&lt;/span&gt; &lt;span style=&#34;color:#b8860b&#34;&gt;CONTAINER_ID&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;$(&lt;/span&gt;sudo crictl ps --name container -q&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;sudo crictl inspect &lt;span style=&#34;color:#b8860b&#34;&gt;$CONTAINER_ID&lt;/span&gt; | jq .info.runtimeSpec.linux.seccomp | head
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultAction&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ACT_ERRNO&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;defaultErrnoRet&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;38&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;architectures&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86_64&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X86&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;SCMP_ARCH_X32&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;],&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;syscalls&amp;#34;: &lt;/span&gt;[&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;{&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;For container images, the annotation &lt;code&gt;seccomp-profile.kubernetes.cri-o.io&lt;/code&gt; will
be treated in the same way as &lt;code&gt;seccomp-profile.kubernetes.cri-o.io/POD&lt;/code&gt; and
applies to the whole pod. In addition to that, the whole feature also works when
using the container-specific annotation on an image, for example if a container
is named &lt;code&gt;container1&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2-container |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    jq &lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;.annotations.&amp;#34;seccomp-profile.kubernetes.cri-o.io/container1&amp;#34;&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&amp;#34;quay.io/crio/seccomp:v2&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The cool thing about this whole feature is that users can now create seccomp
profiles for specific container images and store them side by side in the same
registry. Linking the images to the profiles provides great flexibility to
maintain them over the whole application&#39;s life cycle.&lt;/p&gt;
&lt;h3 id=&#34;pushing-profiles-using-oras&#34;&gt;Pushing profiles using ORAS&lt;/h3&gt;
&lt;p&gt;The actual creation of the OCI object that contains a seccomp profile requires a
bit more work when using ORAS. I hope that tools like Podman will
simplify the overall process in the future. Right now, the container registry
needs to be &lt;a href=&#34;https://oras.land/docs/compatible_oci_registries/#registries-supporting-oci-artifacts&#34;&gt;OCI compatible&lt;/a&gt;,
which is also the case for &lt;a href=&#34;https://quay.io&#34;&gt;Quay.io&lt;/a&gt;. CRI-O expects the seccomp
profile object to have a container image media type
(&lt;code&gt;application/vnd.cncf.seccomp-profile.config.v1+json&lt;/code&gt;), while ORAS uses
&lt;code&gt;application/vnd.oci.empty.v1+json&lt;/code&gt; by default. To achieve all of that, the
following commands can be executed:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;echo&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;{}&amp;#34;&lt;/span&gt; &amp;gt; config.json
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;oras push &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    --config config.json:application/vnd.cncf.seccomp-profile.config.v1+json &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;     quay.io/crio/seccomp:v2 seccomp.json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The resulting image contains the &lt;code&gt;mediaType&lt;/code&gt; that CRI-O expects. ORAS pushes a
single layer &lt;code&gt;seccomp.json&lt;/code&gt; to the registry. The name of the profile does not
matter much. CRI-O will pick the first layer and check if that can act as a
seccomp profile.&lt;/p&gt;
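&lt;p&gt;For reference, the manifest that results from the push above looks roughly
like the following abbreviated sketch (digests are elided, and the exact layer
&lt;code&gt;mediaType&lt;/code&gt; can differ between ORAS versions); the part CRI-O cares
about is the config &lt;code&gt;mediaType&lt;/code&gt;:&lt;/p&gt;

```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.cncf.seccomp-profile.config.v1+json",
    "digest": "sha256:…",
    "size": 2
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar",
      "digest": "sha256:…",
      "size": 18853,
      "annotations": {
        "org.opencontainers.image.title": "seccomp.json"
      }
    }
  ]
}
```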
&lt;h2 id=&#34;future-work&#34;&gt;Future work&lt;/h2&gt;
&lt;p&gt;CRI-O internally manages OCI artifacts like regular files. This makes it
possible to move them around, remove them when they are no longer used, and
store data other than seccomp profiles. This enables future enhancements in
CRI-O on top of OCI artifacts, but also opens the door to stacking seccomp
profiles by using multiple layers in an OCI artifact. The limitation
that the feature only works for &lt;code&gt;Unconfined&lt;/code&gt; workloads in the v1.30.x releases
is something CRI-O would like to address in future releases. Simplifying the
overall user experience without compromising security seems to be the key to a
successful future for seccomp in container workloads.&lt;/p&gt;
&lt;p&gt;The CRI-O maintainers will be happy to hear any feedback or suggestions on
the new feature! Thank you for reading this blog post; feel free to reach out
to the maintainers via the Kubernetes &lt;a href=&#34;https://kubernetes.slack.com/messages/CAZH62UR1&#34;&gt;Slack channel #crio&lt;/a&gt;
or create an issue in the &lt;a href=&#34;https://github.com/cri-o/cri-o&#34;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Spotlight on SIG Cloud Provider</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</link>
      <pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</guid>
      <description>
        
        
        &lt;p&gt;One of the most popular ways developers use Kubernetes-related services is via cloud providers, but
have you ever wondered how cloud providers can do that? How does this whole process of integration
of Kubernetes to various cloud providers happen? To answer that, let&#39;s put the spotlight on &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md&#34;&gt;SIG
Cloud Provider&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;SIG Cloud Provider works to create seamless integrations between Kubernetes and various cloud
providers. Their mission? Keeping the Kubernetes ecosystem fair and open for all. By setting clear
standards and requirements, they ensure every cloud provider plays nicely with Kubernetes. It is
their responsibility to configure cluster components to enable cloud provider integrations.&lt;/p&gt;
&lt;p&gt;In this blog of the SIG Spotlight series, &lt;a href=&#34;https://twitter.com/arujjval&#34;&gt;Arujjwal Negi&lt;/a&gt; interviews
&lt;a href=&#34;https://github.com/elmiko&#34;&gt;Michael McCune&lt;/a&gt; (Red Hat), also known as &lt;em&gt;elmiko&lt;/em&gt;, co-chair of SIG Cloud
Provider, to give us an insight into the workings of this group.&lt;/p&gt;
&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal&lt;/strong&gt;: Let&#39;s start by getting to know you. Can you give us a small intro about yourself and
how you got into Kubernetes?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael&lt;/strong&gt;: Hi, I’m Michael McCune, most people around the community call me by my handle,
&lt;em&gt;elmiko&lt;/em&gt;. I’ve been a software developer for a long time now (Windows 3.1 was popular when I
started!), and I’ve been involved with open-source software for most of my career. I first got
involved with Kubernetes as a developer of machine learning and data science applications; the team
I was on at the time was creating tutorials and examples to demonstrate the use of technologies like
Apache Spark on Kubernetes. That said, I’ve been interested in distributed systems for many years
and when an opportunity arose to join a team working directly on Kubernetes, I jumped at it!&lt;/p&gt;
&lt;h2 id=&#34;functioning-and-working&#34;&gt;Functioning and working&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal&lt;/strong&gt;: Can you give us an insight into what SIG Cloud Provider does and how it functions?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael&lt;/strong&gt;: SIG Cloud Provider was formed to help ensure that Kubernetes provides a neutral
integration point for all infrastructure providers. Our largest task to date has been the extraction
and migration of in-tree cloud controllers to out-of-tree components. The SIG meets regularly to
discuss progress and upcoming tasks, and also to answer questions and address bugs that
arise. Additionally, we act as a coordination point for cloud provider subprojects such as the cloud
provider framework, specific cloud controller implementations, and the &lt;a href=&#34;https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/&#34;&gt;Konnectivity proxy
project&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; After going through the project
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md&#34;&gt;README&lt;/a&gt;, I
learned that SIG Cloud Provider works with the integration of Kubernetes with cloud providers. How
does this whole process go?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; One of the most common ways to run Kubernetes is by deploying it to a cloud environment
(AWS, Azure, GCP, etc). Frequently, the cloud infrastructures have features that enhance the
performance of Kubernetes, for example, by providing elastic load balancing for Service objects. To
ensure that cloud-specific services can be consistently consumed by Kubernetes, the Kubernetes
community has created cloud controllers to address these integration points. Cloud providers can
create their own controllers either by using the framework maintained by the SIG or by following
the API guides defined in the Kubernetes code and documentation. One thing I would like to point out
is that SIG Cloud Provider does not deal with the lifecycle of nodes in a Kubernetes cluster;
for those types of topics, SIG Cluster Lifecycle and the Cluster API project are more appropriate
venues.&lt;/p&gt;
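&lt;p&gt;To make that integration point concrete: a Service of type
&lt;code&gt;LoadBalancer&lt;/code&gt; is one of the triggers a cloud controller manager acts
on, provisioning a load balancer in the cloud and reporting its address back in
the Service status. An illustrative manifest (names are placeholders):&lt;/p&gt;

```yaml
# When applied on a cloud-integrated cluster, the cloud controller manager
# provisions a load balancer for this Service and fills in
# .status.loadBalancer with its address.
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
```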
&lt;h2 id=&#34;important-subprojects&#34;&gt;Important subprojects&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; There are a lot of subprojects within this SIG. Can you highlight some of the most
important ones and what job they do?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; I think the two most important subprojects today are the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#kubernetes-cloud-provider&#34;&gt;cloud provider
framework&lt;/a&gt;
and the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#cloud-provider-extraction-migration&#34;&gt;extraction/migration
project&lt;/a&gt;. The
cloud provider framework is a common library to help infrastructure integrators build a cloud
controller for their infrastructure. This project is most frequently the starting point for new
people coming to the SIG. The extraction and migration project is the other big subproject and a
large part of why the framework exists. A little history might help explain further: for a long
time, Kubernetes needed some integration with the underlying infrastructure, not
necessarily to add features but to be aware of cloud events like instance termination. The cloud
provider integrations were built into the Kubernetes code tree, and thus the term &amp;quot;in-tree&amp;quot; was
created (check out this &lt;a href=&#34;https://kaslin.rocks/out-of-tree/&#34;&gt;article on the topic&lt;/a&gt; for more
info). The activity of maintaining provider-specific code in the main Kubernetes source tree was
considered undesirable by the community. The community’s decision inspired the creation of the
extraction and migration project to remove the &amp;quot;in-tree&amp;quot; cloud controllers in favor of
&amp;quot;out-of-tree&amp;quot; components.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; What makes [the cloud provider framework] a good place to start? Does it have consistent good beginner work? What
kind?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; I feel that the cloud provider framework is a good place to start as it encodes the
community’s preferred practices for cloud controller managers and, as such, will give a newcomer a
strong understanding of how and what the managers do. Unfortunately, there is not a consistent
stream of beginner work on this component; this is due in part to the mature nature of the framework
and that of the individual providers as well. For folks who are interested in getting more involved,
having some &lt;a href=&#34;https://go.dev/&#34;&gt;Go language&lt;/a&gt; knowledge is good, and an understanding of
how at least one cloud API (e.g., AWS, Azure, GCP) works is also beneficial. In my personal opinion,
being a newcomer to SIG Cloud Provider can be challenging as most of the code around this project
deals directly with specific cloud provider interactions. My best advice to people wanting to do
more work on cloud providers is to grow your familiarity with one or two cloud APIs, then look
for open issues on the controller managers for those clouds, and always communicate with the other
contributors as much as possible.&lt;/p&gt;
&lt;h2 id=&#34;accomplishments&#34;&gt;Accomplishments&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; Can you share some accomplishments of the SIG that you are proud of?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; Since I joined the SIG, more than a year ago, we have made great progress in advancing
the extraction and migration subproject. We have moved from an alpha status on the defining
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/README.md&#34;&gt;KEP&lt;/a&gt; to a beta status and
are inching ever closer to removing the old provider code from the Kubernetes source tree. I&#39;ve been
really proud to see the active engagement from our community members and to see the progress we have
made towards extraction. I have a feeling that, within the next few releases, we will see the final
removal of the in-tree cloud controllers and the completion of the subproject.&lt;/p&gt;
&lt;h2 id=&#34;advice-for-new-contributors&#34;&gt;Advice for new contributors&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; Is there any suggestion or advice for new contributors on how they can start at SIG
Cloud Provider?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; This is a tricky question in my opinion. SIG Cloud Provider is focused on the code
pieces that integrate between Kubernetes and an underlying infrastructure. It is very common, but
not necessary, for members of the SIG to be representing a cloud provider in an official capacity. I
recommend that anyone interested in this part of Kubernetes should come to a SIG meeting to see how
we operate and also to study the cloud provider framework project. We have some interesting ideas
for future work, such as a common testing framework, that will cut across all cloud providers and
will be a great opportunity for anyone looking to expand their Kubernetes involvement.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arujjwal:&lt;/strong&gt; Are there any specific skills you&#39;re looking for that we should highlight? To give you
an example from our own
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md&#34;&gt;SIG ContribEx&lt;/a&gt;:
if you&#39;re an expert in &lt;a href=&#34;https://gohugo.io/&#34;&gt;Hugo&lt;/a&gt;, we can always use some help with k8s.dev!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Michael:&lt;/strong&gt; The SIG is currently working through the final phases of our extraction and migration
process, but we are looking toward the future and starting to plan what will come next. One of the
big topics that the SIG has discussed is testing. Currently, we do not have a generic common set of
tests that can be exercised by each cloud provider to confirm the behaviour of their controller
manager. If you are an expert in Ginkgo and the Kubetest framework, we could probably use your help
in designing and implementing the new tests.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;This is where the conversation ends. I hope this gave you some insights about SIG Cloud Provider&#39;s
aim and working. This is just the tip of the iceberg. To know more and get involved with SIG Cloud
Provider, try attending their meetings
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#meetings&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>A look into the Kubernetes Book Club</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/02/22/k8s-book-club/</link>
      <pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/02/22/k8s-book-club/</guid>
      <description>
        
        
        &lt;p&gt;Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with &lt;a href=&#34;https://www.linkedin.com/in/csantanapr/&#34;&gt;Carlos Santana
(AWS)&lt;/a&gt; to learn a bit more about how he created the
&lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;Kubernetes Book Club&lt;/a&gt;, how it works, and
how anyone can join in to take advantage of a community-based learning experience.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;csantana_k8s_book_club.jpg&#34; alt=&#34;Carlos Santana speaking at KubeCon NA 2023&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Frederico Muñoz (FSM)&lt;/strong&gt;: Hello Carlos, thank you so much for your availability. To start with,
could you tell us a bit about yourself?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Carlos Santana (CS)&lt;/strong&gt;: Of course. My experience in deploying Kubernetes in production six
years ago opened the door for me to join &lt;a href=&#34;https://knative.dev/&#34;&gt;Knative&lt;/a&gt; and then contribute to
Kubernetes through the Release Team. Working on upstream Kubernetes has been one of the best
experiences I&#39;ve had in open-source. Over the past two years, in my role as a Senior Specialist
Solutions Architect at AWS, I have been helping large enterprises build their internal developer
platforms (IDP) on top of Kubernetes. Going forward, my open source contributions are directed
towards &lt;a href=&#34;https://cnoe.io/&#34;&gt;CNOE&lt;/a&gt; and CNCF projects like &lt;a href=&#34;https://github.com/argoproj&#34;&gt;Argo&lt;/a&gt;,
&lt;a href=&#34;https://www.crossplane.io/&#34;&gt;Crossplane&lt;/a&gt;, and &lt;a href=&#34;https://www.cncf.io/projects/backstage/&#34;&gt;Backstage&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;creating-the-book-club&#34;&gt;Creating the Book Club&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: So your path led you to Kubernetes, and at that point what was the motivating factor for
starting the Book Club?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: The idea for the Kubernetes Book Club sprang from a casual suggestion during a
&lt;a href=&#34;https://github.com/vmware-archive/tgik&#34;&gt;TGIK&lt;/a&gt; livestream. For me, it was more than just about
reading a book; it was about creating a learning community. This platform has not only been a source
of knowledge but also a support system, especially during the challenging times of the
pandemic. It&#39;s gratifying to see how this initiative has helped members cope and grow. The first
book &lt;a href=&#34;https://www.oreilly.com/library/view/production-kubernetes/9781492092292/&#34;&gt;Production
Kubernetes&lt;/a&gt;, which we started on March 5th, 2021, took 36
weeks. These days it doesn&#39;t take that long to cover a book; we go through one
or two chapters per week.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Could you describe the way the Kubernetes Book Club works? How do you select the books and how
do you go through them?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: We collectively choose books based on the interests and needs of the group. This practical
approach helps members, especially beginners, grasp complex concepts more easily. We have two weekly
series, one for the EMEA timezone, and I organize the US one. Each organizer works with their co-host
and picks a book on Slack, then sets up a lineup of hosts for a couple of weeks to discuss each
chapter.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: If I’m not mistaken, the Kubernetes Book Club is in its 17th book, which is significant: is
there any secret recipe for keeping things active?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: The secret to keeping the club active and engaging lies in a couple of key factors.&lt;/p&gt;
&lt;p&gt;Firstly, consistency has been crucial. We strive to maintain a regular schedule, only cancelling
meetups for major events like holidays or KubeCon. This regularity helps members stay engaged and
builds a reliable community.&lt;/p&gt;
&lt;p&gt;Secondly, making the sessions interesting and interactive has been vital. For instance, I often
introduce pop-up quizzes during the meetups, which not only test members&#39; understanding but also
add an element of fun. This approach keeps the content relatable and helps members understand how
theoretical concepts are applied in real-world scenarios.&lt;/p&gt;
&lt;h2 id=&#34;topics-covered-in-the-book-club&#34;&gt;Topics covered in the Book Club&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: The main topics of the books have been Kubernetes, GitOps, Security, SRE, and
Observability: is this a reflection of the cloud native landscape, especially in terms of
popularity?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: Our journey began with &#39;Production Kubernetes&#39;, setting the tone for our focus on practical,
production-ready solutions. Since then, we&#39;ve delved into various aspects of the CNCF landscape,
aligning each book with a different theme. Each theme, whether it be Security, Observability, or
Service Mesh, is chosen based on its relevance and demand within the community. For instance, in our
recent themes on Kubernetes Certifications, we brought the book authors into our fold as active
hosts, enriching our discussions with their expertise.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: I know that the project had recent changes, namely being integrated into the CNCF as a
&lt;a href=&#34;https://community.cncf.io/&#34;&gt;Cloud Native Community Group&lt;/a&gt;. Could you talk a bit about this change?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: The CNCF graciously accepted the book club as a Cloud Native Community Group. This is a
significant development that has streamlined our operations and expanded our reach. This alignment
has been instrumental in enhancing our administrative capabilities, similar to those used by
Kubernetes Community Days (KCD) meetups. Now, we have a more robust structure for memberships, event
scheduling, mailing lists, hosting web conferences, and recording sessions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: How has your involvement with the CNCF impacted the growth and engagement of the Kubernetes
Book Club over the past six months?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: Since becoming part of the CNCF community six months ago, we&#39;ve witnessed significant
quantitative changes within the Kubernetes Book Club. Our membership has surged to over 600 members,
and we&#39;ve successfully organized and conducted more than 40 events during this period. What&#39;s even
more promising is the consistent turnout, with an average of 30 attendees per event. This growth and
engagement are clear indicators of the positive influence of our CNCF affiliation on the Kubernetes
Book Club&#39;s reach and impact in the community.&lt;/p&gt;
&lt;h2 id=&#34;joining-the-book-club&#34;&gt;Joining the Book Club&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: For anyone wanting to join, what should they do?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: There are three steps to join:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First, join the &lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;Kubernetes Book Club Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Then RSVP to the
&lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;events&lt;/a&gt;
on the community page&lt;/li&gt;
&lt;li&gt;Lastly, join the CNCF Slack channel
&lt;a href=&#34;https://cloud-native.slack.com/archives/C05EYA14P37&#34;&gt;#kubernetes-book-club&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;: Excellent, thank you! Any final comments you would like to share?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;: The Kubernetes Book Club is more than just a group of professionals discussing books; it&#39;s a
vibrant community, with amazing volunteers who help organize and host it:
&lt;a href=&#34;https://www.linkedin.com/in/neependra/&#34;&gt;Neependra Khare&lt;/a&gt;,
&lt;a href=&#34;https://www.linkedin.com/in/ericsmalling/&#34;&gt;Eric Smalling&lt;/a&gt;,
&lt;a href=&#34;https://www.linkedin.com/in/sevikarakulak/&#34;&gt;Sevi Karakulak&lt;/a&gt;,
&lt;a href=&#34;https://www.linkedin.com/in/chadmcrowell/&#34;&gt;Chad M. Crowell&lt;/a&gt;,
and &lt;a href=&#34;https://www.linkedin.com/in/walidshaari/&#34;&gt;Walid (CNJ) Shaari&lt;/a&gt;.
Look us up at KubeCon and get your Kubernetes Book Club sticker!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Image Filesystem: Configuring Kubernetes to store containers on a separate filesystem</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/01/23/kubernetes-separate-image-filesystem/</link>
      <pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/01/23/kubernetes-separate-image-filesystem/</guid>
      <description>
        
        
&lt;p&gt;A common issue when operating Kubernetes clusters is running out of disk space.
When the node is provisioned, you should aim to have a good amount of storage space for your container images and running containers.
The &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/container-runtimes/&#34;&gt;container runtime&lt;/a&gt; usually writes to &lt;code&gt;/var&lt;/code&gt;.
This can be located as a separate partition or on the root filesystem.
CRI-O, by default, writes its containers and images to &lt;code&gt;/var/lib/containers&lt;/code&gt;, while containerd writes its containers and images to &lt;code&gt;/var/lib/containerd&lt;/code&gt;.&lt;/p&gt;
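&lt;p&gt;A quick, illustrative way to see which mounted filesystem currently backs the
runtime&#39;s storage (using the default paths mentioned above; adjust the path for
your runtime):&lt;/p&gt;

```shell
# Show the filesystem and free space behind the container storage path.
# /var/lib/containerd is the containerd default; CRI-O uses /var/lib/containers.
df -h /var/lib/containerd 2>/dev/null || df -h /var
```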
&lt;p&gt;In this blog post, we want to bring attention to ways that you can configure your container runtime to store its content separately from the default partition.&lt;br&gt;
This allows for more flexibility in configuring Kubernetes and provides support for adding a larger disk for the container storage while keeping the default filesystem untouched.&lt;/p&gt;
&lt;p&gt;One area that needs more explanation is what Kubernetes writes to disk, and where.&lt;/p&gt;
&lt;h2 id=&#34;understanding-kubernetes-disk-usage&#34;&gt;Understanding Kubernetes disk usage&lt;/h2&gt;
&lt;p&gt;Kubernetes has persistent data and ephemeral data. The base path for the kubelet and local
Kubernetes-specific storage is configurable, but it is usually assumed to be &lt;code&gt;/var/lib/kubelet&lt;/code&gt;.
In the Kubernetes docs, this is sometimes referred to as the root or node filesystem. The bulk of this data can be categorized into:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ephemeral storage&lt;/li&gt;
&lt;li&gt;logs&lt;/li&gt;
&lt;li&gt;and container runtime&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is different from most POSIX systems as the root/node filesystem is not &lt;code&gt;/&lt;/code&gt; but the disk that &lt;code&gt;/var/lib/kubelet&lt;/code&gt; is on.&lt;/p&gt;
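&lt;p&gt;If you want to check which disk a node treats as its node filesystem, a quick sketch (assuming the default &lt;code&gt;/var/lib/kubelet&lt;/code&gt; path) is:&lt;/p&gt;

```shell
# Print the device and mount point that back the kubelet root directory.
# /var/lib/kubelet is the usual default; substitute your --root-dir if changed.
df -P /var/lib/kubelet | awk 'NR==2 {print $1, $6}'
```

&lt;p&gt;If the printed mount point is not &lt;code&gt;/&lt;/code&gt;, the kubelet&#39;s storage already lives on a separate partition.&lt;/p&gt;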
&lt;h3 id=&#34;ephemeral-storage&#34;&gt;Ephemeral storage&lt;/h3&gt;
&lt;p&gt;Pods and containers can require temporary or transient local storage for their operation.
The lifetime of the ephemeral storage does not extend beyond the life of the individual pod, and the ephemeral storage cannot be shared across pods.&lt;/p&gt;
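&lt;p&gt;As a minimal sketch (the pod and volume names below are illustrative), a pod can request ephemeral scratch space through an &lt;code&gt;emptyDir&lt;/code&gt; volume and cap its usage with &lt;code&gt;sizeLimit&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo        # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 500Mi      # the pod is evicted if usage exceeds this limit
```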
&lt;h3 id=&#34;logs&#34;&gt;Logs&lt;/h3&gt;
&lt;p&gt;By default, Kubernetes stores the logs of each running container, as files within &lt;code&gt;/var/log&lt;/code&gt;.
These logs are ephemeral and are monitored by the kubelet to make sure that they do not grow too large while the pods are running.&lt;/p&gt;
&lt;p&gt;You can customize the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/cluster-administration/logging/#log-rotation&#34;&gt;log rotation&lt;/a&gt; settings
for each node to manage the size of these logs, and configure log shipping (using a 3rd party solution)
to avoid relying on the node-local storage.&lt;/p&gt;
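&lt;p&gt;For example, the following kubelet configuration fragment adjusts the rotation thresholds (the values shown are illustrative, not recommendations):&lt;/p&gt;

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches this size
containerLogMaxSize: 50Mi
# Keep at most this many log files per container
containerLogMaxFiles: 3
```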
&lt;h3 id=&#34;container-runtime&#34;&gt;Container runtime&lt;/h3&gt;
&lt;p&gt;The container runtime has two different areas of storage for containers and images.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;read-only layer: Images are usually referred to as the read-only layer, as they are not modified while containers are running.
The read-only layer can consist of multiple image layers that are combined into a single view.
A thin layer on top of the image provides ephemeral storage for a container whenever that container writes to its filesystem.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;writeable layer: Depending on your container runtime, local writes might be
implemented as a layered write mechanism (for example, &lt;code&gt;overlayfs&lt;/code&gt; on Linux or CimFS on Windows).
This is referred to as the writable layer.
Local writes could also use a writeable filesystem that is initialized with a full clone of the container
image; this is used for some runtimes based on hypervisor virtualisation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The container runtime filesystem contains both the read-only layer and the writeable layer.
This is considered the &lt;code&gt;imagefs&lt;/code&gt; in Kubernetes documentation.&lt;/p&gt;
&lt;h2 id=&#34;container-runtime-configurations&#34;&gt;Container runtime configurations&lt;/h2&gt;
&lt;h3 id=&#34;cri-o&#34;&gt;CRI-O&lt;/h3&gt;
&lt;p&gt;CRI-O uses a storage configuration file in TOML format that lets you control how the container runtime stores persistent and temporary data.
CRI-O utilizes the &lt;a href=&#34;https://github.com/containers/storage&#34;&gt;storage library&lt;/a&gt;.&lt;br&gt;
Some Linux distributions have a manual entry for storage (&lt;code&gt;man 5 containers-storage.conf&lt;/code&gt;).
The main configuration for storage is located in &lt;code&gt;/etc/containers/storage.conf&lt;/code&gt; and one can control the location for temporary data and the root directory.&lt;br&gt;
The root directory is where CRI-O stores the persistent data.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-toml&#34; data-lang=&#34;toml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;[storage]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Default storage driver&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;driver = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;overlay&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Temporary storage location&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;runroot = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;/var/run/containers/storage&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Primary read/write location of container storage &lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;graphroot = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;/var/lib/containers/storage&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;&lt;code&gt;graphroot&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Persistent data stored from the container runtime&lt;/li&gt;
&lt;li&gt;If SELinux is enabled, this directory must carry the same SELinux label as &lt;code&gt;/var/lib/containers/storage&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;runroot&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Temporary read/write access for containers&lt;/li&gt;
&lt;li&gt;Recommended to have this on a temporary filesystem&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here is a quick way to relabel your graphroot directory to match &lt;code&gt;/var/lib/containers/storage&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;semanage fcontext -a -e /var/lib/containers/storage &amp;lt;YOUR-STORAGE-PATH&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;restorecon -R -v &amp;lt;YOUR-STORAGE-PATH&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;containerd&#34;&gt;containerd&lt;/h3&gt;
&lt;p&gt;The containerd runtime uses a TOML configuration file to control where persistent and ephemeral data is stored.
The default path for the config file is located at &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The relevant fields for containerd storage are &lt;code&gt;root&lt;/code&gt; and &lt;code&gt;state&lt;/code&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;root&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;The root directory for containerd metadata&lt;/li&gt;
&lt;li&gt;Default is &lt;code&gt;/var/lib/containerd&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The root directory also requires appropriate SELinux labels if your OS enforces SELinux&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;state&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Temporary data for containerd&lt;/li&gt;
&lt;li&gt;Default is &lt;code&gt;/run/containerd&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
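&lt;p&gt;For example, to place containerd&#39;s persistent data on a dedicated disk while keeping ephemeral state under &lt;code&gt;/run&lt;/code&gt;, the relevant part of &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt; might look like this (the &lt;code&gt;/mnt/containerd&lt;/code&gt; mount point is illustrative):&lt;/p&gt;

```toml
# Persistent data: images and container metadata (point at the dedicated disk)
root = "/mnt/containerd"
# Ephemeral state: sockets and runtime state (typically tmpfs-backed)
state = "/run/containerd"
```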
&lt;h2 id=&#34;kubernetes-node-pressure-eviction&#34;&gt;Kubernetes node pressure eviction&lt;/h2&gt;
&lt;p&gt;Kubernetes will automatically detect if the container filesystem is split from the node filesystem.
If you separate these filesystems, Kubernetes monitors both the node filesystem and the container runtime filesystem.
Kubernetes documentation refers to the node filesystem and the container runtime filesystem as nodefs and imagefs.
If either nodefs or the imagefs are running out of disk space, then the overall node is considered to have disk pressure.
Kubernetes will first reclaim space by deleting unused containers and images, and then it will resort to evicting pods.
On a node that has a nodefs and an imagefs, the kubelet will
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/garbage-collection/#containers-images&#34;&gt;garbage collect&lt;/a&gt; unused container images
on imagefs and will remove dead pods and their containers from the nodefs.
If there is only a nodefs, then Kubernetes garbage collection includes dead containers, dead pods and unused images.&lt;/p&gt;
&lt;p&gt;Kubernetes offers further configuration for determining when your disks are considered full.&lt;br&gt;
The eviction manager within the kubelet has some configuration settings that let you control
the relevant thresholds.
For filesystems, the relevant measurements are &lt;code&gt;nodefs.available&lt;/code&gt;, &lt;code&gt;nodefs.inodesFree&lt;/code&gt;, &lt;code&gt;imagefs.available&lt;/code&gt;, and &lt;code&gt;imagefs.inodesFree&lt;/code&gt;.
If there is no dedicated disk for the container runtime, then the imagefs signals are ignored.&lt;/p&gt;
&lt;p&gt;Users can use the existing defaults:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;memory.available&lt;/code&gt; &amp;lt; 100MiB&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nodefs.available&lt;/code&gt; &amp;lt; 10%&lt;/li&gt;
&lt;li&gt;&lt;code&gt;imagefs.available&lt;/code&gt; &amp;lt; 15%&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nodefs.inodesFree&lt;/code&gt; &amp;lt; 5% (Linux nodes)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Kubernetes allows you to set user defined values in &lt;code&gt;EvictionHard&lt;/code&gt; and &lt;code&gt;EvictionSoft&lt;/code&gt; in the kubelet configuration file.&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;EvictionHard&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;defines limits; once these limits are exceeded, pods will be evicted without any grace period.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;EvictionSoft&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;defines limits; once these limits are exceeded, pods will be evicted with a grace period that can be set per signal.&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;If you specify a value for &lt;code&gt;EvictionHard&lt;/code&gt;, it will replace the defaults.&lt;br&gt;
This means it is important to set all signals in your configuration.&lt;/p&gt;
&lt;p&gt;For example, the following kubelet configuration could be used to configure &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds&#34;&gt;eviction signals&lt;/a&gt; and grace period options.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;kubelet.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeletConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;address&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;192.168.0.8&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;20250&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;serializeImagePulls&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionHard&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;100Mi&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;15%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionSoft&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;100Mi&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;15%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionSoftGracePeriod&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;1m30s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionMaxPodGracePeriod&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;60&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;problems&#34;&gt;Problems&lt;/h3&gt;
&lt;p&gt;The Kubernetes project recommends that you either use the default settings for eviction or set all of the eviction fields yourself.
If you omit a signal in your own &lt;code&gt;evictionHard&lt;/code&gt; settings, Kubernetes will not monitor that resource.
One common misconfiguration administrators or users can hit is mounting a new filesystem to &lt;code&gt;/var/lib/containers/storage&lt;/code&gt; or &lt;code&gt;/var/lib/containerd&lt;/code&gt;.
Kubernetes will detect the separate filesystem, so if you have done this, check that &lt;code&gt;imagefs.inodesFree&lt;/code&gt; and &lt;code&gt;imagefs.available&lt;/code&gt; match your needs.&lt;/p&gt;
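&lt;p&gt;One way to confirm whether the kubelet will see a separate &lt;code&gt;imagefs&lt;/code&gt; is to compare the devices backing the kubelet directory and the runtime storage path (the paths below are the common defaults; adjust them for your runtime):&lt;/p&gt;

```shell
# If the two devices differ, Kubernetes treats the runtime storage as a
# separate imagefs and monitors it with the imagefs.* eviction signals.
nodefs_dev="$(df -P /var/lib/kubelet | awk 'NR==2 {print $1}')"
imagefs_dev="$(df -P /var/lib/containerd | awk 'NR==2 {print $1}')"
echo "nodefs device:  ${nodefs_dev}"
echo "imagefs device: ${imagefs_dev}"
```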
&lt;p&gt;Another area of confusion is that ephemeral storage reporting does not change if you define an image
filesystem for your node. The image filesystem (&lt;code&gt;imagefs&lt;/code&gt;) is used to store container image layers; if a
container writes to its own root filesystem, that local write doesn&#39;t count towards the size of the container image. The place where the container runtime stores those local modifications is runtime-defined, but is often
the image filesystem.
If a container in a pod is writing to a filesystem-backed &lt;code&gt;emptyDir&lt;/code&gt; volume, then this uses space from the
&lt;code&gt;nodefs&lt;/code&gt; filesystem.
The kubelet always reports ephemeral storage capacity and allocations based on the filesystem represented
by &lt;code&gt;nodefs&lt;/code&gt;; this can be confusing when ephemeral writes are actually going to the image filesystem.&lt;/p&gt;
&lt;h3 id=&#34;future-work&#34;&gt;Future work&lt;/h3&gt;
&lt;p&gt;To fix the ephemeral storage reporting limitations and provide more configuration options to the container runtime, SIG Node is working on &lt;a href=&#34;http://kep.k8s.io/4191&#34;&gt;KEP-4191&lt;/a&gt;.
In KEP-4191, Kubernetes will detect if the writeable layer is separated from the read-only layer (images).
This would allow us to have all ephemeral storage, including the writeable layer, on the same disk as well as allowing for a separate disk for images.&lt;/p&gt;
&lt;h3 id=&#34;getting-involved&#34;&gt;Getting involved&lt;/h3&gt;
&lt;p&gt;If you would like to get involved, you can
join the &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;Kubernetes Node Special Interest Group&lt;/a&gt; (SIG Node).&lt;/p&gt;
&lt;p&gt;If you would like to share feedback, you can do so on our
&lt;a href=&#34;https://kubernetes.slack.com/archives/C0BP8PW9G&#34;&gt;#sig-node&lt;/a&gt; Slack channel.
If you&#39;re not already part of that Slack workspace, you can visit &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt; for an invitation.&lt;/p&gt;
&lt;p&gt;Special thanks to all the contributors who provided great reviews, shared valuable insights or suggested the topic idea.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Peter Hunt&lt;/li&gt;
&lt;li&gt;Mrunal Patel&lt;/li&gt;
&lt;li&gt;Ryan Phillips&lt;/li&gt;
&lt;li&gt;Gaurav Singh&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Spotlight on SIG Release (Release Team Subproject)</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</link>
      <pubDate>Mon, 15 Jan 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</guid>
      <description>
        
        
&lt;p&gt;Welcome to the Release Special Interest Group (SIG Release), where Kubernetes sharpens its blade
with cutting-edge features and bug fixes in every release cycle. Have you ever wondered how such a big
project as Kubernetes manages its release timeline so efficiently, or what
the internal workings of the Release Team look like? If you&#39;re curious about these questions, or
want to know more and get involved with the work SIG Release does, read on!&lt;/p&gt;
&lt;p&gt;SIG Release plays a crucial role in the development and evolution of Kubernetes.
Its primary responsibility is to manage the release process of new versions of Kubernetes.
It operates on a regular release cycle, &lt;a href=&#34;https://www.kubernetes.dev/resources/release/&#34;&gt;typically every three to four months&lt;/a&gt;.
During this cycle, the Kubernetes Release Team works closely with other SIGs and contributors
to ensure a smooth and well-coordinated release. This includes planning the release schedule, setting deadlines for code freeze and testing
phases, as well as creating release artefacts like binaries, documentation, and release notes.&lt;/p&gt;
&lt;p&gt;Before you read further, it is important to note that there are two subprojects under SIG
Release - &lt;em&gt;Release Engineering&lt;/em&gt; and &lt;em&gt;Release Team&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In this blog post, &lt;a href=&#34;https://twitter.com/nitishfy&#34;&gt;Nitish Kumar&lt;/a&gt; interviews Verónica
López (PlanetScale), Technical Lead of SIG Release, with the spotlight on the Release Team
subproject, how the release process looks like, and ways to get involved.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What is the typical release process for a new version of Kubernetes, from initial planning
to the final release? Are there any specific methodologies and tools that you use to ensure a smooth release?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The release process for a new Kubernetes version is a well-structured and community-driven
effort. There are no specific methodologies or
tools as such that we follow, except a calendar with a series of steps to keep things organised.
The complete release process looks like this:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Release Team Onboarding:&lt;/strong&gt; We start with the formation of a Release Team, which includes
volunteers from the Kubernetes community who will be responsible for managing different
components of the new release. This is typically done before the previous release is about to
wrap up. Once the team is formed, new members are onboarded while the Release Team Lead and
the Branch Manager propose a calendar for the usual deliverables. As an example, you can take a look
at &lt;a href=&#34;https://github.com/kubernetes/sig-release/issues/2307&#34;&gt;the v1.29 team formation issue&lt;/a&gt; created at the SIG Release
repository. For a contributor to become part of the Release Team, they typically go through the
Release Shadow program, but that&#39;s not the only way to get involved with SIG Release.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Beginning Phase:&lt;/strong&gt; In the initial weeks of each release cycle, SIG Release diligently
tracks the progress of new features and enhancements outlined in Kubernetes Enhancement
Proposals (KEPs). While not all of these features are entirely new, they often commence
their journey in the alpha phase, subsequently advancing to the beta stage, and ultimately
attaining the status of stability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Feature Maturation Phase:&lt;/strong&gt; We usually cut a couple of Alpha releases, containing new
features in an experimental state, to gather feedback from the community, followed by a
couple of Beta releases, where features are more stable and the focus is on fixing bugs. Feedback
from users is critical at this stage, to the point where sometimes we need to cut an
additional Beta release to address bugs or other concerns that may arise during this phase. Once
this is cleared, we cut a &lt;em&gt;release candidate&lt;/em&gt; (RC) before the actual release. Throughout
the cycle, efforts are made to update and improve documentation, including release notes
and user guides, a process that, in my opinion, deserves its own post.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stabilisation Phase:&lt;/strong&gt; A few weeks before the new release, we implement a &lt;em&gt;code freeze&lt;/em&gt;, and
no new features are allowed after this point: this allows the focus to shift towards testing
and stabilisation. In parallel to the main release, we keep cutting monthly patches of old,
officially supported versions of Kubernetes, so you could say that the lifecycle of a Kubernetes
version extends for several months afterwards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;sig-release-overview.png&#34; alt=&#34;Release team onboarding; beginning phase; stabilization phase; feature maturation phase&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;How do you handle the balance between stability and introducing new features in each
release? What criteria are used to determine which features make it into a release?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It’s a never-ending mission; however, we think
that the key is in respecting our process and guidelines. Our guidelines are the result of
hours of discussions and feedback from dozens of members of the community who bring a wealth of knowledge and experience to the project. If we
didn’t have strict guidelines, we would keep having the same discussions over and over again,
instead of using our time for more productive topics that need our attention. All the
critical exceptions require consensus from most of the team members, so we can ensure quality.&lt;/p&gt;
&lt;p&gt;The process of deciding what makes it into a release starts way before the Release Team
takes over the workflows. Each individual SIG along with the most experienced contributors
gets to decide whether they’d like to include a feature or change, so the planning and ultimate
approval usually belongs to them. Then, the Release Team makes sure those contributions meet
the requirements of documentation, testing, backwards compatibility, among others, before
officially allowing them in. A similar process happens with cherry-picks for the monthly patch
releases, where we have strict policies about not accepting PRs that would require a full KEP,
or fixes that don’t include all the affected branches.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What are some of the most significant challenges you’ve encountered while developing
and releasing Kubernetes? How have you overcome these challenges?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every release cycle brings its own array of
challenges. It might involve tackling last-minute concerns like newly discovered Common Vulnerabilities and Exposures (CVEs),
resolving bugs within our internal tools, or addressing unexpected regressions caused by
features from previous releases. Another obstacle we often face is that, although our
team is substantial, most of us contribute on a volunteer basis. Sometimes it can feel like
we’re a bit understaffed, however we always manage to get organised and make it work.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;As a new contributor, what should be my ideal path to get involved with SIG Release? In
a community where everyone is busy with their own tasks, how can I find the right set of tasks to contribute effectively to it?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Everyone&#39;s way of getting involved within the Open Source community is different. SIG Release
is a self-sufficient team, meaning that we write our own tools to be able to ship releases. We
collaborate a lot with other SIGs, such as &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-k8s-infra/README.md&#34;&gt;SIG K8s Infra&lt;/a&gt;, but all the tools that we used needs to be
tailor-made for our massive technical needs, while reducing costs. This means that we are
constantly looking for volunteers who’d like to help with different types of projects, beyond “just” cutting a release.&lt;/p&gt;
&lt;p&gt;Our current project requires a mix of skills like &lt;a href=&#34;https://go.dev/&#34;&gt;Go&lt;/a&gt; programming,
understanding Kubernetes internals, Linux packaging, supply chain security, technical
writing, and general open-source project maintenance. This skill set is always evolving as our project grows.&lt;/p&gt;
&lt;p&gt;For an ideal path, this is what we suggest:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Familiarize yourself with the code, including how features are managed, the release calendar, and the overall structure of the Release Team.&lt;/li&gt;
&lt;li&gt;Join the Kubernetes community communication channels, such as &lt;a href=&#34;https://communityinviter.com/apps/kubernetes/community&#34;&gt;Slack&lt;/a&gt; (#sig-release), where we are particularly active.&lt;/li&gt;
&lt;li&gt;Join the &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-release#meetings&#34;&gt;SIG Release weekly meetings&lt;/a&gt;
which are open to all in the community. Participating in these meetings is a great way to learn about ongoing and future projects that
you might find relevant for your skillset and interests.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Remember, every experienced contributor was once in your shoes, and the community is often more than willing to guide and support newcomers.
Don&#39;t hesitate to ask questions, engage in discussions, and take small steps to contribute.
&lt;img src=&#34;sig-release-meetings.png&#34; alt=&#34;sig-release-questions&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What is the Release Shadow Program and how is it different from other shadow programs included in various other SIGs?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Release Shadow Program offers a chance for interested individuals to shadow experienced
members of the Release Team throughout a Kubernetes release cycle. This is a unique chance to see all the hard work that a
Kubernetes release requires across sub-teams. A lot of people think that all we do is cut a release every three months, but that’s just the
tip of the iceberg.&lt;/p&gt;
&lt;p&gt;Our program typically aligns with a specific Kubernetes release cycle, which has a
predictable timeline of approximately three months. While this program doesn’t involve writing new Kubernetes features, it still
requires a high sense of responsibility since the Release Team is the last step between a new release and thousands of contributors, so it’s a
great opportunity to learn a lot about modern software development cycles at an accelerated pace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What are the qualifications that you generally look for in a person to volunteer as a release shadow/release lead for the next Kubernetes release?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While all the roles require some degree of technical ability, some require more hands-on
experience with Go and familiarity with the Kubernetes API while others require people who
are good at communicating technical content in a clear and concise way. It’s important to mention that we value enthusiasm and commitment over
technical expertise from day 1. If you have the right attitude and show us that you enjoy working with Kubernetes and/or release
engineering, even if it’s only through a personal project that you put together in your spare time, the team will make sure to guide
you. Being a self-starter and not being afraid to ask questions can take you a long way in our team.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What will you suggest to someone who has got rejected from being a part of the Release Shadow Program several times?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Keep applying.&lt;/p&gt;
&lt;p&gt;With every release cycle we have seen exponential growth in the number of applicants,
so it gets harder to be selected, which can be discouraging, but please know that getting rejected doesn’t mean you’re not talented. It’s
just practically impossible to accept every applicant; however, here&#39;s an alternative that we suggest:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Start attending our weekly Kubernetes SIG Release meetings to introduce yourself and get familiar with the team and the projects we are working on.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The Release Team is one of the ways to join SIG Release, but we are always looking for more hands to help. Again, in addition to a certain
degree of technical ability, the most sought-after trait that we look for is people we can trust, and that requires time.
&lt;img src=&#34;sig-release-motivation.png&#34; alt=&#34;sig-release-motivation&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Can you discuss any ongoing initiatives or upcoming features that the release team is particularly excited about for Kubernetes v1.28? How do these advancements align with the long-term vision of Kubernetes?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We are excited about finally publishing Kubernetes packages on community infrastructure. It has been something that we have been wanting to do for a few years now, but it’s a project
with many technical implications that must be in place before doing the transition. Once that’s done, we’ll be able to increase our productivity and take control of the entire workflow.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;final-thoughts&#34;&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;Well, this conversation ends here but not the learning. I hope this interview has given you some idea about what SIG Release does and how to
get started in helping out. It is important to mention again that this article covers the first subproject under SIG Release, the Release Team.
In the next Spotlight blog on SIG Release, we will provide a spotlight on the Release Engineering subproject, what it does and how to
get involved. Finally, you can go through the &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-release&#34;&gt;SIG Release charter&lt;/a&gt; to get a more in-depth understanding of how SIG Release operates.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</link>
      <pubDate>Wed, 20 Dec 2023 09:30:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</guid>
      <description>
        
        
        &lt;p&gt;On behalf of the &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md&#34;&gt;Structured Logging Working Group&lt;/a&gt;
and &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme&#34;&gt;SIG Instrumentation&lt;/a&gt;,
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.&lt;/p&gt;
&lt;h2 id=&#34;what-is-contextual-logging&#34;&gt;What is contextual logging?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging&#34;&gt;Contextual logging&lt;/a&gt;
is based on the &lt;a href=&#34;https://github.com/go-logr/logr#a-minimal-logging-api-for-go&#34;&gt;go-logr&lt;/a&gt; API.
The key idea is that libraries are passed a logger instance by their caller
and use that for logging instead of accessing a global logger.
The binary decides the logging implementation, not the libraries.
The go-logr API is designed around structured logging and supports attaching
additional information to a logger.&lt;/p&gt;
&lt;p&gt;This enables additional use cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The caller can attach additional information to a logger:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName&#34;&gt;WithName&lt;/a&gt; adds a &amp;quot;logger&amp;quot; key with the names concatenated by a dot as value&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues&#34;&gt;WithValues&lt;/a&gt; adds key/value pairs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When passing this extended logger into a function, and the function uses it
instead of the global logger, the additional information is then included
in all log entries, without having to modify the code that generates the log entries.
This is useful in highly parallel applications where it can become hard to identify
all log entries for a certain operation, because the output from different operations gets interleaved.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When running unit tests, log output can be associated with the current test.
Then, when a test fails, only the log output of the failed test gets shown by go test.
That output can also be more verbose by default because it will not get shown for successful tests.
Tests can be run in parallel without interleaving their output.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
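&lt;p&gt;To make the pattern concrete, here is a minimal, self-contained Go sketch of the idea (a simplification for illustration only, not the real go-logr API): the caller attaches a name and key/value pairs once, and the library function logs through the logger it is handed rather than through a global.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// Logger is a deliberately tiny stand-in for logr.Logger, just enough
// to show how contextual information accumulates.
type Logger struct {
	name string
	kv   []any
}

// WithName returns a copy whose names are concatenated with a dot,
// mirroring logr.Logger.WithName.
func (l Logger) WithName(n string) Logger {
	if l.name != "" {
		n = l.name + "." + n
	}
	return Logger{name: n, kv: l.kv}
}

// WithValues returns a copy with additional key/value pairs attached,
// mirroring logr.Logger.WithValues.
func (l Logger) WithValues(kv ...any) Logger {
	return Logger{name: l.name, kv: append(append([]any{}, l.kv...), kv...)}
}

// Line renders one log entry; the attached name and key/value pairs
// ride along without the call site mentioning them.
func (l Logger) Line(msg string) string {
	parts := []string{fmt.Sprintf("%q", msg)}
	if l.name != "" {
		parts = append(parts, "logger="+l.name)
	}
	for i := 0; i+1 < len(l.kv); i += 2 {
		parts = append(parts, fmt.Sprintf("%v=%v", l.kv[i], l.kv[i+1]))
	}
	return strings.Join(parts, " ")
}

// bind plays the role of a library function: it logs through the
// logger handed to it by its caller, never through a global.
func bind(logger Logger) string {
	return logger.Line("Attempting to bind pod to node")
}

func main() {
	logger := Logger{}.WithName("Bind").WithName("DefaultBinder").
		WithValues("pod", "kube-system/coredns-69cbfb9798-ms4pq")
	fmt.Println(bind(logger))
}
```

&lt;p&gt;Every entry emitted through this logger automatically carries the &lt;code&gt;logger=&lt;/code&gt; and &lt;code&gt;pod=&lt;/code&gt; fields, which is what makes interleaved output from parallel operations attributable.&lt;/p&gt;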
&lt;p&gt;One of the design decisions for contextual logging was to allow attaching a logger as value to a &lt;code&gt;context.Context&lt;/code&gt;.
Since the logger encapsulates all aspects of the intended logging for the call,
it is &lt;em&gt;part&lt;/em&gt; of the context, and not just &lt;em&gt;using&lt;/em&gt; it. A practical advantage is that many APIs
already have a &lt;code&gt;ctx&lt;/code&gt; parameter or can add one. This provides additional advantages, like being able to
get rid of &lt;code&gt;context.TODO()&lt;/code&gt; calls inside the functions.&lt;/p&gt;
&lt;h2 id=&#34;how-to-use-it&#34;&gt;How to use it&lt;/h2&gt;
&lt;p&gt;The contextual logging feature is alpha starting from Kubernetes v1.24,
so it requires the &lt;code&gt;ContextualLogging&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate&lt;/a&gt; to be enabled.
If you want to test the feature while it is alpha, you need to enable this feature gate
on the &lt;code&gt;kube-controller-manager&lt;/code&gt; and the &lt;code&gt;kube-scheduler&lt;/code&gt;.&lt;/p&gt;
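&lt;p&gt;For example (an illustrative fragment, not a complete command line), the gate is passed through each component&#39;s standard &lt;code&gt;--feature-gates&lt;/code&gt; flag:&lt;/p&gt;

```
kube-controller-manager --feature-gates=ContextualLogging=true ...
kube-scheduler --feature-gates=ContextualLogging=true ...
```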
&lt;p&gt;For the &lt;code&gt;kube-scheduler&lt;/code&gt;, there is one thing to note: in addition to enabling
the &lt;code&gt;ContextualLogging&lt;/code&gt; feature gate, instrumentation also depends on log verbosity.
To avoid slowing down the scheduler with the logging instrumentation for contextual logging added for 1.29,
it is important to choose carefully when to add additional information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;At &lt;code&gt;-v3&lt;/code&gt; or lower, only &lt;code&gt;WithValues(&amp;quot;pod&amp;quot;)&lt;/code&gt; is used once per scheduling cycle.
This has the intended effect that all log messages for the cycle include the pod information.
Once contextual logging is GA, &amp;quot;pod&amp;quot; key/value pairs can be removed from all log calls.&lt;/li&gt;
&lt;li&gt;At &lt;code&gt;-v4&lt;/code&gt; or higher, richer log entries get produced where &lt;code&gt;WithValues&lt;/code&gt; is also used for the node (when applicable)
and &lt;code&gt;WithName&lt;/code&gt; is used for the current operation and plugin.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here is an example that demonstrates the effect:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I1113 08:43:37.029524   87144 default_binder.go:53] &amp;quot;Attempting to bind pod to node&amp;quot; &lt;strong&gt;logger=&amp;quot;Bind.DefaultBinder&amp;quot;&lt;/strong&gt; &lt;strong&gt;pod&lt;/strong&gt;=&amp;quot;kube-system/coredns-69cbfb9798-ms4pq&amp;quot; &lt;strong&gt;node&lt;/strong&gt;=&amp;quot;127.0.0.1&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The immediate benefit is that the operation and plugin name are visible in &lt;code&gt;logger&lt;/code&gt;.
&lt;code&gt;pod&lt;/code&gt; and &lt;code&gt;node&lt;/code&gt; are already logged as parameters in individual log calls in &lt;code&gt;kube-scheduler&lt;/code&gt; code.
Once contextual logging is supported by more packages outside of &lt;code&gt;kube-scheduler&lt;/code&gt;,
they will also be visible there (for example, client-go). Once it is GA,
log calls can be simplified to avoid repeating those values.&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;kube-controller-manager&lt;/code&gt;, &lt;code&gt;WithName&lt;/code&gt; is used to add the user-visible controller name to log output,
for example:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I1113 08:43:29.284360   87141 graph_builder.go:285] &amp;quot;garbage controller monitor not synced: no monitors&amp;quot; &lt;strong&gt;logger=&amp;quot;garbage-collector-controller&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;code&gt;logger=”garbage-collector-controller”&lt;/code&gt; was added by the &lt;code&gt;kube-controller-manager&lt;/code&gt; core
when instantiating that controller and appears in all of its log entries - at least as long as the code
that it calls supports contextual logging. Further work is needed to convert shared packages like client-go.&lt;/p&gt;
&lt;h2 id=&#34;performance-impact&#34;&gt;Performance impact&lt;/h2&gt;
&lt;p&gt;Supporting contextual logging in a package, i.e. accepting a logger from a caller, is cheap.
No performance impact was observed for the &lt;code&gt;kube-scheduler&lt;/code&gt;. As noted above,
adding &lt;code&gt;WithName&lt;/code&gt; and &lt;code&gt;WithValues&lt;/code&gt; needs to be done more carefully.&lt;/p&gt;
&lt;p&gt;In Kubernetes 1.29, enabling contextual logging at production verbosity (&lt;code&gt;-v3&lt;/code&gt; or lower)
caused no measurable slowdown for the &lt;code&gt;kube-scheduler&lt;/code&gt; and is not expected for the &lt;code&gt;kube-controller-manager&lt;/code&gt; either.
At debug levels, a 28% slowdown for some test cases is still reasonable given that the resulting logs make debugging easier.
For details, see the &lt;a href=&#34;https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995&#34;&gt;discussion around promoting the feature to beta&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;impact-on-downstream-users&#34;&gt;Impact on downstream users&lt;/h2&gt;
&lt;p&gt;Log output is not part of the Kubernetes API and changes regularly in each release,
whether it is because developers work on the code or because of the ongoing conversion
to structured and contextual logging.&lt;/p&gt;
&lt;p&gt;If downstream users have dependencies on specific logs,
they need to be aware of how this change affects them.&lt;/p&gt;
&lt;h2 id=&#34;further-reading&#34;&gt;Further reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Read the &lt;a href=&#34;https://www.kubernetes.dev/blog/2022/05/25/contextual-logging/&#34;&gt;Contextual Logging in Kubernetes 1.24&lt;/a&gt; article.&lt;/li&gt;
&lt;li&gt;Read the &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging&#34;&gt;KEP-3077: contextual logging&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;Get involved&lt;/h2&gt;
&lt;p&gt;If you&#39;re interested in getting involved, we always welcome new contributors to join us.
Contextual logging provides a fantastic opportunity for you to contribute to Kubernetes development and make a meaningful impact.
By joining &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-structured-logging&#34;&gt;Structured Logging WG&lt;/a&gt;,
you can actively participate in the development of Kubernetes and make your first contribution.
It&#39;s a great way to learn and engage with the community while gaining valuable experience.&lt;/p&gt;
&lt;p&gt;We encourage you to explore the repository and familiarize yourself with the ongoing discussions and projects.
It&#39;s a collaborative environment where you can exchange ideas, ask questions, and work together with other contributors.&lt;/p&gt;
&lt;p&gt;If you have any questions or need guidance, don&#39;t hesitate to reach out to us
and you can do so on our &lt;a href=&#34;https://kubernetes.slack.com/messages/wg-structured-logging&#34;&gt;public Slack channel&lt;/a&gt;.
If you&#39;re not already part of that Slack workspace, you can visit &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt;
for an invitation.&lt;/p&gt;
&lt;p&gt;We would like to express our gratitude to all the contributors who provided excellent reviews,
shared valuable insights, and assisted in the implementation of this feature (in alphabetical order):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aldo Culquicondor (&lt;a href=&#34;https://github.com/alculquicondor&#34;&gt;alculquicondor&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Andy Goldstein (&lt;a href=&#34;https://github.com/ncdc&#34;&gt;ncdc&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Feruzjon Muyassarov (&lt;a href=&#34;https://github.com/fmuyassarov&#34;&gt;fmuyassarov&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Freddie (&lt;a href=&#34;https://github.com/freddie400&#34;&gt;freddie400&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;JUN YANG (&lt;a href=&#34;https://github.com/yangjunmyfm192085&#34;&gt;yangjunmyfm192085&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kante Yin (&lt;a href=&#34;https://github.com/kerthcet&#34;&gt;kerthcet&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kiki (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Lucas Severo Alve (&lt;a href=&#34;https://github.com/knelasevero&#34;&gt;knelasevero&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Maciej Szulik (&lt;a href=&#34;https://github.com/soltysh&#34;&gt;soltysh&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mengjiao Liu (&lt;a href=&#34;https://github.com/mengjiao-liu&#34;&gt;mengjiao-liu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Naman Lakhwani (&lt;a href=&#34;https://github.com/Namanl2001&#34;&gt;Namanl2001&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Oksana Baranova (&lt;a href=&#34;https://github.com/oxxenix&#34;&gt;oxxenix&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Patrick Ohly (&lt;a href=&#34;https://github.com/pohly&#34;&gt;pohly&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;songxiao-wang87 (&lt;a href=&#34;https://github.com/songxiao-wang87&#34;&gt;songxiao-wang87&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tim Allclai (&lt;a href=&#34;https://github.com/tallclair&#34;&gt;tallclair&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;ZhangYu (&lt;a href=&#34;https://github.com/Octopusjust&#34;&gt;Octopusjust&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ziqi Zhao (&lt;a href=&#34;https://github.com/fatsheep9146&#34;&gt;fatsheep9146&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Zac (&lt;a href=&#34;https://github.com/249043822&#34;&gt;249043822&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: Decoupling taint-manager from node-lifecycle-controller</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</link>
      <pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</guid>
      <description>
        
        
        &lt;p&gt;This blog discusses a new feature in Kubernetes 1.29 to improve the handling of taint-based pod eviction.&lt;/p&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;In Kubernetes 1.29, an improvement has been introduced to enhance the taint-based pod eviction handling on nodes.
This blog discusses the changes made to node-lifecycle-controller
to separate its responsibilities and improve overall code maintainability.&lt;/p&gt;
&lt;h2 id=&#34;summary-of-changes&#34;&gt;Summary of changes&lt;/h2&gt;
&lt;p&gt;node-lifecycle-controller previously combined two independent functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adding a pre-defined set of &lt;code&gt;NoExecute&lt;/code&gt; taints to a Node based on the Node&#39;s condition.&lt;/li&gt;
&lt;li&gt;Performing pod eviction on &lt;code&gt;NoExecute&lt;/code&gt; taint.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With the Kubernetes 1.29 release, the taint-based eviction implementation has been
moved out of node-lifecycle-controller into a separate and independent component called taint-eviction-controller.
This separation aims to disentangle code, enhance code maintainability,
and facilitate future extensions to either component.&lt;/p&gt;
&lt;p&gt;As part of the change, additional metrics were introduced to help you monitor taint-based pod evictions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pod_deletion_duration_seconds&lt;/code&gt; measures the latency between the time when a taint effect
has been activated for the Pod and its deletion via taint-eviction-controller.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pod_deletions_total&lt;/code&gt; reports the total number of Pods deleted by taint-eviction-controller since its start.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;how-to-use-the-new-feature&#34;&gt;How to use the new feature?&lt;/h2&gt;
&lt;p&gt;A new feature gate, &lt;code&gt;SeparateTaintEvictionController&lt;/code&gt;, has been added. The feature is enabled by default as Beta in Kubernetes 1.29.
Please refer to the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gate document&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When this feature is enabled, users can optionally disable taint-based eviction by setting &lt;code&gt;--controllers=-taint-eviction-controller&lt;/code&gt;
in kube-controller-manager.&lt;/p&gt;
&lt;p&gt;To disable the new feature and use the old taint-manager within node-lifecycle-controller, users can set the feature gate &lt;code&gt;SeparateTaintEvictionController=false&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;use-cases&#34;&gt;Use cases&lt;/h2&gt;
&lt;p&gt;This new feature will allow cluster administrators to extend and enhance the default
taint-eviction-controller and even replace the default taint-eviction-controller with a
custom implementation to meet different needs. An example is to better support
stateful workloads that use PersistentVolume on local disks.&lt;/p&gt;
&lt;h2 id=&#34;faq&#34;&gt;FAQ&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Does this feature change the existing behavior of taint-based pod evictions?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;No, the taint-based pod eviction behavior remains unchanged. If the feature gate
&lt;code&gt;SeparateTaintEvictionController&lt;/code&gt; is turned off, the legacy node-lifecycle-controller with taint-manager will continue to be used.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Will enabling/using this feature result in an increase in the time taken by any operations covered by existing SLIs/SLOs?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;No.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Will enabling/using this feature result in an increase in resource usage (CPU, RAM, disk, IO, ...)?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The increase in resource usage by running a separate &lt;code&gt;taint-eviction-controller&lt;/code&gt; will be negligible.&lt;/p&gt;
&lt;h2 id=&#34;learn-more&#34;&gt;Learn more&lt;/h2&gt;
&lt;p&gt;For more details, refer to the &lt;a href=&#34;http://kep.k8s.io/3902&#34;&gt;KEP&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;As with any Kubernetes feature, multiple community members have contributed, from
writing the KEP to implementing the new controller and reviewing the KEP and code. Special thanks to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aldo Culquicondor (@alculquicondor)&lt;/li&gt;
&lt;li&gt;Maciej Szulik (@soltysh)&lt;/li&gt;
&lt;li&gt;Filip Křepinský (@atiratree)&lt;/li&gt;
&lt;li&gt;Han Kang (@logicalhan)&lt;/li&gt;
&lt;li&gt;Wei Huang (@Huang-Wei)&lt;/li&gt;
&lt;li&gt;Sergey Kanzhelev (@SergeyKanzhelev)&lt;/li&gt;
&lt;li&gt;Ravi Gudimetla (@ravisantoshgudimetla)&lt;/li&gt;
&lt;li&gt;Deep Debroy (@ddebroy)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</link>
      <pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</guid>
      <description>
        
        
        &lt;p&gt;With the recent release of Kubernetes 1.29, the &lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions&#34;&gt;condition&lt;/a&gt; is
available by default.
The kubelet manages the value for that condition throughout a Pod&#39;s lifecycle,
in the status field of a Pod. The kubelet will use the &lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
condition to accurately surface the initialization state of a Pod,
from the perspective of Pod sandbox creation and network configuration by a container runtime.&lt;/p&gt;
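&lt;p&gt;As an illustration (a hand-written fragment, not captured from a live cluster), the condition appears alongside the other conditions in a Pod&#39;s &lt;code&gt;status.conditions&lt;/code&gt;, and can be inspected with a command such as &lt;code&gt;kubectl get pod my-pod -o yaml&lt;/code&gt; (where &lt;code&gt;my-pod&lt;/code&gt; is a placeholder name):&lt;/p&gt;

```yaml
# Illustrative Pod status fragment: sandbox creation and network
# setup have completed, so the kubelet set the condition to "True".
status:
  conditions:
  - type: PodReadyToStartContainers
    status: "True"
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
```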
&lt;h2 id=&#34;what-s-the-motivation-for-this-feature&#34;&gt;What&#39;s the motivation for this feature?&lt;/h2&gt;
&lt;p&gt;Cluster administrators did not have a clear and easily accessible way to view the completion of a Pod&#39;s sandbox creation
and initialization. As of 1.28, the &lt;code&gt;Initialized&lt;/code&gt; condition in Pods tracks the execution of init containers.
However, it has limitations in accurately reflecting the completion of sandbox creation and readiness to start containers for all Pods in a cluster.
This distinction is particularly important in multi-tenant clusters where tenants own the Pod specifications, including the set of init containers,
while cluster administrators manage storage plugins, networking plugins, and container runtime handlers.
Therefore, there is a need for an improved mechanism to provide cluster administrators with a clear and
comprehensive view of Pod sandbox creation completion and container readiness.&lt;/p&gt;
&lt;h2 id=&#34;what-s-the-benefit&#34;&gt;What&#39;s the benefit?&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Improved Visibility: Cluster administrators gain a clearer and more comprehensive view of Pod sandbox
creation completion and container readiness.
This enhanced visibility allows them to make better-informed decisions and troubleshoot issues more effectively.&lt;/li&gt;
&lt;li&gt;Metric Collection and Monitoring: Monitoring services can leverage the fields associated with
the &lt;code&gt;PodReadyToStartContainers&lt;/code&gt; condition to report sandbox creation state and latency.
Metrics can be collected at per-Pod cardinality or aggregated based on various
properties of the Pod, such as &lt;code&gt;volumes&lt;/code&gt;, &lt;code&gt;runtimeClassName&lt;/code&gt;, custom annotations for CNI
and IPAM plugins or arbitrary labels and annotations, and &lt;code&gt;storageClassName&lt;/code&gt; of
PersistentVolumeClaims.
This enables comprehensive monitoring and analysis of Pod readiness across the cluster.&lt;/li&gt;
&lt;li&gt;Enhanced Troubleshooting: With a more accurate representation of Pod sandbox creation and container readiness,
cluster administrators can quickly identify and address any issues that may arise during the initialization process.
This leads to improved troubleshooting capabilities and reduced downtime.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;what-s-next&#34;&gt;What’s next?&lt;/h3&gt;
&lt;p&gt;Due to feedback and adoption, the Kubernetes team promoted &lt;code&gt;PodReadyToStartContainersCondition&lt;/code&gt; to Beta in 1.29.
Your comments will help determine if this condition continues forward to get promoted to GA,
so please submit additional feedback on this feature!&lt;/p&gt;
&lt;h3 id=&#34;how-can-i-learn-more&#34;&gt;How can I learn more?&lt;/h3&gt;
&lt;p&gt;Please check out the
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/workloads/pods/pod-lifecycle/&#34;&gt;documentation&lt;/a&gt; for the
&lt;code&gt;PodReadyToStartContainersCondition&lt;/code&gt; to learn more about it and how it fits in relation to
other Pod conditions.&lt;/p&gt;
&lt;h3 id=&#34;how-to-get-involved&#34;&gt;How to get involved?&lt;/h3&gt;
&lt;p&gt;This feature is driven by the SIG Node community. Please join us to connect with
the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</link>
      <pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</guid>
      <description>
        
        
        &lt;p&gt;This blog introduces a new alpha feature in Kubernetes 1.29.
It provides a configurable approach to define how Service implementations,
exemplified in this blog by kube-proxy,
handle traffic from pods to the Service, within the cluster.&lt;/p&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;In older Kubernetes releases, the kube-proxy would intercept traffic that was destined for the IP
address associated with a Service of &lt;code&gt;type: LoadBalancer&lt;/code&gt;. This happened whatever mode you used
for &lt;code&gt;kube-proxy&lt;/code&gt;.
The interception implemented the expected behavior (traffic eventually reaching the expected
endpoints behind the Service). The mechanism to make that work depended on the mode for kube-proxy;
on Linux, kube-proxy in iptables mode would redirect packets directly to the endpoint; in ipvs mode,
kube-proxy would bind the load balancer&#39;s IP address to an interface on the node.
The motivation for implementing that interception was twofold:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Traffic path optimization:&lt;/strong&gt; Efficiently redirecting pod traffic - when a container in a pod sends an outbound
packet that is destined for the load balancer&#39;s IP address -
directly to the backend service by bypassing the load balancer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Handling load balancer packets:&lt;/strong&gt; Some load balancers send packets with the destination IP set to
the load balancer&#39;s IP address. As a result, these packets need to be routed directly to the correct backend (which
might not be local to that node), in order to avoid loops.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;problems&#34;&gt;Problems&lt;/h2&gt;
&lt;p&gt;However, there are several problems with the aforementioned behavior:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/79783&#34;&gt;Source IP&lt;/a&gt;:&lt;/strong&gt;
Some cloud providers use the load balancer&#39;s IP as the source IP when
transmitting packets to the node. In the ipvs mode of kube-proxy,
there is a problem that health checks from the load balancer never return. This occurs because the reply packets
would be forwarded to the local interface &lt;code&gt;kube-ipvs0&lt;/code&gt; (where the load balancer&#39;s IP is bound)
and be subsequently ignored.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/66607&#34;&gt;Feature loss at load balancer level&lt;/a&gt;:&lt;/strong&gt;
Certain cloud providers offer features (such as TLS termination, proxy protocol, etc.) at the
load balancer level.
Bypassing the load balancer results in the loss of these features when the packet reaches the service
(leading to protocol errors).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Even with the new alpha behavior disabled (the default), there is a
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/66607#issuecomment-474513060&#34;&gt;workaround&lt;/a&gt;
that involves setting &lt;code&gt;.status.loadBalancer.ingress.hostname&lt;/code&gt; for the Service, in order
to bypass kube-proxy binding.
But this is just a makeshift solution.&lt;/p&gt;
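&lt;p&gt;For illustration, that workaround amounts to the cloud provider publishing a hostname that resolves to the load balancer, instead of an IP address, in the Service status; the hostname below is a hypothetical placeholder:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;status:
  loadBalancer:
    ingress:
    - hostname: lb.example.com  # no ip field, so kube-proxy does not bind a VIP
&lt;/code&gt;&lt;/pre&gt;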
&lt;h2 id=&#34;solution&#34;&gt;Solution&lt;/h2&gt;
&lt;p&gt;In summary, providing an option for cloud providers to disable the current behavior would be highly beneficial.&lt;/p&gt;
&lt;p&gt;To address this, Kubernetes v1.29 introduces a new (alpha) &lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt;
field for a Service.
This field specifies how the load balancer IP behaves and can be specified only when
the &lt;code&gt;.status.loadBalancer.ingress.ip&lt;/code&gt; field is also specified.&lt;/p&gt;
&lt;p&gt;Two values are possible for &lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt;: &lt;code&gt;&amp;quot;VIP&amp;quot;&lt;/code&gt; and &lt;code&gt;&amp;quot;Proxy&amp;quot;&lt;/code&gt;.
The default value is &amp;quot;VIP&amp;quot;, meaning that traffic delivered to the node
with the destination set to the load balancer&#39;s IP and port will be redirected to the backend service by kube-proxy.
This preserves the existing behavior of kube-proxy.
The &amp;quot;Proxy&amp;quot; value is intended to prevent kube-proxy from binding the load balancer&#39;s IP address
to the node in both ipvs and iptables modes.
Consequently, traffic is sent directly to the load balancer and then forwarded to the destination node.
The destination setting for forwarded packets varies depending on how the cloud provider&#39;s load balancer delivers traffic:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node&#39;s IP and node port;&lt;/li&gt;
&lt;li&gt;If the traffic is delivered directly to the pod, the destination would be set to the pod&#39;s IP and port.&lt;/li&gt;
&lt;/ul&gt;
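&lt;p&gt;As a sketch, the status of a Service using the new field might look like the following (the IP address is a documentation placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.10
      ipMode: Proxy  # kube-proxy will not bind this IP on the node
&lt;/code&gt;&lt;/pre&gt;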
&lt;h2 id=&#34;usage&#34;&gt;Usage&lt;/h2&gt;
&lt;p&gt;Here are the necessary steps to enable this feature:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Download the &lt;a href=&#34;https://kubernetes.io/releases/download/&#34;&gt;latest Kubernetes project&lt;/a&gt; (version &lt;code&gt;v1.29.0&lt;/code&gt; or later).&lt;/li&gt;
&lt;li&gt;Enable the feature gate with the command line flag &lt;code&gt;--feature-gates=LoadBalancerIPMode=true&lt;/code&gt;
on kube-proxy, kube-apiserver, and cloud-controller-manager.&lt;/li&gt;
&lt;li&gt;For Services with &lt;code&gt;type: LoadBalancer&lt;/code&gt;, set &lt;code&gt;ipMode&lt;/code&gt; to the appropriate value.
This step is likely handled by your chosen cloud-controller-manager during the &lt;code&gt;EnsureLoadBalancer&lt;/code&gt; process.&lt;/li&gt;
&lt;/ul&gt;
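&lt;p&gt;After completing those steps, you can check which value was set (&lt;code&gt;example-service&lt;/code&gt; here is a hypothetical Service name):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get service example-service -o yaml
# inspect .status.loadBalancer.ingress[0].ipMode in the output
&lt;/code&gt;&lt;/pre&gt;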
&lt;h2 id=&#34;more-information&#34;&gt;More information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/services-networking/service/#load-balancer-ip-mode&#34;&gt;Specifying IPMode of load balancer status&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Read &lt;a href=&#34;https://kep.k8s.io/1860&#34;&gt;KEP-1860&lt;/a&gt; - &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/b103a6b0992439f996be4314caf3bf7b75652366/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour&#34;&gt;Make Kubernetes aware of the LoadBalancer behaviour&lt;/a&gt; &lt;em&gt;(sic)&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;getting-involved&#34;&gt;Getting involved&lt;/h2&gt;
&lt;p&gt;Reach us on &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;: &lt;a href=&#34;https://kubernetes.slack.com/messages/sig-network&#34;&gt;#sig-network&lt;/a&gt;,
or through the &lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-network&#34;&gt;mailing list&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;Huge thanks to &lt;a href=&#34;https://github.com/Sh4d1&#34;&gt;@Sh4d1&lt;/a&gt; for the original KEP and initial implementation code.
I took over midway and completed the work. Similarly, immense gratitude to other contributors
who have assisted in the design, implementation, and review of this feature (alphabetical order):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/aojea&#34;&gt;@aojea&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/danwinship&#34;&gt;@danwinship&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/sftim&#34;&gt;@sftim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/tengqm&#34;&gt;@tengqm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/thockin&#34;&gt;@thockin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/wojtek-t&#34;&gt;@wojtek-t&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: Single Pod Access Mode for PersistentVolumes Graduates to Stable</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/18/read-write-once-pod-access-mode-ga/</link>
      <pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/18/read-write-once-pod-access-mode-ga/</guid>
      <description>
        
        
        &lt;p&gt;With the release of Kubernetes v1.29, the &lt;code&gt;ReadWriteOncePod&lt;/code&gt; volume access mode
has graduated to general availability: it&#39;s part of Kubernetes&#39; stable API. In
this blog post, I&#39;ll take a closer look at this access mode and what it does.&lt;/p&gt;
&lt;h2 id=&#34;what-is-readwriteoncepod&#34;&gt;What is &lt;code&gt;ReadWriteOncePod&lt;/code&gt;?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;ReadWriteOncePod&lt;/code&gt; is an access mode for
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#persistent-volumes&#34;&gt;PersistentVolumes&lt;/a&gt; (PVs)
and &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims&#34;&gt;PersistentVolumeClaims&lt;/a&gt; (PVCs)
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
access to a single pod in the cluster, ensuring that only one pod can write to
the volume at a time. This can be particularly useful for stateful workloads
that require single-writer access to storage.&lt;/p&gt;
&lt;p&gt;For more context on access modes and how &lt;code&gt;ReadWriteOncePod&lt;/code&gt; works read
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#what-are-access-modes-and-why-are-they-important&#34;&gt;What are access modes and why are they important?&lt;/a&gt;
in the &lt;em&gt;Introducing Single Pod Access Mode for PersistentVolumes&lt;/em&gt; article from 2021.&lt;/p&gt;
&lt;h2 id=&#34;how-can-i-start-using-readwriteoncepod&#34;&gt;How can I start using &lt;code&gt;ReadWriteOncePod&lt;/code&gt;?&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;ReadWriteOncePod&lt;/code&gt; volume access mode is available by default in Kubernetes
versions v1.27 and beyond. In Kubernetes v1.29 and later, the Kubernetes API
always recognizes this access mode.&lt;/p&gt;
&lt;p&gt;Note that &lt;code&gt;ReadWriteOncePod&lt;/code&gt; is
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#access-modes&#34;&gt;only supported for CSI volumes&lt;/a&gt;,
and before using this feature, you will need to update the following
&lt;a href=&#34;https://kubernetes-csi.github.io/docs/sidecar-containers.html&#34;&gt;CSI sidecars&lt;/a&gt;
to these versions or greater:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0&#34;&gt;csi-provisioner:v3.0.0+&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0&#34;&gt;csi-attacher:v3.3.0+&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0&#34;&gt;csi-resizer:v1.3.0+&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To start using &lt;code&gt;ReadWriteOncePod&lt;/code&gt;, you need to create a PVC with the
&lt;code&gt;ReadWriteOncePod&lt;/code&gt; access mode:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;single-writer-only&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- ReadWriteOncePod&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# Allows only a single pod to access single-writer-only.&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;1Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;If your storage plugin supports
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/dynamic-provisioning/&#34;&gt;Dynamic provisioning&lt;/a&gt;, then
new PersistentVolumes will be created with the &lt;code&gt;ReadWriteOncePod&lt;/code&gt; access mode
applied.&lt;/p&gt;
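&lt;p&gt;To use the claim, reference it from a Pod as usual. The manifest below is a minimal, hypothetical example; because the claim uses &lt;code&gt;ReadWriteOncePod&lt;/code&gt;, a second Pod that references the same claim will not be scheduled while this one is running:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: single-writer
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: single-writer-only
&lt;/code&gt;&lt;/pre&gt;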
&lt;p&gt;Read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes&#34;&gt;Migrating existing PersistentVolumes&lt;/a&gt;
for details on migrating existing volumes to use &lt;code&gt;ReadWriteOncePod&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;how-can-i-learn-more&#34;&gt;How can I learn more?&lt;/h2&gt;
&lt;p&gt;Please see the blog posts &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/09/13/read-write-once-pod-access-mode-alpha&#34;&gt;alpha&lt;/a&gt;,
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/04/20/read-write-once-pod-access-mode-beta&#34;&gt;beta&lt;/a&gt;, and
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md&#34;&gt;KEP-2485&lt;/a&gt;
for more details on the &lt;code&gt;ReadWriteOncePod&lt;/code&gt; access mode and motivations for CSI
spec changes.&lt;/p&gt;
&lt;h2 id=&#34;how-do-i-get-involved&#34;&gt;How do I get involved?&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&#34;https://kubernetes.slack.com/messages/csi&#34;&gt;Kubernetes #csi Slack channel&lt;/a&gt;
and any of the standard
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact&#34;&gt;SIG Storage communication channels&lt;/a&gt;
are great ways to reach the SIG Storage and CSI teams.&lt;/p&gt;
&lt;p&gt;Special thanks to the following people whose thoughtful reviews and feedback helped shape this feature:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Abdullah Gharaibeh (ahg-g)&lt;/li&gt;
&lt;li&gt;Aldo Culquicondor (alculquicondor)&lt;/li&gt;
&lt;li&gt;Antonio Ojea (aojea)&lt;/li&gt;
&lt;li&gt;David Eads (deads2k)&lt;/li&gt;
&lt;li&gt;Jan Šafránek (jsafrane)&lt;/li&gt;
&lt;li&gt;Joe Betz (jpbetz)&lt;/li&gt;
&lt;li&gt;Kante Yin (kerthcet)&lt;/li&gt;
&lt;li&gt;Michelle Au (msau42)&lt;/li&gt;
&lt;li&gt;Tim Bannister (sftim)&lt;/li&gt;
&lt;li&gt;Xing Yang (xing-yang)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you’re interested in getting involved with the design and development of CSI
or any part of the Kubernetes storage system, join the
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes Storage Special Interest Group&lt;/a&gt; (SIG).
We’re rapidly growing and always welcome new contributors.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: CSI Storage Resizing Authenticated and Generally Available in v1.29</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/15/csi-node-expand-secret-support-ga/</link>
      <pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/15/csi-node-expand-secret-support-ga/</guid>
      <description>
        
        
        &lt;p&gt;Kubernetes version v1.29 brings generally available support for authentication
during CSI (Container Storage Interface) storage resize operations.&lt;/p&gt;
&lt;p&gt;Let&#39;s trace the evolution of this feature, first introduced as alpha in
Kubernetes v1.25, and look at the changes that accompany its transition to GA.&lt;/p&gt;
&lt;h2 id=&#34;authenticated-csi-storage-resizing-unveiled&#34;&gt;Authenticated CSI storage resizing unveiled&lt;/h2&gt;
&lt;p&gt;Kubernetes harnesses the capabilities of CSI to integrate seamlessly with third-party
storage systems, empowering your cluster to expand storage volumes
managed by the CSI driver. The recent elevation of authentication secret support
for resizes from Beta to GA ushers in new horizons, enabling volume expansion in
scenarios where the underlying storage operation demands credentials for backend
cluster operations – such as accessing a SAN/NAS fabric. This enhancement addresses
a critical limitation for CSI drivers, allowing volume expansion at the node level,
especially in cases necessitating authentication for resize operations.&lt;/p&gt;
&lt;p&gt;The challenges extend beyond node-level expansion. Within the Special Interest
Group (SIG) Storage, use cases have surfaced, including scenarios where the
CSI driver needs to validate the actual size of backend block storage before
initiating a node-level filesystem expand operation. This validation prevents
false positive returns from the backend storage cluster during file system expansion.
Additionally, for PersistentVolumes representing encrypted block storage (e.g., using LUKS),
a passphrase is mandated to expand the device and grow the filesystem, underscoring
the necessity for authenticated resizing.&lt;/p&gt;
&lt;h2 id=&#34;what-s-new-for-kubernetes-v1-29&#34;&gt;What&#39;s new for Kubernetes v1.29&lt;/h2&gt;
&lt;p&gt;With the graduation to GA, the feature remains enabled by default. Support for
node-level volume expansion secrets has been seamlessly integrated into the CSI
external-provisioner sidecar controller. To take advantage, ensure your external
CSI storage provisioner sidecar controller is operating at v3.3.0 or above.&lt;/p&gt;
&lt;h2 id=&#34;navigating-authenticated-csi-storage-resizing&#34;&gt;Navigating Authenticated CSI Storage Resizing&lt;/h2&gt;
&lt;p&gt;Assuming all requisite components, including the CSI driver, are deployed and operational
on your cluster, and you have a CSI driver supporting resizing, you can initiate a
&lt;code&gt;NodeExpand&lt;/code&gt; operation on a CSI volume. Credentials for the CSI &lt;code&gt;NodeExpand&lt;/code&gt; operation
can be conveniently provided as a Kubernetes Secret, specifying the Secret via the
StorageClass. Here&#39;s an illustrative manifest for a Secret holding credentials:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;default&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;stringData&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admin&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;password&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;t0p-Secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And here&#39;s an example manifest for a StorageClass referencing those credentials:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;StorageClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-blockstorage-sc&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;csi.storage.k8s.io/node-expand-secret-name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;csi.storage.k8s.io/node-expand-secret-namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;default&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioner&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;blockstorage.cloudprovider.example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;reclaimPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Delete&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeBindingMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Immediate&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;allowVolumeExpansion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Upon successful creation of the PersistentVolumeClaim (PVC), you can verify the
configuration within the .spec.csi field of the PersistentVolume. To confirm,
execute &lt;code&gt;kubectl get persistentvolume &amp;lt;pv_name&amp;gt; -o yaml&lt;/code&gt;.&lt;/p&gt;
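&lt;p&gt;Triggering the authenticated resize then works like any other volume expansion: increase the claim&#39;s requested storage, and the CSI driver uses the configured Secret during the node expand step. This sketch assumes a hypothetical PVC named &lt;code&gt;test-pvc&lt;/code&gt; that uses the StorageClass above:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl patch pvc test-pvc --patch &amp;#39;{&amp;#34;spec&amp;#34;: {&amp;#34;resources&amp;#34;: {&amp;#34;requests&amp;#34;: {&amp;#34;storage&amp;#34;: &amp;#34;2Gi&amp;#34;}}}}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;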
&lt;h2 id=&#34;engage-with-the-evolution&#34;&gt;Engage with the Evolution!&lt;/h2&gt;
&lt;p&gt;For those enthusiastic about contributing or delving deeper into the technical
intricacies, the enhancement proposal comprises exhaustive details about the
feature&#39;s history and implementation. Explore the realms of StorageClass-based
dynamic provisioning in Kubernetes by referring to the
&lt;a href=&#34;https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class&#34;&gt;storage class documentation&lt;/a&gt;
and the overarching &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolumes&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p&gt;Join the Kubernetes Storage SIG (Special Interest Group) to actively participate
in elevating this feature. Your insights are invaluable, and we eagerly anticipate
welcoming more contributors to shape the future of Kubernetes storage!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: VolumeAttributesClass for Volume Modification</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</link>
      <pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</guid>
      <description>
        
        
        &lt;p&gt;The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume
by changing the &lt;code&gt;volumeAttributesClassName&lt;/code&gt; that was specified for a PersistentVolumeClaim (PVC).
With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity.
Allowing volume attributes to be changed without going through each
provider&#39;s API directly simplifies the current flow.&lt;/p&gt;
&lt;p&gt;You can read about VolumeAttributesClass usage details in the Kubernetes documentation
or you can read on to learn about why the Kubernetes project is supporting this feature.&lt;/p&gt;
&lt;h2 id=&#34;volumeattributesclass&#34;&gt;VolumeAttributesClass&lt;/h2&gt;
&lt;p&gt;The new &lt;code&gt;storage.k8s.io/v1alpha1&lt;/code&gt; API group provides two new types:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;VolumeAttributesClass&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Represents a specification of mutable volume attributes defined by the CSI driver.
The class can be specified during dynamic provisioning of PersistentVolumeClaims,
and changed in the PersistentVolumeClaim spec after provisioning.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ModifyVolumeStatus&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Represents the status object of &lt;code&gt;ControllerModifyVolume&lt;/code&gt; operation.&lt;/p&gt;
&lt;p&gt;With this alpha feature enabled, the spec of a PersistentVolumeClaim can set a &lt;code&gt;volumeAttributesClassName&lt;/code&gt;
that applies to the PVC. At volume provisioning, the &lt;code&gt;CreateVolume&lt;/code&gt; operation will apply the parameters in the
VolumeAttributesClass along with the parameters in the StorageClass.&lt;/p&gt;
&lt;p&gt;When the &lt;code&gt;volumeAttributesClassName&lt;/code&gt; in the PVC spec changes,
the external-resizer sidecar receives an informer event. Based on the current state of the configuration,
the resizer triggers a CSI &lt;code&gt;ControllerModifyVolume&lt;/code&gt; operation.
More details can be found in &lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;KEP-3751&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;how-to-use-it&#34;&gt;How to use it&lt;/h2&gt;
&lt;p&gt;If you want to test the feature whilst it&#39;s alpha, you need to enable the relevant feature gate
in the &lt;code&gt;kube-controller-manager&lt;/code&gt; and the &lt;code&gt;kube-apiserver&lt;/code&gt;. Use the &lt;code&gt;--feature-gates&lt;/code&gt; command line argument:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--feature-gates=&amp;#34;...,VolumeAttributesClass=true&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It also requires that the CSI driver has implemented the ModifyVolume API.&lt;/p&gt;
&lt;h3 id=&#34;user-flow&#34;&gt;User flow&lt;/h3&gt;
&lt;p&gt;If you would like to see the feature in action and verify it works fine in your cluster, here&#39;s what you can try:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Define a StorageClass and VolumeAttributesClass&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;StorageClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioner&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;disk-type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;hyperdisk-balanced&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeBindingMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;WaitForFirstConsumer&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1alpha1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;3000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;50&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Define and create the PersistentVolumeClaim&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that the PersistentVolumeClaim is now provisioned correctly with:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new VolumeAttributesClass gold:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1alpha1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;4000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;60&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update the PVC with the new VolumeAttributesClass and apply:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that the PersistentVolumeClaim has the updated VolumeAttributesClass parameters with:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl describe pvc &amp;lt;PVC_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
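&lt;p&gt;Putting the steps above together, a typical end-to-end workflow looks like the
following. The file names here are illustrative; use whichever names you saved the
manifests under.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# create the StorageClass, the silver VolumeAttributesClass, and the PVC
kubectl apply -f storageclass.yaml -f vac-silver.yaml -f pvc.yaml
kubectl get pvc test-pv-claim
# create the gold VolumeAttributesClass, then re-apply the PVC
# after updating its volumeAttributesClassName to gold
kubectl apply -f vac-gold.yaml
kubectl apply -f pvc.yaml
kubectl describe pvc test-pv-claim
&lt;/code&gt;&lt;/pre&gt;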
&lt;h2 id=&#34;next-steps&#34;&gt;Next steps&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;See the &lt;a href=&#34;https://kep.k8s.io/3751&#34;&gt;VolumeAttributesClass KEP&lt;/a&gt; for more information on the design&lt;/li&gt;
&lt;li&gt;You can view or comment on the &lt;a href=&#34;https://github.com/orgs/kubernetes-csi/projects/72&#34;&gt;project board&lt;/a&gt; for VolumeAttributesClass&lt;/li&gt;
&lt;li&gt;In order to move this feature towards beta, we need feedback from the community,
so here&#39;s a call to action: add support to the CSI drivers, try out this feature,
consider how it can help with problems that your users are having…&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;getting-involved&#34;&gt;Getting involved&lt;/h2&gt;
&lt;p&gt;We always welcome new contributors. So, if you would like to get involved, you can join our &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes Storage Special Interest Group&lt;/a&gt; (SIG).&lt;/p&gt;
&lt;p&gt;If you would like to share feedback, you can do so on our &lt;a href=&#34;https://app.slack.com/client/T09NY5SBT/C09QZFCE5&#34;&gt;public Slack channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Baofa Fan (calory)&lt;/li&gt;
&lt;li&gt;Ben Swartzlander (bswartz)&lt;/li&gt;
&lt;li&gt;Connor Catlett (ConnorJC3)&lt;/li&gt;
&lt;li&gt;Hemant Kumar (gnufied)&lt;/li&gt;
&lt;li&gt;Jan Šafránek (jsafrane)&lt;/li&gt;
&lt;li&gt;Joe Betz (jpbetz)&lt;/li&gt;
&lt;li&gt;Jordan Liggitt (liggitt)&lt;/li&gt;
&lt;li&gt;Matthew Cary (mattcary)&lt;/li&gt;
&lt;li&gt;Michelle Au (msau42)&lt;/li&gt;
&lt;li&gt;Xing Yang (xing-yang)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/</link>
      <pubDate>Thu, 14 Dec 2023 09:30:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/14/cloud-provider-integration-changes/</guid>
      <description>
        
        
        &lt;p&gt;For Kubernetes v1.29, you need to use additional components to integrate your
Kubernetes cluster with a cloud infrastructure provider. By default, Kubernetes
v1.29 components &lt;strong&gt;abort&lt;/strong&gt; if you try to specify integration with any cloud provider using
one of the legacy compiled-in cloud provider integrations. If you want to use a legacy
integration, you have to opt back in - and a future release will remove even that option.&lt;/p&gt;
&lt;p&gt;In 2018, the &lt;a href=&#34;https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/&#34;&gt;Kubernetes community agreed to form the Cloud Provider Special
Interest Group (SIG)&lt;/a&gt;, with a mission to externalize all cloud provider
integrations and remove all the existing in-tree cloud provider integrations.
In January 2019, the Kubernetes community approved the initial draft of
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers&#34;&gt;KEP-2395: Removing In-Tree Cloud Provider Code&lt;/a&gt;. This KEP defines a
process by which we can remove cloud provider specific code from the core
Kubernetes source tree. From the KEP:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Motiviation [sic] behind this effort is to allow cloud providers to develop and
make releases independent from the core Kubernetes release cycle. The
de-coupling of cloud provider code allows for separation of concern between
&amp;quot;Kubernetes core&amp;quot; and the cloud providers within the ecosystem. In addition,
this ensures all cloud providers in the ecosystem are integrating with
Kubernetes in a consistent and extendable way.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;After many years of development and collaboration across many contributors,
the default behavior for legacy cloud provider integrations is changing.
This means that users will need to confirm their Kubernetes configurations,
and in some cases run external cloud controller managers. These changes are
taking effect in Kubernetes version 1.29; read on to learn if you are affected
and what changes you will need to make.&lt;/p&gt;
&lt;p&gt;These updated default settings affect a large proportion of Kubernetes users,
and &lt;strong&gt;will require changes&lt;/strong&gt; for users who were previously using the in-tree
provider integrations. The legacy integrations offered compatibility with
Azure, AWS, GCE, OpenStack, and vSphere; however for AWS and OpenStack the
compiled-in integrations were removed in Kubernetes versions 1.27 and 1.26,
respectively.&lt;/p&gt;
&lt;h2 id=&#34;what-has-changed&#34;&gt;What has changed?&lt;/h2&gt;
&lt;p&gt;At the most basic level, two &lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;feature gates&lt;/a&gt; are changing their default
value from false to true. Those feature gates, &lt;code&gt;DisableCloudProviders&lt;/code&gt; and
&lt;code&gt;DisableKubeletCloudCredentialProviders&lt;/code&gt;, control the way that the
&lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/&#34;&gt;kube-apiserver&lt;/a&gt;, &lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/&#34;&gt;kube-controller-manager&lt;/a&gt;, and &lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/&#34;&gt;kubelet&lt;/a&gt;
invoke the cloud provider related code that is included in those components.
When these feature gates are true (the default), the only recognized value for
the &lt;code&gt;--cloud-provider&lt;/code&gt; command line argument is &lt;code&gt;external&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Let&#39;s see what the &lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;official Kubernetes documentation&lt;/a&gt; says about these
feature gates:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;DisableCloudProviders&lt;/code&gt;: Disables any functionality in &lt;code&gt;kube-apiserver&lt;/code&gt;,
&lt;code&gt;kube-controller-manager&lt;/code&gt; and &lt;code&gt;kubelet&lt;/code&gt; related to the &lt;code&gt;--cloud-provider&lt;/code&gt;
component flag.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;DisableKubeletCloudCredentialProviders&lt;/code&gt;: Disable the in-tree functionality
in kubelet to authenticate to a cloud provider container registry for image
pull credentials.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The next stage beyond beta will be full removal; from that release onwards, you
won&#39;t be able to override those feature gates back to false.&lt;/p&gt;
&lt;h2 id=&#34;what-do-you-need-to-do&#34;&gt;What do you need to do?&lt;/h2&gt;
&lt;p&gt;If you are upgrading from Kubernetes 1.28+ and are not on Azure, GCE, or
vSphere, then there are no changes you will need to make. If
you &lt;strong&gt;are&lt;/strong&gt; on Azure, GCE, or vSphere, or you are upgrading from a version
older than 1.28, then read on.&lt;/p&gt;
&lt;p&gt;Historically, Kubernetes has included code for a set of cloud providers that
included AWS, Azure, GCE, OpenStack, and vSphere. Since the inception of
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers&#34;&gt;KEP-2395&lt;/a&gt; the community has been moving towards removal of that
cloud provider code. The OpenStack provider code was removed in version 1.26,
and the AWS provider code was removed in version 1.27. This means that users
who are upgrading from one of the affected cloud providers and versions will
need to modify their deployments.&lt;/p&gt;
&lt;h3 id=&#34;upgrading-on-azure-gce-or-vsphere&#34;&gt;Upgrading on Azure, GCE, or vSphere&lt;/h3&gt;
&lt;p&gt;There are two options for upgrading in this configuration: migrate to external
cloud controller managers, or continue using the in-tree provider code.
Although migrating to external cloud controller managers is recommended,
there are scenarios where continuing with the current behavior is desired.
Please choose the best option for your needs.&lt;/p&gt;
&lt;h4 id=&#34;migrate-to-external-cloud-controller-managers&#34;&gt;Migrate to external cloud controller managers&lt;/h4&gt;
&lt;p&gt;Migrating to use external cloud controller managers is the recommended upgrade
path, when possible in your situation. To do this you will need to
enable the &lt;code&gt;--cloud-provider=external&lt;/code&gt; command line flag for the
&lt;code&gt;kube-apiserver&lt;/code&gt;, &lt;code&gt;kube-controller-manager&lt;/code&gt;, and &lt;code&gt;kubelet&lt;/code&gt; components. In
addition you will need to deploy a cloud controller manager for your provider.&lt;/p&gt;
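For example, on a node where the kubelet is launched directly, the flag would be
passed like this (all other flags omitted; the exact mechanism depends on how your
cluster is deployed):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubelet --cloud-provider=external
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;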
&lt;p&gt;Installing and running cloud controller managers is a larger topic than this
post can address; if you would like more information on this process please
read the documentation for &lt;a href=&#34;https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/&#34;&gt;Cloud Controller Manager Administration&lt;/a&gt;
and &lt;a href=&#34;https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/&#34;&gt;Migrate Replicated Control Plane To Use Cloud Controller Manager&lt;/a&gt;.
See &lt;a href=&#34;#cloud-provider-integrations&#34;&gt;below&lt;/a&gt; for links to specific cloud provider
implementations.&lt;/p&gt;
&lt;h4 id=&#34;continue-using-in-tree-provider-code&#34;&gt;Continue using in-tree provider code&lt;/h4&gt;
&lt;p&gt;If you wish to continue using Kubernetes with the in-tree cloud provider code,
you will need to modify the command line parameters for &lt;code&gt;kube-apiserver&lt;/code&gt;,
&lt;code&gt;kube-controller-manager&lt;/code&gt;, and &lt;code&gt;kubelet&lt;/code&gt; to disable the feature gates for
&lt;code&gt;DisableCloudProviders&lt;/code&gt; and &lt;code&gt;DisableKubeletCloudCredentialProviders&lt;/code&gt;. To do
this, add the following command line flag to the arguments for the previously
listed commands:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--feature-gates=DisableCloudProviders=false,DisableKubeletCloudCredentialProviders=false
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;em&gt;Please note that if you set other feature gates on the command
line, your &lt;code&gt;--feature-gates&lt;/code&gt; value must also include these two feature gates.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: These feature gates will be locked to &lt;code&gt;true&lt;/code&gt; in an upcoming
release. Setting these feature gates to &lt;code&gt;false&lt;/code&gt; should be used as a last
resort. It is highly recommended to migrate to an external cloud controller
manager as the in-tree providers are planned for removal as early as Kubernetes
version 1.31.&lt;/p&gt;
&lt;h3 id=&#34;upgrading-on-other-providers&#34;&gt;Upgrading on other providers&lt;/h3&gt;
&lt;p&gt;For providers other than Azure, GCE, or vSphere, there is good news: the external
cloud controller manager should already be in use. You can confirm this by inspecting
the &lt;code&gt;--cloud-provider&lt;/code&gt; flag for the kubelets in your cluster; it will have
the value &lt;code&gt;external&lt;/code&gt; if you are using an external provider. The code for the
AWS and OpenStack providers was removed from Kubernetes in versions 1.27 and 1.26,
respectively. Providers beyond AWS, Azure, GCE, OpenStack, and vSphere were never
included in Kubernetes, and as such they began their life as external cloud
controller managers.&lt;/p&gt;
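&lt;p&gt;One way to inspect the flag, assuming the kubelet runs as a systemd service (unit
names and paths vary between distributions and installers):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;systemctl cat kubelet | grep cloud-provider
&lt;/code&gt;&lt;/pre&gt;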
&lt;h3 id=&#34;upgrading-from-older-kubernetes-versions&#34;&gt;Upgrading from older Kubernetes versions&lt;/h3&gt;
&lt;p&gt;If you are upgrading from a Kubernetes release older than 1.26, and you are on
AWS, Azure, GCE, OpenStack, or vSphere then you will need to enable the
&lt;code&gt;--cloud-provider=external&lt;/code&gt; flag, and follow the advice for installing and
running a cloud controller manager for your provider.&lt;/p&gt;
&lt;p&gt;Please read the documentation for
&lt;a href=&#34;https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/&#34;&gt;Cloud Controller Manager Administration&lt;/a&gt; and
&lt;a href=&#34;https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/&#34;&gt;Migrate Replicated Control Plane To Use Cloud Controller Manager&lt;/a&gt;. See
below for links to specific cloud provider implementations.&lt;/p&gt;
&lt;h2 id=&#34;where-to-find-a-cloud-controller-manager&#34;&gt;Where to find a cloud controller manager?&lt;/h2&gt;
&lt;p&gt;At its core, this announcement is about the cloud provider integrations that
were previously included in Kubernetes. As these components move out of the
core Kubernetes code and into their own repositories, it is important to note
a few things:&lt;/p&gt;
&lt;p&gt;First, SIG Cloud Provider offers a reference framework for developers who
wish to create cloud controller managers for any provider. See the
&lt;a href=&#34;https://github.com/kubernetes/cloud-provider&#34;&gt;cloud-provider repository&lt;/a&gt; for more information about how
these controllers work and how to get started creating your own.&lt;/p&gt;
&lt;p&gt;Second, there are many cloud controller managers available for Kubernetes.
This post is addressing the provider integrations that have been historically
included with Kubernetes but are now in the process of being removed. If you
need a cloud controller manager for your provider and do not see it listed here,
please reach out to the cloud provider you are integrating with or the
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cloud-provider&#34;&gt;Kubernetes SIG Cloud Provider community&lt;/a&gt; for help and advice. It is
worth noting that while most cloud controller managers are open source today,
this may not always be the case. Users should always contact their cloud
provider to learn if there are preferred solutions to utilize on their
infrastructure.&lt;/p&gt;
&lt;h3 id=&#34;cloud-provider-integrations&#34;&gt;Cloud provider integrations provided by the Kubernetes project&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;AWS - &lt;a href=&#34;https://github.com/kubernetes/cloud-provider-aws&#34;&gt;https://github.com/kubernetes/cloud-provider-aws&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Azure - &lt;a href=&#34;https://github.com/kubernetes-sigs/cloud-provider-azure&#34;&gt;https://github.com/kubernetes-sigs/cloud-provider-azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GCE - &lt;a href=&#34;https://github.com/kubernetes/cloud-provider-gcp&#34;&gt;https://github.com/kubernetes/cloud-provider-gcp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;OpenStack - &lt;a href=&#34;https://github.com/kubernetes/cloud-provider-openstack&#34;&gt;https://github.com/kubernetes/cloud-provider-openstack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;vSphere - &lt;a href=&#34;https://github.com/kubernetes/cloud-provider-vsphere&#34;&gt;https://github.com/kubernetes/cloud-provider-vsphere&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you are looking for an automated approach to installing cloud controller
managers in your clusters, the &lt;a href=&#34;https://github.com/kubernetes/kops&#34;&gt;kOps&lt;/a&gt; project provides a convenient
solution for managing production-ready clusters.&lt;/p&gt;
&lt;h2 id=&#34;want-to-learn-more&#34;&gt;Want to learn more?&lt;/h2&gt;
&lt;p&gt;Cloud providers and cloud controller managers serve a core function in
Kubernetes. Cloud providers are often the substrate upon which Kubernetes is
operated, and the cloud controller managers supply the essential lifeline
between Kubernetes clusters and their physical infrastructure.&lt;/p&gt;
&lt;p&gt;This post covers one aspect of how the Kubernetes community interacts with
the world of cloud infrastructure providers. If you are curious about this
topic and want to learn more, the Cloud Provider Special Interest Group (SIG)
is the place to go. SIG Cloud Provider hosts bi-weekly meetings to discuss all
manner of topics related to cloud providers and cloud controller managers in
Kubernetes.&lt;/p&gt;
&lt;h3 id=&#34;sig-cloud-provider&#34;&gt;SIG Cloud Provider&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Regular SIG Meeting: &lt;a href=&#34;https://zoom.us/j/508079177?pwd=ZmEvMksxdTFTc0N1eXFLRm91QUlyUT09&#34;&gt;Wednesdays at 9:00 PT (Pacific Time)&lt;/a&gt; (biweekly). &lt;a href=&#34;http://www.thetimezoneconverter.com/?t=9:00&amp;amp;tz=PT%20%28Pacific%20Time%29&#34;&gt;Convert to your timezone&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kubernetes.slack.com&#34;&gt;Kubernetes slack&lt;/a&gt; channel &lt;code&gt;#sig-cloud-provider&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cloud-provider&#34;&gt;SIG Community page&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.29: Mandala</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/13/kubernetes-v1-29-release/</link>
      <pubDate>Wed, 13 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/12/13/kubernetes-v1-29-release/</guid>
      <description>
        
        
        &lt;p&gt;&lt;strong&gt;Editors:&lt;/strong&gt; Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley&lt;/p&gt;
&lt;p&gt;Announcing the release of Kubernetes v1.29: Mandala (The Universe), the last release of 2023!&lt;/p&gt;
&lt;p&gt;Similar to previous releases, the release of Kubernetes v1.29 introduces new stable, beta, and alpha features. The consistent delivery of top-notch releases underscores the strength of our development cycle and the vibrant support from our community.&lt;/p&gt;
&lt;p&gt;This release consists of 49 enhancements. Of those enhancements, 11 have graduated to Stable, 19 are entering Beta and 19 have graduated to Alpha.&lt;/p&gt;
&lt;h2 id=&#34;release-theme-and-logo&#34;&gt;Release theme and logo&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.29: &lt;em&gt;Mandala (The Universe)&lt;/em&gt; ✨🌌&lt;/p&gt;


&lt;figure class=&#34;release-logo &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2023-12-13-kubernetes-1.29-release/k8s-1.29.png&#34;
         alt=&#34;Kubernetes 1.29 Mandala logo&#34;/&gt; 
&lt;/figure&gt;
&lt;p&gt;Join us on a cosmic journey with Kubernetes v1.29!&lt;/p&gt;
&lt;p&gt;This release is inspired by the beautiful art form that is Mandala—a symbol of the universe in its perfection. Our tight-knit universe of around 40 Release Team members, backed by hundreds of community contributors, has worked tirelessly to turn challenges into joy for millions worldwide.&lt;/p&gt;
&lt;p&gt;The Mandala theme reflects our community’s interconnectedness—a vibrant tapestry woven by enthusiasts and experts alike. Each contributor is a crucial part, adding their unique energy, much like the diverse patterns in Mandala art. Kubernetes thrives on collaboration, echoing the harmony in Mandala creations.&lt;/p&gt;
&lt;p&gt;The release logo, made by &lt;a href=&#34;https://janusworx.com&#34;&gt;Mario Jason Braganza&lt;/a&gt; (base Mandala art, courtesy - &lt;a href=&#34;https://pixabay.com/users/fibrel-3502541/&#34;&gt;Fibrel Ojalá&lt;/a&gt;), symbolizes the little universe that is the Kubernetes project and all its people.&lt;/p&gt;
&lt;p&gt;In the spirit of Mandala’s transformative symbolism, Kubernetes v1.29 celebrates our project’s evolution. Like stars in the Kubernetes universe, each contributor, user, and supporter lights the way. Together, we create a universe of possibilities—one release at a time.&lt;/p&gt;
&lt;h2 id=&#34;graduations-to-stable&#34;&gt;Improvements that graduated to stable in Kubernetes v1.29&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now stable following the v1.29 release.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;readwriteoncepod-pv-access-mode&#34;&gt;ReadWriteOncePod PersistentVolume access mode (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;In Kubernetes, volume &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/storage/persistent-volumes/#access-modes&#34;&gt;access modes&lt;/a&gt;
are the way you can define how durable storage is consumed. These access modes are a part of the spec for PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). When using storage, there are different ways to model how that storage is consumed. For example, a storage system like a network file share can have many users all reading and writing data simultaneously. In other cases maybe everyone is allowed to read data but not write it. For highly sensitive data, maybe only one user is allowed to read and write data but nobody else.&lt;/p&gt;
&lt;p&gt;Before v1.22, Kubernetes offered three access modes for PVs and PVCs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ReadWriteOnce – the volume can be mounted as read-write by a single node&lt;/li&gt;
&lt;li&gt;ReadOnlyMany – the volume can be mounted read-only by many nodes&lt;/li&gt;
&lt;li&gt;ReadWriteMany – the volume can be mounted as read-write by many nodes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The ReadWriteOnce access mode restricts volume access to a single node, which means it is possible for multiple pods on the same node to read from and write to the same volume. This could potentially be a major problem for some applications, especially if they require at most one writer for data safety guarantees.&lt;/p&gt;
&lt;p&gt;To address this problem, a fourth access mode ReadWriteOncePod was introduced as an Alpha feature in v1.22 for CSI volumes. If you create a pod with a PVC that uses the ReadWriteOncePod access mode, Kubernetes ensures that pod is the only pod across your whole cluster that can read that PVC or write to it. In v1.29, this feature became Generally Available.&lt;/p&gt;
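&lt;p&gt;As a minimal sketch (the claim name and storage size are illustrative), a PVC requesting this access mode looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim   # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod        # only one Pod cluster-wide may use this volume
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;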
&lt;h3 id=&#34;csi-node-volume-expansion-secrets&#34;&gt;Node volume expansion Secret support for CSI drivers (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;SIG Storage&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;In Kubernetes, a volume expansion operation may include the expansion of the volume on the node, which involves filesystem resize. Some CSI drivers require secrets, for example a credential for accessing a SAN fabric, during the node expansion for the following use cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When a PersistentVolume represents encrypted block storage, for example using LUKS, you may need to provide a passphrase in order to expand the device.&lt;/li&gt;
&lt;li&gt;For various validations, the CSI driver needs to have credentials to communicate with the backend storage system at time of node expansion.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To meet this requirement, the CSI Node Expand Secret feature was introduced in Kubernetes v1.25. This allows an optional secret field to be sent as part of the NodeExpandVolumeRequest by the CSI drivers so that node volume expansion operation can be performed with the underlying storage system. In Kubernetes v1.29, this feature became generally available.&lt;/p&gt;
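&lt;p&gt;As a hedged sketch (the driver name, volume handle, and Secret are illustrative), a CSI PersistentVolume referencing a node-expansion Secret looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: encrypted-pv                 # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: example.csi.vendor.com   # illustrative CSI driver
    volumeHandle: vol-0123abcd
    nodeExpandSecretRef:             # Secret passed during node expansion
      name: expansion-secret         # e.g. holds a LUKS passphrase
      namespace: default
&lt;/code&gt;&lt;/pre&gt;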
&lt;h3 id=&#34;kms-v2-api-encryption&#34;&gt;KMS v2 encryption at rest generally available (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;One of the first things to consider when securing a Kubernetes cluster is encrypting persisted
API data at rest. KMS provides an interface for a provider to utilize a key stored in an external
key service to perform this encryption. With Kubernetes v1.29, KMS v2 has become
a stable feature, bringing numerous improvements in performance, key rotation,
health check &amp;amp; status, and observability.
These enhancements provide users with a reliable solution to encrypt all resources in their Kubernetes clusters. You can read more about this in &lt;a href=&#34;https://kep.k8s.io/3299&#34;&gt;KEP-3299&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Using KMS v2 is recommended. The KMS v1 feature gate is disabled by default, so you will have to opt in to continue using it.&lt;/p&gt;
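&lt;p&gt;A minimal &lt;code&gt;EncryptionConfiguration&lt;/code&gt; using a KMS v2 provider might look like the following sketch (the plugin name and socket path are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin                 # illustrative plugin name
          endpoint: unix:///var/run/kms.sock  # socket your KMS plugin listens on
      - identity: {}                          # fallback for reading unencrypted data
&lt;/code&gt;&lt;/pre&gt;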
&lt;h2 id=&#34;graduations-to-beta&#34;&gt;Improvements that graduated to beta in Kubernetes v1.29&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This is a selection of some of the improvements that are now beta following the v1.29 release.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Scheduler throughput is a perennial challenge. The new QueueingHint feature introduces a way to optimize requeueing efficiency, which can significantly reduce wasted scheduling retries.&lt;/p&gt;
&lt;h3 id=&#34;node-lifecycle-separated-from-taint-management-sig-scheduling-https-github-com-kubernetes-community-tree-master-sig-scheduling&#34;&gt;Node lifecycle separated from taint management (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;This change decouples the &lt;code&gt;TaintManager&lt;/code&gt;, which performs taint-based pod eviction, from the &lt;code&gt;NodeLifecycleController&lt;/code&gt;, making them two separate controllers: &lt;code&gt;NodeLifecycleController&lt;/code&gt; adds taints to unhealthy nodes, and &lt;code&gt;TaintManager&lt;/code&gt; deletes pods on nodes tainted with the NoExecute effect.&lt;/p&gt;
&lt;h3 id=&#34;serviceaccount-token-clean-up&#34;&gt;Clean up for legacy Secret-based ServiceAccount tokens (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-auth&#34;&gt;SIG Auth&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;By v1.22, Kubernetes had switched to more secure service account tokens that are time-limited and bound to specific pods. In v1.24, it stopped auto-generating legacy secret-based service account tokens, and in v1.27 it began labeling any remaining auto-generated secret-based tokens still in use with their last-used date.&lt;/p&gt;
&lt;p&gt;In v1.29, to reduce potential attack surface, the LegacyServiceAccountTokenCleanUp feature labels legacy auto-generated secret-based tokens as invalid if they have not been used for a long time (1 year by default), and automatically removes them if use is not attempted for a long time after being marked as invalid (1 additional year by default). &lt;a href=&#34;https://kep.k8s.io/2799&#34;&gt;KEP-2799&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;new-alpha-features&#34;&gt;New alpha features&lt;/h2&gt;
&lt;h3 id=&#34;match-label-keys-pod-affinity&#34;&gt;Define Pod affinity or anti-affinity using &lt;code&gt;matchLabelKeys&lt;/code&gt; (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-scheduling&#34;&gt;SIG Scheduling&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;This alpha enhancement adds a &lt;code&gt;matchLabelKeys&lt;/code&gt; field to PodAffinity and PodAntiAffinity terms. It increases the accuracy of affinity calculations during rolling updates by letting the scheduler distinguish Pods belonging to different revisions of the same workload.&lt;/p&gt;
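&lt;p&gt;Assuming the alpha feature gate &lt;code&gt;MatchLabelKeysInPodAffinity&lt;/code&gt; is enabled, an affinity term using &lt;code&gt;matchLabelKeys&lt;/code&gt; might be sketched as follows (the labels and topology key are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web                 # illustrative label
        matchLabelKeys:
          - pod-template-hash        # scope the match to Pods from the same ReplicaSet
        topologyKey: kubernetes.io/hostname
&lt;/code&gt;&lt;/pre&gt;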
&lt;h3 id=&#34;kube-proxy-nftables&#34;&gt;nftables backend for kube-proxy (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;The default kube-proxy implementation on Linux is currently based on iptables. This was the preferred packet filtering and processing system in the Linux kernel for many years (starting with the 2.4 kernel in 2001). However, unsolvable problems with iptables led to the development of a successor, nftables. Development on iptables has mostly stopped, with new features and performance improvements primarily going into nftables instead.&lt;/p&gt;
&lt;p&gt;This feature adds a new backend to kube-proxy based on nftables, since some Linux distributions already started to deprecate and remove iptables, and nftables claims to solve the main performance problems of iptables.&lt;/p&gt;
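&lt;p&gt;To try the alpha backend, you enable the &lt;code&gt;NFTablesProxyMode&lt;/code&gt; feature gate and select the mode in the kube-proxy configuration; a minimal sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables          # alpha in v1.29; requires the feature gate below
featureGates:
  NFTablesProxyMode: true
&lt;/code&gt;&lt;/pre&gt;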
&lt;h3 id=&#34;ip-address-range-apis&#34;&gt;APIs to manage IP address ranges for Services (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-network&#34;&gt;SIG Network&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Services are an abstract way to expose an application running on a set of Pods. Services can have a cluster-scoped virtual IP address that is allocated from a predefined CIDR defined in the kube-apiserver flags. However, users may want to add, remove, or resize the IP ranges allocated for Services without having to restart the kube-apiserver.&lt;/p&gt;
&lt;p&gt;This feature implements a new allocator logic that uses 2 new API Objects: ServiceCIDR and IPAddress, allowing users to dynamically increase the number of Services IPs available by creating new ServiceCIDRs. This helps to resolve problems like IP exhaustion or IP renumbering.&lt;/p&gt;
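&lt;p&gt;Assuming the alpha &lt;code&gt;MultiCIDRServiceAllocator&lt;/code&gt; feature gate and the &lt;code&gt;networking.k8s.io/v1alpha1&lt;/code&gt; API are enabled, an additional range could be added with a manifest like this sketch (the name and CIDR are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr   # illustrative name
spec:
  cidrs:
    - 10.96.64.0/20          # new range for Service ClusterIPs
&lt;/code&gt;&lt;/pre&gt;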
&lt;h3 id=&#34;image-pull-per-runtimeclass&#34;&gt;Add support to containerd/kubelet/CRI to support image pull per runtime class (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-windows&#34;&gt;SIG Windows&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Kubernetes v1.29 adds support to pull container images based on the RuntimeClass of the Pod that uses them.
This feature is off by default in v1.29 under a feature gate called &lt;code&gt;RuntimeClassInImageCriApi&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Container images can either be a manifest or an index. When the image being pulled is an index (image index has a list of image manifests ordered by platform), platform matching logic in the container runtime is used to pull an appropriate image manifest from the index. By default, the platform matching logic picks a manifest that matches the host that the image pull is being executed from. This can be limiting for VM-based containers where a user could pull an image with the intention of running it as a VM-based container, for example, Windows Hyper-V containers.&lt;/p&gt;
&lt;p&gt;The image pull per runtime class feature adds support to pull different images based on the runtime class specified. This is achieved by referencing an image by a tuple of (&lt;code&gt;imageID&lt;/code&gt;, &lt;code&gt;runtimeClass&lt;/code&gt;), instead of just the &lt;code&gt;imageName&lt;/code&gt; or &lt;code&gt;imageID&lt;/code&gt;. Container runtimes can choose to add support for this feature; if they do not, the default kubelet behavior that existed prior to Kubernetes v1.29 is retained.&lt;/p&gt;
&lt;h3 id=&#34;in-place-updates-for-pod-resources-for-windows-pods-sig-windows-https-github-com-kubernetes-community-tree-master-sig-windows&#34;&gt;In-place updates for Pod resources, for Windows Pods (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-windows&#34;&gt;SIG Windows&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;As an alpha feature, Kubernetes Pods can be mutable with respect to their &lt;code&gt;resources&lt;/code&gt;, allowing users to change the &lt;em&gt;desired&lt;/em&gt; resource requests and limits for a Pod without the need to restart the Pod. With v1.29, this feature is now supported for Windows containers.&lt;/p&gt;
&lt;h2 id=&#34;graduations-deprecations-and-removals-for-kubernetes-v1-29&#34;&gt;Graduations, deprecations and removals for Kubernetes v1.29&lt;/h2&gt;
&lt;h3 id=&#34;graduated-to-stable&#34;&gt;Graduated to stable&lt;/h3&gt;
&lt;p&gt;This lists all the features that graduated to stable (also known as &lt;em&gt;general availability&lt;/em&gt;).
For a full list of updates including new features and graduations from alpha to beta, see the
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md&#34;&gt;release notes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This release includes a total of 11 enhancements promoted to Stable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3458&#34;&gt;Remove transient node predicates from KCCM&#39;s service controller&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3668&#34;&gt;Reserve nodeport ranges for dynamic and static allocation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/1040&#34;&gt;Priority and Fairness for API Server Requests&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3299&#34;&gt;KMS v2 Improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/365&#34;&gt;Support paged LIST queries from the Kubernetes API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2485&#34;&gt;ReadWriteOncePod PersistentVolume Access Mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3466&#34;&gt;Kubernetes Component Health SLIs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2876&#34;&gt;CRD Validation Expression Language&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/3107&#34;&gt;Introduce nodeExpandSecret in CSI PV source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/2879&#34;&gt;Track Ready Pods in Job status&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kep.k8s.io/727&#34;&gt;Kubelet Resource Metrics Endpoint&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;deprecations-and-removals&#34;&gt;Deprecations and removals&lt;/h3&gt;
&lt;h4 id=&#34;in-tree-cloud-provider-integration-removal&#34;&gt;Removal of in-tree integrations with cloud providers (&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cloud-provider&#34;&gt;SIG Cloud Provider&lt;/a&gt;)&lt;/h4&gt;
&lt;p&gt;Kubernetes v1.29 defaults to operating &lt;em&gt;without&lt;/em&gt; a built-in integration to any cloud provider.
If you have previously been relying on in-tree cloud provider integrations (with Azure, GCE, or vSphere) then you can either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;enable an equivalent external &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cloud-controller/&#34;&gt;cloud controller manager&lt;/a&gt;
integration &lt;em&gt;(recommended)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;opt back in to the legacy integration by setting the associated feature gates to &lt;code&gt;false&lt;/code&gt;; the feature
gates to change are &lt;code&gt;DisableCloudProviders&lt;/code&gt; and &lt;code&gt;DisableKubeletCloudCredentialProviders&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster&#39;s control plane; it also requires setting the command line argument &lt;code&gt;--cloud-provider=external&lt;/code&gt; for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).&lt;/p&gt;
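&lt;p&gt;For example (a sketch of the relevant flag only; your full command lines will differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-shell&#34;&gt;# On every relevant node
kubelet --cloud-provider=external ...

# In the control plane, per the guidance above
kube-controller-manager --cloud-provider=external ...
&lt;/code&gt;&lt;/pre&gt;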
&lt;p&gt;For more information about how to enable and run external cloud controller managers, read &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/administer-cluster/running-cloud-controller/&#34;&gt;Cloud Controller Manager Administration&lt;/a&gt; and &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tasks/administer-cluster/controller-manager-leader-migration/&#34;&gt;Migrate Replicated Control Plane To Use Cloud Controller Manager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you need a cloud controller manager for one of the legacy in-tree providers, please see the following links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/cloud-provider-aws&#34;&gt;Cloud provider AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/cloud-provider-azure&#34;&gt;Cloud provider Azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/cloud-provider-gcp&#34;&gt;Cloud provider GCE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/cloud-provider-openstack&#34;&gt;Cloud provider OpenStack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/cloud-provider-vsphere&#34;&gt;Cloud provider vSphere&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are more details in &lt;a href=&#34;https://kep.k8s.io/2395&#34;&gt;KEP-2395&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&#34;removal-of-the-v1beta2-flow-control-api-group&#34;&gt;Removal of the &lt;code&gt;v1beta2&lt;/code&gt; flow control API group&lt;/h4&gt;
&lt;p&gt;The deprecated &lt;em&gt;flowcontrol.apiserver.k8s.io/v1beta2&lt;/em&gt; API version of FlowSchema and
PriorityLevelConfiguration is no longer served in Kubernetes v1.29.&lt;/p&gt;
&lt;p&gt;If you have manifests or client software that uses the deprecated beta API group, you should change
these before you upgrade to v1.29.
See the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-guide/#v1-29&#34;&gt;deprecated API migration guide&lt;/a&gt;
for details and advice.&lt;/p&gt;
&lt;h4 id=&#34;deprecation-of-the-status-nodeinfo-kubeproxyversion-field-for-node&#34;&gt;Deprecation of the &lt;code&gt;status.nodeInfo.kubeProxyVersion&lt;/code&gt; field for Node&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;.status.nodeInfo.kubeProxyVersion&lt;/code&gt; field for Node objects is now deprecated, and the Kubernetes project
is proposing to remove it in a future release. The deprecated field is not accurate: it has historically
been managed by the kubelet, which does not actually know the kube-proxy version, or even whether kube-proxy
is running.&lt;/p&gt;
&lt;p&gt;If you&#39;ve been using this field in client software, stop; the information isn&#39;t reliable, and the field is now
deprecated.&lt;/p&gt;
&lt;h4 id=&#34;legacy-linux-package-repositories&#34;&gt;Legacy Linux package repositories&lt;/h4&gt;
&lt;p&gt;Please note that in August of 2023, the legacy package repositories (&lt;code&gt;apt.kubernetes.io&lt;/code&gt; and
&lt;code&gt;yum.kubernetes.io&lt;/code&gt;) were formally deprecated and the Kubernetes project announced the
general availability of the community-owned package repositories for Debian and RPM packages,
available at &lt;code&gt;https://pkgs.k8s.io&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;These legacy repositories were frozen in September of 2023, and
will go away entirely in January of 2024. If you are currently relying on them, you &lt;strong&gt;must&lt;/strong&gt; migrate.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This deprecation is not directly related to the v1.29 release.&lt;/em&gt; For more details, including how these changes may affect you and what to do if you are affected, please read the &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/&#34;&gt;legacy package repository deprecation announcement&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;release-notes&#34;&gt;Release notes&lt;/h2&gt;
&lt;p&gt;Check out the full details of the Kubernetes v1.29 release in our &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md&#34;&gt;release notes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;availability&#34;&gt;Availability&lt;/h2&gt;
&lt;p&gt;Kubernetes v1.29 is available for download on &lt;a href=&#34;https://github.com/kubernetes/kubernetes/releases/tag/v1.29.0&#34;&gt;GitHub&lt;/a&gt;. To get started with Kubernetes, check out these &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/tutorials&#34;&gt;interactive tutorials&lt;/a&gt; or run local Kubernetes clusters using &lt;a href=&#34;https://minikube.sigs.k8s.io/&#34;&gt;minikube&lt;/a&gt;. You can also easily install v1.29 using &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/independent/create-cluster-kubeadm&#34;&gt;kubeadm&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;release-team&#34;&gt;Release team&lt;/h2&gt;
&lt;p&gt;Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.&lt;/p&gt;
&lt;p&gt;We would like to thank the entire &lt;a href=&#34;https://github.com/kubernetes/sig-release/blob/master/releases/release-1.29/release-team.md&#34;&gt;release team&lt;/a&gt; for the hours spent hard at work to deliver the Kubernetes v1.29 release for our community. A very special thanks is in order for our release lead, &lt;a href=&#34;https://github.com/Priyankasaggu11929&#34;&gt;Priyanka Saggu&lt;/a&gt;, for supporting and guiding us through a successful release cycle, making sure that we could all contribute in the best way possible, and challenging us to improve the release process.&lt;/p&gt;
&lt;h2 id=&#34;project-velocity&#34;&gt;Project velocity&lt;/h2&gt;
&lt;p&gt;The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.&lt;/p&gt;
&lt;p&gt;In the v1.29 release cycle, which &lt;a href=&#34;https://github.com/kubernetes/sig-release/tree/master/releases/release-1.29&#34;&gt;ran for 14 weeks&lt;/a&gt; (September 6 to December 13), we saw contributions from &lt;a href=&#34;https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&amp;amp;var-period_name=v1.28.0%20-%20now&amp;amp;var-metric=contributions&#34;&gt;888 companies&lt;/a&gt; and &lt;a href=&#34;https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&amp;amp;var-period_name=v1.28.0%20-%20now&amp;amp;var-metric=contributions&amp;amp;var-repogroup_name=Kubernetes&amp;amp;var-repo_name=kubernetes%2Fkubernetes&amp;amp;var-country_name=All&amp;amp;var-companies=All&#34;&gt;1422 individuals&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;ecosystem-updates&#34;&gt;Ecosystem updates&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;KubeCon + CloudNativeCon Europe 2024 will take place in Paris, France, from &lt;strong&gt;19 – 22 March 2024&lt;/strong&gt;! You can find more information about the conference and registration on the &lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&#34;&gt;event site&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;release-webinar&#34;&gt;Upcoming release webinar&lt;/h2&gt;
&lt;p&gt;Join members of the Kubernetes v1.29 release team on Friday, December 15th, 2023, at 11am PT (2pm eastern) to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the &lt;a href=&#34;https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-129-release/&#34;&gt;event page&lt;/a&gt; on the CNCF Online Programs site.&lt;/p&gt;
&lt;h3 id=&#34;get-involved&#34;&gt;Get involved&lt;/h3&gt;
&lt;p&gt;The simplest way to get involved with Kubernetes is by joining one of the many &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-list.md&#34;&gt;Special Interest Groups&lt;/a&gt; (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication&#34;&gt;community meeting&lt;/a&gt;, and through the channels below. Thank you for your continued feedback and support.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Follow us on Twitter &lt;a href=&#34;https://twitter.com/kubernetesio&#34;&gt;@Kubernetesio&lt;/a&gt; for the latest updates&lt;/li&gt;
&lt;li&gt;Join the community discussion on &lt;a href=&#34;https://discuss.kubernetes.io/&#34;&gt;Discuss&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Join the community on &lt;a href=&#34;http://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Post questions (or answer questions) on &lt;a href=&#34;http://stackoverflow.com/questions/tagged/kubernetes&#34;&gt;Stack Overflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Share your Kubernetes &lt;a href=&#34;https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform&#34;&gt;story&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read more about what’s happening with Kubernetes on the &lt;a href=&#34;https://kubernetes.io/blog/&#34;&gt;blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn more about the &lt;a href=&#34;https://github.com/kubernetes/sig-release/tree/master/release-team&#34;&gt;Kubernetes Release Team&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
  </channel>
</rss>
